Back to NetNewsWire?

I’ve been trying various RSS readers over the last couple of months. First Vienna (which felt dated), then Cappuccino (buggy), and now Evergreen… which so far has been lovely.

As noted here, Evergreen is going to be the new NetNewsWire 5.0:

You probably know that I’ve been working on a free and open source reader named Evergreen. Evergreen 1.0 will be renamed NetNewsWire 5.0 — in other words, I’ve been working on NetNewsWire 5.0 all this time without knowing it!

This move back to RSS, or feeds in general, was a side-effect of getting away from Twitter. Wanting to get away from Twitter was… it’s not important. The end result is that I spend less time being interrupt-driven, but also miss more things that might be interesting.

My move to Mastodon is a rather lonely affair – without millions of users it lacks the essential “reach” that motivates people’s engagement with Twitter. In order to keep up with friends on Twitter, but without the Twitter “drama”, I’m using TwitRSS.me to scrape RSS feeds.
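For reference, the per-user feeds I’m subscribing to look something like this (I’m going from memory of TwitRSS.me’s URL scheme, so treat the exact path as an assumption, and the username is obviously a placeholder):

https://twitrss.me/twitter_user_to_rss/?user=SomeTwitterUser

Drop that into the reader’s subscribe dialog and it behaves like any other feed.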

The RSS returned is a little funky, but Evergreen is doing a reasonable job of rendering it. There seem to be alternatives (RSSHub, FetchRSS, etc.), although none of them seems focused on keeping Twitter at arm’s length.

Anyway, it’s nice that NetNewsWire is coming back, and if we’re lucky it will bring some renewed interest in RSS in general.


Mastodon & Whalebird

There is very little going on (for me) on Mastodon, but it’s still interesting to see how it develops.

My latest thing is leaving a desktop client called Whalebird open. Another Electron app… which obviously isn’t ideal, but is becoming a trend – it’s quick to throw together a standalone app wrapped around an existing browser interface. Unfortunately that means essentially running a full browser for every app. That probably isn’t as bad as it sounds (modern OSes have on-demand page loading, etc.) but it makes every Electron app an easy target for accusations of bloat… one of those mind-numbing discussion killers, which is increasingly at odds with the amount of memory available (and generally sitting unused).

Meanwhile there is more discussion, beyond “It’s federated! It’s good!”, focusing on the practical reasons for avoiding Twitter:

The Why: Twitter Is in the Outrage Business; Mastodon Isn’t a Business

Though I don’t think it will happen, it would be interesting to see whether Mastodon could survive, in its current rather open / tolerant form, an influx of Twitter’s current user base. Perhaps it doesn’t need to, and multiple communities can form on unconnected instances. Do we really need to suffer brands? Will instance owners actively manage their communities, and resist the “freeze peach” pressure to which Twitter has so easily yielded?

When all is said and done, I don’t think Mastodon is ever going to escape its niche and challenge Twitter. It might be that it’s too similar for most people’s level of engagement, and a more radical / gimmicky alternative is needed. Until that happens I’m stuck between worlds, with a curated set of Twitter friends followed via RSS!


Facebook Blitz

[Photo of a Facebook ad:]

Determine what others see
For us F stands for a Facebook in which you have more control over your private sphere.
Therefore we’re now providing a clearer overview in one place.

I’m kind of fascinated, in a “car crash” kind of way, with the effort Facebook is putting into improving its image in Germany.

Leaving aside the banality of this particular pitch (“we’ve moved all the privacy controls that you need to understand, and obviously don’t, into a single place for you to better not understand!”), what on earth are they thinking? Do they really believe that the perceived issue is with what others can see, and not understand that people have become concerned with the entirety of what Facebook itself sees?

Maybe this is part of a longer strategy to attempt to undermine the GDPR regulations, which you’d have to assume would be devastating to the FB “sell out the users to the highest bidder” business model. If they can get out in front of it, making the claim that they are already protecting users’ data, and using it appropriately (“look at all the control we give them!”), then maybe politicians won’t feel that suing them out of existence will be popular?

They’ve lost a million users to GDPR issues… which seems like peanuts compared to the total of ~365 million European users. And the stock price, despite the historic fluctuations, is still higher than it was at the start of 2018.

It all makes me wonder what news they are not yet out in front of…

The State of “Social”

Post from HN: How Does Mastodon Work?

Answer from me so far: it doesn’t.

Sadly I’m not sure it ever will. Leaving Twitter means leaving behind all the network effects that made the Twitter experience work.

One day an inflection point will come and people will migrate to a new platform. However, my guess is that it will not happen for technical or organisational reasons, but just because. As much as it would be good to sell people on a messaging platform that is, among other things, virtuous, privacy respecting, user supported and censorship resistant (as Mastodon may well be…), if it doesn’t have the magic combination of simplicity and cool which periodically captures the zeitgeist… it doesn’t work.

I’ve managed to move the majority of my IM traffic to Signal… there are a few stragglers, and it wouldn’t surprise me to hear that its role in the communication of others is limited to catering to my diva-like demands! Why has that worked for personal communications, but failed for the broader, more scattershot social networking case?

The use case is different – on a platform like Twitter you curate followers / friends and accept that whatever you say will be broadcast to them all. This obviously means that you adapt your communication style to be less personal. In most cases a Twitter / Facebook / Instagram account becomes a simple means of promotion. Followers, if not the direct audience, act as a means of propagating or amplifying your message. If that network doesn’t exist to fan your message out to a much wider “market”, it’s not really fit for purpose.

It seems that more private (in many cases not in the cryptographic sense) interactions between natural groups (family, close friends, small teams) are migrating to closed chat rooms, most likely on platforms like WhatsApp, Facebook, etc. And Twitter / Facebook / Instagram feeds are slowly dissolving into a marketplace for goods, ideas and attention, with the whole thing swirling around a sink hole of advertising intelligence / surveillance.

Sadly, I suspect that the implementation of Circles in Google+ came very close to synthesising something that captured a good balance. Fortunately people read the sociopathic writing on the wall – as bad a custodian of a social graph as Facebook has turned out to be, the only people that I can see giving them a run for their money in the ‘Totalitarian Information Megacorp’ / ‘Grim Meathook Future’ stakes are big G… and Amazon.

[A lot of this musing was brought to mind by seeing a Facebook ad on German TV that, and I kid you not, starts out with “F steht für unsere Fehler” (“F stands for our mistakes”), and then gets weirder.]

End-of-central-directory signature not found

You download a zip file only to find that it barfs when you try to un-archive it:

$ unzip -t test.zip 
Archive:  test.zip
  End-of-central-directory signature not found.  Either this file is not
  a zipfile, or it constitutes one disk of a multi-part archive.  In the
  latter case the central directory and zipfile comment will be found on
  the last disk(s) of this archive.

What a nightmare.

Now you’re left with the prospect of attempting the download again, or seeing if you can salvage what you have. Bandwidth is cheap, but the machine at the other end is no longer responding. Great… time to learn how to extract a partially downloaded / corrupt zip file!

It’s actually a lot easier than you might think… which makes me wonder why I’ve never learnt it before. First try a little force:

$ zip -F test.zip --out partial.zip
Fix archive (-F) - assume mostly intact archive
	zip warning: bad archive - missing end signature
	zip warning: (If downloaded, was binary mode used?  If not, the
	zip warning:  archive may be scrambled and not recoverable)
	zip warning: Can't use -F to fix (try -FF)

zip error: Zip file structure invalid (test.zip)

Nope. Now a little more force:

$ zip -FF test.zip --out partial.zip
Fix archive (-FF) - salvage what can
zip warning: Missing end (EOCDR) signature - either this archive
is not readable or the end is damaged
Is this a single-disk archive? (y/n): y
Assuming single-disk archive
Scanning for entries...
copying: selected exported/3 monkeys.jpg (2629234 bytes)
...
copying: selected exported/worried and walking.jpg (21563355 bytes)
Central Directory found...
zip warning: reading central directory: Undefined error: 0
zip warning: bad archive - error reading central directory
zip warning: skipping this entry...

Good to go?

$ unzip -qt partial.zip 
No errors detected in compressed data of partial.zip.

Good to go!
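
For the cautious: it’s worth listing what actually made it into the repaired archive before declaring victory, and extracting into a separate directory rather than over whatever you already have:

$ unzip -l partial.zip              # list the entries that were salvaged
$ unzip partial.zip -d salvaged     # extract them into ./salvaged

Anything past the point where the download died should simply be missing from the listing.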

Pi(e) Holing Facebook

It all started with a click. While reading the newspaper I clicked on a link to Facebook and was shocked when it opened.

The reason for my surprise was that my /etc/hosts contained the following entries:

# Block Facebook
127.0.0.1   www.facebook.com
127.0.0.1   facebook.com

a rather blunt instrument, but one that until now had been effective at shitcanning any such links. So why had it stopped working? After some confused poking around it became obvious that my new ISP provided way more IPv6 routing than the old ISP, and macOS was now favouring IPv6 traffic. As a consequence the hack in my /etc/hosts grew to include entries for IPv6:

fe80::1%lo0 www.facebook.com
fe80::1%lo0 facebook.com

And once more Facebook was back in the shitcan.
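
A quick sanity check that both address families are now covered, using the stock macOS commands for flushing the resolver cache and then querying it:

$ sudo dscacheutil -flushcache; sudo killall -HUP mDNSResponder
$ dscacheutil -q host -a name www.facebook.com

After the flush, both the IPv4 and IPv6 answers should come back as the loopback / link-local addresses from /etc/hosts rather than anything owned by Facebook.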

Note: adding hosts to /etc/hosts is obviously tedious – you can’t use wildcards, and blocking the root domain doesn’t block sub-domains. Getting rid of all the Facebook servers (just the obvious ones) takes over ten entries, all of which now need to be repeated for IPv6.

At this point any rational person would conclude that this is not a sane thing to be doing. Obviously it’s time to be running my own DNS server and sinkhole, and shitcanning domains with wildcards!

Fortunately there are still plenty of people on the internet who haven’t given up, for example Pi-hole. By installing Pi-hole on a Raspberry Pi hanging off the back of my router, and updating clients to use it as their DNS server, I have a place where it is possible to wildcard-block entire domains.
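
Under the hood Pi-hole’s resolver is dnsmasq-based, and dnsmasq’s address= directive matches a domain and every one of its sub-domains – exactly the wildcard that /etc/hosts was missing. A rough sketch of the idea (the file name is my own choice, and Pi-hole’s admin interface may well manage this for you):

# /etc/dnsmasq.d/99-shitcan.conf – blackhole a domain and all of its sub-domains
address=/facebook.com/0.0.0.0
address=/facebook.com/::
address=/fbcdn.net/0.0.0.0
address=/fbcdn.net/::

A couple of lines per domain tree, instead of an ever-growing pile of individual host entries.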

As well as providing DNS, Pi-hole also maintains a (partial) list of domains that serve ads. This means that devices on your home network that aren’t running ad blocking now have a good chance of not being served ads. This was a partially solved problem already, as the Raspberry Pi also runs Privoxy, which blocks a good percentage of ads.

As an aside, the war between ad blockers and ad pushers has been quietly escalating, and I’ve started to notice that a few news sites are managing to execute JavaScript that blocks uBlock Origin. Sites that employ such measures are still blocked from displaying ads by Pi-hole and / or Privoxy.

While installing Pi-hole it was necessary to make some decisions about what to use as an upstream DNS authority. There are some obvious answers like 8.8.8.8 (Google), 9.9.9.9 (IBM and some shady law enforcement types), OpenDNS, OpenNIC, etc., none of which seems ideal.

You probably won’t be surprised to hear that all your DNS queries are sent, unencrypted, over port 53. Which initially sounds like a really bad thing – it would provide your ISP with an easy way to know every site that you look up. However, in all likelihood they aren’t doing that… mostly because they have stronger, government mandated, requirements to meet, such as tracking every site that you actually visit and when you visited it, not just the ones that you happen to look up, and then subsequently visit via a cached lookup. If all you had to do was run your own DNS to avoid tracking… yeah, not going to happen.
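
It’s easy enough to see just how exposed plain DNS is – something like the following on the Pi (or anything else that can see the traffic; the interface name will vary) shows every lookup in cleartext:

$ sudo tcpdump -i eth0 -n 'udp port 53'

Anyone between you and the resolver gets exactly the same view.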

Despite the above rationale, there exists a parallel DNS infrastructure called DNSCrypt, mostly volunteer run, that proxies encrypted access to DNS. Assuming that you can trust that they aren’t logging (something you’re already doing with the DNS providers listed above…), then you can effectively block any visibility of your DNS activity to your ISP… not that they’ll care. If your traffic isn’t leaving your machine via an encrypted tunnel (think VPN, Tor, etc.) then you can assume that it is being inspected and logged at the packet level.

In terms of increasing privacy DNSCrypt doesn’t seem to offer very much. It does offer some other protections against DNS spoofing attacks, but I’m not sure how widespread those are in the wild. I’d also guess that the other major providers of DNS are taking countermeasures as they are needed… and are maybe more effective than the volunteer force behind DNSCrypt.

I’ll probably end up installing dnscrypt-proxy on the Raspberry Pi and using it as the resolver for Pi-hole. In the end it’s just going to be an encrypted proxy for OpenNIC, which, given a choice, is where I’d want my DNS to be resolved.
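
The usual pattern, as far as I can tell, is to have dnscrypt-proxy listen on a high local port and point Pi-hole’s custom upstream at it. A sketch, with the port number as my own choice and the resolver name a placeholder to be picked from dnscrypt-proxy’s public resolver list:

# /etc/dnscrypt-proxy/dnscrypt-proxy.toml (excerpt)
listen_addresses = ['127.0.0.1:5300']      # local port for Pi-hole to forward to
server_names = ['some-opennic-resolver']   # placeholder – choose from the public resolver list

Then in Pi-hole’s DNS settings the custom upstream becomes 127.0.0.1#5300 (the # is how Pi-hole / dnsmasq specify a non-standard port).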

I’d recommend looking into Pi-hole – it’s a really nice set of tools for getting a better understanding, and control, of what the devices on your network are actually doing. Oh, and keep in mind that IPv6 is now a thing, running in parallel to the IPv4 internet for which you probably had some reasonable mental model… learning about RA, SLAAC (and its Privacy Extensions), DAD, etc. was an eye opener for me!
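
If you want to see some of that in action, the temporary addresses that the Privacy Extensions rotate through are visible straight from the command line on macOS (en0 here being the usual Wi-Fi interface; yours may differ):

$ ifconfig en0 | grep inet6

The addresses flagged as “temporary” are the ones the Privacy Extensions generate and rotate; they sit alongside the more stable autoconfigured address.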

Youtube… ffs

For the longest time I’ve been using a Safari extension called ClickToPlugin, which replaced YouTube’s video player with the native Safari video player. There were a couple of reasons for this, the biggest of which was the horrendous amount of CPU that the YouTube HTML5 player uses. It also disabled autoplay, another scourge of the ad-supported web. Oh, and it never played ads.

The recent re-design broke all this, and it doesn’t look like it’ll be repaired. Time to find another solution… <sigh>

There are other YouTube-focused extensions out there for Safari, but none of them seems to do exactly what I want. Firefox has a few plugins to allow downloading, or copying the video URL, which gives you a way to choose the player. There doesn’t, however, seem to be anything that does exactly what ClickToPlugin managed.

For a few weeks I’ve been using a Firefox plugin to copy the video URL, pasting that into Safari, and letting it play with the native player. But it means opening Firefox, switching between browsers, etc.

More recently I started playing with youtube-dl. If I’m going to be copying and pasting URLs, why not give them to a script, and have it spawn a QuickTime player? Well, the QuickTime player doesn’t have a command line… and who wants to wait until a video has downloaded before watching? It would be better to pipe the output of youtube-dl to a player… but that will have to be something other than QuickTime.

When in doubt try ffmpeg – the true Swiss army knife of video! The ffmpeg distribution includes a tool called ffplay, which can play video piped into stdin. Looks like we have everything needed:

$ youtube-dl -q -f best --buffer-size 16K https://www.youtube.com/watch?v=DLzxrzFCyOs -o - | ffplay -fast -fs -loglevel quiet -

Now all I need is a dumb bash script in my path, which takes a URL and plugs it into that command:

#!/bin/bash
# yt: stream a YouTube URL straight into ffplay via youtube-dl

if [ $# -ne 1 ]; then
    echo "Usage: yt url"
    exit 1
fi

url="$1"

# -o - writes the video to stdout; ffplay reads it from stdin
youtube-dl -q -f best --buffer-size 16K "$url" -o - | \
    ffplay -fs -loglevel quiet -

Yes, the amount of time and effort involved in avoiding the unavoidable smartness of the smartest people in Silicon Valley…