On Yer Bike

Like many people, the early pandemic period pushed me towards finding a way to stay fit(ter) at home. Our usual routine of walking every day is a good thing, but despite its benefits for mental health, it really only provides a floor for cardiovascular health. Given that there are essentially no hills around here, something else is required…

And so i bought an exercise / spinning bike. More specifically i bought a Schwinn IC8, which has since been “updated” and renamed the 800IC. Over the last two years it has needed “service” several times to fix issues that seemed to be caused by poor quality construction. Fingers crossed, it is now running smoothly. The local service provider was good about it, sending someone out during the warranty period (now ended), but it was a slow and frustrating process.

Several years later we’re both still semi-regularly using it… so that’s good! What tends to happen is that we’ll get into a routine, fall out of it, and take a few weeks (or months, depending on what is going on!) to get back in the saddle.

I’ve had a good few runs, where it was obvious that it was doing good things for my general level of health… during one of these periods a good friend (thanks again Sean!) bought me a heart monitor so i could better track my progress. I’d really like to lower my frenetic resting heart rate – in my head (and heart, apparently) i’m a small, nervous mammal.

The bike has a power meter, probably ridiculously inaccurate, but hopefully consistent in its inaccuracy. One of my first requirements was to (privately) track some basic information about my rides… and, oh my, that’s a nightmare! Eventually i found an app called Kinetic, which would at least record the power meter and heart monitor, and send the data to the Health app on my phone.

After some more coaxing from Sean, it looks like i’ve finally got a stable (and, in hindsight, entirely obvious) setup: connect the heart monitor to the bike (bluetooth); connect the bike to Kinetic; cycle; end the ride and let Kinetic transfer the data to connected apps (currently Health and Strava).

Now i’d like to keep a routine through the winter and emerge from my cocoon in spring as a beautiful, big-legged, healthy butterfly… try not to dwell too long on that mental image!

MacBook Pro 2021 Review

Back in 2018 work gave me a new MacBook Pro. One would struggle to say that it was “well received”. That said, it was just a work machine. Plugging in an external screen and keyboard made it sort of tolerable… not really, just moving the mouse is enough to spin up the fans.

Work is work, but my main home machine was a late 2013 MacBook Pro that had none of the issues of the 2018 model… except it was old and slooow. It also stopped being able to upgrade to the latest o/s versions. The writing was on the wall…

<several weeks later>

The new machine is fine. The screen is amazing, the speakers impressive. Performance is kind of unbelievable – tasks like compiling ffmpeg, which used to peg all cores for minutes, are over before you’ve even noticed.

The case feels flimsy / thin. The bottom panel is especially flexible, the screen housing is better, but overall the case is a step down in quality / ruggedness. I know i’m not supposed to be trying to abuse it, but it feels too fragile!

The keyboard is only “okay”. It’s still too loud in a kind of “clunky” way that doesn’t inspire confidence in its longevity. The lack of contrast between the keys and the bed makes it harder to use in low light. Yes, i could turn on backlighting… but, no. Oh, and the TouchID “key” looks and feels out of place – not that the one on the 2018 model is in any way good / better. At least i’m using it on this machine and enjoying not having to type my password as often.

Unlike the 2016 debacle, the above issues are more qualms / quibbles. If i’d been able to walk into the store and see / feel it before buying, on balance, i’d probably still have bought one.

In short: still in mourning, and not sure i should have started dating again so soon.

“Abandoned”

Today i remembered the time i had to learn to reverse an articulated truck as a teenager. This was likely before getting a conventional driving license. All of which set me off on a search to see if it was possible to find the related patent that my father was granted… and it is!

It’s Patent US4784066A, which is handily available on Google with all the diagrams, one of which is below.

Don’t remember much about truck driving beyond the unfeasibly numerous low gears, the ridiculously long clutch, and how it was impossible to see anything at the back. It might be that someone else lined the thing up and my job was only to slowly reverse onto the railway tracks to couple with the bogie (technical talk!)

Out there somewhere is a short segment on the project that appeared on Tomorrow’s World. Periodically i go looking for an archive, but the BBC gave up adding things long ago… having written all that, it seems terribly familiar, and makes me worry that i’m repeating myself!

PaulStretch on Apple Silicon in 2022

It’s become something of a tradition – will the PaulStretch code still work n years later? The answer invariably is, yes!

Clone from github:

% git clone https://github.com/paulnasca/paulstretch_cpp.git
Cloning into 'paulstretch_cpp'…
remote: Enumerating objects: 166, done.
remote: Total 166 (delta 0), reused 0 (delta 0), pack-reused 166
Receiving objects: 100% (166/166), 92.98 KiB | 2.58 MiB/s, done.
Resolving deltas: 100% (101/101), done.

Look at previous notes and install dependencies:

% sudo port install fltk
% sudo port install audiofile
% sudo port install libmad
% sudo port install portaudio
% sudo port install fftw-3-single
% sudo port install mxml

Apply the following patch to XMLwrapper.cpp:

% git diff XMLwrapper.cpp
diff --git a/XMLwrapper.cpp b/XMLwrapper.cpp
index 1efb66e..8fe17ad 100644
--- a/XMLwrapper.cpp
+++ b/XMLwrapper.cpp
@@ -29,7 +29,7 @@ int xml_k=0;
 char tabs[STACKSIZE+2];
 
 const char *XMLwrapper_whitespace_callback(mxml_node_t *node,int where){
-    const char *name=node->value.element.name;
+    const char *name=mxmlGetElement(node);
 
     if ((where==MXML_WS_BEFORE_OPEN)&&(!strcmp(name,"?xml"))) return(NULL);
     if ((where==MXML_WS_BEFORE_CLOSE)&&(!strcmp(name,"string"))) return(NULL);
@@ -407,10 +407,10 @@ void XMLwrapper::getparstr(const char *name,char *par,int maxstrlen){
     node=mxmlFindElement(peek(),peek(),"string","name",name,MXML_DESCEND_FIRST);
     
     if (node==NULL) return;
-    if (node->child==NULL) return;
-    if (node->child->type!=MXML_OPAQUE) return;
+    if (mxmlGetFirstChild(node)==NULL) return;
+    if (mxmlGetType(mxmlGetFirstChild(node))!=MXML_OPAQUE) return;
     
-    snprintf(par,maxstrlen,"%s",node->child->value.element.name);
+    snprintf(par,maxstrlen,"%s",mxmlGetText(mxmlGetFirstChild(node), NULL));
     
 };

Generate the UI:

% fluid -c GUI.fl
% fluid -c FreeEditUI.fl

And compile:

% g++ -ggdb -Wno-return-type GUI.cxx FreeEditUI.cxx *.cpp Input/*.cpp Output/*.cpp `fltk-config --cflags` \
 `fltk-config --ldflags`  -laudiofile -lfftw3f -lz -logg -lvorbis -lvorbisenc -lvorbisfile -lportaudio -lpthread -lmad -lmxml -o paulstretch
GUI.cxx:661:23: warning: object backing the pointer will be destroyed at the end of the full-expression [-Wdangling-gsl]
        const char *outstr=control.Render(control.get_input_filename(),outfilename,type,intype,
                           ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1 warning generated.

Ignore the warning and run the thing!

% ./paulstretch &

Rajma Redux

Not like “indigestion” but part of the “pre-digestion”. I mentioned in the Rajma post that SEO (Search Engine Optimization) was making recipe sites trash. Today at the top of the orange site is an app that removes the clutter. No endorsement / recommendation implied – i’ve not even opened the link.

Take a brief look at the top comments to get a flavour (see what i did there…) of how the big ole goofy search monopoly google is making the web a better place.

Repeating myself at this point, but google is likely the biggest misallocation of engineering resources the world has ever seen. The greatest computer science minds of several generations, all grist for a trillion dollar advertising mill. Possibilities ground to dust under the wheels of silicon valley “progress”.

[With all due respect / apologies to my friends that work there… but come on, you know it’s mostly true!]

Averages

Some time before Xmas i sat down and listened to Stuart Russell’s Reith Lectures “Living With Artificial Intelligence”. Over four parts they cover a lot of the ethical, economic, and moral issues arising from the (apparently inevitable) development of AGI.

It’s good that it starts out making the point that what we have now – efforts like DeepMind beating a Go champion – is, while technically impressive, not at all intelligent. These are complex models, fed with large amounts of data. They may make interesting decisions, but they do so from a position of statistical analysis.

A more general intelligence would be something altogether different… quite how it would be different is interesting to think about in its own right. What is intelligence? Where is the separation between mind and body? How are we creative?

It was a good, thought-provoking series. Presented realistically, without the huckster futurism of the likes of Ray Kurzweil. Russell even has a wacky anglo-californian accent mash-up that reminds me of my own struggles to find a linguistic identity while living in the bay area. The gloss of BBC infotainment is a little off-putting, as is the Q&A process, but overall it’s worth a listen.

I’m not convinced that we’ll actually create AGI in the foreseeable future. It’s hard to tell if any real progress is being made while the field is dominated by the ML boom. And, the questions that the series prompted for me are kind of orthogonal to the theme… oops.

1) During one of the (interminable) Q&A sessions Russell made a flippant remark along the lines of “you really think the human brain is the most complex thing in the universe?!” I’ve never given it much thought, but there are “popular science” facts about the number of possible connections in the brain being a huge (universe scale) number. Maybe it doesn’t make sense as a question – the universe contains the brain, which implies that at best we could say that brains are local maxima of complexity. More complex things could exist, but if they’re driven by similar evolutionary processes, over similar timescales… maybe they’d reach similar limits?
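For fun, here’s the back-of-envelope version of that “popular science” fact – the neuron count is a commonly quoted rough figure, and the combinatorics are mine, not anything from the lecture:

n <- 8.6e10                        # rough estimate of neurons in a human brain
pairs <- n * (n - 1) / 2           # possible pairwise connections: ~3.7e21
log10_wirings <- pairs * log10(2)  # log10 of the number of distinct wiring patterns (each connection on or off)
pairs
log10_wirings                      # ~1.1e21, i.e. 10^(1.1e21) patterns, versus ~10^80 atoms in the observable universe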

2) Let us say that a true AGI is possible. If that’s the case then it would be possible in other periods of the universe. We’ve only been around for a blink of an eye. If other civilizations were around for longer blinks, they too could have developed AGI. If we further imbue the AGI with characteristics that we might consider “intelligent”, one might be that it would seek to continue to exist.

Therefore it seems reasonable that an AGI with some degree of autonomy / agency would try to ensure its continued survival. A simple way to do this is to build redundancy. Driven by the changing nature of the universe, such thinking would most likely result in ever increasing (data-center / continent / planet / solar system / galaxy) levels of redundancy.

If such a system could reach a point of being self-sustaining, its possibility for growth would be limited only by time. Over billions of years (the universe is roughly 13 billion years old; earth, if we assume it is an average planet for intelligent life, has been around for ~4.5 billion – there are billions of years to play with!) it could spread itself pretty widely.
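A rough sanity check on “pretty widely” (my numbers, nothing rigorous): even at a sluggish fraction of light speed, crossing the galaxy is a rounding error on those timescales.

galaxy_diameter_ly <- 1e5                  # Milky Way is roughly 100,000 light years across
speed <- 0.01                              # assume a pessimistic 1% of light speed
crossing_time_yr <- galaxy_diameter_ly / speed
crossing_time_yr                           # ~1e7 years, against billions of years available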

And yet, we don’t see it. We don’t see any sign of it.

There are all sorts of reasons (see Fermi Paradox, etc) why this might be the case. Or perhaps it would be smart not to be seen? Maybe a suitably smart AGI works out how to communicate via quantum entanglement / spooky action at a distance, leading to weird discoveries of the quantum realm. Perhaps entanglement works because the distances between the particles aren’t large in higher dimensions, dimensions with which we don’t know how to interact? Being smart enough over a long enough period of time might let you leave behind the limits of our current level of understanding of spacetime.

At the end of such flights of fantasy (putting the fiction into the science of sci-fi!) we’re back in a universe that is unfathomably large and empty. Aliens or intelligent machines? Not much has changed: we’re isolated in time and space.

I’ve no idea why the Fermi Paradox interests me so much. The more i think about it, the more likely it seems that there are two fundamental truths: i) c is the law; ii) the universe is as big as it is old, and it’s getting bigger at an ever increasing rate!

In which case, yes, there probably is life all over the place, but it’s still unlikely enough that it doesn’t occur in clumps very often. Out here on our average planet, in our average arm of an average galaxy, in an average supercluster, in an average area of the universe, it could be very lonely!

[Thanks to Sven for listening to an early version of these thoughts. They are no doubt embarrassingly simplistic. Unfortunately i don’t really have the time / motivation to go back to school and get to a place of serious study of the details… and, er, get quickly out of my depth!]

A (Chaotic) Reffective Update

Things got a little chaotic for a while. Now that the number of active and (until recently) new infections is much lower, any outbreaks (like the recent Tönnies case in Gütersloh) have a dramatic effect on R.

[There was a paper showing how the sensitivity of R increased as case numbers got lower, how small changes had large impacts… but now i can’t find it. Will update if i come across it again!]

Makes sense, and didn’t seem to bother anyone. The overall trend in Germany is still positive. Locally there are some issues (Neukölln, Berlin, etc) but it looks mostly under control.

Here in Hamburg, over the last month or so, the average is 20 – 30 cases every 7 days. Not great. Not terrible.
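To put some numbers on that sensitivity point, here’s a toy calculation – a naive ratio-of-new-cases estimator, not the RKI nowcasting method, and the outbreak size is invented – showing why a single cluster moves the needle so much at low case counts:

# naive R estimate: new cases in this generation interval / new cases in the previous one
naive_R <- function(prev, curr) curr / prev

naive_R(30, 30)           # steady state at Hamburg-ish weekly counts: 1.0
naive_R(30, 30 + 20)      # add one 20-case outbreak: ~1.67
naive_R(3000, 3000 + 20)  # the same outbreak at spring-peak counts: ~1.01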

Unscrupulous DM-SMR Shenanigans

At the end of last year i put together a Synology NAS containing Western Digital Red (NAS) drives. My goal was to pull all of the data scattered across multiple aging machines and external drives into one place. All of that worked out just fine.

Earlier this year a “scandal” broke where WD was found to have started shipping DM-SMR drives in part of a lineup where CMR was expected. In most cases this would be invisible to the user. However, in use cases such as NAS, certain operations would degenerate and become stupidly slow.

The original table showed that drives smaller than 8TB were now being shipped as SMR.

Not good – my new drives were 4TB, right in the middle of the bad range. An additional table showed the SKUs of the drives which were affected.

Hmm. That is not the SKU that appears on my invoice. The parts supplied are WD40EFRX, perhaps i got lucky? Having pulled the drives from the NAS to check, it seems that i did indeed get lucky! There is a good write-up and extensive benchmark on Serve The Home which compares the performance of WD40EFAX and WD40EFRX labelled drives. Wasn’t looking forward to fighting the good fight with WD over having been mis-sold.

And that is the point here – for most cases the performance of SMR and CMR drives is indistinguishable; it’s only when you go to rebuild an array, swap out a bad drive, create a hot spare, etc. that you start to have issues. For drives that are explicitly sold for use in a NAS this is an unacceptable ‘bait-and-switch’.

It seems likely that WD will be forced to replace the drives that were mis-sold, but the amount of time and effort they have put into playing down their deception is likely to cost them a lot more in the long run.

An R(effective) Update

First time we’ve been above one for a while. The numbers have been rather odd lately – several days of reporting errors. It’s possible that this is a reflection of that, but the change in R for the 7-day average is harder to explain away.

Edit: today R(effective) for Germany went up again, and currently stands at 1.20. Not good news. This has, imo, been on the cards for a while. During the initial phase of the lockdown the number of active cases (total number of infected minus recovered minus dead) was declining fairly linearly. You can see that clearly here:

(Been too busy / lazy to plot it out myself. The above is from the worldometers site, which is a different dataset than the RKI’s, but good enough for the purposes of this discussion.)

You can see that from around the second week of May the slope of the graph changes and starts to flatten out. This indicates that new infections are now being detected at the same rate as people are recovering. In recent days things have been close enough that small errors in reporting have seen the first upticks in active cases since shortly after the peak. Consequently it should come as no surprise that the R-effective number would be around 1. The total number (around 8500) is still declining, but much more slowly.
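A minimal sketch of that bookkeeping (the daily figures are invented for illustration; only the ~8500 starting point is from the actual data) shows why the curve flattens and why small reporting errors now show up as upticks:

active <- 8500                      # roughly the current number of active cases
new_per_day <- 350                  # invented: new infections detected per day
recovered_per_day <- 360            # invented: recoveries per day, barely outpacing new infections
for (day in 1:14) {
  noise <- sample(-30:30, 1)        # small daily reporting errors
  active <- active + new_per_day - recovered_per_day + noise
}
active                              # drifts down very slowly, and can tick up on a noisy day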

What really rankles is that had Germany stayed on its existing path for just a few more weeks, the number of infections would have been down in the hundreds by now, certainly at a level where Track & Trace™ would have been a sustainable strategy.

It’s not obvious what happens now. There is talk of the degree of transmission being much lower outside (makes sense just with dispersion) and that warmer weather also helps. Whether this is enough to keep a lid on things until autumn is unclear. All of the cluster cases seem to be churches or illegal indoor gatherings, which suggests that if people don’t congregate indoors… there is a little hope.

Guess we’ll see… fingers-crossed.

A Little More R in R

Couple of things were bothering me:

  • the key wasn’t really a key, it was a subtitle
  • the title needed a date
  • data wasn’t getting automatically downloaded
  • column naming was a mess
  • structure / formatting of the ggplot was inconsistent

Sunday morning is the obvious time to fix such issues! Below is the new plot, with a key, built from data pulled from RKI.

R3

And here is my updated script:

library(readr)
library(readxl)
library(ggplot2)

# Pull the latest nowcasting spreadsheet directly from RKI
download.file("https://www.rki.de/DE/Content/InfAZ/N/Neuartiges_Coronavirus/Projekte_RKI/Nowcasting_Zahlen.xlsx?__blob=publicationFile", "Nowcasting_Zahlen.xlsx")
nz_data <- read_excel("Nowcasting_Zahlen.xlsx", sheet = "Nowcast_R")

# Replace the long German column headers with short, consistent names
names(nz_data) <- c("date","new","new_under","new_over","new2", "new2_under", "new2_over", "R", "R_under", "R_over", "R7", "R7_under", "R7_over")

# 4-day and 7-day R estimates, each with its confidence band
g <- ggplot(data = nz_data)
g <- g + geom_line(mapping = aes(x = date, y = R, color = "4 day"))
g <- g + geom_ribbon(mapping = aes(x = date, y = R, ymin = R_under, ymax = R_over), alpha = 0.3)
g <- g + geom_line(mapping = aes(x = date, y = R7, color = "7 day"))
g <- g + geom_ribbon(mapping = aes(x = date, y = R7, ymin = R7_under, ymax = R7_over), alpha = 0.5)

# Title with today's date, axis labels, and a proper key
g <- g + ggtitle(label = sprintf("R-effective for Germany (%s)", format(Sys.Date(), format = "%b %d %Y")))
g <- g + ylab("R") + xlab("Date")
g <- g + scale_color_manual(values = c('4 day' = 'firebrick', '7 day' = 'darkblue'))
g <- g + labs(color = 'Average')
g
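Not part of the script above, but if you want the plot written to a file rather than just displayed, ggplot2’s ggsave will do it (the filename here is just an example):

ggsave(sprintf("R_%s.png", Sys.Date()), plot = g, width = 9, height = 5)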

Now might be a good time to go outside and not think about this!