The Long, Slow Death of Facebook

    This is a post from my blog, which is (mostly) no longer available online. This page has been preserved because it was linked to from somewhere, or got regular search hits, and therefore may be useful to somebody.

    “Facebook has a big problem”, the tech media breathlessly cries. Despite using it every day, I’m not a fan of Facebook, and so am drawn to these articles like a moth to a flame. Let’s all enjoy guilt-free schadenfreude at the expense of a billion-dollar business! So, what’s Facebook’s problem this week? People are sharing more web pages and news stories, but fewer “personal stories”—plain status updates that relate to their lives.

    A while back I complained of a slightly different problem: a lack of customisability of the news feed:

    Does anyone know if there are secret Facebook settings to customise the news feed? Lately it’s been 90% stuff I don’t care about:

    • $friend liked $never-heard-of-you’s photo
    • $friend shared $clickbait-article
    • $friend is going to $event-miles-away

    All I really want to see is real status updates!

    In essence, I was fed up of scrolling past a wall of this every day:

    It turns out that Facebook’s controls for the news feed are pretty terrible. If a friend of mine comments on a non-friend’s post, “likes” it, or worst of all “reacts to” it, that’s automatically considered newsworthy for me. Facebook offers no way to customise the feed to remove these kinds of posts.

    You can, however, choose to hide all posts from certain people, including those not on your friends list. So based on the advice I received, I started “hiding all from” everyone I didn’t recognise who appeared in my news feed.

    I’ve done this almost every day for the last couple of weeks, and in a way, it has been very successful. Almost all the strangers’ profile pic changes and distant events have gone, there are fewer clickbait posts and memes, and mercifully almost no Minions at all.

    But what’s left?

    Not much.

    The media was right, at least as it pertains to my Facebook friends. What remains after you’ve removed all the crap is real status updates—from about five people. Out of 200-odd friends, very few are actually posting status updates and pictures. Mostly of their kids, because I’ve reached that age. The rest of my friends either largely share stuff I don’t care about, so I don’t see them any more, or they post so rarely that they’re drowned out by the wall of baby photos.

    Although Facebook was our LiveJournal replacement, the place we went to stay in touch with our friends’ lives once we left university for our far-flung pockets of adulthood, it looks like for us that age of constant sharing may be on the decline.

    I’m not sure if I will be happy or sad to see it go.

    Blast From the Past: Dragon’s Claw’s Chromatic Skill System


    Back in the dim and distant past of my school days, Dreaming Awake was called “Dragon’s Claw” and was going to be a video game rather than a book. As far as I can tell from trawling the Internet Archive, not much was posted about it online, but for some reason today I remembered the design work we did on its skill system.

    To my knowledge no game since has implemented something like this — probably because it’s not a particularly great idea — but it has a certain elegance to it so I thought it worth documenting.

    I think we called the skill system “Chromatic”, or maybe “Prismatic”. Something like that. It was based on the Hue, Saturation and Luminance method of specifying colours on a video screen.


    If you’ve ever used a paint program on a computer, you’ve almost certainly encountered this style of colour picker before. There are three axes to it: Hue, which specifies the colour, Saturation, which specifies how ‘colourful’ (as opposed to grey) the colour is, and Luminance, which specifies the shade of the colour from black to white. (Some systems use Value instead of Luminance; the difference is somewhat technical.)

    In Dragon’s Claw’s system, the Hue of the colour represents the elemental association of the skill, along a continuum. So for example, Fire-based skills have a red hue (approximately 0 on a 0-255 scale), while Water-based skills have a blue hue (approximately 170). Something like the diagram below — there were more elements to fill up the remaining space, but I forget them now.

    The Saturation of the colour represents the transition between physical abilities and magical abilities, with the physical ones being less colourful and the magical ones more so. For example, the magic spell “Fireball” might be bright red, while “Flaming Sword” is still the same hue but more physical, so has a lower saturation.

    Luminance is a continuum between white and black, and represents the balance between “good” and “evil” abilities.

    If I recall correctly, while abilities were balanced fairly well across the hue and saturation spectra, the majority of abilities clustered towards the centre of the luminance spectrum as the abilities themselves could rarely be said to be good or evil.

    Each character had an innate “colour”, which represented their central position within the three-axis ability spectra. At level 1, a character would be able to use an ability only if it was within 1 integer of their colour on each spectrum. For example, if Rosa has a colour of Hue 0, Saturation 255, Luminance 128 (primary red), she can use any ability with Hue 255-1 (wrapping around the hue circle), Saturation 254-255, and Luminance 127-129. This is a pretty restricted set, although each character would have been designed such that they started with at least one ability.

    As each character increases in level, the “sphere” around their innate colour expands, and if any new abilities fall within that sphere, the character learns that ability. By level 100, each character can use a sizeable proportion of available abilities, but never the complete set.

    As the character uses abilities, their innate colour changes by a fraction towards the colour of the ability used. In this way, characters’ ability sets can be customised however the player wants, simply by practicing abilities in the right “direction”.
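The matching rule above can be sketched in code. This is a minimal illustration under the 0-255 assumption; the hue axis wraps around the colour wheel, and the hashes and method names (`can_use?`, `hue_distance`) are my own, not anything from the original design documents.

```ruby
# Minimal sketch of the Chromatic ability check. Axes run 0-255;
# hue wraps around the colour wheel, saturation and luminance do not.
# Character/ability hashes and method names are illustrative.

def hue_distance(a, b)
  d = (a - b).abs % 256
  [d, 256 - d].min    # shortest way around the wheel
end

def can_use?(character, ability, level)
  hue_distance(character[:hue], ability[:hue]) <= level &&
    (character[:sat] - ability[:sat]).abs <= level &&
    (character[:lum] - ability[:lum]).abs <= level
end

rosa     = { hue: 0, sat: 255, lum: 128 }  # primary red
fireball = { hue: 1, sat: 254, lum: 128 }  # bright red, slightly less saturated

can_use?(rosa, fireball, 1)  # => true
```

At level 1 this reproduces the Rosa example: Fireball is within one step on every axis, while a Water ability around hue 170 would fail the check until her sphere grows or her innate colour drifts towards blue.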


    I always thought this was a pretty neat way of determining which characters can use which abilities, and the fact that characters get better at certain types of ability simply by practicing them (or similar ones) is definitely appealing.

    The concept of skills as a continuum also allowed for interesting benefits from the use of weapons. Swords, for example, might increase damage dealt by low-saturation (physical) skills, while staves might increase damage for high-saturation skills. A mace might benefit high-luminance (good) skills, while a dagger might benefit low-luminance (evil) skills.


    There are a couple of big disadvantages, not least that we never found a good way of representing a 3D cube of colours to the player — we were limited by the ability of 2D screens and human eyes to see all three axes at the same time.

    Certain areas of the spectrum were also rather sparse — certain hues, saturations and luminance ranges had a lot of abilities in them, while others had fewer, and characters with innate colours in those more sparse ranges would find themselves without as wide a choice of abilities as others.

    The system also scales badly with level as originally designed. Each increase in level expanded the “sphere” of a character’s potential abilities along three axes at once, so an increase of n levels results in a volume increase proportional to n³. The result is that the rate at which characters pick up abilities accelerates rapidly at higher levels. A logarithmic increase in sphere radius would have been a better idea to control this.
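To put rough numbers on that, treat the reachable region as a cube of side 2r + 1 for radius r (a simplification, since hue wraps around); the volume grows cubically. The scale factor in the logarithmic alternative below is an arbitrary illustration, not a tuned value.

```ruby
# Volume of the reachable "cube" of abilities at a given radius.
def reachable_volume(radius)
  (2 * radius + 1)**3
end

reachable_volume(1)   # => 27
reachable_volume(50)  # => 1_030_301

# A logarithmic radius grows the sphere quickly at low levels
# and slowly later, keeping the ability count under control.
def log_radius(level, scale: 25)
  (scale * Math.log(level + 1)).round
end
```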

    Alongside the difficulty in displaying the colour cube, it would also be difficult for players to discover the locations (i.e. colours) of new abilities that they may want. A lot of work would be needed in suggesting to the player which abilities they might want to practice, and what new things they would learn if they did.

    Exploiting Condé Nast Magazine Subscriptions (for fun and profit)


    I have unopened subscription copies of Wired UK piling up in the hallway, so this evening I decided to try cancelling my subscription. It looks like you can only do that by email or over the phone, but for other subscription changes, such as change of address, parent company Condé Nast offers a very helpful website. Rather too helpful.

    The login page usefully notes that “you can find your customer number on the wrapper your magazine comes in.” And indeed it does — strip the letters off the beginning of that long number (as it helpfully doesn’t tell you to) and that’s the customer number.

    The signup form asks for the customer number “(if known)”, leading me to suspect that even that may not be necessary, and all you actually need to know to manage someone’s magazine subscription is their name and address.

    I tested this with my own details. Signing up sent me an email in which high-quality HTML character code skills are demonstrated.

    After fixing the URL and pasting it into a browser, then logging in with my new details, I was given full control of my subscription account. This allows me to see my subscriptions, and to change the address to which they are sent.

    So there you have it — non-intrusively viewing the outside of any Condé Nast magazine subscription packet (possibly UK-only) gives you the ability to view all the recipient’s subscriptions, and redirect them to the address of your choice!

    Optimising for Download Size


    If, by some vanishingly small probability, you are a regular visitor to this website, you may have noticed a few subtle changes over the past few weeks. Partly due to trying to access the site from a slow mobile connection, and partly thanks to a series of tweets courtesy of @baconmeteor, I got to wondering how much data is required to load a simple page on my own website.

    The answer, apparently, is just over a quarter of a megabyte.

    Not a tremendous amount in this world of 8MB rants about how web pages are too big nowadays, but still unnecessarily large given that it contains only about two kilobytes of useful text and hyperlinks. After 65ms (10% of total load time) and 1.59kB (0.5% of total data size), the content and structure of the page is done — the remaining 90% of time and 99.5% of data are largely useless.

    Over the past few days I have made a few changes to improve the performance of the site.

    Changes Made

    • Web fonts have been removed. I was using three: one for body text, one for italic body text, and one for the menu. Together they comprised over 50% of the data that browsers were expected to download, and although I do like those fonts, it’s dubious for me to impose my font choices on others, let alone make them download 100kB for the privilege. If you happen to have Open Sans and ETBembo on your system they’ll be used, otherwise the website will appear in something reasonably close.
    • Font Awesome JavaScript is gone. I used the excellent Font Awesome to generate the menu icons on the site, but it pulls in 63kB of JavaScript to support 500+ icons, when all I use it for is static rendering of 12 of them. All major browsers now support inline SVG, so I took the relevant icons from the Font Awesome SVG set and used them instead. Aside from the Juvia commenting system and MathJax, there is no longer any JavaScript on the website.
    • Reduced image size. Although my inner egotist is quite fond of people being able to put a face to the name on all my stuff, the 28kB JPEG could be compressed to 6kB with no discernible loss of quality.

    The result has been a significant reduction in download size and load speed — the same page is now served in less than half the time and with less than 10% of the data usage.

    One extra addition was to explicitly set cache expiry times in the HTTP headers for the website and associated files. Since the CSS and image files are unlikely to change, and in any case it wouldn’t matter much if a user used old ones, setting the cache timeout to a week and a month for various file types has helped speed up loading of subsequent pages after the first. I use the Apache server’s mod_expires module, which has some example config here.
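For reference, a minimal mod_expires configuration along these lines might look like the following; the content types and cache periods here are illustrative choices rather than this site’s exact settings.

```apache
# Send far-future Expires headers for rarely-changing assets.
<IfModule mod_expires.c>
  ExpiresActive On
  ExpiresByType text/css "access plus 1 week"
  ExpiresByType image/jpeg "access plus 1 month"
  ExpiresByType image/png "access plus 1 month"
</IfModule>
```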

    Changes Not Made

    A couple of changes I considered, but eventually avoided making, were minifying the HTML and CSS of the site.

    The Octopress 3 minify-html gem does what it says, but unfortunately it increased the build time of the website by 150% — from around two minutes to over five. I already find the build time annoyingly slow on my mid-range laptop, so have decided to skip this one.

    Another benefit would have resulted from minifying the CSS used for the site. This proved to be significantly more complex, involving configuration of the very capable jekyll-asset-pipeline module. However, the configuration seemed difficult for what would have been at most a 1kB saving, so I avoided this as well.

    Useful Tools

    Two tools were particularly useful in optimising the site for download size:

    • Google PageSpeed Insights identifies speed and user experience issues, and provides a simple display of how the site appears on mobile and desktop. In the case of the image & CSS optimisations it suggests, it automatically performs the optimisation and allows you to download the result.
    • The Pingdom Website Speed Test was also useful as it picks up some issues that the Google tool doesn’t, such as the lack of explicitly-set expiry times on certain files.

    I hope this post has offered some useful hints if you are seeking to “minify” your own website, and optimise it for the download size of each page.

    Android Without Google


    For several years, I’ve been considering whether I could—and should—dispose of my Google account. Since I wrote the linked post back in 2011, my use of Google services has declined anyway, and I no longer use GMail, Google+ or Google Calendar. At the same time, it has become apparent that users are at the whim of Google’s decision to close unprofitable services (even beloved ones like Reader), and to force us into using others against our will. “Don’t Be Evil” is starting to look hilariously naïve.

    The last hold-out in my desire to dump Google is my Android phone. Without a Google account and the closed-source “Google Play Services” blob that sits at the core of an Android phone, the experience is diminished significantly. While I like my phone’s hardware, I am not fond of the Google integration that I no longer fully trust. So, for the last few months I have been experimenting with running Android without Google.

    Here’s what I’ve learned.

    I do Depend on Some Google Apps after all.

    Aside from what’s provided in my AOSP-based CyanogenMod base software (much of which was written by Google, but is at least open source), I have two Google apps left on my phone—Maps and YouTube.

    Google Maps

    Maps features live traffic updates, a key feature when driving long distances to see friends and family. Although other apps have voice guided navigation (I also have OsmAnd installed), I’ve been unable to find a free live traffic offering that matches Google’s.

    YouTube

    YouTube doesn’t seem to have a decent open source client for Android, either due to legal problems or simply because there’s little desire for one. To cope with my son’s regular desire to watch the music videos for obscure pop songs on my phone, I’ve had to keep this installed too.

    EDIT: Check out NewPipe for a YouTube replacement!

    That means Google Play Services stays.

    Neither Maps nor YouTube works without Google Play Services installed, and so far that’s meant I have had to keep it installed on my phone.

    However, my main trust issues with Google stem from their tracking and the amount of data they (want to) store about me. I can still prevent this another way—by removing the Google account from my phone. Play Services, Maps and YouTube continue working correctly, but my phone-based activities are no longer reported to Google in a way that connects them to me, and I can move one step closer to deleting my account.

    You can Just About Survive on Open Source Apps


    My replacement for the Play Store is F-Droid, a repository for open source apps, and in the spirit of trying to copy my laptop’s (mostly) open source software, I have decided to use it almost exclusively.

    There are other app stores available, such as Amazon’s, but the “ditching Google” exercise is about trust, and I’m not sure I trust Amazon any more than Google when it comes to their ability to monitor my phone usage and try to sell me things. Aptoide is another possibility, but its user-hosted repositories are full of pirated software and potential malware; once again the trust is lacking.


    As far as standard productivity apps go, there has either been an open source equivalent that fulfilled all my needs, or I was already using an open source app anyway and could update it direct from F-Droid.

    • K9 Mail was my default mail client anyway. It has a few failings, but in my opinion is still the best mail client on Android.
    • Firefox was my default browser, and is open source as you would expect.
    • Silence replaced TextSecure. It’s a fork that is identical in every way bar the name.
    • WeeChat’s Android client is open source.
    • VX ConnectBot is open source.
    • DAVDroid replaced CalDAV-sync and CardDAV-sync. My self-signed SSL certificate had to be added to Android itself manually, but aside from that quirk this one app replaced two.
    • For security, Google Authenticator and OpenKeyChain are open source apps that I was already using, and KeePassDroid replaced Keepass2Android with only a few little irritations.
    • Ghost Commander replaced File Explorer and Turbo FTP Client together—I find its interface annoying to use, but it seems to be the only open source file manager with SFTP and certificate-based login support.

    Web Stuff

    Here again, I didn’t find much to change from my original apps, largely because I was using non-standard software anyway.

    • OnoSendai is written by a friend of mine, and is my normal Twitter client. It has its own F-Droid repository.
    • FaceSlim is a simple wrapper around the Facebook mobile website. Again, I have used this in preference to the permission-hungry “real” Facebook app for a long time.
    • I was already using ownCloud and ownCloud News Reader for file sync and RSS reading; both are open source.
    • Slide replaced Reddit Sync as my go-to Reddit client. There’s another service I ought to ditch one of these days.

    Everything else, I already used my browser for on Android.


    Games

    Games have been the biggest difference between my phone before ditching the Play Store and my phone now. The number of games for Android—even closed source ones—outside of the Play Store is very limited. F-Droid has a variety of simple puzzle and arcade games, while past Android Humble Bundles have provided some high-quality indie titles as downloadable APKs, but on the whole the choice is bleak.

    On the plus side, the exercise has given me a great excuse to dump a number of potentially evil mobile games that I seem to have picked up since my last purge.

    Castle Clash

    How many hours have I wasted on improving pixel people and harvesting ephemeral bits?

    The Gaps

    There are a bunch of applications that I still use because there is no proper open source replacement. oandbackup is not yet a decent replacement for Titanium Backup, and try as I might, I cannot get my VPN server working in “OpenVPN for Android” while it works fine in the closed source “OpenVPN Connect”.

    And there are the proprietary services that will likely never have open source clients that offer the full functionality—Spotify, Netflix, the apps for my mobile ISP and my bank. This is where the main problem lies: I can continue using these apps, but they no longer receive security updates via the Play Store.


    The vast majority of apps on my phone either come preinstalled in AOSP or CyanogenMod, or can be found in the F-Droid repository and successfully kept up to date there. It’s workable as an entirely open source platform (bar the separate issue of device drivers).

    But dragging a few “essential” closed-source apps into the situation is a lot more difficult than on a desktop operating system. On my laptop I can install Spotify direct from the company’s website, and even add a repository to my package manager to get automated updates. But on Android, the Play Store dominates and the majority of app writers do not allow publishing anywhere else.

    This is an unintentional lock-in that prevents users from having a choice of software sources.

    It’s an almost useless fight, but I feel that we ought to continue fighting against the new operating systems’ desire to control our purchasing and constrain what we can and cannot do with our devices.

    The future’s good for me—I’m getting by without Google’s tracking features on my phone, which puts me in a good position for a potential switch to Ubuntu Touch, Sailfish or another phone OS that respects its users’ privacy. But not everyone would find it so easy to do without the proprietary blob at the middle of Android, and that’s worrying for the future of the general purpose computers we all could have in our pockets.

    Migrating to Octopress 3


    Those of you who follow me on Twitter may have seen me dither about whether to re-style my website after the very appealing (to me) Tufte CSS. The sidenotes with their wide bar didn’t work particularly well with my blog format, but I’ve taken on some of the major style elements, and unless you’re reading this via RSS, you can see the results in front of you right now.

    In doing so, I decided to update the old Octopress code on which many of my websites are based. This is a long, complicated process of “merge hell” where I try to keep my own customisations to core files, theme mods, new themes, and odd plugins, while making sure nothing conflicts with the changes that have taken place within Octopress itself. With eight different Octopress sites, each with their own oddities, this was a daunting task.

    While looking for the latest minor version of Octopress 2, I discovered that Octopress 3 was released months ago… and it changes everything.

    All the problems I’ve had with Octopress over the years—updating and merging using git, managing multiple sites as branches, pulling in themes as submodules—I’d always put down to me just “not getting it”. Lots of people use this, so my problems are due to my inadequacies as a programmer, surely?

    But no, all along the developer has had the exact same problems with it. Octopress 3 reworks the whole thing, to do less rather than more, and it makes so much more sense. It’s now a gem that helps you set up a Jekyll site in a certain way, with some extra tools to help manage posts and deploy the site. Your source is no longer mashed together with Octopress’ source in the same repository, and it’s sufficiently out of the way that Jekyll’s “collections” now work properly.

    I’ve had to faff with a few links here and there to manage the eight different sites as collections under the same site (so apologies if you find any dead links), but the whole thing should be a lot more manageable!

    Vivarium Automation: Requirements and Component Spec


    It’s a little over a month until we are getting our first pet – a crested gecko. Joseph has decided that if it’s female it will be called “Scarlet”, and Eric has decided that if it’s male it will be called “Rimbaud” after the surrealist poet, partially because it is also a homonym of “Rambo”. I almost hope we get a female as it will be easier to explain.

    In the meantime, we are getting our vivarium set up ready for our pet. We have just about everything we need, but managing the environment is a manual process — turning the lights on in the morning and off in the evening; maintaining heat and humidity.


    Vivarium shown here with simulated occupant.

    This is crying out to be an electronics project, so I’m going to make it one! In this post I’ve laid out my initial requirements and listed some suggested components. I’ll probably do one or two more covering the actual hardware build and software when the components arrive.


    My requirements for the automated vivarium system are that it must:

    1. Automatically turn the 12V DC LED light panel on and off at a defined schedule
    2. Monitor temperature and humidity inside the vivarium
    3. Automatically control the 240V AC 10W heat mat to keep the temperature within defined bounds
    4. Send email alerts if temperature and humidity exceed the defined bounds
    5. Take regular photos of the inside of the vivarium
    6. Regularly post photo, temperature, humidity and status information to another computer for display on a website
    7. Fit in a 450x80mm space next to the vivarium (except components that must go inside)
    8. Be powered from a household 240V AC supply
    9. Not expose 240V AC to the probing fingers of children.

    Initial Design

    The requirements to operate the lights at specific times of day (requiring a proper clock), to send emails, to use a camera and to send files to a computer all push the design towards one including a “proper” small form factor computer rather than a basic microcontroller. Due to my familiarity with the hardware I have chosen a Raspberry Pi for this system. The Model A should be sufficient for the system’s limited requirements.

    The Raspberry Pi’s official camera modules are easy to use and have good performance due to dedicated processing on the Pi’s GPU. I have chosen the “NoIR” camera, which lacks the IR filter of the standard camera, to improve visibility of the gecko at night. No IR illuminator is proposed as this may interfere with the lizard’s sense of time or temperature regulation.

    The proposed AM2315 thermometer and hygrometer module is comparatively expensive, but comes inside a tough enclosure with a wall mount and uses the standard I2C protocol, compared to the proprietary bit-banging protocols of the cheap sensors.

    Relays will be used to switch the lights and heat mat power on and off. A breadboard will be mounted to a Raspberry Pi case to keep the hardware neat while allowing for easy extension in future.
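The control and alerting logic implied by requirements 2-4 might be sketched like this; the temperature and humidity bounds, method names and returned action symbols are all my illustrative assumptions, not values from the actual build.

```ruby
# Sketch of the bounds-checking logic for the vivarium controller.
# Bounds are illustrative, not real crested gecko husbandry advice.
TEMP_RANGE  = 22.0..28.0   # degrees C
HUMID_RANGE = 50.0..80.0   # percent relative humidity

# Returns a list of actions the main loop should take this cycle.
def check_environment(temp, humidity)
  actions = []
  actions << :heat_on  if temp < TEMP_RANGE.min
  actions << :heat_off if temp > TEMP_RANGE.max
  unless TEMP_RANGE.cover?(temp) && HUMID_RANGE.cover?(humidity)
    actions << :send_alert
  end
  actions
end

check_environment(20.0, 60.0)  # => [:heat_on, :send_alert]
check_environment(25.0, 60.0)  # => []
```

The real loop would read the AM2315, drive the heat-mat relay according to the returned actions, and email whenever `:send_alert` appears.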

    Component List

    Here’s my list of the components, along with links to buy them. All but one are available on Amazon in the UK; the thermometer/hygrometer seems to be an Adafruit special and will have to be imported from the US.

    Component                  Choice                       Price / GBP  Link
    Computer                   Raspberry Pi Model A+        18.00        Amazon UK
    Wifi Dongle                Ralink RT5370                4.71         Amazon UK
    GPIO Breakout              Pi Cobbler                   10.00        Amazon UK
    SD Card                    Kingston 8GB                 4.00         Amazon UK
    Breadboard                 BB400                        1.15         Amazon UK
    Jumpers                    Generic                      1.07         Amazon UK
    Power Supply               Generic                      6.00         Amazon UK
    Case                       Model A Case                 4.49         Amazon UK
    Temp/Humid Sensor          AM2315                       19.97        Adafruit
    Camera                     Raspberry Pi NoIR            19.13        Amazon UK
    Suction cups               Generic                      3.57         Amazon UK
    Relay Board                Facilla 2-channel            1.13         Amazon UK
    Enclosure for mains relay  Generic black ABS 150x80x50  5.90         Amazon UK
    Enclosure glands           Generic M12x1.5              1.59         Amazon UK
    Total                                                   100.71

    Stay tuned for build photos, schematics and source code once all the components arrive!

    Update: It turns out the heat mat was bought with its own dedicated thermostat. With this in mind I’ve decided to ditch the timed control of the lights and use a standard mains plug timer instead. This will be easier for people to override if necessary, rather than depending on whatever software interface I provide.

    Since the system is therefore not controlling anything I can ditch the relay board and the requirement to use a proper glanded enclosure to protect the 240V AC switching relay. It will still take photos, monitor temperature and humidity, display them on the web, and email on important events.

    Preparing to Leave Heroku


    An email today announced a beta test of some new features that Heroku are “excited” to introduce. New service levels are available that include a “hobby” tier that does… exactly what the old “free” tier used to do. For $7 per month per app!

    The free tier has now been downgraded so that it must “sleep” — i.e. be unavailable — for at least six hours a day.

    As a long-term abuser of Heroku’s free tier, I’ve enjoyed continuous uptime for all my sites courtesy of Heroku. A lot of sites.

    Heroku Apps

    All of which I now have to slowly migrate off Heroku as freeloaders like me are no longer welcome!

    The static HTML sites (including the Octopress ones) and the PHP sites have in the main already been migrated back to my own web server over the last few hours, and I’ll continue to monitor usage statistics over the next few days to ensure it can cope with the extra load.

    Some sites using Ruby, and others that depend on HTTPS, will be a little more difficult to move. Certain sites, such as the SuccessWhale API, that require high bandwidth and good uptime may stay on Heroku and move up to a paid tier if required.

    Hopefully none of this will impact users of the sites, but please let me know if you find a site or application inaccessible or suffering from poor performance.

    "Archive by Year" Aside for Octopress


    (Note: This code is designed for Octopress 2. My website now uses Octopress 3 — the Jekyll plugin works just the same, but Octopress 3 sites can use the jekyll-archives gem to generate proper archive-by-year pages, so should not need the bookmark ‘hack’ shown in the first code block.)

    One thing that’s annoyed me since migrating my website from WordPress to Octopress years ago has been the lack of an “archive by year” widget for the sidebar. The WordPress widget that fulfills this function lists each month and year, with the number of posts in that month, and each one is a link to a page that shows all the posts from that month.

    As you may notice on the left-hand side of each page, I decided to recreate something similar in Octopress.

    There are a couple of differences between the WordPress implementation and my Octopress implementation:

    1. Octopress doesn’t generate pages that show all posts from a particular month (or year). It does generate an “archive” page with links to all posts in order, which is what I’ve used as a destination for each link.
    2. Partly as a result of this (and partly because I’ve been blogging far too long), I decided to stick with one link per year rather than one link per month.

    My first modification was to the “archives” page. To this I simply added a named <a> tag to each year title (the <a name="{{ year }}"></a> in the code below). This allows each year title to be used as a bookmark and linked to appropriately.


    ---
    layout: page
    title: Blog Archive
    footer: false
    ---
    <div id="blog-archives">
    {% for post in site.posts reverse %}
    {% capture this_year %}{{ post.date | date: "%Y" }}{% endcapture %}
    {% unless year == this_year %}
      {% assign year = this_year %}
      <h2><a name="{{ year }}"></a>{{ year }}</h2>
    {% endunless %}
      {% include archive_post.html %}
    {% endfor %}
    </div>

    The code that generates the widget (or “aside”, in Octopress parlance) is too complex to write in a single .html file using Liquid tags alone. I therefore implemented it by defining a new Liquid tag called archive, as follows.


    module Jekyll
      class ArchiveTag < Liquid::Tag
        def render(context)
          html = ""
          yearData = {}
          # Get range of years for which there are posts
          posts = context.registers[:site].posts
          firstYear = posts[0].date.year
          lastYear = posts[posts.size - 1].date.year
          # Build up a map of {year => number of posts that year}
          for year in firstYear..lastYear
            yearData[year] = posts.select { |post| post.date.year == year }.size
          end
          # Build the html items, newest year first
          yearData.sort.reverse_each { |year, numPosts|
            if numPosts > 0
              html << "<li class='post'><a href='/blog/archives##{year}'>#{year} (#{numPosts})</a></li>"
            end
          }
          # Write out the html
          html
        end
      end
    end

    Liquid::Template.register_tag('archive', Jekyll::ArchiveTag)

    The final piece of the puzzle is to create an aside to display the new tag, which is done simply as follows:


      <ul id="archive">
        {% archive %}
      </ul>

    Adding asides/archive.html to the default_asides section in Octopress’ _config.yml adds the new aside to each page.
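    For reference, the relevant _config.yml line ends up looking something like this (your existing list of asides will likely differ):

```yaml
# _config.yml: asides are rendered in the order listed
default_asides: [asides/archive.html, asides/recent_posts.html, asides/github.html, asides/twitter.html]
```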

    The end result is just like the one you can see in the sidebar of every page on this blog: a list of each year for which there are posts, in descending order, suffixed by the number of posts made that year. Each item in the list is a link to the main “archive” page, jumping straight to the bookmark for that year.
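    The counting-and-sorting logic at the heart of the tag can be exercised on its own. Here is a minimal standalone sketch, with a Struct standing in for Jekyll’s post objects and made-up dates:

```ruby
require 'date'

# Stand-in for Jekyll post objects: all we need is a date.
Post = Struct.new(:date)

# Posts in ascending date order, as Jekyll provides them.
posts = [
  Post.new(Date.new(2013, 12, 25)),
  Post.new(Date.new(2014, 1, 1)),
  Post.new(Date.new(2014, 6, 15)),
]

# Build {year => post count}, then emit list items newest-first.
year_counts = posts.group_by { |p| p.date.year }.transform_values(&:size)
html = ""
year_counts.sort.reverse_each do |year, n|
  html << "<li class='post'><a href='/blog/archives##{year}'>#{year} (#{n})</a></li>"
end
puts html
```

    Run against the sample data above, this prints a 2014 (2) item followed by a 2013 (1) item, matching what the aside renders in the sidebar.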

    This code is in the public domain so feel free to use it on your own blog and modify it however you like!

    The End of the Road for SuccessWhale’s Facebook Support?


    My SuccessWhale application has long supported both the Twitter and Facebook social networks, despite both networks’ relatively developer-hostile stances. The worst offender by far was Twitter, with its 100,000-user limit that has deliberately crippled many third-party clients in order to drive users to the official website and app, which make money for Twitter through adverts. While I was never under any delusion that SuccessWhale would be popular enough to reach 100,000 users, it’s not a nice thing to have hanging over your head as a developer.

    Facebook’s permissions policy, as I have ranted about before, also makes it difficult for third-party clients to deliver a useful service for their users. With the new requirement that apps migrate to API v2, they are adding the extra hassle of requiring all apps to be reviewed by Facebook staff. This isn’t a problem in itself — SuccessWhale has been through the somewhat scary process of manual review before, when it was added to the Firefox Marketplace.

    But Facebook has now snuck something extra into the notes for some of its permissions, each of which must be manually approved as part of the review process. The note appears on pretty much all the permissions that are fundamental to SuccessWhale, such as read_stream:

    Facebook dialog for read_stream permission

    Yep, this permission will be denied, as a matter of policy, to apps running on Android, iOS, web, desktop, and more.

    So, predictably, SuccessWhale failed its manual review and has been denied approval to use Facebook API v2.0 or above. As far as I can tell, that means that on May 1st all Facebook features of SuccessWhale will cease to function. Facebook, ever the proponent of the walled garden (a path down which Twitter has also ventured), has struck another blow for increased profits and user lock-in at the expense of the open web that SuccessWhale depends on.

    It’s a sad time for the web; the “web 2.0” era of mashups and free access to data is slipping away. And though Facebook’s change does not kill off SuccessWhale and its kin outright, the future does not look rosy for those of us who believe users should be free to access a service in whatever way they prefer.