Tag: Internet

    SuccessWhale.com Discontinued as of Today

    This is a post from my blog, which is (mostly) no longer available online. This page has been preserved because it was linked to from somewhere, or got regular search hits, and therefore may be useful to somebody.

    As far as I know, SuccessWhale is not being actively used by anyone any more, so I have chosen not to renew the domain name successwhale.com when it expires today. Like most of my past web-based projects, it will continue to live on at an onlydreaming.net subdomain, in this case sw.onlydreaming.net, but will not be actively maintained there.  As well as its graphical web interface, SuccessWhale also has a back-end API that used to run on a SuccessWhale subdomain. This has now moved to https://successwhale-api.herokuapp.com/. The OnoSendai Android client already uses this address for the API as of update 479, so you may need to update.

    Thank you to all the SuccessWhale users over the years!

    Migrating from Jekyll to WordPress

    The final, and most difficult, part of the plan to wind down some of the more complex stuff I do on the internet was the migration of this site from Jekyll and HashOver to WordPress. It’s a decision I took with some trepidation, as I well remember ditching my old WordPress site for Jekyll (via Octopress) four years ago and enjoying the speed and security it brought.

    However, the workflow is what killed it. The typical “By the Numbers” film review is a shared activity with friends around the TV, which doesn’t lend itself to sitting at a desk at the only computer of mine that can reasonably compile the Jekyll site. I switched to hosting the site on GitHub Pages and editing the pages myself in a browser window, but uploading and linking images was still a multi-step, non-WYSIWYG game of making sure the URLs were all right, followed by a three-minute compile stage where everyone was waiting to read the finished article and I had to explain why.

    Comments were worse still. I staggered between Juvia, a discontinued Rails application that I was out of my depth maintaining by myself; Disqus, which “just worked” but put visitors off commenting; and HashOver, where fighting spam involved finding offending new comments via SSH.

    Back on WordPress, things may be slower and security more of a concern, but comments are natively supported, I can drag images in, and preview posts on the fly. No compile stage!

    All in all the process took about 10 hours. If you’re contemplating a similar step, here are some useful hints, as unlike the reverse move, migrating in this direction seems to be a rare activity:

    1. This post was really useful, and RSS export/import does seem to be the best way of moving the main post data across. I moved my pages (20 of them) manually, and used the RSS method for my ~1000 posts.
    2. My Jekyll site had three levels of taxonomy – Collection, Category, and Tag. I believe it’s possible to create the same taxonomies in WordPress, but I didn’t bother. I moved one collection at a time, merged Jekyll’s categories and tags into WordPress’ tags, then Jekyll collections became WordPress categories.
    3. The RSS importer can’t import tags, only categories, so I imported everything as categories and used a “Categories to Tags converter” plugin to sort the mess out.
    4. The Disqus comment importer script from the page linked above worked well for importing a Disqus export, with the exception that Disqus comment and thread IDs are now greater than 2^32. The importer uses these as keys into maps, so I had to subtract arbitrary large numbers from the IDs (which PHP is perfectly happy to do) in order to make them usable as integer keys.
    5. I had a lot of files such as pictures in arbitrary locations which Jekyll is happy to deal with, but WordPress is not. I moved everything into “wp-content/uploads/” with some .htaccess redirects so that they can still be found.
    6. Recreating the Jekyll theme wasn’t too hard, although it took around half the total time. When I first moved away from WordPress its themes were a messy mystery to me, but with four years’ more experience, I can see the parallels between my Jekyll templates and WordPress’ ones, and the transition went very smoothly.
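    And for point 5, the .htaccess redirects can be as simple as a couple of mod_alias rules (the old paths here are illustrative; substitute wherever your Jekyll site kept its files):

```apache
# .htaccess in the site root: permanently redirect old Jekyll-era
# file locations into WordPress's uploads directory.
RedirectMatch 301 ^/images/(.*)$ /wp-content/uploads/$1
RedirectMatch 301 ^/files/(.*)$  /wp-content/uploads/$1
```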
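    The ID workaround in point 4 can be sketched as follows (the offset and IDs below are illustrative stand-ins rather than real Disqus values, and the helper name is mine):

```python
# Newer Disqus comment/thread IDs exceed 2^32, which broke the importer's
# use of them as integer map keys. Subtracting a fixed large offset brings
# them back into 32-bit range while keeping them unique.
ID_OFFSET = 5_000_000_000  # arbitrary large number below the smallest real ID

def shrink_id(disqus_id: int) -> int:
    """Map an oversized Disqus ID to a value that fits in 32 bits."""
    small = disqus_id - ID_OFFSET
    assert 0 <= small < 2**32, f"{disqus_id} still out of range after offset"
    return small

# Distinct IDs stay distinct after subtracting the same offset,
# so the shrunk values remain safe to use as map keys.
threads = {}
for raw_id, title in [(5_100_000_001, "first post"), (5_100_000_002, "second post")]:
    threads[shrink_id(raw_id)] = title
```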

    Although it’s been a few days of working late into the night, I’m happy to say it’s now done. Hopefully the blog will be easier to manage from here on.

    No compile stage!

    Farewell to Facebook

    My blogging history has not been lacking in posts where I consider deleting my Facebook profile. It’s been a common thread throughout that time that Facebook has its advantages (having become my sole practical means of contacting many old friends) and disadvantages (that it is a privacy-devouring monster). In the main, we have been willing to make a deal with the Devil in order to use the vast network of communication possibilities it opens up for us.

    After holding a Facebook account for 10 years, and apparently struggling with whether that was a good thing for at least six of them, today I deactivated my account—a temporary measure to see how it goes, before a potential full deletion in the near future.

    The last straw for Facebook was, as with the last straw for LinkedIn, a matter of privacy.

    Facebook’s “People You May Know” feature is often handy for finding friends-of-friends that you know. Occasionally it recommends someone you kind of know despite not having any common friends, and makes you wonder how the algorithm put two and two together. A little concerning sometimes, but not the end of the world.

    Today, Facebook’s recommendations went from “a little creepy” to “compromising friends’ private medical data”.

    I shan’t name any names, for obvious reasons, but I have a friend who is currently suffering from a particular medical problem. As part of their treatment, that friend has regular appointments with a medical professional, and in supporting my friend, I’ve previously been in contact with that person as well. The friend isn’t on Facebook at all, citing privacy concerns. I have not mentioned anything about them nor their problem on Facebook. And yet today, Facebook’s blissfully context-free recommendation algorithm started suggesting that I add that medical professional as a friend.


    As far as we can tell, what happened is this:

    1. I exchanged an email with the person, via my GMail address.
    2. I have GMail set to remember people I email by automatically adding them to my address book.
    3. I have an Android phone, which automatically syncs my Google contacts, so their email address became stored on my phone.
    4. I have had, in the past, the Facebook and Messenger apps on my phone. I had granted access to my contacts, so they could set contacts’ photos to their Facebook profile picture.
    5. The Facebook and/or Messenger apps hoovered up my contacts and sent them to their server.
    6. The person’s email address matched the one they used for their Facebook account, and so Facebook knew that we had some kind of connection.

    Now, it’s not entirely Facebook’s fault. Some of the fault lies with Google, and a not inconsiderable portion of the fault lies with me for not checking apps’ permissions and privacy policies. However, it’s the Facebook part of the puzzle that made the whole thing creepy, and so, that’s the part that has to go.

    So farewell, Facebook. You’ve made staying in touch with a lot of friends much easier over the last decade, and for that I’m grateful. And I’ve always known that the price for that was that you’d play fast and loose with my own privacy. But when you started to infringe on the privacy—and potentially the confidential medical information—of one of my most vulnerable friends, you crossed the line.

    I’m done.

    The Open Source Disadvantage

    Three years ago, Google shut down its popular RSS reader web application. The decision angered many users, and I penned a long rant about how horrible proprietary services are as they can be taken away from the users at any time without their consent.

    I found the News app for OwnCloud, installed it on my own server, and never looked back.

    Until today.

    Updating the version of OwnCloud on my server, to get the latest security patches, has broken the News app permanently.

    It turns out that some time ago the OwnCloud development team split acrimoniously and started a rival fork called “NextCloud”. The maintainer of the News app jumped ship, leaving OwnCloud News unmaintained until it eventually broke.

    It looks like I now have three options:

    1. Take over development of an abandoned project, which I am (in terms of both time and experience) ill-equipped to deal with
    2. Migrate from OwnCloud to NextCloud, a complex process which also involves changing the software I use for file, contacts and calendar synchronisation
    3. Use a proprietary service like Feedly instead.

    As you might imagine, I picked option 3. I was up and running again within five minutes.

    It’s enough of a frustrating experience to have me considering the reverse of a post I made years back, considering which proprietary services I should stop using in favour of doing my own thing. Since then I started running my own mail server, as well as OwnCloud, to meet my online needs; I migrated all my websites from Heroku to my own server as well. I learnt a lot—that fighting spam is hard, SPF is hard, maintaining SSL certificates is hard, few clients support CalDav and CardDav properly, and so on.

    It’s been an experience, certainly—mostly a good one, or at least an interesting one. But I do wonder, over the years, how much frustration and wasted time I’ve had that could have been saved by dropping my ideological preference for open source software and “DIY”, and accepting that even if they can shut down unexpectedly, some proprietary services are just so much easier.

    How I Blog Now

    It’s fifteen years today since I first posted something—specifically, terrible teenage poetry—on what would become my blog. Back then my website was a purple-and-black exhibition of my poor teenage sense of humour, and I started posting snippets of poetry to it under the category of “Thoughts”.

    Mad Marmablue Web Portal, circa 2001

    In 2002 I was invited to an up-and-coming site called “LiveJournal”, a perfect platform for sharing my young adult angst and drama for the world to see. At university it became central to our social lives, a foreshadowing of the social network generation that was yet to come.

    LiveJournal came and went. By 2009 I was blogging on my own WordPress site and syndicating the posts to LJ, and by 2011 I was reminiscing about what we had lost. In 2013, beset by buggy plugins and security problems with WordPress, I moved to about the nerdiest blog platform imaginable, the static site generator Octopress.

    My Octopress blog, circa 2013

    Editing a site this way has its advantages—the end result is fast and secure, and appeals to my geekier tendencies by allowing me to keep it all under version control. But it has its disadvantages too, principally the fact that the site needs a “compile” step before the results can be seen. In recent months the combination of my old PC, 3000+ pages to render, and a few poorly-implemented plugins have resulted in compile times in excess of three minutes. That’s not too bad for a one-off post, but it’s particularly grating when we do Film Review by the Numbers on a Saturday night. Writing the reviews is something of a spontaneous group activity, and when it takes three minutes to see what a change will look like, those minutes feel a lot longer.

    A fifteenth anniversary seems like as good a time as any to make some changes, so I’ve been working on some ways to speed up the writing and generating process.

    Firstly, I have started doing the simpler editing tasks, like writing a new post, directly on GitHub where the source code lives. Its Markdown editor has a preview function that renders instantly, meaning that for Film Review by the Numbers (and everything else) we can get an approximately-correct rendered page with inline images straight away. I can also commit directly to the repository from there once everything is looking right.

    I’ve contemplated using GitHub Pages to host the site directly as well, though its lack of support for SSL certificates and Jekyll plugins rules it out for now. I have, however, started using GitHub’s “webhooks” to trigger an automatic rebuild of the site—on every commit, GitHub pings a script on my server based on markomarkovic’s simple-php-git-deploy, which updates its local copy of the site, rebuilds it using Jekyll, and deploys it to the public directory on the server.
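    The server side of that amounts to little more than the following sketch (paths and branch names are illustrative, and the actual simple-php-git-deploy script is PHP rather than shell, with more in the way of safety checks):

```shell
#!/bin/sh
# Hypothetical webhook-triggered deploy: pull the latest commit,
# rebuild the static site with Jekyll, and publish the result.
set -e                                  # abort on the first failure

cd /var/www/blog-source                 # local clone of the GitHub repository
git pull origin master                  # fetch the commit that fired the webhook
bundle exec jekyll build --destination /var/www/blog-build
rsync -a --delete /var/www/blog-build/ /var/www/html/   # swap into the public dir
```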

    With it all configured, I can now keep my fast and secure static site, while also regaining some of the ease of a web-based editor that I miss from the WordPress days. I can also sensibly blog on the move from my phone or tablet, without having to open up a command-line console every time.

    This is my first test, and if you’re reading this, I guess it works!

    The Long, Slow Death of Facebook

    “Facebook has a big problem”, the tech media breathlessly cries. Despite using it every day, I’m not a fan of Facebook, and so am drawn to these articles like a moth to a flame. Let’s all enjoy guilt-free schadenfreude at the expense of a billion-dollar business! So, what’s Facebook’s problem this week? People are sharing more web pages and news stories, but fewer “personal stories”—plain status updates that relate to their lives.

    A while back I complained of a slightly different problem: a lack of customisability of the news feed:

    Does anyone know if there are secret Facebook settings to customise the news feed? Lately it’s been 90% stuff I don’t care about:

    • $friend liked $never-heard-of-you’s photo
    • $friend shared $clickbait-article
    • $friend is going to $event-miles-away

    All I really want to see is real status updates!

    In essence, I was fed up with scrolling every day past a wall of this:

    It turns out that Facebook’s controls for the news feed are pretty terrible. If a friend of mine comments on a non-friend’s post, “likes” it, or worst of all “reacts to” it, that’s automatically considered newsworthy for me. Facebook offers no way to customise the feed to remove these kinds of posts.

    You can, however, choose to hide all posts from certain people, including those not on your friends list. So based on the advice I received, I started “hiding all from” everyone I didn’t recognise who appeared in my news feed.

    I’ve done this almost every day for the last couple of weeks, and in a way, it has been very successful. Almost all the strangers’ profile pic changes and distant events have gone, there are fewer clickbait posts and memes, and mercifully almost no Minions at all.

    But what’s left?

    Not much.

    The media was right, at least as far as my Facebook friends are concerned. What remains after you’ve removed all the crap is real status updates—from about five people. Out of 200-odd friends, very few are actually posting status updates and pictures. Mostly of their kids, because I’ve reached that age. The rest of my friends either largely share stuff I don’t care about, so I no longer see them, or post so rarely that they’re drowned out by the wall of baby photos.

    Although Facebook was our LiveJournal replacement, the place we went to stay in touch with our friends’ lives once we left university for our far-flung pockets of adulthood, it looks like for us that age of constant sharing may be on the decline.

    I’m not sure if I will be happy or sad to see it go.

    Optimising for Download Size

    If, by some vanishingly small probability, you are a regular visitor to this website, you may have noticed a few subtle changes over the past few weeks. Partly because of trying to access the site over a slow mobile connection, and partly because of a series of tweets courtesy of @baconmeteor, I got to wondering how much data is required to load a simple page on my own website.

    The answer, apparently, is just over a quarter of a megabyte.

    Not a tremendous amount in this world of 8MB rants about how web pages are too big nowadays, but still unnecessarily large given that it contains only about two kilobytes of useful text and hyperlinks. After 65ms (10% of total load time) and 1.59kB (0.5% of total data size), the content and structure of the page is done — the remaining 90% of time and 99.5% of data are largely useless.
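    Working those percentages backwards gives a rough picture of the page as it stood (a quick sanity check rather than an exact measurement):

```python
# Back-of-the-envelope check of the figures quoted above.
useful_time_ms = 65        # time to deliver the content and structure
useful_time_share = 0.10   # ...which was 10% of the total load time
useful_data_kb = 1.59      # data for the content and structure
useful_data_share = 0.005  # ...which was 0.5% of the total page weight

total_time_ms = useful_time_ms / useful_time_share   # roughly 650 ms in all
total_data_kb = useful_data_kb / useful_data_share   # roughly 318 kB in all
```

Around 318kB in total, which tallies with the quarter-megabyte figure above.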

    Over the past few days I have made a few changes to improve the performance of the site.

    Changes Made

    • Web fonts have been removed. I was using three: one for body text, one for italic body text, and one for the menu. Together they comprised over 50% of the data that browsers were expected to download, and although I do like those fonts, it’s dubious for me to impose my font choices on others, let alone make them download 100kB for the privilege. If you happen to have Open Sans and ETBembo on your system they’ll be used; otherwise the website will appear in something reasonably close.
    • Font Awesome JavaScript is gone. I used the excellent Font Awesome to generate the menu icons on the site, but it pulls in 63kB of JavaScript to support 500+ icons, when all I use it for is static rendering of 12 of them. All major browsers now support inline SVG, so I took the relevant icons from the Font Awesome SVG set and used them instead. Aside from the Juvia commenting system and MathJax, there is no longer any JavaScript on the website.
    • Reduced image size. Although my inner egotist is quite fond of people being able to put a face to the name on all my stuff, the 28kB JPEG could be compressed to 6kB with no discernible loss of quality.
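    For reference, swapping a Font Awesome icon for inline SVG looks something like this (the path data, taken from the icon’s file in the Font Awesome SVG set, is elided here):

```html
<!-- A menu icon rendered as inline SVG, with no JavaScript involved.
     The viewBox and the "d" attribute come from the icon's SVG file. -->
<a href="/blog/">
  <svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 512 512"
       width="16" height="16" fill="currentColor" aria-hidden="true">
    <path d="..."/>
  </svg>
  Blog
</a>
```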

    The result has been a significant reduction in download size and load speed — the same page is now served in less than half the time and with less than 10% of the data usage.

    One extra addition was to explicitly set cache expiry times in the HTTP headers for the website and associated files. Since the CSS and image files are unlikely to change, and in any case it wouldn’t matter much if a user used old ones, setting the cache timeout to a week and a month for various file types has helped speed up loading of subsequent pages after the first. I use the Apache server’s mod_expires module, which has some example config here.
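    A minimal mod_expires stanza along those lines looks like this (the exact types and lifetimes here are illustrative):

```apache
# Requires mod_expires to be enabled, e.g. "a2enmod expires" on Debian.
<IfModule mod_expires.c>
  ExpiresActive On
  ExpiresByType text/css   "access plus 1 week"
  ExpiresByType image/jpeg "access plus 1 month"
  ExpiresByType image/png  "access plus 1 month"
</IfModule>
```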

    Changes Not Made

    A couple of changes I considered, but eventually avoided making, were minifying the HTML and CSS of the site.

    The Octopress 3 minify-html gem does what it says, but unfortunately increased the build time of the website by 150% — from around two minutes to over five. I already find the build time annoyingly slow on my mid-range laptop, so have decided to skip this one.

    Another benefit would have resulted from minifying the CSS used for the site. This proved to be significantly more complex, involving configuration of the very capable jekyll-asset-pipeline module. However, the configuration seemed difficult for what would have been at most a 1kB saving, so I avoided this as well.

    Useful Tools

    Two tools were particularly useful in optimising the site for download size:

    • Google PageSpeed Insights identifies speed issues along with user experience issues, and provides a simple display of how the site appears on mobile and desktop. For the image and CSS optimisations it suggests, it automatically performs the optimisation and allows you to download the result.
    • The Pingdom Website Speed Test was also useful as it picks up some issues that the Google tool doesn’t, such as the lack of explicitly-set expiry times on certain files.

    I hope this post has offered some useful hints if you are seeking to “minify” your own website, and optimise it for the download size of each page.

    Android Without Google

    For several years, I’ve been considering whether I could—and should—dispose of my Google account. Since I wrote the linked post back in 2011, my use of Google services has declined anyway, and I no longer use GMail, Google+ or Google Calendar. At the same time, it has become apparent that users are at the whim of Google’s decision to close unprofitable services (even beloved ones like Reader), and to force us into using others against our will. “Don’t Be Evil” is starting to look hilariously naïve.

    The last hold-out in my desire to dump Google is my Android phone. Without a Google account and the closed-source “Google Play Services” blob that sits at the core of an Android phone, the experience is diminished significantly. While I like my phone’s hardware, I am not fond of the Google integration that I no longer fully trust. So, for the last few months I have been experimenting with running Android without Google.

    Here’s what I’ve learned.

    I Do Depend on Some Google Apps After All

    Aside from what’s provided in my AOSP-based CyanogenMod base software (much of which was written by Google, but is at least open source), I have two Google apps left on my phone—Maps and YouTube.

    Google Maps

    Maps offers live traffic updates, a key feature when driving long distances to see friends and family. Although other apps have voice-guided navigation (I also have OsmAnd installed), I’ve been unable to find a free live traffic offering that matches Google’s.

    YouTube

    YouTube doesn’t seem to have a decent open source client for Android, either due to legal problems or just because there’s little desire for one. To cope with my son’s regular desire to watch the music videos for obscure pop songs on my phone, I’ve had to keep this installed too.

    EDIT: Check out NewPipe for a YouTube replacement!

    That means Google Play Services stays.

    Neither Maps nor YouTube works without Google Play Services installed, and so far that’s meant I have had to keep it installed on my phone.

    However, my main trust issues with Google stem from their tracking and the amount of data they (want to) store about me. I can still prevent this another way—by removing the Google account from my phone. Play Services, Maps and YouTube continue working correctly, but my phone-based activities are no longer reported to Google in a way that connects them to me, and I can move one step closer to deleting my account.

    You Can Just About Survive on Open Source Apps


    My replacement for the Play Store is F-Droid, a repository for open source apps, and in the spirit of trying to copy my laptop’s (mostly) open source software, I have decided to use it almost exclusively.

    There are other app stores available, such as Amazon’s, but the “ditching Google” exercise is about trust, and I’m not sure I trust Amazon any more than Google when it comes to its ability to monitor my phone usage and try to sell me things. Aptoide is another possibility, but its user-hosted repositories are full of software that the uploaders don’t own, that is pirated, or that is potentially full of malware; once again the trust is lacking.


    As far as standard productivity apps go, there has either been an open source equivalent that fulfilled all my needs, or I was already using an open source app anyway and could update it direct from F-Droid.

    • K9 Mail was my default mail client anyway. It has a few failings, but in my opinion is still the best mail client on Android.
    • Firefox was my default browser, and is open source as you would expect.
    • Silence replaced TextSecure. It’s a fork that is identical in every way bar the name.
    • WeeChat’s Android client is open source.
    • VX ConnectBot is open source.
    • DAVDroid replaced CalDAV-sync and CardDAV-sync. My self-signed SSL certificate had to be added to Android itself manually, but aside from that quirk this one app replaced two.
    • For security, Google Authenticator and OpenKeyChain are open source apps that I was already using, and KeePassDroid replaced Keepass2Android with only a few little irritations.
    • Ghost Commander replaced File Explorer and Turbo FTP Client together—I find its interface annoying to use, but it seems to be the only open source file manager with SFTP and certificate-based login support.

    Web Stuff

    Here again, I didn’t find much to change from my original apps, largely because I was using non-standard software anyway.

    • OnoSendai is written by a friend of mine, and is my normal Twitter client. It has its own F-Droid repository.
    • FaceSlim is a simple wrapper around the Facebook mobile website. Again, I have used this in preference to the permission-hungry “real” Facebook app for a long time.
    • I was already using ownCloud and ownCloud News Reader for file sync and RSS reading; both are open source.
    • Slide replaced Reddit Sync as my go-to Reddit client. There’s another service I ought to ditch one of these days.

    Everything else, I already used my browser for on Android.


    Games

    Games have been the biggest difference between my phone before ditching the Play Store and my phone now. The number of games for Android—even closed source ones—outside of the Play Store is very limited. F-Droid has a variety of simple puzzle and arcade games, while past Android Humble Bundles have provided some high-quality indie titles as downloadable APKs, but on the whole the choice is bleak.

    On the plus side, the exercise has given me a great excuse to dump a number of potentially evil mobile games that I seem to have picked up since my last purge.

    Castle Clash

    How many hours have I wasted on improving pixel people and harvesting ephemeral bits?

    The Gaps

    There are a bunch of applications that I still use because there is no proper open source replacement. oandbackup is not yet a decent replacement for Titanium Backup, and try as I might, I cannot get my VPN server working in “OpenVPN for Android” while it works fine in the closed source “OpenVPN Connect”.

    And there are the proprietary services that will likely never have open source clients offering their full functionality—Spotify, Netflix, the apps for my mobile ISP and my bank. This is where the main problem lies: I can continue using these apps, but they no longer receive security updates via the Play Store.


    The vast majority of apps on my phone either come preinstalled in AOSP or CyanogenMod, or can be found in the F-Droid repository and successfully kept up to date there. It’s workable as an entirely open source platform (bar the separate issue of device drivers).

    But dragging a few “essential” closed-source apps into the situation is a lot more difficult than on a desktop operating system. On my laptop I can install Spotify direct from the company’s website, and even add a repository to my package manager to get automated updates. But on Android, the Play Store dominates and the majority of app writers do not allow publishing anywhere else.

    This is an unintentional lock-in that prevents users from having a choice of software sources.

    It’s an almost useless fight, but I feel that we ought to continue fighting against the new operating systems’ desire to contain our purchasing and constrain what we can and cannot do with our devices.

    The future’s good for me—I’m getting by without Google’s tracking features on my phone, which puts me in a good position for a potential switch to Ubuntu Touch, Sailfish or another phone OS that respects its users’ privacy. But not everyone would find it so easy to do without the proprietary blob at the middle of Android, and that’s worrying for the future of the general purpose computers we all could have in our pockets.

    Preparing to Leave Heroku

    An email today announced a beta test of some new features that Heroku are “excited” to introduce. New service levels are available that include a “hobby” tier that does… exactly what the old “free” tier used to do. For $7 per month per app!

    The free tier has now been downgraded so that it must “sleep” — i.e. be unavailable — for at least six hours a day.

    As a long-term abuser of Heroku’s free tier, I’ve enjoyed continuous uptime for all my sites courtesy of Heroku. A lot of sites.

    Heroku Apps

    All of which I now have to slowly migrate off Heroku as freeloaders like me are no longer welcome!

    The sites that are static HTML (including the Octopress sites) and PHP have in the main already been migrated back to my own web server over the last few hours, and I’ll continue to monitor usage statistics over the next few days to ensure it can cope with the extra load.

    Some sites using Ruby, and others that depend on HTTPS will be a little more difficult to move. Certain sites such as the SuccessWhale API that require high bandwidth and good uptime may stay on Heroku and move up to a paid tier if required.

    Hopefully none of this should impact users of the sites, but please let me know if you find a site or application is inaccessible or suffering from poor performance.

    The End of the Road for SuccessWhale’s Facebook Support?

    My SuccessWhale application has long supported both the Twitter and Facebook social networks, despite both networks’ relatively developer-hostile stances. The worst offender by far was Twitter, with its 100,000-user limit, which has deliberately crippled many third-party clients in order to drive users to the official website and app, which make money for Twitter through adverts. While I was never under any delusion that SuccessWhale would be popular enough to reach 100,000 users, it’s not a nice thing to have hanging over your head as a developer.

    Facebook’s permissions policy, as I have ranted about before, also makes it difficult for third-party clients to deliver a useful service to their users. With the new requirement that apps migrate to API v2, Facebook is adding the extra hassle of requiring that all apps be reviewed by its staff. This isn’t a problem in itself — SuccessWhale has been through the somewhat scary process of manual review before, when it was added to the Firefox Marketplace.

    But Facebook has now snuck something extra into the notes for pretty much all the permissions that are fundamental to SuccessWhale, such as read_stream — each of which must now be manually approved as part of the review process:

    Facebook dialog for read_stream permission

    Yep, this permission will be denied, as a matter of policy, to apps running on Android, iOS, web, desktop, and more.

    So, predictably, SuccessWhale failed its manual review and has been denied approval to use Facebook API v2.0 or above. As far as I can tell at this point, that means on May 1st all Facebook features of SuccessWhale will cease to function. Facebook, ever the proponent of the walled garden (a path down which Twitter has ventured as well), has struck another blow for increased profits and user lock-in at the expense of the open web that SuccessWhale depends on.

    It’s a sad time for the web; the “web 2.0” era of mashups and free access to data is slipping away. And though Facebook’s change does not kill off SuccessWhale and its kin outright, the future does not look rosy for those of us developers who believe users should be free to access a service in the way they prefer.

    State of the Whale Address

    It’s no secret that the current state of my SuccessWhale social network client is not a good one. It currently exists in three forms:

    • The main server runs SuccessWhale version 2.0.3. It’s not been updated in nearly a year, and the only changes within the last three years have been playing catch-up with the changing Twitter and Facebook APIs. It probably has some broken features by now, because I don’t regularly test it out.
    • The test server runs SuccessWhale version 2.1.2 with debug flags enabled. The 2.1 branch includes things like mixed feeds and LinkedIn support, and is “beta-ish”. Some people use it anyway. LinkedIn support is broken and will never be fixed.
    • The dev server runs SuccessWhale version 3.0.0-dev, a complete rewrite of the whole thing that has stalled in a half-finished state. It’s just about usable provided you’re willing to drop back to the test server to fiddle with any settings (they use the same database). It’s buggy, and as far as I know used only by me.

    SuccessWhale 3 interface

    SuccessWhale v3.0 web interface

    Very occasionally, I get the motivation to do something about SuccessWhale. It feels bad to leave it in its current “limbo” state where there isn’t really a version that works and is properly maintained. I use SuccessWhale every day, so at least there’s the dogfooding aspect, but “it works well enough for me” is far from “it’s something other people would want to use”. And my friend Fae produces the excellent OnoSendai Android client that uses SuccessWhale, so I have some sort of responsibility to him to keep SuccessWhale going.

    But there’s a hell of a lot of reasons why I would rather not.

    • Free time is nice. I started SuccessWhale five years ago, when I still had the energy to keep big projects going. Now, with less free time in the evenings and more responsibilities in my day job, I’m much more keen on grabbing a few minutes of that blissful feeling that comes from having nothing to do.
    • We created a monster. SuccessWhale (or FailWhale as it was then called) was first and foremost a simple Twitter client. I explicitly declared that it would never be a client for other social networks such as Facebook. Nowadays, SuccessWhale has its own API that wraps both Twitter and Facebook, along with several front-ends.
    • Rewrites are no fun. Version 2.0 was badly coded and had to go. Version 3 is nice and designed properly from the start! But it requires hundreds of hours of work just to let it do all the things that version 2 could already do.
    • The APIs are crap. In fairness to Twitter, its API is well-documented and makes a lot of sense. But, like all APIs, it is regularly updated, meaning that all application developers need to work just to keep up — we put hours in not to add new features, but just to make sure the existing stuff doesn’t break.
      Facebook’s API is much the same, except that it makes much less sense and the documentation is largely non-existent. It’s quite telling that I asked a simple question on StackOverflow, and a Facebook dev replied with “here’s how to do it. I guess I’d better add that to the docs, huh?”
    • The services are hostile. Twitter, once the darling of those who believed in a strong 3rd-party client ecosystem, is now openly hostile to that ecosystem; Facebook, meanwhile, hides many of your friends’ posts from third-party apps depending on how those friends have configured their privacy settings.
    • The services are crap. Twitter is the playground of celebrities and companies seeking “engagement”, and pleas to return to more honest times fall on deaf ears. But I don’t want to use them, and that makes developing a client for them a distinctly unfulfilling experience.

    For now, SuccessWhale stays alive. Twitter and Facebook are what I’m stuck with as the only sensible way of communicating with many of my friends and family, and SuccessWhale helps me avoid the worst features of their interfaces — their cryptically-curated feeds, in-line adverts and one-feed-at-a-time pages. That, plus a vague sense of responsibility to my users, are what keeps it around.

    When the day comes that I can jettison Twitter and Facebook from my life without missing them, it will be SuccessWhale whose loss I mourn. Like many projects before it, its user count will fall to zero and it will slowly start to fade from the internet.

    One day, I’ll be sad that I made a thing that is no more. But right now, all I feel for the thing is the frustration that developing it is fighting a losing battle that has no end in sight.

    The Last Straw for LinkedIn

    LinkedIn Intro in action (picture from LinkedIn blog)

    If you’ve been paying attention to technology-related news recently, you may have noticed that social network LinkedIn has released a new app for iOS devices called “Intro”. It’s a handy tool for people who do a lot of work-related email on their iDevice, as it embeds information from LinkedIn into your emails so you can get a summary of who you’re talking to.

    Unfortunately it does this not by making Intro a mail client with the extra feature of retrieving this information, but by rewriting your mail settings to send and receive mail exclusively through LinkedIn’s servers.

    In these times where online privacy and security are the subject of worldwide headlines, it shouldn’t come as a surprise that the app has been widely condemned for the complete loss of privacy it entails for its users.

    But this is just the latest in a long line of dubious methods used by LinkedIn to find connections between its users. It has been accused of sending email on users’ behalf without permission — indeed, handing over the password to your GMail and Hotmail accounts (ostensibly to harvest your address book) is one of the steps it recommends when you sign up. LinkedIn also uses names and photos in advertising by default, and comments on Reddit even say that LinkedIn is recommending people connect with former residents of their apartment based on their common IP address.

    LinkedIn requesting to connect to GMail

    Added to that list of privacy failings, the 2012 breach of LinkedIn’s database revealed a major security failing: user passwords were stored as unsalted SHA-1 hashes, many of which were easily compromised.

    Although the Intro app does not affect me in any way — I don’t use it, and don’t have an Apple device to use it on anyway — it makes it abundantly clear that LinkedIn still do not care about their users’ privacy or security. No privacy-conscious Internet users, myself included, should support a company like that.

    Make no mistake, by having accounts on LinkedIn we are supporting them. We are not paying; we are the product.

    Given that the only thing I have received through being a LinkedIn member has been regular nuisance calls from recruitment agencies, I think it is high time I deleted my account. I would encourage all of you to weigh up what you gain from the service against what you lose by handing over your personal information to a company that is highly likely to abuse it.

    Sharing Isn’t Caring

    Like many angsty young adults, I spent the last few months of my time at University wondering what would become of the friendships I’d made there — which friends I’d keep in touch with; how often I’d see them. Having lived and worked with many of them, and shared each other’s lives in such minute detail, how could I deal with not having that constant interaction any more?

    Then, something magical happened.

    Facebook app running on an iPod Touch

    Suddenly, it was like the old times were back again. We could stay in touch forever, and share the minutiae of our lives just like always.

    But since then, it’s kind of taken over. I’ve caught myself checking Twitter and Facebook on my phone while crossing the street, as if that iota of interaction couldn’t wait thirty seconds for me to ensure my own safety. My son has started talking to me while I was using my phone, and in my mind it was the phone that had priority and Joseph who was the inconvenience.

    I saw this comic the other day, and although its caricature of the social networking-obsessed user is a long way from the way I act most of the time, the intention behind it still rings true.

    Art (c) Gavin Aung Than of ZenPencils.com

    How did we get to a point where I would rather share some witticism I think of with the internet at large than with my own wife, who matters far more to me than the rest of the web ever could? Why do I regularly spend my evenings idly refreshing Facebook, then complain that the flat is a mess because I never have time to do chores?

    This culture we created of over-sharing our own experiences and being glued to a screen awaiting what our friends share seems to be cheapening our interactions with the real world. It’s escapism from something I no longer want to escape.

    If I am allowed to make “mid-year’s resolutions”, I resolve to share less of my life online, and to spend less time refreshing a page waiting for others to share their lives. It’s no bad thing to wait a few days to see what friends are up to, if it means spending more time caring about my family, my home; the things that I’m sad to say are more important than friends and certainly more important than the retweets and “likes” of strangers.

    The End of Westminster Hubble

    Three years ago, after a two-month secret development period working with my old school friend Chris, we announced Westminster Hubble.

    The name was a pun on the “Westminster Bubble” in which MPs are sometimes unkindly said to live — implying a lack of awareness of the rest of the country — and “Hubble” alluding to the Hubble Space Telescope, which has allowed us to see distant objects in more detail than ever before.

    Westminster Hubble was a website that aimed to bring MPs and their constituents closer online by providing a single location to find contact details for an MP, in real life and on social networks. It also provided customised feeds of MPs’ activity from a variety of sources, from YouTube videos to speeches made in the House of Commons. At its core was an RSS-parsing engine powered by SimplePie that pulled in content from all the sources it knew about as quickly as it could, stashing the results in one giant database table. The contents of this would then be served to users as HTML, or as an RSS “meta” feed to users who preferred to get the data that way.
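    The engine itself was PHP built on SimplePie, but the parse-and-stash flow is small enough to sketch. Here is a rough Ruby equivalent using the standard library’s rss parser; the sample feed contents and the row field names are illustrative, not the site’s real schema.

```ruby
require 'rss'

# Illustrative stand-in for one of the many feeds the engine polled.
SAMPLE_FEED = <<~XML
  <?xml version="1.0"?>
  <rss version="2.0">
    <channel>
      <title>Hansard: Tom Watson</title>
      <item>
        <title>Speech on digital rights</title>
        <link>http://example.org/speech/1</link>
        <pubDate>Mon, 01 Nov 2010 10:00:00 +0000</pubDate>
      </item>
    </channel>
  </rss>
XML

# Reduce each feed item to a flat row; rows like these are what would be
# stashed in the one giant database table, tagged with their source.
def rows_from_feed(xml, source)
  feed = RSS::Parser.parse(xml, false) # false = skip strict validation
  feed.items.map do |item|
    { source: source, title: item.title, url: item.link, posted_at: item.date }
  end
end

rows = rows_from_feed(SAMPLE_FEED, 'hansard')
```

    Serving the site is then just a matter of querying that table, newest first, and rendering the rows as HTML or re-emitting them as one combined RSS feed.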

    Westminster Hubble MP Feed

    Westminster Hubble’s main “feed” page for an MP, in this case tech-savvy MP Tom Watson.

    Amongst my favourite features were the Google Maps / They Work For You mashup that allowed users to find their local MP in an intuitive way, and the “badges” awarded to MPs for particular dedication (or just a lot of tweeting).

    Find Your MP map

    Westminster Hubble’s “find your MP” map

    We launched just after similar service Tweetminster really took off, and although we never achieved their relevance or their Wired UK features I still feel that we were offering separate, complementary services — Tweetminster curated tweets around particular subjects for use by those in and around Westminster, while we pulled together tweets and other items from particular people inside Westminster and provided them to those on the outside.

    In many ways, Tweetminster provided a destination, somewhere people would go to get information, whilst Westminster Hubble was designed to fade into the background and become part of the plumbing of the internet: RSS feeds went in, and RSS feeds came out in a more structured form chosen by the users. It shouldn’t be surprising, then, that this week I am closing Westminster Hubble due to a lack of use. Without the user appeal of being a “destination”, the users didn’t come, and didn’t spread the word.

    Westminster Hubble "badges"

    Westminster Hubble “badges”

    In recent months, the web itself seems to have turned a corner from the heady days of the early 2000s; the Web we lost. Twitter’s discontinued API v1 takes with it the availability of RSS feeds for a user — parsing Twitter feeds now requires a “proper” Twitter client that must authenticate and use the JSON API. Facebook pages no longer advertise their RSS feeds; third-party tools must often be relied upon instead.

    It seems the days of mashups, of open services that exposed their data in freely-usable machine-readable formats, are fading. Facebook, and to a lesser extent Twitter, are realising that to maximise their profits, they need to keep users on their sites rather than accessing their data from elsewhere. They are becoming walled gardens in the tradition of AOL, a transition that is fundamentally bad for the free and open web that most of us enjoy today.

    If I were more of an activist, I would keep Westminster Hubble alive and fix its links to Twitter and Facebook precisely for the reason that this trend needs to be fought — that the British public should have the right to see what MPs post on “walled garden” websites without the members of the public themselves needing to enter that garden. But the fact of the matter is that Westminster Hubble has failed to become a popular service. In the past month there have been exactly six unique visitors, and that includes consumers of the RSS feeds.

    It is tempting to leave the service running somewhere in some capacity — its database currently contains nearly a million items posted by MPs over the course of 16 years. (Westminster Hubble has only been running for three years; it retrieves old posts from feeds when it can.) However, there seems little point in maintaining the domain name, the Twitter account and the Facebook page for a service that now sees so few users.

    Anyone wanting one last play with the site, on the understanding that many social network integration features no longer work, can do so on the Westminster Hubble temporary server. On request I am also happy to provide the complete (~420MB) database dump, in case anyone wants a large data set of MP activity on which to run some analysis.

    To everyone else who has used Westminster Hubble over the years, thank you. I hope it proved useful, and I like to hope that maybe even one of you was inspired by it to support open government, to campaign for it, or to follow in the footsteps of Chris and me and build your own tools to make it happen.

    Many MPs have held Hubble’s “badges” over the years, but I’d like to award one special, final badge of honour. The Westminster Hubble award for Social Network Mastery could go to nobody else: ladies and gentlemen, Ed Balls.

    So long, and thanks for all the fish.

    The Last of Last.fm: Seven Years in Pretty Graphs

    I started using Last.fm seven years ago, and came to the conclusion that I should stop back in 2011. Although the social media narcissism of “everyone must know what I’m listening to!” is no longer appealing in these days of over-sharing, I kept my Last.fm account around for its free “recommendations” streaming service until deciding earlier this year that a Spotify subscription was a worthwhile investment.

    I was reluctant to delete my account, though, as seven years of listening to over 30,000 songs is a lot of data — so much that it feels wrong to click a single button and pretend it never happened.

    Luckily, I’m far from the first person to want to turn their years of recorded listening habits into some kind of accessible permanent record. The most famous such service, LastGraph, shut down earlier this month — annoyingly, on the very day that I intended to use it — but there are many other ways to get interesting data out of a Last.fm history.
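    One route to the raw data is Last.fm’s own web API: its user.getRecentTracks method pages through every scrobble on an account. A minimal Ruby sketch of building those requests is below; it only constructs the URLs (YOUR_API_KEY and my_username are placeholders), and the page-size cap of 200 is the limit the API documents at the time of writing.

```ruby
require 'uri'

LASTFM_API = 'https://ws.audioscrobbler.com/2.0/'

# Build the request URL for one page of a user's scrobble history.
def recent_tracks_url(user, api_key, page)
  query = URI.encode_www_form(
    method:  'user.getrecenttracks',
    user:    user,
    api_key: api_key,
    format:  'json',
    limit:   200,   # maximum page size the API allows
    page:    page
  )
  "#{LASTFM_API}?#{query}"
end

url = recent_tracks_url('my_username', 'YOUR_API_KEY', 1)
```

    Fetching page after page of these (with Net::HTTP, or even curl in a loop) and saving the JSON gives you a complete archive of your listening history to feed into tools like the ones below, or to keep as a backup before deleting the account.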

    Last.fm Playground

    Last.fm offers their own visualisation tools in their “Playground” site. Many are for subscribers only, but even free users get access to some interesting graphs.

    For example, the Gender Plot uses your history to guess your gender and age. As you can see below, Last.fm pegs me as 24 (I’ll take that as a compliment) and it’s pretty indecisive on my gender — a largely manly playlist conflicts with my fondness for Tokio Hotel, apparently only listened to by 18-year-old girls.

    Last.fm Gender Plot

    Last.fm Graph

    Last.fm Graph is a third-party Java app that takes your favourite artists and displays them as a network graph, showing the interlinking between them. The result is interactive and designed to be played with, which unfortunately makes for a pretty poor screenshot.

    According to my output, my main genres of metal and EBM don’t intersect anywhere — perhaps they would have if more industrial acts had made the “top 50 artists” cut-off that I used for the data set. My 2006-2007 J-Pop phase is sitting on its own separate from everything else (and deservedly so).

    Last.fm Graph

    Last.fm Extra Stats

    Last.fm Extra Stats (Windows only, .NET 2.0) generates much the same graphs that LastGraph did, more configurably but perhaps a little less pretty. Everyone’s favourite is the “Wave chart” view, showing trends in listening to your most popular bands over time.

    Here, the amount of music I listened to — or at least, the number of tracks I scrobbled to Last.fm — dominates the chart, causing a very bumpy output, but it’s all there. The sheer volume of Kotoko and Scooter tracks I’ve listened to is now laid bare for the world to see and silently judge me on.

    Last.fm Graph


    LastHistory

    My favourite of the bunch has to be LastHistory (OS X only). It’s not the prettiest visualisation, but it plots your listening over time not just day by day, but minute by minute. The resulting visualisation displays information about your life, while the others simply display your music.

    In this history I can see my varying sleep patterns as I changed from student to office worker to father. I can see the all-nighters I pulled and what music I chose to accompany me. The days when I listened to music only on my commute, and the rarer interludes where I managed a whole day of listening.

    Last.fm Graph

    Reminiscence rears its head in strange places, and few are stranger than a 30,000-point data set that began one day with a 20-year-old thinking people on the internet would be interested in his music.

    Today I delete my Last.fm account, thankful for the opportunity to look back over seven years of my life summarised in scrobbles. I hope this page proves useful for anyone else in a similar situation, looking to extract pretty graphs — or even memories — from their Last.fm history.

    Announcing: "Can I Call It…?"

    There are a whole host of decisions involved with starting a new software project. What’s my target audience? What language shall I write it in? Which libraries shall I use? And of course, “What shall I call it?”

    For anyone looking to give their new project a unique name, there’s an annoying process to go through of searching for each idea to see if something already exists by that name. Linux packages need to have unique names, as do SourceForge projects, Ruby Gems and projects on many other distribution systems.

    As of 4pm yesterday, there was no simple way of querying all these repositories and package management systems together, to see if your chosen name was already taken by someone else.

    So at 8pm I sat down to code. And by 11pm, there was a way to do exactly that.

    Meet CICI, or “Can I Call it…?”

    CICI is a simple website. You give it a name you would like to use for your project, it checks against a bunch of services, and tells you if your name is unique – i.e., you can call it that – or not.

    CICI Results Page

    Currently, CICI looks up information on packages and projects using GitHub, SourceForge, Ruby Gems, PyPI, Maven, Debian and Fedora, but it’s easy to add more. CICI itself is a simple Ruby script (full of ugly hacks, as is befitting a program that I knocked together in a few hours), which you can download and contribute to on GitHub. It’s all BSD-licensed.
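    Under the hood, each check is essentially just an HTTP probe against a registry’s public API. Here is a hedged sketch of one such check, against RubyGems’ public JSON endpoint; this mirrors the general approach rather than CICI’s actual code, and the helper names are mine.

```ruby
require 'net/http'
require 'uri'

# RubyGems' metadata endpoint: answers 200 if a gem by this name exists,
# 404 if the name is free.
def rubygems_lookup_url(name)
  URI("https://rubygems.org/api/v1/gems/#{URI.encode_www_form_component(name)}.json")
end

# A candidate name is "taken" on RubyGems if the lookup succeeds.
def taken_on_rubygems?(name)
  Net::HTTP.get_response(rubygems_lookup_url(name)).is_a?(Net::HTTPSuccess)
end
```

    Run the same probe against each registry’s equivalent endpoint and a name is “available” only if every one of them comes back empty.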

    Of course, you can play with CICI on the web right here:

    Can I Call It…?

    As we have also discovered, typing random words into the search box to see what it finds is surprisingly addictive… See what odd (or even useful) things you can find on CICI, and good luck with your new projects – whatever name you end up giving them!

    From Hell’s Heart I Stab at Thee, Thou Facebook Privacy Model

    This morning I tweeted my annoyance with Facebook’s privacy model, and since that provoked some (albeit minor) reaction, I thought I’d follow it up with a better explanation of what I’m on about.

    Have you ever used a third-party app to access Facebook – such as TweetDeck, FriendCaster or my very own SuccessWhale? If you have, did you notice that some of your friends’ posts and notifications just don’t appear in the app, whereas on the Facebook website they are perfectly visible? Have you seen some odd comment threads where certain friends’ comments are missing when you view the thread from an app?

    If you have, the problem isn’t with your app. It’s a problem with the settings that the users you can’t see have set – and that problem is that they have their privacy settings set correctly.

    This is a pain in the arse for me and many others who access Facebook primarily through third-party apps, because not only do we miss out on important updates from these people, but the only way to ‘fix’ the situation is to ask them to degrade their privacy settings. As a big fan of online privacy, that’s not something I’m willing to do.

    So how does this problem come about?

    Well, let’s say we have two characters who want to communicate on Facebook. I’d suggest Alice and Bob, but after 20 years of starring in cryptography examples, neither of them is willing to trust their paranoid conspiracy theories to Facebook’s messaging system. No, I started this post with an oddly-placed reference, and I’m going to persevere with it.

    We have two characters, Ahab and Ishmael, who are friends aboard the good ship Facebook. Ahab fires up his favourite whale-themed client, SuccessWhale, and links it to his Facebook account. He gets a dialog like this:

    Extended Permissions Dialog

    Ahab clicks “Allow”, and as he granted the “Access posts in your News Feed” permission, he starts seeing posts from his friends. But not Ishmael.

    Why not? Because Ishmael is a privacy-conscious sailor, and has previously found the “Apps” section of Facebook’s “Privacy Settings” dashboard. That section not only contains settings for what personal data the apps you use can see, but also settings for the apps that your friends use. By default, it looks like this:

    Apps Others Use Settings

    Ishmael saw that section and, quite rightly, thought “So if a friend of mine uses Farmville, this means Zynga can see everything I do on Facebook without asking my permission? Fuck that!”, and promptly unticked all the boxes. His data is now safe from unscrupulous apps used by his friends – but his data is also hidden from ‘nice’ apps too, like SuccessWhale and TweetDeck.

    Arguably, this privacy model is also “right” – Facebook can’t control what apps do with the data they can see, so it has no way to distinguish SuccessWhale (which needs to see friends’ posts in order to be useful) from FarmVille (which has no business looking at friends’ posts at all).

    So not only do the two sets of privacy settings make apps like mine annoying to use, but the annoyance is doubled by the fact that both of those settings are arguably the right decision: Ishmael’s on his part, and Facebook’s on theirs.


    Fig. 3. A Successful Whale (artist unknown)

    Fuck it, Let’s Remake TweetDeck. Only Better.

    It’s no secret that, since the launch of version 2.0 back in July of 2011, my SuccessWhale social network client has stagnated somewhat. It had reached that point at which it did everything that I needed it to do, and so my enthusiasm for updating it kind of disappeared.

    SuccessWhale 2.0

    Well, no longer. Twitter discontinued TweetDeck, the only Android client that merged Twitter notifications and Facebook feeds without sucking. At the same time it discontinued TweetDeck’s desktop client, and removed Facebook support from the web-based client.

    That really sucks.

    And that’s where SuccessWhale comes in.

    I’m no longer content with the ways in which I interact with Twitter and Facebook, particularly on mobile devices, so we’re going to fix it.

    SuccessWhale began as a “my first PHP application” kind of affair, and right now it still is. The code behind it is an ugly mash of model, view and controller without a decent structure. SuccessWhale version 3 will be rebuilt from the ground up with proper design principles behind it.

    It begins with a proper API, which I’m coding up right now using the Sinatra framework in Ruby. Once complete, the web-based front end will be rewritten too, as a strict user of the API using client-side templating in JavaScript. It will be a responsive design, displaying the user’s preferred number of feed columns in landscape mode and reverting to a single swipe-able column in portrait mode for mobile phones.

    Even better, haku is making an Android client called OnoSendai which will feature the combined feed columns that are SuccessWhale’s major feature. We will bring TweetDeck’s feature set back to Android with a lot more besides, offering users the ability to mix feeds together in their social network client like never before.
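    The heart of the mixed-feed feature is nothing exotic: collect items from each service, tag them with where they came from, and sort the lot into one reverse-chronological column. A toy version in Ruby (Post is a stand-in for whatever shape the real API returns, not SuccessWhale’s actual model):

```ruby
# Hypothetical stand-in for a feed item from any service.
Post = Struct.new(:service, :posted_at, :text)

# Merge any number of per-service feeds into one column, newest first.
def merged_feed(*feeds)
  feeds.flatten.sort_by(&:posted_at).reverse
end

feed = merged_feed(
  [Post.new(:twitter,  Time.utc(2013, 3, 4, 9, 0),  'a tweet')],
  [Post.new(:facebook, Time.utc(2013, 3, 4, 9, 30), 'a status update')]
)
# The Facebook post is newer, so it tops the merged column.
```

    Everything else (pagination, per-account credentials, which feeds go in which column) is bookkeeping around that one sort.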

    And to prevent our software going the way of TweetDeck – being bought up and eventually scrapped – SuccessWhale and OnoSendai are open source software. A version of SuccessWhale’s API, operating on the main database at sw.onlydreaming.net, will be open for anyone to use and build clients for. SuccessWhale is released under the BSD 2-clause licence and OnoSendai under the Apache 2.0 licence, meaning that even if we were to be bought out, anyone on the web could simply grab our source code and run their own SuccessWhale.

    We’re bringing TweetDeck’s features back to Android and to the web. We’re making SuccessWhale an application to be proud of. We’re free, we’re open, and we’re Twitter-proof.

    Alas, Poor TweetDeck

    It should have been obvious when TweetDeck was acquired by Twitter back in 2011 that it wasn’t long for this world. Even more so when the only significant update in the intervening period was to remove a feature (handling tweets over 140 characters).

    Although Twitter started out by enthusiastically embracing 3rd-party app developers, its quest to find a way to monetise its service has led the company to grab more and more control over how its users interact with the platform. Users who use Twitter’s website and mobile apps can be served ads, or “promoted tweets”, much more easily than those using 3rd-party clients. The transition was an obvious one, but not a pleasant one – many developers turned on Twitter, accusing it of being actively hostile to developers.

    I would be hard pushed to disagree. Westminster Hubble relied on Twitter’s RSS feeds to let people follow their MPs more easily – a feature broken by Twitter’s API changes. SuccessWhale survives, but hardly with Twitter’s blessing – if it were to ever have 100,000 users, it would be banned.

    TweetDeck's Merged "Mentions" and "Notifications" Column

    TweetDeck’s Merged “Mentions” and “Notifications” Column

    And today we lose the TweetDeck app on desktop and mobile platforms.

    I am an avid user of TweetDeck for Android (actually, its fork “TweakDeck”, though they are very similar). This is for the simple reason that it is the only Android app that can combine “mentions” from multiple Twitter accounts and “notifications” from a Facebook account in a single column view. Surely this is a feature that plenty of people would like in an app. But check out the competition. This is a list of all the Android apps that are both Twitter and Facebook clients:

    • TweetDeck is dying – service outages are forecast before it is killed off completely in May.
    • TweakDeck is an old fork of TweetDeck, not under Twitter’s control – but the API changes that kill TweetDeck will take TweakDeck with them.
    • Seesmic offers combined Twitter and Facebook feeds in its paid version – I don’t object to paying £1.89 for an app, but Seesmic has been acquired by HootSuite and will be phased out.
    • HootSuite itself does support multiple Twitter and Facebook accounts, but its user interface offers no way to merge feeds together or even swipe between columns from different accounts.
    • Scope offers a merged mentions/notifications feed, but only supports one Twitter account, has performance issues (on my devices at least) and has odd defaults (all retweets are also posted to Facebook, Tumblr etc unless manually turned off every time).
    • UberSocial (formerly Twidroid) supports only one Twitter account, and adds Facebook as an afterthought with no merging of feeds.
    • Plume (formerly Touiteur) supports multiple Twitter and Facebook accounts, but only supports Facebook’s posts feed, not notifications.
    • StreamLife is intentionally low on functionality, and only shows “home” timelines, not mentions/notifications.

    The functionality that Twitter is removing by retiring TweetDeck is simply not found anywhere else in the Android ecosystem. Until some other application steps in to fill the gap, a function that I and many other users love is simply and infuriatingly impossible to achieve on Android.

    Just like with Facebook, it is the network effect that keeps me – and countless other developers – using Twitter despite its increasingly developer-hostile control over the ways in which we interact with it.

    One day, perhaps the “next big thing” in social networking will be a platform that starts open and stays open.

    Two Lessons in Running Web Servers

    This is a post from my blog, which is (mostly) no longer available online. This page has been preserved because it was linked to from somewhere, or got regular search hits, and therefore may be useful to somebody.

    First of all, a big shout out to the men and women at http://korben.info/, who today have taught me an important lesson.

    At about 9:30 today, they posted a link to the Raspberry Tank. At about 9:40, my web server melted. This is the disk I/O graph:

    Sparrowhawk Disk I/O Graph

    Somewhere around five Apache instances per second were being spawned, all of which seemed to be waiting for each other’s I/O operations, and combined together managed to slow everything else to a crawl. It took twenty minutes to successfully ssh into the server and stop Apache. In that whole time, I think about five visitors might actually have seen a properly-formed web page.

    From that point, it was a dainty command-line dance to get enough of WordPress up and running that I could set up a page caching plugin, but not so much of it that visitors could actually request pages themselves.

    At around 1pm, I finally managed to get the site back up and running again – and the floodgates opened.

    Sparrowhawk IPv4 I/O Graph

    So, today I learned two important lessons about running your own web server:

    1. If you are going to do something cool with a Raspberry Pi and post about it on your blog, CACHE THE PAGES.

    2. It’s a great idea for your web server to send out e-mail alerts when it is dying. It’s a less great idea to host your e-mail system on the same machine.
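    Lesson one can also be enforced at the web-server level. Capping Apache’s prefork worker pool makes a traffic spike queue requests instead of spawning processes until the disk thrashes. A minimal sketch in Apache 2.2-era directives – the numbers are illustrative guesses for a small server, not the values I actually used:

    ```apache
    <IfModule mpm_prefork_module>
        StartServers          2
        MinSpareServers       2
        MaxSpareServers       5
        # Hard cap on concurrent Apache processes; requests beyond this
        # queue up instead of spawning more workers to fight over disk I/O.
        MaxClients           30
        # Recycle child processes periodically to limit memory creep.
        MaxRequestsPerChild 500
    </IfModule>
    ```

    With a page cache in front, even a modest MaxClients can ride out a front-page link.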

    Thanks, crazy French blog.

    The Ego, the Social Graph, and the Great Unfriending

    This is a post from my blog, which is (mostly) no longer available online. This page has been preserved because it was linked to from somewhere, or got regular search hits, and therefore may be useful to somebody.

    Long ago, in the early years of Facebook’s rise to power, it became apparent that it had another key feature alongside feeds and wall posts – the friends list. Not only was it a good way to keep in touch with friends after University, it also became a good way of declaring who those friends were. This aspect was emphasised more and more as the site’s user base increased; you could now keep a quite exhaustive catalogue of who you knew. There were even apps on Facebook’s fledgling platform that allowed you to map those friends, and see interesting groups and connections form.

    Facebook Friends Graph

    My Facebook Friends Graph

    For a shameless nerd such as myself, this is great stuff – I love having a neatly curated index of almost everyone I know, particularly one with which I generate pretty visualisations. This one here shows a nice distinction between people I went to school with (orange), university (blue), people I work with (green), DDRFUKers (purple), and a great interconnected yellow mass of Soton Kiddies, LARPers, neighbours and post-University friends.

    But however nice it might be to see this in pictorial form, I know this information. All of it is in my head; each different group and the few people that make the links between them. There’s no need to record this data to help me.

    Of course, I need to record this data in order to talk to these people and share status updates on Facebook. But I barely interact with anyone I went to school with. At work, a mention of something I posted on Facebook tends to be embarrassing. Most of the dots marked yellow or purple are people who are on Twitter, and who I would prefer to talk to there.

    So for whom am I updating, and publishing, what has become known as my “Social Graph”? I have already established that although I curated my Social Graph out of an egotistic and nerdy desire to catalogue everything, it serves no purpose for me. Presumably, then, I am doing it for the benefit of Facebook and its advertisers who can use it to add cruel hooks into friends’ feeds. “Hey, 24 of your friends play this!” “Ian R likes some guy’s band!”

    At best, “unfriending” on Facebook seems like something that is done by spurned teenage girls complaining about how much of a bitch their ex-“BFF” turned out to be. At worst, it seems like an outright denial that you have ever known a person. But what benefit does a user get from declaring themselves “friends” with someone they’ve said not a single word to in ten years?

    If, as I have previously bemoaned, I still don’t want to quit Facebook entirely, then I fear a Great Unfriending may be nigh.

    Lament for Web 0.1

    This is a post from my blog, which is (mostly) no longer available online. This page has been preserved because it was linked to from somewhere, or got regular search hits, and therefore may be useful to somebody.

    With every passing day, my Facebook feed is spending more and more time informing me that old school friends “like Amazon”. (No shit, really?) In the background, it’s fiddling our feeds, showing and hiding entries according to what it thinks is relevant – and what it thinks will be profitable for itself. Game spam is constant. On the other side of the fence, Twitter is trying to force out the third-party clients that made it great, so that it can monetise its users more easily.

    Facebook Pages You May Like

    Should we be surprised? Feel betrayed? Not at all. Facebook and Twitter are in it to make money, yet we use them for free. It’s pretty clear that if you aren’t paying for the product, you are the product. We should only expect free-to-use websites to change in favour of their profits, never in favour of us as users.

    But I’m growing tired of it. My use of these sites is intensely personal – they are my default, or only, way of contacting many of my friends – and yet this personal process is controlled by a company that is willing and able to affect the process to make money. If it’s more profitable to show me “Bob likes Product X” than to show me Bob’s deep and meaningful status update, you can bet I’ll be shown the “like”.

    I miss everyone being equal. I miss services that were honestly free. I miss being close to the infrastructure I use to communicate, rather than having it abstracted. I miss Web 1.0.

    Hell, I miss Web 0.1.


    There was a time, not so very long ago, when IRC was our Twitter. It was just as full of funny links and pithy comments, but it was communication between friends, not 140 character witticisms broadcast into the ether in the constant, vain hope of affirmation delivered by the retweets of strangers.

    There was a time when blogs were our Facebook, our innermost thoughts put out there for our friends and no-one else; when our friends would think of something to say and say it, rather than simply dishing out an iota of affirmation with the “like” button.

    There was a time when mailing lists were our forums, just simple e-mails back and forth without the need for moderators, or advertising, or CAPTCHAs.

    There was a time when USENET was our Reddit, a place to while away hours without karma whores and downvotes.

    Those times are never coming back. No friends of mine are willing to leave Facebook and talk to each other on a mailing list. The monetising services of Web 2.0 are simply much better, easier to use, nicer to look at, more functional. But they’re lagging behind the tools and services of the old internet in other ways. Honesty – what you put into IRC is what you got out, no server inserted “promoted tweets” into your channel. Thoughtfulness – we had to say things to each other, no likes, no retweets, no upvotes.

    At this point it would be appropriate for me to announce some kind of online “back to the land” movement, ending with a rhetorical “who’s with me?”. But rhetorical it would be, because nobody’s with me. I am, at the age of 27, simply old and curmudgeonly before my time; sitting typing in monospaced text to an audience that already sold themselves to play FarmVille.

    Anti-SEO Spam from iProspect (for British Gas)?

    This is a post from my blog, which is (mostly) no longer available online. This page has been preserved because it was linked to from somewhere, or got regular search hits, and therefore may be useful to somebody.

    Today, I received a rather unusual e-mail.  Or more precisely, nine rather unusual e-mails within about a second of each other.  They were of the following form, altering only the onlydreaming.net link in the middle to use another WordPress tag (always ending with /feed):


    I work for the digital marketing agency iProspect on behalf of British Gas.
    As part of our ongoing SEO campaign – we looking to edit or remove some of the backlinks pointing to the https://www.britishgas.co.uk/ domain name.
    We have identified the following link to British Gas on your site (onlydreaming.net):

    We would like to work with you and request that one of the below actions are taken regarding this link.
    This is to ensure that our client avoids violating the Google Webmaster Guidelines in any form due to a historic decision they or a previous agency has made.

    • Please remove the link from your website

    Please note that we are not trying to imply that your website is of fault for violating any guidelines, but that we have advised British Gas should remove any historic links that they acquired which could be interpreted as paid or intended to manipulate PageRank.

    Please let me know if you are able to action this request or if you require any further information.
    Apologies if you have received multiple emails, this is due to their being multiple links on your website (please review each one).

    Kind regards

    Has anyone seen the like of this before? To me it just seems utterly bizarre that in order to help British Gas meet Google’s guidelines for search listing, a third party is asking bloggers to take down links to their site.

    (For reference, the blog post that features in each of the feed URLs I received e-mails for is this one. It is not defamatory towards British Gas, does not deep-link into their site or do anything to influence British Gas’s search results – it is simply a link to https://www.britishgas.co.uk/, with the text “British Gas”.)

    I’m considering the following as a response, and would be interested to know if you thought it was appropriate, if you would add/remove anything, or whether you think I should ignore these e-mails completely, etc.

    Dear Sir/Madam,

    I have read your many automated e-mails of August 7, 2012 and would like to let you know that I will not be removing a link to British Gas’ website from my blog. Although the link to British Gas adds little to the content of the blog post concerned, aside from as an aid to visitors not from the UK who may not be aware of the company, I would prefer not to bow to what seems like a very odd request. I perceive your request as odd for the following reasons:

    • It is no business of mine whether or not British Gas’ website meets Google’s requirements. I have no particular animosity towards British Gas or iProspect, but simply feel that the contents of my blog are no concern of theirs, and neither will they be a concern for any staff at Google who review adherence to the Webmaster Guidelines.
    • If your concern is “links that they acquired which could be interpreted as paid or intended to manipulate PageRank”, a few moments of investigation will assure you that neither is the case here. My web presence is fairly transparent and it should be plainly obvious that I am not in the pay of British Gas. Furthermore, the blog post of mine that your links point to (https://www.ianrenton.com/blog/the-perils-of-gas-supply/) simply contains a link to https://www.britishgas.co.uk/, with the text “British Gas”, which is obviously not an attempt to affect British Gas’s PageRank or associate certain keywords with the site.
    • The post you have picked on is over two years old and posted on a blog that averages only 150 visitors per day. Like everyone else, I have no access to the calculations that set my own blog’s PageRank – however, it is surely low enough as to have no impact whatsoever on that of the British Gas website.
    • I feel some desire to refuse your request simply because your process is automated and clearly wide-ranging. For it to have picked up the problematic post nine times in quick succession – all nine being RSS feed URLs rather than the URL of the blog post itself – implies an automated crawler is at work. A vast number of people may have been hit with similar requests to this.
    • The final proof, if any was required, that my blog post is not an attempt to affect your client’s PageRank is that all nine of the URLs your crawler has flagged are explicitly disallowed in the robots.txt for onlydreaming.net. Although your crawler clearly disregards the requests made of it in this file, Google’s crawlers do not, and thus do not index any of the URLs you have identified.

    I hope these reasons satisfy you as to why I do not wish to remove the link you have identified. If you and your client are concerned with removing “astroturf” links and links intended to manipulate their PageRank, perhaps the pages containing these links should be identified first by investigating any “historic decision(s) they or a previous agency has made”, rather than deploying a web crawler to notify everyone on the internet who has ever linked to British Gas’ homepage.


    Ian Renton
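    For reference, the robots.txt point works with Disallow rules along these lines – a sketch of the idea, not the exact contents of my file:

    ```
    User-agent: *
    # Keep crawlers out of the per-tag RSS feed URLs (the kind iProspect's
    # crawler flagged), while leaving the posts themselves indexable.
    Disallow: /tag/
    ```

    Well-behaved crawlers, Google’s included, fetch this file first and skip anything matching a Disallow line; iProspect’s evidently does not.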

    The Need for Mobile General Computation (aka, why I’m stuck with Android)

    This is a post from my blog, which is (mostly) no longer available online. This page has been preserved because it was linked to from somewhere, or got regular search hits, and therefore may be useful to somebody.

    My mobile phone contract has well and truly hit the “18-month itch” stage – although I still have six months until an upgrade is due, I can’t help but look at adverts and scan gadget blogs and think “ooh, I want one of those”.

    I could go for an iPhone, and have a vast library of apps to choose from – far more than Android has ever offered.  I could go for a Windows Phone device and enjoy a user interface that is genuinely refreshing compared to the rest of the mobile OS options.

    But much as it annoys me with its weird bugs, poor battery life, fragmentation, weird manufacturer-specific skins and inconsistent interface, there’s one important advantage to Android that sways my decision back to it every time I consider the alternatives. It is simply this:

    I want to be in charge of my device.

    The seeds of the war on general-purpose computation are already taking root in the mobile OS space. Phones and tablets are quickly gaining ground as the primary means of getting things done in our online worlds, and implicit in that is that users of these devices are putting the manufacturers and the mobile networks in charge of what they can and cannot do with them.

    I reject this trend. I want root.

    I want to be able to uninstall the apps HTC and Vodafone think I should use. I want to firewall apps off from “phoning home”. I want to back up a complete partition image of my phone. I want to run any script I can think of. I want to tunnel my network access over SSH.

    By and large, mobile software and hardware manufacturers are hostile to this kind of activity. It’s impossible on a Windows Phone device. iPhones can be jailbroken but OS updates – including important security updates – undo the jailbreaking until some enterprising hacker can find another exploit.  Of the current crowd of mobile operating systems, only Android, with its open-source releases of the core OS, allows said enterprising hackers to create their own distributions of the operating system and maintain “root” whilst applying Google’s own OS updates.

    So although I am bored of Android, though I crave a new and interesting user interface to play with, I crave freedom more. If I can’t make a device mine; if I can’t choose to be master of all that goes into it, out of it and through it, it’s not a general purpose computer – and I refuse to base a good proportion of my future computing needs on it.

    On Very Small PCs

    This is a post from my blog, which is (mostly) no longer available online. This page has been preserved because it was linked to from somewhere, or got regular search hits, and therefore may be useful to somebody.

    With the recent addition of a Bluetooth keyboard to go with the PowerSkin, my phone has completed its transition from thin, attractive polycarbonate slate to the monstrous assault on product design you see before you.

    Desire HD + PowerSkin + Bluetooth Keyboard

    Or so I would have said in the dim and technologically distant days of 2010.

    But really, I don’t have a giant ugly phone – because the other day, an incoming call interrupted my SSH session and I was briefly confused as to why someone was calling me on my computer.

    I don’t have a giant phone – I have a really tiny laptop, with a battery that lasts two days.

    Did the future happen while I wasn’t looking again?

    Designing for Granddad

    This is a post from my blog, which is (mostly) no longer available online. This page has been preserved because it was linked to from somewhere, or got regular search hits, and therefore may be useful to somebody.

    Slate’s recent article, “2011 Was a Terrible Year for Tech”, coins the term “mom-bomb” for the moment that technology journalists declare a gadget so easy-to-use that it is actually useful to people who aren’t technology journalists:

    He begins by praising the gadget’s intuitive interface and its easy setup process, but eventually he finds that mere description doesn’t adequately convey the product’s momentous simplicity. That’s when he drops the mom bomb: This thing is so easy that even my mom could use it.

    I’m blessed with parents that, by and large, ‘get’ technology.  Their VCR never flashed 12:00 (and now they have a DVD recorder); they both have Android phones that they can happily e-mail from.  My grandparents are a different story, of course.  Two of them have almost never used a computer, but my Granddad has a nice new shiny one and uses it regularly.  But as the article points out, what tech journalists and we tech-savvy users think is simple and ‘user-friendly’ often falls far short of the ‘mom (or granddad) test’.

    A few observations spring to mind:

    • Moving photos from a digital camera to a computer is one of the simplest tasks non-‘tech-savvy’ users often want to do.  But when you plug in a digital camera, Windows 7 helpfully pops up this dialog:

    Windows 7 Camera AutoPlay Dialog

    Do I want to “Import Pictures and Videos” using Windows, or using Windows Live Photo Gallery?  What’s the difference?  Do I want to “Copy pictures to [my] computer”?  Do I want to “Download images”? Where will the photos go?  Will they still be on the camera?  I just want to see my photos, so I click “Open device to view files”, but what the heck is “DCIM”?

    • I set Google as his browser homepage, and since then, he has been getting his news not from the BBC News bookmark I created, but using the ‘News’ link on Google’s own menu that appears at the top of its pages:

    Google Menu Bar

    …which is great, except that Google can change that menu at any time.  And of course they are doing exactly that:

    New-Look Google Menu

    To my granddad, and many other novice internet users, the distinction between bookmarks – which only change if you want them to – and web page navigation menus – which can change at the webmaster’s whim – is not necessarily clear.

    • Even simple mouse commands can be unclear and difficult.  In the example above, Google’s instruction to find the new menu is to ‘roll over’ the logo.  When the novice user figures out that means ‘hover the cursor over’, they’re greeted with a JavaScript popup which will disappear again if their cursor accidentally wanders too far from the popup.

    It’s my family duty to be tech support, and occasionally I am called upon to fix things that have actually gone wrong.  But more often than not, I am called upon to try to rationalise a simple task that is unexpectedly complex to perform.  This complexity has usually arisen because the software’s developers and most vocal users are so immersed in common UI paradigms that they just don’t notice that the complexity exists.  For the novice user, on the other hand, even your software’s installation wizard is complexity they’d rather not deal with.

    The Slate article is right to cite Facebook’s user interface as a particularly onerous example of software complexity.  Feeds, live updates, inboxes, hidden inboxes, walls, profiles, Timeline, comments, likes, tags – some users need and revel in that level of complexity, but a significant number just want to, say, see what their kids are up to.  I’m nervous that one day soon, my granddad will ask me to set him up with a Facebook account.  I’ll dutifully comply, log him in, and give him this:

    Facebook User Interface

    Where does one even begin?  There are multiple feeds, multiple menus, pop-up and pop-down boxes.  How do you add one of these “status” things?  How do you add a friend?  How do I send a message to someone?  What’s public and what’s private?  Why is there so much stuff?

    In the world of User Experience (UX) design, we spend so much time thinking about how software will be used and by whom – personas, use cases, red routes and all the rest.  But in the majority of software I see when working with novice users, it seems that either the novice user has not been considered, or their persona is paid lip service while the latest excitingly complicated new features are bolted onto the software.

    As creators of software and of user experiences, I know we can do better than this.

    Do you have any thoughts on how we can design better for the novice user?  Just want to vent about an app with a particularly poor UI, or about a relative with a particularly poor grasp of computing?  Fire away in the comments below!

    SuccessWhale is Terrifying: VPS Edition

    This is a post from my blog, which is (mostly) no longer available online. This page has been preserved because it was linked to from somewhere, or got regular search hits, and therefore may be useful to somebody.

    Just under two years ago, I noticed with alarm that SuccessWhale was about to blow through my then-limited bandwidth allowance.

    I’ve since relocated all my web stuff to Dreamhost, taking advantage of their unlimited bandwidth offering to plough through 10 GB and more a month. But now I’m coming up against the last remaining limit of my shared hosting – memory usage.

    Both Westminster Hubble, which constantly crawls MPs’ social networks and RSS feeds, and an increasingly complex SuccessWhale, churn through a ton of memory. I don’t have a nice scary graph for this one, but at peak times, I’d estimate that my web server kills over half my PHP processes due to excess memory use. That means Only Dreaming basically goes down, while SuccessWhale throws errors around if it even loads at all.

    It looks like I’m left taking the expensive plunge of moving my hosting to a VPS rather than a shared solution, which is a jump I’m nervous to make, especially since none of my web properties make me any money. Most worrying of all is that VPS prices tend to vary by available memory, and I don’t actually know how much memory all my stuff would take up if it were allowed free rein. And nor do I have any way of finding out, bar jumping ship to a VPS and taking advantage of free trial weeks.
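    One rough way to get a figure, assuming the shared host allows shell access and the PHP processes run under a visible name (php5-cgi here – this varies by host), is to total their resident memory:

    ```shell
    # Sum the resident set size (RSS, reported in KB) of all PHP FastCGI
    # processes, and print the total in megabytes. The process name is a
    # guess; substitute whatever `ps aux` shows on your host.
    ps -C php5-cgi -o rss= | awk '{ sum += $1 } END { printf "%.1f MB\n", sum / 1024 }'
    ```

    Run at peak time, that total (plus some headroom) at least gives a starting point for picking a VPS tier.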

    So, dear lazyweb, do you have any experience with this sort of thing? And can anyone recommend a good (cheap!) VPS host that fulfils the following criteria:

    • LAMP stack with “P” being both PHP and Python (or *BSD instead of Linux)

    • Full shell access

    • Unlimited (or at least 100 GB) bandwidth

    • Unlimited (or at least 10 GB) disk space

    • At least 20 MySQL databases

    • IMAP mailboxes & mail forwarding

    A friend has recommended Linode, which seems great for tinkering, though the price scales up rapidly with RAM use and I’m not sure I want to deal with the hassle of setting up Apache, MySQL etc. by myself. And there’s Dreamhost’s own offering, which would be virtually zero-hassle to switch to, but probably isn’t the cheapest around.

    So, citizens of the interweb, I seek your advice!

    Announcing: SuccessWhale version 2.0!

    This is a post from my blog, which is (mostly) no longer available online. This page has been preserved because it was linked to from somewhere, or got regular search hits, and therefore may be useful to somebody.

    Ladies and Gentlemen of the Internet, I am pleased to announce that SuccessWhale version 2.0 has just been released and is now live on sw.onlydreaming.net.

    SuccessWhale is a web-based client for Twitter and Facebook, written in PHP, JavaScript and MySQL. It offers a multi-column view that allows users to merge together information from all their connected accounts and view it at a glance from any web browser.

    The big changes between version 1.1.2 and 2.0 are:

    • Facebook support
    • Support for multiple Twitter (and Facebook) accounts
    • As many columns as you want
    • Columns that combine multiple feeds
    • Lightboxed images from Twitpic and yFrog
    • New themes
    • Numerous bug fixes!

    You can see a screenshot of it in action below:

    SuccessWhale Screenshot

    I would particularly like to thank Alex Hutter, Hugo Day, Erica Renton and Rg Enzon, whose help in finding bugs and suggesting new features has been instrumental in bringing SuccessWhale up to version 2.0 today.

    SuccessWhale is an open source project, and the source code is licensed under the GPL v3.

    Could I Live Without…?

    This is a post from my blog, which is (mostly) no longer available online. This page has been preserved because it was linked to from somewhere, or got regular search hits, and therefore may be useful to somebody.

    A couple of months ago, I was particularly scathing about the crop of Facebook games that I was playing, especially the ones that had no end. The result? I no longer play any games on Facebook whatsoever. As I bemoaned at length, not one of them was adding to my life in any appreciable way.

    I wonder if it is now a good time to apply the same logic to various online services – to be extremely critical of them, to discover whether or not they actually add any value to my life. In short, could I live without…

    1. A Google Account

    As a search engine, Google is almost essential to life on the internet today.  Like a lot of you, I have signed up to many Google services over the years, each one simply on the merit that it was better than the competition (if there even was competition).  I go through phases of being alarmed at the amount of data Google collates about us all – their “do no evil” policy is wearing thin in the eyes of their customers.  But could I manage without mail, calendars and contacts synchronised between my phone and the web?  Without the near-endless entertainment of Google Reader?  Without the Android Market?

    Although I resent Google’s dominion over my online existence, its offerings are just better than others’.  And having an Android phone seals the deal.

    Verdict: No.

    2. GMail

    If I can’t live without a Google account, maybe I should just dump the GMail part of it?  I’ve actually done this once before; moved my e-mail wholesale to my own server.  But I went back – it’s a nice feeling to be in charge, to have your own mail server, but everything was so much harder.  “Archiving” and “tagging” become a multi-click ‘move’ operation, IMAP has a host of strange issues, and no webmail client is a patch on Google’s.

    Ditching GMail appeals, but two months down the line I’d probably spend another evening moving everything back again.

    Verdict: Probably not.

    3. Twitter

    I suspect I’m in the minority, in that I follow no celebrities and don’t use Twitter for anything to do with “brand awareness” or “customer interaction”.  I use it for talking to my friends.  There are simply too many of us, online too irregularly, to use instant messaging – or god forbid, phone calls – any more.  (Whether that says something about the quality of our interaction, I’m not sure.)  But without Twitter I’d be largely unaware of what’s going on in the lives of the dozen or so people I care about the most.  Though my posts may be trivial and of interest to few, losing Twitter would be close to losing friends.

    Verdict: No.

    4. Facebook

    Facebook is the social network we love to hate. There is a whole host of reasons people might want to quit – disregard for privacy, endless Farmville spam, lack of transparency / import & export functions – and yet, so few do.  I don’t play games on Facebook, I rarely post photos, I don’t “like” pages or take quizzes.  I have around 300 “friends”, many of whom I haven’t seen since school and wouldn’t recognise in the street.

    But there are a few close friends and family members who don’t use Twitter, and closing my Facebook account would mean cutting them off.  And besides, there’s always that nagging thought: “you’re 26 years old, every 26-year-old is on Facebook!”

    Verdict: It’s tempting to try.

    5. Google+

    Like many geeks, I am an “early adopter” of Google+, a social network that’s still in beta.  Now and again I load the page or run the mobile app, to see what people have posted – and they’ve posted exactly the same as they posted on Twitter.  Plus, without an API, I never bother to manually copy my own Twitter and Facebook posts to G+ too.

    It’s nice to be in there in case it picks up and becomes the next Social Network to Rule them All.  But right now, it’s taking up brain power and space on my bookmarks toolbar, and I’m gaining nothing from it.

    Verdict: Yes.

    6. LiveJournal

    All my LiveJournal posts are already syndicated from my blog, and I go through phases of disabling comments on my LiveJournal posts to drag people to comment on the blog itself.  It rarely works, but I have so little interaction with people through LiveJournal these days that it barely matters.  LiveJournal is dying, at least from my perspective, and I have already declared it time to quit.  Perhaps now is the time.

    Verdict: Yes.

    7. DeviantArt

    Once upon a time, I posted stories here with regularity.  Now, it’s a place I visit daily on the off-chance that one of the couple of artists whose pictures I enjoy has posted something.  Usually, they haven’t.  This is what RSS was made for.

    Verdict: Yes.

    8. Flickr

    Though firmly an amateur, I’m proud of my photos and Flickr is where I choose to show them off.  It’s also where family members abroad go to see what we’re up to, and it’s my insurance against a hard disk crash erasing the bits and bytes of our memories.  Just as with GMail, there’s a strong temptation to move my pictures to my own server, and run my own image gallery – but Flickr just does it better.

    Verdict: No.

    9. Last.fm

    I’ve been a keen scrobbler since the days when people knew what “scrobble” meant, and it’s so easy to set up that I’ve always set it up on any new computer, operating system or media player.  But why?  I know what my taste in music is, and I have little interest in my own listening history.  My friends surely have even less.  The only reason I can see for continuing is that I’m proud of the amount of data I’ve generated already – and that’s no reason at all for carrying on.

    Verdict: Yes.

    10. Foursquare

    In using Foursquare, I may be just as much a victim as a collector of wall-chart stickers might be.  Checking in is just something I do when I arrive at a place.  I’m now essentially getting nothing out of Foursquare, even though I’m still reliably giving the company and its affiliates a complete history of where I go and where I shop.

    Verdict: Hell yes, ditch this yesterday.

    What are your thoughts on my reasoning?  Which services are you tied to, and which are you considering leaving for good?  I’d be interested to know.

    The Rise and Fall of LiveJournal

    This is a post from my blog, which is (mostly) no longer available online. This page has been preserved because it was linked to from somewhere, or got regular search hits, and therefore may be useful to somebody.

    Once upon a time, accounts on blogging site LiveJournal were precious commodities indeed – the site gave out invites for its members to use, but there was no public sign-up page. I got my invite in the autumn of 2003 thanks to sasahara (Account active 2003-2009) from the IRC channel that I frequented at the time.

    LiveJournal was the ‘in’ place to be for angst-ridden students like myself, in the dim and distant pre-MySpace past. We were all there; it was our network before Facebook came along and crushed all other ways of swapping awful memes with your friends.

    If I recall correctly, on our first encounter, squirmelia (2001-2011) asked for my LJ handle before I was asked for my name. (Though seeing as that night was also my first encounter with eldritchreality (2004-2011) and charon47 (2001-2010), and my first trip to The Dungeon, that recollection may easily be in error.)

    As the place where we bared our hearts for the world to see, there were good and bad times aplenty, all pasted up on the internet – though in the case of the most intense drama, it was locked down for only certain groups of people to see. I recall having “Everyone except X” groups for all three of my University crushes, plus the girl I ended up with.

    The LiveJournals we created for characters in a roleplaying game, like my own Kotori (2004-2005) are still there. And aside from an in-character Remus Lupin blog (2003), eldritchreality and I are still the only LJ users to express an interest in combat magic. We spammed countless quizzes and memes together, organised dozens of parties over LJ; my friends and I.

    Good times. And yet, in a few short years, it has become nearly irrelevant.

    10% of those people I was friends with on LJ have properly closed their accounts; 90% of the rest stopped posting long ago. 20% of the groups I was a member of are closed, 100% of the rest are silent or beset by Russian spammers. 19 of my friends have their own blogs elsewhere. And I irritate everyone I’m sure by syndicating my own posts from my blog to LJ with the accompanying hook link to direct people back to my site.

    Scrolling back as far as I go in my LiveJournal friends list turns up a grand total of 10 people still using it, of which 8 post only unprotected entries which I could easily pull using an RSS feed.

    Which leads to the conclusion that LiveJournal is taking up its space on my toolbar and in my brain in order that I stay in touch with two people – both of whom I interact with more on Facebook than LiveJournal anyway.

    Sad as it is to see LiveJournal wither and die when once it was our companion through our angstiest years, I think it may soon be time to declare it over. Like all technology in our century, it ends not with a bang but with a whimper, simply rendered archaic and irrelevant by its successors.

    Like tears in rain, and all that.

    A Farewell to Marmablues

    This is a post from my blog, which is (mostly) no longer available online. This page has been preserved because it was linked to from somewhere, or got regular search hits, and therefore may be useful to somebody.

    May 1998, half a lifetime ago. It was my 13th birthday, and my parents – no doubt annoyed by four years of me messing with the family computer – bought me my own. It had a 333MHz processor, 32 glorious megabytes of RAM, and most exciting of all, a 56k dial-up modem.

    With Microsoft Word as my co-pilot and under the ever-watchful phone-bill-monitoring eyes of my parents, I discovered the delights of owning my own website. It had it all, oh yes. Giant background images, a different one for each page. Animated GIFs. Background MIDIs. Frames, <blink> and <marquee>. Web rings to click through, and Tripod’s banner ads inserted at the top of every page. It was called “The Mad Marmablue Web Portal”, and it was exactly as horrendous as you are imagining.

    A few years later, a chronic lack of smallprint-reading led me to buy it a ‘free’ domain name, only to receive a scary-looking invoice a month later. In the end my parents sent the domain reseller a letter explaining that I was a dumb-ass kid who shouldn’t be trusted on the internet, and that was that. But at the age of seventeen, in possession of a Switch debit card, I found a web host who would set me up with a domain and 100MB of space for £20 a year. “marmablue.co.uk” was born.

    Today, it died.

    I shouldn’t feel sentimental about jettisoning an old unused domain, particularly not one that harks back to the late-90s animated GIF horror of the Mad Marmablue Web Portal. But it was a part of my youth, the place where for the first time I could put something and anyone could see it. It was where I took my first steps with HTML – by ripping off other websites, naturally – and in time, it was where I first learned JavaScript, PHP and SQL too.

    I will miss it. But if I sit very still, and very quietly, I can still hear that horrible 8-bit MIDI rendition of the RoboCop theme tune. So maybe I won’t miss it all that much.

    For the Discerning Lady or Gentleman, SuccessWhale version 1.1

    This is a post from my blog, which is (mostly) no longer available online. This page has been preserved because it was linked to from somewhere, or got regular search hits, and therefore may be useful to somebody.

    The sudden proliferation of people’s syndicated tweets from sources such as Foursquare and Fallen London annoys me far more than it should. Any more sensible old grouch would pick up his pipe, don slippers and write a strongly-worded letter to the local newspaper about how this ‘checking in’ business is corrupting society.

    Instead, I made my Twitter client block them. Also, you can now do it too!

    SuccessWhale users will now see a link at the top-right of the interface called ‘Manage Banned Phrases’. Clicking it will take you to a page where you can specify a semicolon-separated list of things you don’t want to see, such as “4sq.com;fallenlondon.com;bieber”. Once confirmed, any tweets in any timeline that are sucky enough to contain one of these phrases will be hidden from your view.
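The filter itself is simple enough to sketch. Here’s a minimal Python illustration of the idea – the function name and data structures are my own, not SuccessWhale’s actual code:

```python
def filter_tweets(tweets, banned_csv):
    """Hide any tweet whose text contains a banned phrase.

    `banned_csv` is the semicolon-separated list the user enters,
    e.g. "4sq.com;fallenlondon.com;bieber". Matching is case-insensitive.
    """
    phrases = [p.strip().lower() for p in banned_csv.split(";") if p.strip()]
    return [t for t in tweets if not any(p in t.lower() for p in phrases)]

tweets = [
    "I'm at Coffee Shop (4sq.com/abc123)",
    "Just a normal tweet about my day",
    "Echoes from Fallen London: fallenlondon.com/xyz",
]
print(filter_tweets(tweets, "4sq.com;fallenlondon.com;bieber"))
# → ["Just a normal tweet about my day"]
```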

    Twitter: now 12% less full of shite!

    An extra feature has been rolled into this release, which is the ‘Reply All’ button. It only appears where two or more people are having a conversation (three or more if you’re included too). Clicking on it starts a reply to everyone mentioned, not just the tweet’s originator. So if @Alice is talking to @Bob, and you click ‘Reply All’ on one of her tweets, your entry box will then read “@Alice @Bob”.
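The mention-gathering behind that button looks roughly like this – a hedged Python sketch, assuming the author’s handle comes first and your own handle is excluded:

```python
import re

def reply_all(tweet_text, author, me):
    """Build the prefix for a 'Reply All': the tweet's author plus
    everyone mentioned in it, excluding yourself and duplicates."""
    handles = [author] + re.findall(r"@(\w+)", tweet_text)
    seen, out = set(), []
    for h in handles:
        key = h.lower()
        if key != me.lower() and key not in seen:
            seen.add(key)
            out.append("@" + h)
    return " ".join(out) + " "

# @Alice is talking to @Bob; you click 'Reply All' on Alice's tweet:
print(reply_all("@Bob sounds good, see you then!", "Alice", "me"))
# → "@Alice @Bob " ready for your message
```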

    So that’s version 1.1. Share and enjoy!

    SuccessWhale is a free, open, multi-platform web-based Twitter client. It’s hosted at sw.onlydreaming.net, and you can find out more about SuccessWhale here. It’s GPL-licensed, so you can download yourself a copy too if you want one.

    a thousand words: Finishing Touches

    This is a post from my blog, which is (mostly) no longer available online. This page has been preserved because it was linked to from somewhere, or got regular search hits, and therefore may be useful to somebody.

    The vast majority of user-reported bugs and requested features on “a thousand words” have now been sorted out. As requested by my co-conspirator Eric, we now have an ‘adult content’ filter based on a date of birth field in users’ profiles, and a ‘report’ button to bring problematic stories and pictures to the attention of the moderators. There’s also a DeviantArt-style “request critique” option to let users know what kind of comments you’re looking for.

    Timestamps have been fixed, “no stars yet” ratings introduced, and text field policies such as “mustn’t be empty” have been added across the site. A few rendering issues in IE have been sorted out, so it now looks much the same across all platforms.

    The biggest change is unfortunately something most of you will never see – the moderator console. Picture submissions and reported stories/pictures now sit in queues that can be dealt with by moderators. An item entering a queue triggers an e-mail to all mods, who are invited to review it and make changes as appropriate. Once changes are made, the affected users are then e-mailed to let them know what happened (and in the case of reported items, to give them a chance to challenge it).

    There’s one major feature request that’s not yet been implemented: file uploads. Once in the system this would allow users to submit pictures from their hard drives rather than from the web by URL, and would allow moderators to copy URL-linked pictures to the site to avoid hotlinking. (At present we don’t hotlink, but we do therefore have to copy pictures to the site manually using FTP.) It could also allow users to use a non-Gravatar picture for their profile.

    Depending on how things go, that may or may not be ready by tomorrow night. On Saturday morning I jet off to sunny Saudi Arabia, so any changes not made by then are going to remain unmade for a while. From that point it’s in Eric’s capable hands as to whether she wants to release the site or not. Even if the site does advance to release status, I’m still taking bug reports (they’ll sit in my inbox until I get back), so keep on letting me know what’s broken and what you’d like to see added!

    a thousand words: Alpha, Beta

    This is a post from my blog, which is (mostly) no longer available online. This page has been preserved because it was linked to from somewhere, or got regular search hits, and therefore may be useful to somebody.

    “a thousand words” has now reached a stage where every feature that I give a damn about is implemented. Thus, we’re opening it up to a limited beta test to iron out the wrinkles and get a list of any features potential users would like to see us launch with. If you’re bored or simply have a love of breaking other people’s shit, head along to http://athousandwords.org.uk and see what hell you can raise. As the Big Red Box Text warns you, really don’t submit any work of fiction you care about, just in case some kind soul finds an SQL injection vulnerability and trashes the database.

    Since last time I bored the hell out of you all, voting and commenting have been implemented, registration has been fixed, and filtering of HTML tags from submissions has been added, as have a word count and the picture selector on story submission.  There’s been a bunch of behind-the-scenes tweaks to improve security too.

    The one feature that Eric definitely wants is a way to mark stories according to their content. We could do this in several ways – I would prefer, if anything, to just have a “not for kids” option on each post and a Date of Birth field associated with user accounts, so we can hide stories as required. Other options include a range of ratings (U, PG, 12, 15, 18…) or tags for certain content (violence, sex, language) so people can avoid whatever they’re picky about.

    This probably ought to come with a Report button so that users can report incorrectly rated stories, and I would add a similar feature to report pictures.  (Picture submissions are moderated, so Goatse isn’t going to make it through anyway, but the mod team might miss subtler things like licensing terms and copyright infringement.)

    At that point, all that’s left on my list is the admin interface and anything that users suggest during this beta. Hopefully we’ll be ready to launch by the time I depart for sandier shores at the end of the week!

    a thousand words: Hot Profilin’ Action

    This is a post from my blog, which is (mostly) no longer available online. This page has been preserved because it was linked to from somewhere, or got regular search hits, and therefore may be useful to somebody.

    A few days’ laziness (by which I mean a few days’ Starcraft) have passed with not much work being done on “a thousand words”. That came to an end tonight, with a productive evening resulting in a working profile system so that users can now add and display personal information, change their registered e-mail address and password, etc.

    There’s now a database backend for the voting and commenting systems, which will be complemented by their GUI pages tomorrow night.

    Once that’s done, that’s the last of the main functions out of the way and we’re basically down to tweaks. I think we ought to, in no particular order:

    • Decide on what formatting users can add to stories, and filter for it

    • Add a word count, and possibly limit submissions to e.g. 600-1400 words

    • Add a means of reporting stories and pictures for e.g. copyright issues

    • Add a means of rating stories, so users can mark them as containing sex, violence etc.

    • Create an admin interface, so we don’t just have to run the site with raw SQL queries

    • Add ranks, etc. (incentives for achieving high Total Stars)

    • jQuery up some of the main bits to improve user experience

    • Implement the scrolling list of pictures for users to select when creating a new story

    At that point, I think it should be ready for open beta. Hopefully we can get it all done within a week, before I depart for internet-less shores!

    a thousand words: GETting and POSTing

    This is a post from my blog, which is (mostly) no longer available online. This page has been preserved because it was linked to from somewhere, or got regular search hits, and therefore may be useful to somebody.

    Another day, another bunch of functionality added to a thousand words. With the main public-facing interfaces largely complete, I have moved on to the guts of the site’s user interaction. The site now has working, but ugly, implementations of:

    • E-mail address / password authentication, with cookie support based on a secret phrase generated at registration.

    • Registration itself, including the setting of a display name (users authenticate with their e-mail address, so we need something friendlier to display in the UI). Accounts are created in an unactivated state, and an e-mail is sent allowing the user to use their secret phrase to activate the account (GETted via a “click here to activate!” URL).

    • Picture submission, which adds the submission to a ‘queue’ table. In time there will be an admin interface for moving items from the queue to the real pictures table, i.e. promoting a suggested picture to “picture of the week” status.

    • Story submission, which adds the story to the live site and takes you there after submission. There’s currently no edit capability, and the picture that the story is based on must be manually specified by ID number. (The latter will become a scrollable jQuery list of all pictures.)
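The registration-and-activation dance described above can be sketched in a few lines of Python. The field names, URL format and in-memory ‘database’ here are illustrative assumptions, not the site’s actual code:

```python
import hashlib
import secrets

def register(db, email, display_name, password):
    """Create an unactivated account and return the activation link
    that would be e-mailed out as a 'click here to activate!' URL."""
    token = secrets.token_hex(16)  # the account's secret phrase
    db[email] = {
        "name": display_name,
        "pw_hash": hashlib.sha256(password.encode()).hexdigest(),
        "token": token,
        "active": False,
    }
    # Hypothetical URL shape for illustration only:
    return "http://athousandwords.org.uk/activate?email=%s&token=%s" % (email, token)

def activate(db, email, token):
    """The GET handler: flip the account to active if the token matches."""
    user = db.get(email)
    if user and user["token"] == token:
        user["active"] = True
        return True
    return False
```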

    A story edit/delete interface is my next task, and once that’s done, the core functionality (excluding any user profile-related code) will be largely finished. After that there’ll be a period of testing and improving the interfaces of the new functions, before I put a call out for a couple of willing guinea pigs to try and break the site for me! If anyone out there is expecting to be really bored sometime this week, let me know!

    a thousand words: First Sketches

    This is a post from my blog, which is (mostly) no longer available online. This page has been preserved because it was linked to from somewhere, or got regular search hits, and therefore may be useful to somebody.

    With the main browsing UI for a thousand words up and running, it’s time to bore the world with more pointless trivia before moving on. Today: design sketches!

    Pretty much every software project I undertake these days begins with a sketch of the user interface and an initial structure for the database. Labouring under the cruel ‘no whiteboard’ conditions at home (maybe I should get one?), I drew these out on paper. Passing the UI sketch over to Eric after about 5 minutes’ work, she described it as “awesome”. I think that’s the first time that’s ever happened; the general response at work is along the lines of “but where are you going to put giant-ugly-element-X that I’ve just thought of and wasn’t in the spec?”. So that was that, and I’ve coded it up pretty much as it was on paper.

    The database hasn’t changed much from the original design yet, but it will have to soon – as designed, the vote (‘stars’) system doesn’t record each user’s vote on each story, so it can’t support users changing their vote. Sometime during development I’ll have to devote a few hours to figure out the best way of handling it, though that probably comes down to a few minutes as someone on Stack Overflow has inevitably asked about it already.
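For anyone pondering the same problem, one common shape for the fix is a per-user votes table with a uniqueness constraint, so a later vote simply replaces the earlier one. A Python/SQLite sketch – my guess at a schema, not the site’s actual design:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE votes (
        user_id  INTEGER NOT NULL,
        story_id INTEGER NOT NULL,
        stars    INTEGER NOT NULL CHECK (stars BETWEEN 1 AND 5),
        UNIQUE (user_id, story_id)
    );
""")

def vote(user_id, story_id, stars):
    # INSERT OR REPLACE relies on the UNIQUE constraint to overwrite
    # the same user's previous vote on the same story.
    conn.execute("INSERT OR REPLACE INTO votes VALUES (?, ?, ?)",
                 (user_id, story_id, stars))

vote(1, 42, 3)
vote(1, 42, 5)   # the same user changes their mind
avg, = conn.execute("SELECT AVG(stars) FROM votes WHERE story_id = 42").fetchone()
print(avg)  # 5.0 -- only the latest vote counts
```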

    a thousand words UI Sketch
    a thousand words Database Design

    Next up on a thousand words is coding the first few forms that will allow users to register and log in, submit photos and submit stories. That should be done within the next few days, and will allow me to play with actually changing the contents of the database, rather than just showing views of it.

    a thousand words: A New Timesink has Arrived!

    This is a post from my blog, which is (mostly) no longer available online. This page has been preserved because it was linked to from somewhere, or got regular search hits, and therefore may be useful to somebody.

    Somehow unable to cope with actually having free time of an evening, I have taken on yet another project which will doubtless push me deeper into the dark, untamed wilds of the internet, the land stalked only by the mysterious beast known as the “web developer”.

    Eric has come up with the idea for a fiction-writing community known as “A Thousand Words”. The concept is simple:

    • Users submit photos or other images that they find interesting
    • Every week (or other suitable period of time), one of these is chosen by the site staff
    • Users then write short stories, of around 1000 words, inspired by the picture
    • Users rate, comment etc. on each other’s stories

    I’ll be coding up this site in my spare time over the next few weeks, and you can check out my current progress on the live site at a thousand words.  Currently, the database design is done and I’m partway through the UI of what will be the main page.  My todo list is roughly:

    1. Finish the main page and story page UIs.
    2. Add bare-bones pages for all the GET/POST functions, e.g. registering accounts, submitting stories, submitting pictures.
    3. Test all the functions.
    4. Work on their UIs.
    5. Start closed beta testing for anyone interested.
    6. Liberally apply jQuery to improve user experience.
    7. Add commenting, possibly via DISQUS.
    8. Add proper user profiles, gravatar support etc.
    9. Get everyone I can find to try and break it.
    10. Release!  Open the flood-gates, and despair at the dribble I receive.

    As I go I’ll be posting updates and hopefully-interesting insights here, and you can always check the site at athousandwords.org.uk to see how I’m getting on.

    Farewell, Dynamic Democracy

    This is a post from my blog, which is (mostly) no longer available online. This page has been preserved because it was linked to from somewhere, or got regular search hits, and therefore may be useful to somebody.

    Back in April, the Digital Economy Bill was rushed through the wash-up procedure of the outgoing government without the due debate and consideration that I and others believe such a far-reaching bill deserved. My disillusionment with the government decision-making process over the following week led me to set up and announce a new site, called “Dynamic Democracy”. It was an experiment to see what would be discussed if everyone was involved – on an anonymous basis – rather than just our elected representatives that often do not do a good job of representing us anyway.

    The site allowed all users to create and comment on ‘Bills’, encapsulated ideas or laws that they would be pushing for if they were in power. Registering gave users the ability to vote bills (and comments) up and down, leading to a list of highest-ranked bills that represented the users’ favourite potential policies.

    Dynamic Democracy saw little success, possibly because writing a full, well-thought-out bill represented significant effort that a casual browser would be unlikely to commit. ‘Karma’, the point system that aimed to encourage users to submit bills and comments, did not prove to be a good enough incentive as there were so few users to compete with and no direct reward was ever implemented for reaching high karma levels.

    What the site did bring, however, was a number of enquiries from like-minded individuals all over the world, keen to discuss the ideas behind the site and whether or not something like Dynamic Democracy could ever be implemented as a real government policy-making tool. One of the more notable contacts, Denny de la Haye, stood as a candidate for Hackney South and Shoreditch in the general election and promised to implement a crowd-sourced voting system similar to Dynamic Democracy for his constituents to voice their opinions in Parliament through him. (Denny, who sadly did not win his seat, now represents the UK arm of political party DemoEx.)

    I have decided that today is the day to close the Dynamic Democracy experiment, because today the UK government announced their “Your Freedom” website. While largely focussed on repealing or changing laws rather than the complete freedom to suggest anything you like, Your Freedom is certainly in the same vein as Dynamic Democracy, with the crucial extra feature that it is endorsed and used by our government, and thus ideas proposed there stand at least some chance of making it into official government policy.

    Time will tell whether that really happens, or if like the No. 10 Petitions site, suggestions will be responded to with an e-mail from the Prime Minister’s office explaining why thousands of users are all wrong. But I do still hold out hope.

    Did Dynamic Democracy influence the government in their decision to create Your Freedom? Almost certainly not. As my discussions with visitors to the site have shown, I am far from the only person to have come up with this idea, and neither am I the only one to have coded up a website around it. No – this is simply an idea whose time has come. A vast gulf exists between Westminster and the world outside, just as it always has, but these days the public are coming to question why that is and if we can do something to correct it. And nowhere is the desire to bridge that gulf stronger than among the tech-savvy youth that have the drive and the ability to use the internet to that end. Sites like these will come and go a hundred times over the coming years and decades, and slowly but surely we’ll reshape our government into what we want it to be.

    So to everyone who contributed to Dynamic Democracy: thank you, and goodbye.

    If you’d like to contact me about Dynamic Democracy (or anything else), you can still do that via email. If you’d like to help get the Digital Economy Act repealed, please vote up and comment on one of these ideas on Your Freedom. If anyone would like use of dynamicdemocracy.org.uk until my ownership expires in 2012, let me know. Stay tuned for the announcement of another project that bridges politics and the internet in the next few weeks.

    An Experiment in Dynamic Democracy

    This is a post from my blog, which is (mostly) no longer available online. This page has been preserved because it was linked to from somewhere, or got regular search hits, and therefore may be useful to somebody.

    Dynamic Democracy

    I’ve been an advocate of opening up our democracy and involving the public in government decision-making for some time, without doing anything particularly concrete about it besides placing my vote. The Digital Economy Bill fiasco showed us that, really, we’re not involved with the day-to-day workings of government at all, and born of that is this experiment.

    I’d like to know what we, the people, think our government should be talking about. I’d like us ordinary people to submit our ideas, vote on other people’s ideas, and come up with some idea of what we really care about. And so here we are:

    Dynamic Democracy

    This is all very experimental at the moment – please sign up, post ideas, vote on other people’s ideas, and if it proves popular I’ll take it on as a permanent project. Let’s do this!

    Coming of Age

    This is a post from my blog, which is (mostly) no longer available online. This page has been preserved because it was linked to from somewhere, or got regular search hits, and therefore may be useful to somebody.

    Yes, she's legal.

    The other day, while excavating the depths of our airing cupboard-turned-junk pile, I discovered possibly the oldest gadget I own: a Psion Series 3a… thingy. Time has obscured from my memory what we actually called these things when they were new. It certainly wasn’t ‘netbook’ – was it ‘palmtop’? After some new batteries and a non-trivial number of blunt impacts against the table to reseat the display connector, it spluttered into life. The back of the unit declares it to have been made in 1993, so this thing is sixteen years old.

    Now where I am, at sixteen, one can do the following:

    • Drive a scooter

    • Have heterosexual sex

    • Marry (heterosexually) with your parents’ consent

    • Enter full-time employment

    • Play the lottery.

    The Psion 3a, having the decency to look embarrassed next to my cellphone.

    There are a few issues with most of these. Driving a scooter is clearly beyond the poor thing’s capabilities. It appears to have expansion slots, so I’m going to go ahead and consider it female. Now that by default makes all other Psion 3as female, so marriage (within its own species at least) is presumably out. I have no expansion cards to put in it, and now I’ve mentally pigeonholed that as “having sex” I’m not sure I even want to. Full-time employment is out as I’m not sure it does anything that people’s cellphones don’t these days. And that just leaves playing the lottery. Well, then.

    These things can be programmed in a language called OPL, which appears to be so antiquated that even the internet has largely forgotten it. I’m immensely grateful to Gareth and Jane Saunders, who seem to be the only people left with an OPL-related webpage that hosts the programmers’ manual.

    In the UK, one picks six numbers between 1 and 49 for each draw. Six numbers and a bonus are chosen by the lottery machine, and matching all of the main six is a jackpot (odds about 14 million to one). Matching three is the lowest prize, £10 at odds of about 56 to one. So, not really confident we’ll be winning anything here. Still, onwards!
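Those odds are easy to sanity-check in a modern language. A quick Python check of the numbers quoted above:

```python
from math import comb

total = comb(49, 6)                # every possible draw of 6 from 49
print(total)                       # 13983816 -- the "about 14 million to one" jackpot

# Matching exactly three winners: choose 3 of the 6 winning numbers
# and 3 of the 43 losing ones.
match3 = comb(6, 3) * comb(43, 3)
print(total / match3)              # ~56.7 -- the "about 56 to one" £10 prize
```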

    Making sure all six numbers it picks are different would take more than the three minutes I’m prepared to spend in contact with OPL – damn thing doesn’t even have FOR loops. I’ll just run the program again if it picks two the same. So here’s possibly the shortest program I’ve ever written:

    Eat your heart out, Visual Studio 2008.

    PROC lottery:
      LOCAL count%, n%
      PRINT "Lottery Numbers:  ";
      DO
        n% = (RND*48+1)
        PRINT n%;
        PRINT "  ";
        count% = count% + 1
      UNTIL (count% = 6)
    ENDP

    The Die is Cast.

    And when translated (translated? really?) and run, it does indeed produce lottery numbers. So – to the newsagents! And back, lottery ticket – and granulated sugar – in hand.
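For comparison, the same pick in Python, where random.sample draws without replacement and so never produces duplicates – no re-running required:

```python
import random

# random.sample draws without replacement, so all six numbers
# are guaranteed distinct.
numbers = sorted(random.sample(range(1, 50), 6))
print("Lottery Numbers: ", *numbers)
```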

    Having foolishly switched the thing off in the meantime, it took a few seconds of mashing the On button and opening and closing the lid to coax it back into life. But back to life it came, long enough to pick its six numbers. And now, we wait to see what fate befalls this aged device.

    Will it quietly be replaced by gadgets a decade and a half its junior? Or become a palmtop millionaire, and, er… and I’ll have to work out what the heck a Psion 3a would do with a million quid. Tune in on Saturday night to find out!

    The lottery results are in! You can find out what happened in my next blog post, here. Spoilers: I am still not a millionaire.

    Announcing: SuccessWhale!

    This is a post from my blog, which is (mostly) no longer available online. This page has been preserved because it was linked to from somewhere, or got regular search hits, and therefore may be useful to somebody.

    For the last few days I’ve been working on a simple web-based Twitter client, to fill the void between the simplicity of Twitter’s own web interface and the broken-in-IE6 complexity of BeTwittered and Seesmic Desktop’s web interface.

    It’s still under heavy development, and there are probably a ton of bugs and missing useful features. Please give it a try and let me know what you think. Bug reports are more than welcome!

    The source code is licensed under the GNU GPL v3.

    Update: Due to a move to the proper OAuth API, the software could no longer continue to be called FailWhale, as someone’s already written a Twitter app with that name! Thus, until I or someone else comes up with a good idea, it’s called SuccessWhale.