Pi, Python and I (part 2)

In my previous post, I talked about how I’m using a Raspberry Pi to run a Facebook backup service and provided the Python code needed to get (and maintain) a valid Facebook token to do this. This post discusses the actual Facebook backup service and the Python code for it. It will be my second Python program ever (the first was in the previous post), so there will likely be better ways to do what I’ve done, although you’ll see it’s still a pretty simple exercise. (I’m happy to hear about possible improvements.)

The first thing I need to do is pull in all the Python modules that will be useful. The extra packages should’ve been installed from before. Also, because the Facebook messages will be backed-up to Gmail using its IMAP interface, the Google credentials are specified here, too. Given that those credentials are likely to be something you want to keep secret at all costs, all the more reason to run this on a home server rather than on a publicly hosted server.

from facepy import GraphAPI
import urlparse
import dateutil.parser
import dateutil.tz
from crontab import CronTab
import imaplib
import time

# How many status updates to go back in time (first time, and between runs)
MAX_ITEMS = 5000
# How many items to ask for in each request
REQUEST_ITEMS = 25
# Default recipient
DEFAULT_TO = "my_gmail_acct@gmail.com" # Replace with yours
# Suffix to turn Facebook message IDs into email IDs
ID_SUFFIX = "@exportfbfeed.facebook.com"
# Gmail account
GMAIL_USER = "my_gmail_acct@gmail.com" # Replace with yours
# and its secret password
GMAIL_PASS = "S3CR3TC0D3" # Replace with yours
# Gmail folder to use (will be created if necessary)
GMAIL_FOLDER = "Facebook"

Before we get into the guts of the backup service, I first need to create a few basic functions to simplify the code that comes later. First up is a function that makes it easy to pull a value from the results of a call to the Facebook Graph API, falling back to a default when the key isn’t there:

def lookupkey(the_list, the_key, the_default):
  try:
    return the_list[the_key]
  except KeyError:
    return the_default
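Incidentally, lookupkey behaves much like Python’s built-in dict.get. A couple of throwaway calls with a made-up post dictionary show the idea:

# Hypothetical post dictionary, just for illustration
post = {'id': '123_456', 'type': 'status'}
print lookupkey(post, 'type', '')    # prints: status
print lookupkey(post, 'message', '') # prints the default (an empty string), as the key is missing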

Next, a function to retrieve the Facebook username for a given Facebook user. Given that we want to back up messages into Gmail, we have to make them look like email. So, each message will have to appear to come from a unique email address belonging to the relevant Facebook user. Since Facebook generally provides all their users with email addresses at the facebook.com domain based on their usernames, I’ve used these. However, to make it a bit more efficient, I cache the usernames in a dictionary so that I don’t have to query Facebook again when the same person appears in the feed multiple times.

def getusername(id, friendlist):
  uname = lookupkey(friendlist, id, '')
  if '' == uname:
    uname = lookupkey(graph.get(str(id)), 'username', id)
    friendlist[id] = uname # Add the entry to the dictionary for next time
  return uname

The email standards expect times and dates to appear in particular formats, so now a function to achieve this based on whatever date format Facebook gives us:

def getnormaldate(funnydate):
  dt = dateutil.parser.parse(funnydate)
  tz = long(dt.utcoffset().total_seconds()) / 60
  tzHH = str(tz / 60).zfill(2)
  if 0 <= tz:
    tzHH = '+' + tzHH
  tzMM = str(tz % 60).zfill(2)
  return dt.strftime("%a, %d %b %Y %H:%M:%S") + ' ' + tzHH + tzMM
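For example, feeding it a made-up timestamp in the ISO 8601 style that Facebook uses produces an RFC 2822-style date:

# Hypothetical input, just to show the conversion
print getnormaldate("2013-04-01T09:30:00+1000")
# prints: Mon, 01 Apr 2013 09:30:00 +1000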

Next, a function to find the relevant bit of a URL to help travel back and forth in the Facebook feed. Given that the feed is returned to us from the Graph API in small chunks, we need to know how to query the next or previous chunk in order to get it all. Facebook uses a URL format to give us this information, but I want to unpack it to allow for more targeted navigation.

def getpagingpart(urlstring, part):
  url = urlparse.urlsplit(urlstring)
  qs = urlparse.parse_qs(url.query)
  return qs[part][0]
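For instance, given a hypothetical paging URL of the kind the Graph API hands back, it pulls out just the query parameter we ask for:

# Hypothetical paging URL, just for illustration
url = "https://graph.facebook.com/me/home?limit=25&until=1364796000"
print getpagingpart(url, "until")  # prints: 1364796000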

Now a function to construct the headers and body of the email from a range of information gleaned from processing the Facebook Graph API results.

def message2str(fromname, fromaddr, toname, toaddr, date, subj1, subj2, msgid, msg1, msg2, inreplyto=''):
  if '' == inreplyto:
    header = ''
  else:
    header = 'In-Reply-To: <' + inreplyto + '>\n'
  utcdate = dateutil.parser.parse(date).astimezone(dateutil.tz.tzutc()).strftime("%a %b %d %H:%M:%S %Y")
  return "From nobody {}\nFrom: {} <{}>\nTo: {} <{}>\nDate: {}\nSubject: {} - {}\nMessage-ID: <{}>\n{}Content-Type: text/html\n\n

{}{}

".format(utcdate, fromname, fromaddr, toname, toaddr, date, subj1, subj2, msgid, header, msg1, msg2)

Okay, now we've gotten all that out of the way, here's the main function to process a message obtained from the Graph API and place it in an IMAP message folder. The Facebook message is in the form of a dictionary, so we can look up the relevant parts by using keys. In particular, any comments to a message will appear in the same format, so we recurse over those as well using the same function.

Note that in a couple of places I call encode("ascii", "ignore"). This is an ugly hack that strips out all of the unicode goodness that was in the original Facebook message (which allows foreign language characters and symbols), dropping anything exotic to leave plain ASCII characters behind. However, for some reason, the Python installation on my Raspberry Pi would crash the program whenever it came across unusual characters. To ensure that everything works smoothly, I ensure that these aren't present when the text is processed later.

def printdata(data, friendlist, replytoid='', replytosub='', max=MAX_ITEMS, conn=None):
  c = 0
  for d in data:
    id = lookupkey(d, 'id', '') # get the id of the post
    msgid = id + ID_SUFFIX
    try: # get the name (and id) of the friend who posted it
      f = d['from']
      n = f['name'].encode("ascii", "ignore")
      fid = f['id']
      uname = getusername(fid, friendlist) + "@facebook.com"
    except KeyError:
      n = ''
      fid = ''
      uname = ''
    try: # get the recipient (eg. if a wall post)
      dest = d['to']
      destn = dest['name']
      destid = dest['id']
      destname = getusername(destid, friendlist) + "@facebook.com"
    except KeyError:
      destn = ''
      destid = ''
      destname = DEFAULT_TO
    t = lookupkey(d, 'type', '') # get the type of this post
    try:
      st = d['status_type']
      t += " " + st
    except KeyError:
      pass
    try: # get the message they posted
      msg = d['message'].encode("ascii", "ignore")
    except KeyError:
      msg = ''
    try: # there may also be a description
      desc = d['description'].encode("ascii", "ignore")
      if '' == msg:
        msg = desc
      else:
        msg = msg + "
\n" + desc except KeyError: pass try: # get an associated image img = d['picture'] msg = msg + '
\n' except KeyError: img = '' try: # get link details if they exist ln = d['link'] ln = '
\nlink' except KeyError: ln = '' try: # get the date date = d['created_time'] date = getnormaldate(date) except KeyError: date = '' if '' == msg: continue if '' == replytoid: email = message2str(n, uname, destn, destname, date, t, id, msgid, msg, ln) else: email = message2str(n, uname, destn, destname, date, 'Re: ' + replytosub, replytoid, msgid, msg, ln, replytoid + ID_SUFFIX) if conn: conn.append(GMAIL_FOLDER, "", time.time(), email) else: print email print "----------" try: # process comments if there are any comments = d['comments'] commentdata = comments['data'] printdata(commentdata, friendlist, replytoid=id, replytosub=t, conn=conn) except KeyError: pass c += 1 if c == max: break return c

The last bit of the program uses these functions to perform the backup and to set up a cron job to run the program again every hour. Here's how it works.

First, I grab the Facebook Graph API token that the previous program (setupfbtoken.py) provided, and initialise the module that will be used to query it.

# Initialise the Graph API with a valid access token
try:
  with open("fbtoken.txt", "r") as f:
    oauth_access_token = f.read()
except IOError:
  print 'Run setupfbtoken.py first'
  exit(-1)

# See https://developers.facebook.com/docs/reference/api/user/
graph = GraphAPI(oauth_access_token)
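At this point, a quick sanity check doesn't hurt. facepy returns plain Python dictionaries from graph.get(), so asking for your own profile should just print your name (this check is optional and not part of the backup itself):

# Optional sanity check: confirm the token works by fetching your own profile
me = graph.get('me')
print me['name']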

Next, I set up the connection to Gmail that will be used to store the messages using the credentials from before.

# Setup mail connection
mailconnection = imaplib.IMAP4_SSL('imap.gmail.com')
mailconnection.login(GMAIL_USER, GMAIL_PASS)
mailconnection.create(GMAIL_FOLDER)

Now we just need to initialise some things that will be used in the main loop: the cache of the Facebook usernames, the count of the number of status updates to read, and the timestamp that marks the point in time to begin reading status from. This last one is to ensure that we don't keep uploading the same messages again and again, and the timestamp is kept in the file fbtimestamp.txt.

friendlist = {}

countdown = MAX_ITEMS
try:
  with open("fbtimestamp.txt", "r") as f:
    since = '&since=' + f.read()
except IOError:
  since = ''

Now we do the actual work, reading the status feed and processing its items:

stream = graph.get('me/home?limit=' + str(REQUEST_ITEMS) + since)
newsince = ''
while stream and 0 < countdown:
  streamdata = stream['data']
  numitems = printdata(streamdata, friendlist, max=countdown, conn=mailconnection)
  if 0 == numitems:
    break
  countdown -= numitems
  try: # get the link to ask for next (going back in time another step)
    p = stream['paging']
    next = p['next']
    if '' == newsince:
      try:
        prev = p['previous']
        newsince = getpagingpart(prev, 'since')
      except KeyError:
        pass
  except KeyError:
    break
  until = '&until=' + getpagingpart(next, 'until')
  stream = graph.get('me/home?limit=' + str(REQUEST_ITEMS) + since + until)

Now we clean things up: record the new timestamp and close the connection to Gmail.

if '' != newsince:
  with open("fbtimestamp.txt", "w") as f:
    f.write(newsince) # Record the new timestamp for next time

mailconnection.logout()

Finally, we set up a cron job to keep the status updates flowing. As you can probably guess from this code snippet, this all is meant to be saved in a file called exportfbfeed.py.

cron = CronTab() # get crontab for the current user
if [] == cron.find_comment("exportfbfeed"):
  job = cron.new(command="python ~/exportfbfeed.py", comment="exportfbfeed")
  job.minute.on(0) # run this script @hourly, on the hour
  cron.write()

Alright. Well, that was a little longer than I thought it would be. However, the bit that does the actual work is not very big. (No sniggering, people. This is a family show.)

It's been interesting to see how stable the Raspberry Pi has been. While it wasn't designed to be a home server, it's been running fine for me for weeks.

There was an additional benefit to this backup service that I hadn't expected. Since all my email and Facebook messages are now in the one place, I can easily search the lot of them from a single query. In fact, the Facebook search feature isn't very extensive, so it's great that I can now do Google searches to look for things people have sent me via Facebook. It's been a pretty successful project for me and I'm glad I got the chance to play with a Raspberry Pi.

For those that want the original source code files, rather than cut-and-pasting from this blog, you can download them here:

If you end up using this for something, let me know!

Pi, Python and I (part 1)

I’ve been on Facebook for almost six years now, and active for almost five. This is a long time in Internet time.

Facebook has, captured within it, the majority of my interactions with my friends. Many of them have stopped blogging and just share via Facebook, now. (Although, at least two have started blogging actively in the last year or so, and perhaps all is not lost.) At the start, I wasn’t completely convinced it would still be around – these things tended to grow and then fade within just a few years. So, I wasn’t too concerned about all the *stuff* that Facebook would accumulate and control. I don’t expect them to do anything nefarious with it, but I don’t expect them to look after it, either.

However, I’ve had a slowly building sense that I should do something about it. What if Facebook glitched, and accidentally deleted everything? There’s nothing essential in there, but there are plenty of memories I’d like to preserve. I really wanted my own backup of my interactions with my friends, in the same way I have my own copies of emails that I’ve exchanged with people over the years. (Although, fewer people seem to email these days, and again they just share via Facebook.)

The trigger to finally do something about this was when every geek I knew seemed to have got themselves a Raspberry Pi. I tried to think of an excuse to get one myself, and didn’t have to think too hard. I could finally sort out this Facebook backup issue.

Part of the terms of my web host is that I can’t run any “robots” – it’s purely meant to handle incoming web requests. Also, none of the computers at home are on all the time, as we only have tablets, laptops and phones. I didn’t have a server that I could run backup software on, but a Raspberry Pi could be that server.

For those who came in late, the Raspberry Pi is a tiny, single-board computer that came out last year, is designed and built in the UK, and (above all) is really, really cheap. I ordered mine from the local distributor, Element14, whose prices start at just under $30 for the Model A. To make it work, you need to at least provide a micro-USB power supply ($5 if you just want to plug it into your car, but more like $20 if you want to plug it into the wall) and a Micro SD card ($5-$10) to provide the disk, so it’s close to $60, unless you already have those to hand. You can get the Model B, which is about $12 more and gets you both more memory and an Ethernet port, which is what I did. You’ll need to find an Ethernet cable as well, in that case ($4).

When a computer comes that cheap, you can afford to get one for projects that would otherwise be too expensive to justify. You can give them to kids to tinker with and there’s no huge financial loss if they brick them. Also, while cheap, they can do decent graphics through an HDMI port, and have been compared to a Microsoft Xbox. No wonder they managed to sell a million units in their first year. Really, I’m a bit slow on the uptake with the Raspberry Pi, but I got there in the end.

While you can run other operating systems on it, if you get a pre-configured SD card, it comes with a form of Linux called Raspbian and has a programming language called Python set up ready to go. Hence, I figured as well as getting my Facebook backup going, I could use this as an excuse to teach myself Python. I’d looked at it briefly a few years back, but this would be the first time I’d used it in anger. I’ll document here the steps I went through to implement my project, in case anyone else wants to do something similar or just wants to learn from this (if only to learn how simple it is).

The first thing to do is to head over to developers.facebook.com and create a new “App” that will have the permissions that I’ll use to read my Facebook feed. Once I logged in, I chose “Apps” from the toolbar at the top and clicked on “Create New App”. I gave my app a cool name (like “Awesome Backup Thing”) and clicked on “Continue”, passed the security check to keep out robots, and the app was created. The App ID and App secret are important and should be recorded somewhere for later.

Now I just needed to give it the right permissions. Under the Settings menu, I clicked on “Permissions”, then added in the ones needed into the relevant fields. For what I want, I needed: user_about_me, user_status, friends_about_me, friends_status, and read_stream. “Save Changes” and this step is done. Actually, I’m not sure if this is technically needed, given the next step.

Now I needed to get a token that can be used by the software on the server to query Facebook from time to time. The easiest way is to go to the Graph API Explorer, accessible under the “Tools” menu in the toolbar.

I changed the Application specified in the top right corner to Awesome Backup Thing (insert your name here), then clicked on “Get access token”. Now I need to specify the same permissions as before, across the three tabs of User Data Permissions (user_about_me, user_status), Friends Data Permissions (friends_about_me, friends_status) and Extended Permissions (read_stream). Lastly, I clicked on “Get Access Token”, clicked “OK” to the Facebook confirmation page that appeared, and returned to the Graph API explorer where there was a new token waiting for me in the “Access token” textbox. It’ll be needed later, but it’s valid for about two hours. If you need to generate another one, just click “Get access token” again.

Now it’s time to return to the Pi. Once I logged in, I needed to set up some additional Python packages like this:

$ sudo pip install facepy
$ sudo pip install python-dateutil
$ sudo pip install python-crontab

And then I was ready to write some code. The first thing was to write the code that will keep my access token valid. The one that Facebook provides via the Graph API Explorer expires too quickly and can’t be renewed, so it needs to be turned into a renewable access token with a longer life. This new token then needs to be recorded somewhere so that we can use it for the backing-up. Luckily, this is pretty easy to do with those Python packages. The code looks like this (you’ll need to put in the App ID, App Secret, and Access Token that Facebook gave you):

# Write a long-lived Facebook token to a file and setup cron job to maintain it
import facepy
from crontab import CronTab
import datetime

APP_ID = '1234567890' # Replace with yours
APP_SECRET = 'abcdef123456' # Replace with yours

try:
  with open("fbtoken.txt", "r") as f:
    old_token = f.read()
except IOError:
  old_token = ''
if '' == old_token:
  # Need to get old_token from https://developers.facebook.com/tools/explorer/
  old_token = 'FooBarBaz' # Replace with yours

new_token, expires_on = facepy.utils.get_extended_access_token(old_token, APP_ID, APP_SECRET)

with open("fbtoken.txt", "w") as f:
  f.write(new_token)

cron = CronTab() # get crontab for the current user
for oldjob in cron.find_comment("fbtokenrenew"):
  cron.remove(oldjob)
job = cron.new(command="python ~/setupfbtoken.py", comment="fbtokenrenew")
renew_date = expires_on - datetime.timedelta(1)
job.minute.on(0)
job.hour.on(1) # 1:00am
job.dom.on(renew_date.day)
job.month.on(renew_date.month) # on the day before it's meant to expire
cron.write()

Apologies for the pretty rudimentary Python coding, but it was my first program. The only other things to explain are that the program sits in the home directory as the file “setupfbtoken.py” and when it runs, it writes the long-lived token to “fbtoken.txt” then sets up a cron-job to refresh the token before it expires, by running itself again.

I’ll finish off the rest of the code in the next post.

Technology, Finance and Education

Yale Theatre

I have been trying out iTunes U by doing the Open Yale subject ECON252 Financial Markets. What attracted me to the subject was that the lecturer was Robert Shiller, one of the people responsible for the main residential property index in the US and an innovator in that area. Also, it was free. :)

I was interested in seeing what the iTunes U learning experience was like, and I was encouraged by what I found. While it was free, given the amount of enjoyment I got out of doing the subject, I think I’d happily have paid around the cost of a paperback book for it. I could see video recordings of all the lectures, or alternatively, read transcripts of them, plus access reading lists and assessment tasks.

The experience wasn’t exactly what you’d get if you sat the subject as a real student at Yale. Aside from the general campus experience, also missing were the tutorial sessions, professional grading of the assessments (available as self-assessment in iTunes U), an ability to borrow set texts from the library, and an official statement of grading and completion at the end. Also, the material dated from April 2011, so wasn’t as current as if I’d been doing the real subject today.

Of these, the only thing I really missed was access to the texts. I suppose I could’ve bought my own copies, but given I was trying this because it was free, I wasn’t really inclined to. Also, for this subject, the main text (priced at over $180) was actually a complementary learning experience with seemingly little overlap with the lectures.

While I tried both the video and transcript forms of the lectures, and while the video recordings were professionally done, in the end I greatly preferred the transcripts. The transcripts didn’t capture blackboard writing/diagrams well, and I sometimes went back and watched the videos to see them, but the lecturer had checked over the transcripts and they had additions and corrections in them that went beyond what was in the video. Also, I could get through a 1hr lecture in a lot less than an hour if I was reading the transcript.

Putting aside the form of delivery, the content of the subject turned out to be much more interesting than I expected at the beginning. Shiller provided a social context for developments in finance through history, explained the relationships between the major American financial organisations, and provided persuasive arguments for the civilising force of financial innovations (e.g. for resource allocation, risk management and incentive creation), positioning finance as an engineering discipline rather than (say) a tool for clever individuals to make buckets of cash under sometimes somewhat dubious circumstances. I’ll never think of tax or financial markets or insurance in quite the same way again.

I will quote a chunk from one of his lectures (Lecture 22) that illustrates his approach, but also talks about how technology changes resulted in the creation of government pension schemes. I like the idea that technology shifts have resulted in the creation of many things that we wouldn’t ordinarily associate with “technology”. By copying his words in here, I’ll be able to find them more easily in the future (since this is a theme I’d like to pick up again).

In any case, while I didn’t find the iTunes U technology to be a good alternative for university education, I think it’s a good alternative to reading a typical e-book on the subject. Of course, both e-books and online education will continue to evolve, and maybe there won’t be a clear distinction in the future. But for now, it’s an enjoyable way to access some non-fiction material in areas of interest.

The German government set up a plan, whereby people would contribute over their working lives to a social security system, and the system would then years later, 30, 40 years later, keep a tab, about how much they’ve contributed, and then pay them a pension for the rest of their lives. So, the Times wondered aloud, are they going to mess this up? They’ve got to keep records for 40 years. They were talking about the government keeping records, and they thought, nobody can really manage to do this, and that it will collapse in ruin. But it didn’t. The Germans managed to do this in the 1880s for the first time, and actually it was an idea that was copied all over the world.

So, why is it that Germany was able to do something like this in the 1880s, when it was not doable anywhere else? It had never been done until that time. I think this has to do ultimately with technology. Technology, particularly information technology, was advancing rapidly in the 19th century. Not as rapidly as in the 20th, but rapidly advancing.

So, what happened in Europe that made it possible to institute these radical new ideas? I just give a list of some things.

Paper. This is information technology, but you don’t think – in the 18th century, paper, ordinary paper was very expensive, because it was made from cloth in those days. They didn’t know how to make paper from wood, and it had to be hand-made. As a result, if you bought a newspaper in, say, 1790, it would be just one page, and it would be printed on the smallest print, because it was just so expensive. It would cost you like $20 in today’s prices to buy one newspaper. Then, they invented the paper machine that made it mechanically, and they made it out of wood pulp, and suddenly the cost of paper went down. …

There was a fundamental economic difference, and so, paper was one of the things.

And you never got a receipt for anything, when you bought something. You go to the store and buy something, you think you get a receipt? Absolutely not, because it’s too – well, they wouldn’t know why, but that’s the ultimate reason – too expensive. And so, they invented paper.

Two, carbon paper. Do you people even know what this is? Anyone here heard of carbon paper? Maybe, I don’t know. It used to be, that, when you wanted to make a copy of something, you didn’t have any copying machines. You would buy this special paper, which was – do you know what – do I have to explain this to you? You know what carbon paper is? You put it between two sheets of paper, and you write on the upper one, and it comes through on the lower one.

This was never invented until the 19th century. Nobody had carbon paper. You couldn’t make copies of anything. There was no way to make a copy. They hadn’t invented photography, yet. They had no way to make a copy. You had to just hand-copy everything. The first copying machine – maybe I mentioned that – didn’t come until the 20th century, and they were photographic.

And the typewriter. That was invented in the 1870s. Now, it may seem like a small thing, but it was a very important thing, because you could make accurate documents, and they were not subject to misinterpretation because of sloppy handwriting. … And you could also make many copies. You could make six copies at once with carbon paper. And they’re all exactly the same. You can file each one in a different filing cabinet.

Four, standardized forms. These were forms that had fill-in-the-blank with a typewriter.

They had filing cabinets.

And finally, bureaucracy developed. They had management school. Particularly in Germany, it was famous for its management schools and its business schools.

Oh, I should add, also, postal service. If you wanted to mail a letter in 1790, you’d have trouble, and it would cost you a lot. Most people in 1790 got maybe one letter a year, or two letters a year. That was it. But in the 19th century, they started setting up post offices all over the world, and the Germans were particularly good at this kind of bureaucratic thing. So, there were post offices in every town, and the social security system operated through the post offices. Because once you have post offices in every town, you would go to make your payments on social security at the post office, and they would give you stamps, and you’d paste them on a card, and that’s how you could show that you had paid.

– Robert Shiller, ECON252 Financial Markets, 2011

The Things We Tell Our Children

If you’re under ten years old, stop reading now. Spoilers are coming.

There’s a community of atheists who all teach their children to believe in God. They enjoy seeing the comfort that this brings their kids, and the kids enjoy hearing about Jesus and the various Saints. As the children get older though, they question their parents about whether God is real, and the atheist parents go to some trouble to persuade their children that it is so, because they want to keep the beliefs going as long as possible. However, inevitably one of the children discovers that the parents don’t really believe, and then tells all their friends.

Except, there is no such community – I just made it up. It would be absurd. It would also be absurd if a creationist community brought up their children with stories about evolution, or an Islamic community taught their children to believe in the Norse Pantheon.

I found myself reflecting on this over the Easter weekend, as I was caught up in the exercise of teaching children about the Easter Bunny. Are kids really better off with me telling them it’s real when I don’t believe in it myself? I have previously found myself conflicted over the Christmas-time story of Father Christmas / Santa / St Nick, and I expect I’ll find it troubling to get involved in the Tooth Fairy when our kids get older.

An article over at Parenting Science states that one researcher found there was no anger when children found out that their parents were lying to them. But on the other hand, that researcher didn’t interview me, and I recall being angry at the time I found out Santa wasn’t real.

Just like most caring parents, mine took extra effort to build up evidence of Santa’s existence: presents mysteriously appeared under the tree in the dead of night, food left for Santa was eaten, and sometimes Santa even left a note. I stayed up late to try to catch Santa in the act or spot a reindeer, but I never did. One year I did suspect the truth, and confronted my parents, but they denied it and talked me around to believing again. In the end, it was my younger brother who forced the situation, later getting my parents to admit it. I was absolutely distraught. Not really so much because Santa wasn’t real but because I’d been deceived, and (if I’m being honest) that my younger brother managed to discover the truth before me.

However, even if I am an exception (although some other people’s recollections suggest otherwise), and in fact no children are at all distressed by discovering the truth, then why should parents be anxious about their children finding out?

If anything, this is one of the things worrying me about being truthful with my own children: how other parents will react. There is the unspoken basic rule of parenting that no-one else should interfere with how you raise your kids, and others’ children finding out the truth from my own children could be seen as interfering. Unfortunately, it’s not clear how I could tell the truth to my own children and yet prevent them from telling this to their friends.

Still, learning the truth didn’t prevent me from continuing to enjoy Christmas and Easter traditions. An Easter egg hunt is still fun even if the eggs were hidden by adults rather than a mystical rabbit. Receiving presents is still a delight even if it is adults giving them. I don’t feel I’ve lost anything important by gaining the truth about what is really going on. All the good stuff keeps happening, despite what Virginia was told.

One strategy I’ve heard is to share the truth but engage in some kind of doublethink where children are told that if they stop believing, then the good stuff will stop happening, eg. “if you don’t believe in Santa, he won’t bring you a present”. This doesn’t sit well with me, as the solution to a lie from an adult appears to be to invite lies from children: even if they don’t believe, they have to say that they do.

Another strategy I’ve heard is make the truth the answer to a puzzle. For example, if a child works it out, let them know they have been a clever-clogs but keep it a secret so as to not spoil their young friends’ and relatives’ efforts to work it out also. However, surely there’s no quicker way to encourage a child to share the secret than to tell them that?

A final strategy I’ve heard is to answer children’s questions truthfully, but position the belief in Santa et al as a game. For example, adults typically don’t have to explain to their children that Peter Pan (or Shrek or Cinderella) isn’t real, and acting out parts of the story in play-time isn’t engaging in deception.  I feel that this strategy is probably a good one, but I’m not sure how easy it will be to implement in practice. It must be possible, since there are a couple of discussions of this approach on the Gransnet forum, including this gorgeous story from “veronica”:

I could not bring myself to lie to my children but they just grew up knowing that FC was a traditional thing that it was fun to keep up. My daughter when she was was about two had a red coat and she dressed up as FC with a beard and distributed presents to those present.

I’m aware that I’m not yet ready for The Question. However, with Easter successfully navigated and Christmas eight months away yet, the need to find The Answer is not an urgent one. But it would be great if Santa could bring it to me as a present.

Movie Review – Moon

I recently formed a sci-fi movie club along with some other friends, where we watch a movie each month and chat about it with each other. The catch is that none of the others are based in the city that I am in, so it’s all done electronically: we stream movies from iTunes or wherever, and discuss it over email. It’s a bit different from the book club that I’m in, but still enjoyable. Kate is convinced that the real purpose of the club is to justify watching movies that none of the partners of those involved would ever want to watch. She is entitled to her theory.

But I wanted to mention one of the movies that we’ve watched that I found surprisingly enjoyable. It seems to be a film that got very little attention at the time, although it is a bit of a gem.

Moon

A well-made sci-fi mystery set on the moon

There was clearly a big budget set aside for this film. The production values are apparent from the very beginning, and yet the special effects are not gratuitous, despite being set in space. The movie is all about the story.

However, it doesn’t rush the story; while this makes it feel a little slow at times, it also builds a sense of suspense around what is going to happen next. There is a real mystery here. The acting is also first-class, supporting the feeling of unease around the events. Even the robot character, Gerty, is well “acted”, which is a rare thing indeed.

I found it interesting that Gerty is given just three or four emoticon-type expressions, given how advanced the AI otherwise is. It is probably a fair approach to avoiding any uncanny-valley problems.

In hindsight, this feels a lot like old-school sci-fi, of the ilk of Robert Heinlein. He was keen on Moon stories, too.

Rating by andrew: 4.0 stars
****

Personal and environmental audio – hear hear!

Just before Christmas, a friend brought me a new pair of headphones back from the US. I still haven’t decided whether they are the future of personal audio or just a step in the right direction, but I am finding them a bit of a revelation.

The headphones are the AfterShokz Sportz M2, which are relatively cheap, bone conduction headphones. Bone conduction means that instead of the headphones sending sound into your ear canal (like in-ear or full size headphones), they sit against the bones of your skull and send vibrations along them to your inner ear. The main advantage is that while listening to audio from these headphones, you can still hear all the environmental sound around you. The main disadvantage is that, of course, you can still hear all the environmental sound around you.

Clearly, this is not desirable for an audiophile. Obviously, you don’t get these sorts of headphones for their audio quality, and while I find them perfectly decent for listening to music or podcasts, the bass is not as good as typical headphones either. That said, if I want to hear the sound better, I can pop a finger in my ear to block out external noise. Sometimes I use the headphones for telephone calls on my mobile when traveling on the tram, and it probably looks a little odd to the other travelers that I am wearing headphones and putting my finger to my ear, but it is very effective.

For the first week or so that I was wearing them, I had strange sensations in my head, very much like when I first get new frames for my glasses. They push on my head in a way that I’m not used to, and it takes a little bit to get used to. The fact that I can hear music playing in my “ears” and yet hear everything around me was also initially a bit surreal – a bit like I was in a movie with a soundtrack – but the strangeness here diminished very quickly and now it is just a delight.

While they are marketed to cyclists or people who need to be able to hear environmental sound for safety reasons (like, well, pedestrians crossing roads, so almost everyone I guess), it’s not the safety angle that really enthuses me. I am delighted by being able to fully participate in the world around me while concurrently having access to digital audio. When the announcer at a train station explains that a train is going to be cancelled, I still hear it. When a barista calls out that my coffee is ready, I still hear it. When my wife asks me a question while I’m doing something on the computer, I still hear it.

A couple of years ago, I yearned for this sort of experience:

For example, if I want to watch a TV program on my laptop, while my wife watches some video on the iPod on the couch next to me, we are going to interfere with each other, making it difficult for either of us to listen to our shows.

Being able to engage with people in my physical environment and yet access audio content without interfering with others is very liberating. I had hoped that highly directional speakers were the solution, but bone conduction headphones are a possible alternative.

Initially I had tried headphones that sat in only one ear, leaving the other one free. They were also very light and comfortable. One issue was that these were Bluetooth headphones and had trouble staying paired with several of the devices I had. However, and more importantly, I looked a bit like a real estate agent when I wore them, and was extremely self-conscious. Even trying to go overboard and wear them constantly for a month wasn’t enough to rid me of the sense of embarrassment I felt. Additionally, others would make a similar association and always seemed to assume that I must be on a phone call. If I did interact with others, I always had to explain first that I wasn’t on a call. What should’ve been a highly convenient solution turned out to be quite inconvenient.

The AfterShokz have none of these issues. I did try coupling them with a Bluetooth adaptor, but it had similar Bluetooth pairing issues. I see that AfterShokz have since released headphones with Bluetooth built in, but I haven’t tested these.

One potential new issue with the AfterShokz that I should discuss relates to the ability for others to hear what I’m listening to – this had been mentioned by some other online reviewers. While at higher volumes others can hear sounds coming from the headphones (although this is not unique to AfterShokz’ headphones), at lower volumes it is actually very private. In any case, I’ve got a niggling sense of a higher risk of damage to my inner ear from listening to music at higher volumes: bone conduction headphones surely need to send sound waves at higher energy levels than normal headphones, because the signal probably attenuates more through bone than through air, and on top of that they need to be run at higher volumes to be heard over background noise that normal headphones would otherwise block out. So, I try to set the volume as low as I can get away with, and block my ear with my finger if I need to hear better. In quiet environments, it’s not an issue.

Perhaps I am worrying about something that isn’t a problem, since I note that some medical professionals who specialise in hearing loss are advocating them. For that matter, the local group that specialises in vision loss is also promoting them. Although, I guess the long term effects of this technology are still unclear.

In any case, I find using this technology to be quite wonderful. I feel that I’ve finally found stereo headphones that aren’t anti-social. I hope if you have the chance to try it, you will also agree.

Windows 8 – worth the w8?

Our old laptop entered its death spiral a few months ago, but instead of replacing it immediately, we borrowed a stand-in laptop from a kind friend and decided to wait for the slew of Windows 8 compatible laptops that we expected to come in November. Not only would the machines available then be more “future proof” and cheaper due to competition from other Windows 8 laptops, but we’d also be able to pick up (I hoped) a decently-priced touch-screen laptop.

Having had the iPad for a couple of years now, and experienced using it with a Bluetooth keyboard, I was completely sold on the idea of a keyboard-enabled device with a touch-screen. The combination of decent keyboard to type with and pleasant touch-based interface is a winner. It also doesn’t hurt that you double the screen real estate by moving from a soft keyboard to a hard keyboard.

So, November came, I put the plan into action, and bought a Sony VAIO E-Series touch laptop for a little over $1,000. It runs a standard Intel i5 processor and comes with a 750 GB hard disk drive. And of course, it had Windows 8.

This post is about sharing my thoughts on Windows 8, having used it now for almost two months. Initially, I was pretty excited with it, but I have since discovered some limitations, so I feel I have a reasonably balanced view of it now.

Major Changes

There are two really big changes that I’ve encountered, coming from Windows 7 up to Windows 8. Many of the other changes stem from these. The changes I’m referring to are: (i) you can now touch the screen to do things, and (ii) the Start Menu has become a Start Screen.

Realistically, Microsoft could’ve introduced native touch screen support in earlier versions of Windows. For example, HP has had this capability on their TouchSmart series of machines. However, it’s not enough for the operating system to be designed around touch if none of the applications are, since controls designed for mouse-based interaction are typically too small to easily manipulate using fingers. So, to introduce touch required Microsoft to push their entire developer community to redesign their applications, and this is logically done together with a major new operating system release.

This may have also spurred Microsoft to redesign their Start Menu. A feature of Windows since Windows 95, it was really a bit too compact for touch and required multiple clicks to navigate which becomes annoying with touch. They could’ve just made the Start Menu bigger and supported scrolling rather than clicking, but instead they took the pretty risky decision to replace the menu with an entirely new screen with a different user interface and its very own app store. Perhaps this understates the level of change. It’s almost as if they decided to replace the humble Start Menu with the entirety of Windows Phone 7.

The Touch Experience

I really love being able to touch the screen. Yesterday I used another laptop without touch, and kept having to pull myself back from touching its screen. It’s not that I use touch at every opportunity: it’s just one way I interact with the interface, along with the keyboard, trackpad and mouse. Some things are best done with a mouse, sometimes the keyboard is best, and some of the time touch is best. This is why I’m confident that touch-screen laptops will eventually become as common as those with trackpads.

Windows 8 enables this, but it’s not 100% there yet. Let me tell you about some of the gaps.

When you use touch to control the interface, the mouse pointer disappears. However, since the mouse pointer also used to indicate that the operating system is doing something (a little circular animation appears on it, although in previous versions of Windows it was a sand-timer), having the pointer disappear also leaves me in the dark as to whether that icon I just touched is really launching the program I wanted or whether I was a few pixels out and should really touch it again. Unsurprisingly, sometimes I launch things multiple times. This can get annoying.

It’s not just when launching programs, but any time I try to take an action where there may be a delay. Normally applications rely on the mouse pointer to communicate activity back to the users, so they don’t provide any other indication that things are happening (web browsers are a significant exception). Such applications will need to be rewritten to have an application specific activity indication. Or Microsoft will need to fix this, perhaps in Windows 9.

This tells me that the touch experience was not foremost in the minds of the designers of Windows 8. On the contrary, it seems to be designed more around a “keyboard first” principle. Power users are given a range of handy key combinations, and it appears that some of these have been turned into useful gestures, but the whole touch thing isn’t totally elegant.

I find one of the handiest key combinations to be alt-tab, allowing me to quickly switch between applications/windows without having to use the mouse. As this is so useful, it has been converted to a touch gesture: place a finger on the left-side bezel outside the screen, swipe to the right onto the screen, then without lifting your finger swipe back to the left. As well as being a clumsy gesture, it doesn’t actually list all the applications, since all desktop applications are grouped together.

Another thing is the on-screen “Touch Keyboard”. Despite it being completely unnecessary because this machine is a laptop, ie. it has a keyboard, the Touch Keyboard keeps popping up. It slides up onto the screen when I am logging in, when I’m using Google Chrome, and at other random times. As soon as I touch a key on the real keyboard, the on-screen Touch Keyboard slides away, but I can’t prevent it appearing in the first place. Unchecking the Touch Keyboard Toolbar in the Task Bar properties is a temporary fix, but this resets after rebooting.

Apps and the Start Screen

Despite the Start Screen having the old Start Menu as its heritage, there are two types of application you can start from the Start Screen: (i) Windows desktop applications that we’re all familiar with, and (ii) “apps”. These apps can appear as “live tiles” on the start screen (showing a snippet of content from the full application), a full-screen application with a new touch-centric user interface, or a version of that full-screen application but adapted to fit just a fraction of the screen to allow multiple apps to be on the screen at the same time (not every app necessarily supports this though).  These two types of application live in different worlds.

To get new apps, most users will need to use the Windows Store app to discover and download them. Using the Windows Store is like using Apple iTunes or Google Play, and a Windows Live account needs to be set up with Microsoft before you can download anything, even free apps. This was a pain, since I’d set up one of the computer accounts as a local account for our 4 year old and I didn’t want to set up a Windows Live account for them. Another aspect to apps is that they are associated only with one user. Desktop applications can be installed system-wide for anyone to use, but not these apps. So, it also meant that I couldn’t install apps from the Windows Store under my log-in for my 4 year old to use.

This is not a problem on our iPad, where there is no concept of multiple accounts, so I can easily download apps from the Apple App Store and then my 4 year old can get to them. I guess she’s just going to have to stick to desktop applications for now.

There are a range of built-in apps that are available from the Start Screen, eg. Photos, Music, Video. These are similar to Windows Photo Viewer or Windows Media Player, except they are much simpler and have fewer features, so you might be inclined to just ignore them. Unfortunately, they are the default applications assigned to a large variety of file types. I’ve had to go into the Control Panel and change the defaults back to what they’ve been in previous versions of Windows so that I can actually get things done.

I have downloaded a few useful apps from the Windows Store, such as Skype, a couple of games, and a good internet banking app. However, there are strange omissions, such as no official Facebook or Twitter apps, no iView app, and no YouTube app. Given that Microsoft released the operating system to developers a long time before the final version went on sale, it wasn’t for lack of opportunity: these major developers have had absolutely no interest in making their services available as apps on Windows 8.

Developers have generally been pretty slow at updating their desktop applications for Windows 8, also. For example, iTunes 11 was the first version of iTunes that officially supported Windows 8 and it came out well after the public version had shipped (let alone when the original developer versions of Windows 8 were available). Google’s Picasa still doesn’t officially support Windows 8.

Concluding Remarks

Windows 8 is a big change from Windows 7, and users are going to go through a learning curve. However, the rapid uptake of Apple iPads by Windows users has shown that they’re quite happy to learn a completely different interface if there’s enough value in it.

For me, the experience of doing tasks on a Windows 8 touch-screen laptop is better than doing them on an iPad. For example, the freedom of using a powerful and modern web browser like Chrome, complete with Adobe Flash support, means I can get to all the content on the Internet that I’d ever want to visit – there’s little risk that I’ll come across a site that won’t load or that corrupts my form data when I hit submit – and yet I can tap and swipe to my heart’s content, so it is a pleasure to browse. When the experience falls down, it is usually when doing things that can’t be done on an iPad, eg. managing multiple accounts, using desktop applications, or multi-tasking.

Yet it is glaringly obvious that the experience must improve. Both application developers and Microsoft will need to update their software to work properly in this brave new touch-enabled world of Windows. Still, what’s available right now is both fun and useful (notwithstanding several annoyances) and gives me confidence that this world is achievable.

That said, if I didn’t have a touch-screen laptop, I’d stay away from Windows 8. And if I didn’t have a high pain threshold when it comes to tinkering with my PC (or someone in my household like that), I’d hold off on Windows 8 until there was more widespread application support. But for me, it was worth the wait.

The Original Ice-cream Recipe

If it were me, I’d be crowing about it from the rooftops, but Bernardo Buontalenti’s Wikipedia page is strangely silent on the fact that he invented ice-cream. This may have something to do with him having been dead for several centuries, and hence not being around to fix it. However, if you do a Google search for him, you will uncover information like…

Bernardo Buontalenti (Florence, 1531-Florence 1608) was a Florentine stage designer, architect, theatrical designer, military engineer and artist. …  Besides that, he is also traditionally considered the inventor of modern gelato. The Grand Duke Cosimo I de’Medici wanted him to organize an opulent banquet to celebrate the Spanish deputation, that had to stand open-mouthed in front of so much splendour. Buontalenti invented a new dessert for the occasion: a sorbet made with ice, salt (to lower the temperature), lemon, sugar, egg, honey, milk and a drop of wine.

Fantastic Florence

This may not be completely historically accurate, and other Italians also appear to have a claim to being the inventors, or perhaps even some Americans deserve the credit. However, it’s intriguing to think that before the 16th century, no-one had combined the ingredients of milk, egg, honey and ice in the appropriate way to create this dessert, despite those ingredients being available for thousands of years.

However, three things happened recently that have led me to a renewed interest in this. Firstly, our freezer died and has been replaced with a much larger one. Secondly, our beaters died and were replaced with a mix-master that came with an ice-cream maker bowl. Thirdly, summer arrived!

I have tried to find a recipe that matches the original ice-cream (gelato) recipe, but it appears to be a closely-guarded secret. However, I did come across one recipe for “Buontalenti” ice cream that has turned out very well, and importantly, doesn’t have silly instructions like checking if things coat the back of a wooden spoon. Here’s my Australian conversion of it:

Ingredients

  • 4 large eggs
  • 2 cups (500mL) full-cream milk
  • 180mL caster sugar
  • 3/4 cup (187mL) thickened cream (eg. 35% milk fat)
  • 1/2 teaspoon (2.5mL) liqueur for flavouring – in my case I used Grand Marnier, but it’s apparently more traditional to use Disaronno Amaretto

Method

  1. Separate the eggs, and place the yolks into a mixing bowl.
  2. Add half the sugar to the bowl, and whisk for a couple of minutes until thick and slightly pale in colour.
  3. Fill the sink with cold water, add some ice-cubes, and place a large, metal mixing bowl in there with a sieve on top. This will come in handy later.
  4. Place the remaining sugar along with the milk into a medium-sized saucepan and heat, stirring regularly, until the sugar has dissolved and the milk is just about to simmer.
  5. Remove the saucepan from the heat and pour the hot milk slowly into the egg-and-sugar mixture, whisking all the while.
  6. Then pour the mixture back into the saucepan and place over a moderately-low heat, stirring continually, until the temperature reaches 75 degrees Celsius. The mixture should have thickened slightly, becoming a thin custard.
  7. Remove the saucepan and pour the custard into the chilled bowl, through the sieve. Let it cool for a little while, then place in the fridge until completely chilled.
  8. Mix in the cream and liqueur.
  9. Process in an ice-cream maker until it forms the consistency of soft ice-cream and increases in volume. Store in the freezer for at least a couple of hours before serving.

Makes about 1.5L of gelato.

No More Winner Takes All

Over the last year, I’ve been in a number of discussions where the concept of Winner Takes All was raised, and it’s now starting to annoy me. In a Winner Takes All market, there is a dominant competitor who takes a very large share of the profits. An example at the moment is the mobile phone manufacturing market, where it seems Apple is the winner who is taking all (or, at least, most). However, there may be a widespread view that any market relating to the Internet is Winner Takes All, and that would be a problem.

Winner Takes All is typically put forward to justify either betting big (eg. intentionally making multi-year losses in order to get the scale of users/customers needed to be dominant in a market) or not doing anything (eg. because only one can win anything, and the likelihood is that it won’t be you). In other words, Winner Takes All markets are for only the bravest of the brave. But if anything relating to the Internet is Winner Takes All, then unless you’re pretty special, you should stay off the Internet. Or so the thinking goes.

You might expect that I disagree – and you’d be right. Let me break down why.

Firstly, there are mature businesses on the Internet that have multiple big players, and yet not a single winner. Web mail is a good example, with the biggest services (from Microsoft, Yahoo and Google) having similar sized user-bases.

Global Web Mail Unique Visits (ComScore May 2012)
Service Users
Microsoft Hotmail 325 million
Yahoo Mail 298 million
Google GMail 289 million

And while one counter-example is enough to disprove a hypothesis, here’s another one to show the first wasn’t merely an exception. Desktop web browser share is largely split between three big players (Microsoft, Mozilla and Google). Another one is Internet-connected game consoles. I’d say this myth is busted.

A response to the above is granting that not every market relating to the Internet is subject to Winner Takes All, but that there are some important ones that are. For example, Internet services with “network effects” (those where the more users adopt the service, the more valuable it becomes to those users) are in such a market, and Facebook’s dominance in social networking illustrates this.

While this watered-down Winner Takes All view appears more reasonable, there are two lines of evidence that discount it also. The first is the historical record of all the previous social networking services where it appeared there was a winner, but then they lost to a subsequent service that rapidly took over. Back in 2007, MySpace was considered dominant over Facebook, and before MySpace were other services like GeoCities which, according to Wikipedia, in 1999 was the third most visited site on the Internet. If a winner can be displaced so quickly, can they really be said to have “won”?

The second line of evidence is the active competition still occurring in the social networking market. There are both alternative services such as Twitter, LinkedIn and Yammer, and also similar services operating in specific (yet still sizeable) markets such as Qzone, Renren or Sina Weibo in the Chinese language market. If a service isn’t dominant everywhere, can they really be said to have taken it “all”?

But wait, I hear someone say. Cyworld was dominant in South Korea, and yet Facebook has now displaced it over the course of a year. Doesn’t this show that the same could happen in China, and that Facebook is operating in a Winner Takes All market? Well, yes, it could happen in China, but no, all this shows is that Facebook is a good competitor. There’s no need to explain away Facebook’s appeal by claiming their rise in South Korea was an inevitable consequence of the market structure.

So, I don’t find Winner Takes All convincing, but the danger is that some people believe it and choose not to attempt to launch valuable Internet-based ideas. We users of the Internet would end up deprived of those services as a result. But, it seems the good news is that plenty of people do not believe in the pessimistic world view of Winner Takes All and are happily putting their products and services on the Internet.

Book Review – Good to Great

I’d been aware of this book for a while, but it still seems to be available only in expensive hardback format, so I was waiting until it got cheaper. Recently I found it for $15 (still hardback) and this was enough for me to give it a go.

Good to Great

Research-based guidance for established companies to excel in their markets.

I came to this book by Jim Collins with some interest in reading about a new research-based attempt to find a winning corporate formula, but also scepticism due to the unsuccessful attempts that have come before. Perhaps the most infamous was In Search of Excellence which purported to find the recipe for excellence, but gave Atari (had to sell key assets in 1984) and Wang Labs (filed for bankruptcy in 1992) as examples of excellent corporations. Although, that book identified 43 “excellent” companies, so it’s probably not too bad for only a couple of bad apples to end up in their list.

Collins improves his odds by identifying only 11 “good to great” companies. But this is perhaps an uncharitable comparison, as his team appears to have done an extensive job in analysing these companies, and there are only 11 because only 11 companies out of the 1,435 US-based “Fortune 500” companies from 1965-1995 met their criteria. Then to identify the features that relate to being “good to great”, these had to be possessed by all “good to great” companies and lacked by all 17 close-but-not-quite-good-to-great companies also identified by the team.

The book explains the basis for these features, and is engaging and well-written. For me, the most surprising was the feature of “first who.. then what”, which is basically the idea that hiring well becomes the foundation for all corporate strategy, and not, say, an analysis of competitors, technology, financials, or other market fundamentals. I do like this idea, despite its fuzziness, as it says that people aren’t fungible and that they can make a big difference. There are five other features, making six in all, but none were as counter-intuitive as this one. In any case, I will now be paying attention to these features in my workplace and future employers.

However, I can’t bring myself to adopt them as fundamental tenets since despite the rigorous research, the conclusions remain essentially unproven. From my point of view, there are three weaknesses in the research: the set of “good to great” companies is arbitrary, the set is small, and the conclusions are untested.

Taking the first problem, “good to great” companies were defined as having a transition to “great” performance of at least three times the general market (from a point of transition). If, instead of three times, it had been five times or even two times, a different set of companies would’ve been found. Since the features needed to be possessed by all “good to great” companies, a different set would’ve produced a different set of features, e.g. potentially larger or smaller. Hence, perhaps the features found are sufficient for a good-to-great transition but some weren’t actually necessary.

The problem of a small sample is tackled in the book, referencing “two leading professors” who think the sample of 11 companies wasn’t too small. Unfortunately, this is not convincing. For example, one professor says that the 11 companies weren’t a sample, as they were 100% of the companies that met the criteria – although I would respond that the book promises that these principles are universal, so there will be more such companies in the US market in the future, and they should also apply to non-US-based companies; hence the 11 companies don’t represent 100% of all possible “good to great” companies.

Lastly, the conclusions are untested. The research team could’ve, say, looked for a couple of companies outside the US that met their “good to great” criteria and then checked that those companies possessed all of the six features. Except they didn’t. The only companies examined as part of the study were those that informed the conclusion. The use of comparison companies gives me a level of faith in the conclusions, but these can’t be validly re-used in testing that conclusion. So, really the conclusion remains a hypothesis for now.

My grumblings notwithstanding, I was impressed with the analysis in the book and the methodology that used comparison companies to filter out features that were shared by both the “good to great” companies and also those that didn’t perform so well. It has shifted my thinking about what a successful business can look like.

Rating by andrew: 3.5 stars
***1/2