The Amazing Bitcoin

Bitcoin over circuit board

I am impressed with the novelty and cleverness behind the online phenomenon known as Bitcoin. For those who came in late, bitcoins could be described as digital commodities. People can trade them for actual currency and sometimes real goods. Of course, we’ve been using something called money for this purpose already, so you may ask why we need Bitcoin. It has a couple of interesting properties:

  • Trustless: If I engage in a Bitcoin transaction with you, I don’t need to trust you, your bank, your government, or anyone specifically. Once a transaction has completed, it can be verified to have happened as I expected, removing counter-party risk that exists in many markets (for example, a fraudster may pay me in counterfeit bills).
  • Resilient: There is no central operator of the Bitcoin infrastructure, so no-one needs to worry about a particular company staying solvent, or a particular government staying in power or true to its promises, in order for the system to keep working.

Up until Bitcoin, no-one had been able to come up with a system with these properties. Either counter-party risk was removed because there was an operator regulating the market (and the market wasn’t resilient in the face of that operator collapsing), or there were markets without central control that required a lot of trust when dealing with others. If the inventors of Bitcoin were not hiding their identities, I wouldn’t be surprised if they were in the running for a future Nobel Prize in Economics. Bitcoin is no less than a completely decentralised technology for financial contracts, allowing value to be transferred over any means – physical or virtual.

However, I’ve found the way that Bitcoin operates to be a little surprising. It’s not like other systems that I’m used to. Since I haven’t seen these points noted down clearly in the one place, I thought others may be interested as well. (Unless you’re already very familiar with Bitcoin, in which case this is likely to be old hat.)

1. Miners are both the source of new bitcoins and responsible for documenting all transactions

A miner is just the name for a computing node that works to discover the next block in the Bitcoin blockchain. Every ten minutes (on average), a new block containing all as-yet-undocumented transactions is generated. The first node to generate this block (which requires discovering the solution to a particular computing problem using trial-and-error approaches) also gets 25 bitcoins (BTC) for its trouble. Winning here is partly a matter of luck, and partly of how much computing power the miner has dedicated to the task. The blockchain is the ongoing record of each of these blocks, collectively forming something of a global ledger of all known transactions to date.
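To give a feel for the trial-and-error involved, here is a toy sketch of the search (not real mining code: the header bytes and difficulty level below are invented for illustration, and real Bitcoin uses a precise block header format and a vastly harder target). The idea is just to keep hashing the block data with different nonces until the double SHA-256 hash falls below a target value.

```python
import hashlib

def mine(block_header: bytes, difficulty_bits: int) -> int:
    """Search for a nonce such that the double SHA-256 hash of
    (header + nonce) is below a target -- a toy version of the
    trial-and-error search that Bitcoin miners perform."""
    target = 1 << (256 - difficulty_bits)  # smaller target = harder problem
    nonce = 0
    while True:
        digest = hashlib.sha256(hashlib.sha256(
            block_header + nonce.to_bytes(8, "little")).digest()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

nonce = mine(b"example header", 12)  # a few thousand tries on average
```

Finding a winning nonce is easy to verify (one hash) but can only be found by brute force, which is what makes the "winner" partly a matter of luck and partly a matter of computing power.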

In theory, transactions can contain something akin to a tip, representing a fee to the (winning) miner, and these are in addition to the 25 BTC for each ten minutes’ work (with a single BTC worth something between US$40 and US$1140 over the last year, and currently around US$580). However, such transaction fees are relatively minor at the moment, with miners collectively earning less than 20 BTC per day in fees. The 25 BTC figure used to be 50 BTC in the early days, and reduces predictably over time, halving again to 12.5 BTC around the year 2016.
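The issuance schedule is simple enough to state in code. A sketch (the real network halves the subsidy every 210,000 blocks, which at ten minutes per block works out to roughly four years):

```python
def block_reward(height: int) -> float:
    """Block subsidy in BTC at a given block height: 50 BTC initially,
    halving every 210,000 blocks (roughly every four years)."""
    return 50.0 / (2 ** (height // 210000))

print(block_reward(0))       # 50.0 -- the early days
print(block_reward(210000))  # 25.0 -- the current subsidy
print(block_reward(420000))  # 12.5 -- after the next halving
```

Because the subsidy keeps halving, the total number of bitcoins that will ever be issued converges to a fixed cap, which is why transaction fees are expected to matter more over time.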

2. Transactions are not real-time and take around an hour before they are considered certain

Prospective transactions are broadcast among the various miners over a peer-to-peer network, and each miner checks them for validity before including them in the block currently being worked on. Since a new block comes along every ten minutes (on average), there may be a wait of up to ten minutes for a new transaction to appear in the blockchain, at which point the receiver of the BTC can see it and know that the coins are coming.

Except that miners may not include your transaction in the next block, either because there were already too many transactions in it, or because the miner that “won” that block decided not to include any transactions at all, so you may need to wait for a later block. And even then there is a risk that a Bitcoin sender could “double spend” the BTC if two conflicting transactions were sent to different miners, so it’s considered prudent to wait until six blocks have been generated (including the first one containing the relevant transaction) before treating the transaction as certain.
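The original Bitcoin paper actually quantifies this double-spend risk: the probability that an attacker controlling a fraction q of the network’s hash power ever catches up from z blocks behind. A Python translation of the paper’s calculation:

```python
import math

def attacker_success_probability(q: float, z: int) -> float:
    """Probability that an attacker with fraction q of total hash power
    eventually overtakes the honest chain from z blocks behind
    (the calculation from the original Bitcoin paper)."""
    p = 1.0 - q                 # honest nodes' share of hash power
    lam = z * (q / p)           # expected attacker progress while z blocks pass
    s = 1.0
    for k in range(z + 1):
        poisson = math.exp(-lam) * lam**k / math.factorial(k)
        s -= poisson * (1 - (q / p) ** (z - k))
    return s

# With 10% of the hash power, six confirmations leave the attacker
# with only a tiny chance of success (around 0.02%).
print(attacker_success_probability(0.1, 6))
```

This is the reasoning behind the six-block convention: each extra confirmation drives the attacker’s odds down sharply, so "around an hour" buys near-certainty against anyone without a large share of the network.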

While this is fine for some types of transactions, such as a book order, it is not so fine for other types of transactions where goods are delivered immediately such as an app download or when at a Bitcoin ATM dispensing hard currency. Any solutions to this problem will sit outside of the standard Bitcoin infrastructure, e.g. merchant insurance, but in a world where transaction times are getting shorter and shorter, this may limit Bitcoin’s long term use in the general economy.

3. Bitcoins are not held in Bitcoin wallets

A Bitcoin wallet is technically just a public-private key pair (or multiple such pairs). This provides the means of generating a public address (from the public key, for others to send bitcoins to your wallet) and for generating new transactions (using the private key, when sending bitcoins to other people’s wallets). The bitcoins themselves are not held anywhere, but proof of ownership of them can be established from the records in the blockchain.

Given that everyone can see exactly how many bitcoins belong to every Bitcoin wallet, it’s considered good practice to use a different public address (and hence public-private key pair) for each transaction. A single transaction can take bitcoins from multiple wallets and send them out to multiple wallets, making this all a bit easier to manage.

4. Bitcoin transactions can be complex contracts

Since bitcoins themselves are not actually moved around and bitcoin balances are not kept within the Bitcoin infrastructure, each transaction sending some bitcoins refers to previous transactions where those bitcoins were “received”. At a minimum a single sending transaction needs to refer back to a single receiving transaction. As part of validating that this pair of transactions should be allowed, miners actually run a small script embedded within the sending transaction followed by another one embedded in the receiving transaction. The scripting language is pretty extensive.
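To give a flavour of how that validation works (a drastically simplified sketch: the opcode handling below is invented for illustration, and real Script has opcodes for signature checks, hashing, flow control and more), the script from the sending transaction runs first, then the script from the receiving transaction runs on the same stack, and the pair is valid if the final result is truthy:

```python
def run_script(ops, stack=None):
    """Evaluate a tiny stack-based script -- a toy sketch of how
    Bitcoin's Script language is executed, not the real thing."""
    stack = [] if stack is None else stack
    for op in ops:
        if op == "OP_ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "OP_EQUAL":
            b, a = stack.pop(), stack.pop()
            stack.append(a == b)
        else:                  # anything else is pushed as data
            stack.append(op)
    return stack

unlocking = [3, 4]                    # embedded in the sending transaction
locking = ["OP_ADD", 7, "OP_EQUAL"]   # embedded in the receiving transaction
stack = run_script(unlocking)
result = run_script(locking, stack)
print(result)  # [True] -- the spend is allowed
```

In real Bitcoin the locking script typically demands a valid digital signature rather than an arithmetic puzzle, but the execution model, two scripts run back-to-back over a shared stack, is the same.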

Also, because Bitcoin transactions are just a series of bytes and can be sent directly to others, e.g. over email, instead of broadcasting them to the miners, complex contracts can be created. You can use Bitcoin to pay someone, but only if a third party also approves the transaction. Or you can use Bitcoin to pay a deposit / bond where the money comes back to you after an agreed period but the other party can’t spend it in the meantime. Or you can use Bitcoin to contribute towards a transaction that will go ahead only if enough other people contribute towards it for it to reach a specified sum. Some are using Bitcoin to run a provably-fair lottery. Some are even looking to use Bitcoin to allow for electronic voting.

Concluding remarks

Bitcoin is still relatively new for a payment technology, and I would not pretend that using it is risk-free. Regulation of Bitcoin is still nascent and inconsistent between geographies, it operates in a legally grey area with perhaps half of all Bitcoin transactions being made with gambling services, and Bitcoin-based marketplaces seem to be regularly collapsing.

Even if Bitcoin itself is replaced by one of the other newer “cryptocurrencies” such as Litecoin, Ripple or Dogecoin, I suspect that its invention has opened the door for amazing new ways to transact online.

Pi, Python and I (part 1)

Raspberry Pi

I’ve been on Facebook for almost six years now, and active for almost five. This is a long time in Internet time.

Facebook has, captured within it, the majority of my interactions with my friends. Many of them have stopped blogging and just share via Facebook, now. (Although, at least two have started blogging actively in the last year or so, and perhaps all is not lost.) At the start, I wasn’t completely convinced it would still be around – these things tended to grow and then fade within just a few years. So, I wasn’t too concerned about all the *stuff* that Facebook would accumulate and control. I don’t expect them to do anything nefarious with it, but I don’t expect them to look after it, either.

However, I’ve had a slowly building sense that I should do something about it. What if Facebook glitched, and accidentally deleted everything? There’s nothing essential in there, but there are plenty of memories I’d like to preserve. I really wanted my own backup of my interactions with my friends, in the same way I have my own copies of emails that I’ve exchanged with people over the years. (Although, fewer people seem to email these days, and again they just share via Facebook.)

The trigger to finally do something about this was when every geek I knew seemed to have got themselves a Raspberry Pi. I tried to think of an excuse to get one myself, and didn’t have to think too hard. I could finally sort out this Facebook backup issue.

Part of the terms of my web host are that I can’t run any “robots” – it’s purely meant to handle incoming web requests. Also, none of the computers at home are on all the time, as we only have tablets, laptops and phones. I didn’t have a server that I could run backup software on… but a Raspberry Pi could be that server.

For those who came in late, the Raspberry Pi is a tiny, single-board computer that came out last year, is designed and built in the UK, and (above all) is really, really cheap. I ordered mine from the local distributor, Element14, whose prices start at just under $30 for the Model A. To make it work, you need to at least provide a micro-USB power supply ($5 if you just want to plug it into your car, but more like $20 if you want to plug it into the wall) and an SD card ($5-$10) to provide the disk, so it’s close to $60, unless you already have those to hand. You can get the Model B, which is about $12 more and gets you both more memory and an Ethernet port, which is what I did. You’ll need to find an Ethernet cable as well, in that case ($4).

When a computer comes that cheap, you can afford to get one for projects that would otherwise be too expensive to justify. You can give them to kids to tinker with and there’s no huge financial loss if they brick them. Also, while cheap, they can do decent graphics through an HDMI port, and have been compared to a Microsoft Xbox. No wonder they managed to sell a million units in their first year. Really, I’m a bit slow on the uptake with the Raspberry Pi, but I got there in the end.

While you can run other operating systems on it, if you get a pre-configured SD card it comes with a form of Linux called Raspbian and has a programming language called Python set up ready to go. Hence, I figured that as well as getting my Facebook backup going, I could use this as an excuse to teach myself Python. I’d looked at it briefly a few years back, but this would be the first time I’d used it in anger. I’ll document here the steps I went through to implement my project, in case anyone else wants to do something similar or just wants to learn from this (if only to learn how simple it is).

The first thing to do is to head over to developers.facebook.com and create a new “App” that will have the permissions that I’ll use to read my Facebook feed. Once I logged in, I chose “Apps” from the toolbar at the top and clicked on “Create New App”. I gave my app a cool name (like “Awesome Backup Thing”) and clicked on “Continue”, passed the security check to keep out robots, and the app was created. The App ID and App secret are important and should be recorded somewhere for later.

Now I just needed to give it the right permissions. Under the Settings menu, I clicked on “Permissions”, then added in the ones needed into the relevant fields. For what I want, I needed: user_about_me, user_status, friends_about_me, friends_status, and read_stream. “Save Changes” and this step is done. Actually, I’m not sure if this is technically needed, given the next step.

Now I needed to get a token that can be used by the software on the server to query Facebook from time to time. The easiest way is to go to the Graph API Explorer, accessible under the “Tools” menu in the toolbar.

I changed the Application specified in the top right corner to Awesome Backup Thing (insert your name here), then clicked on “Get access token”. Now I need to specify the same permissions as before, across the three tabs of User Data Permissions (user_about_me, user_status), Friends Data Permissions (friends_about_me, friends_status) and Extended Permissions (read_stream). Lastly, I clicked on “Get Access Token”, clicked “OK” to the Facebook confirmation page that appeared, and returned to the Graph API explorer where there was a new token waiting for me in the “Access token” textbox. It’ll be needed later, but it’s valid for about two hours. If you need to generate another one, just click “Get access token” again.

Now it’s time to return to the Pi. Once I logged in, I needed to set up some additional Python packages like this:

$ sudo pip install facepy
$ sudo pip install python-dateutil
$ sudo pip install python-crontab

And then I was ready to write some code. The first thing was to write the code that will keep my access token valid. The one that Facebook provides via the Graph API Explorer expires too quickly and can’t be renewed, so it needs to be turned into a renewable access token with a longer life. This new token then needs to be recorded somewhere so that we can use it for the backing-up. Luckily, this is pretty easy to do with those Python packages. The code looks like this (you’ll need to put in the App ID, App Secret, and Access Token that Facebook gave you):

# Write a long-lived Facebook token to a file and setup cron job to maintain it
import facepy
from crontab import CronTab
import datetime

APP_ID = '1234567890' # Replace with yours
APP_SECRET = 'abcdef123456' # Replace with yours

try:
  with open("fbtoken.txt", "r") as f:
    old_token = f.read()
except IOError:
  old_token = ''
if '' == old_token:
  # Need to get old_token from https://developers.facebook.com/tools/explorer/
  old_token = 'FooBarBaz' # Replace with yours

new_token, expires_on = facepy.utils.get_extended_access_token(old_token, APP_ID, APP_SECRET)

with open("fbtoken.txt", "w") as f:
  f.write(new_token)

cron = CronTab() # get crontab for the current user
for oldjob in cron.find_comment("fbtokenrenew"):
  cron.remove(oldjob)
job = cron.new(command="python ~/setupfbtoken.py", comment="fbtokenrenew")
renew_date = expires_on - datetime.timedelta(1)
job.minute.on(0)
job.hour.on(1) # 1:00am
job.dom.on(renew_date.day)
job.month.on(renew_date.month) # on the day before it's meant to expire
cron.write()

Apologies for the pretty rudimentary Python coding, but it was my first Python program. The only other things to explain are that the program sits in the home directory as the file “setupfbtoken.py” and, when it runs, it writes the long-lived token to “fbtoken.txt”, then sets up a cron job to refresh the token before it expires by running itself again.

I’ll finish off the rest of the code in the next post.

Technology, Finance and Education

Yale Theatre

I have been trying out iTunes U by doing the Open Yale subject ECON252 Financial Markets. What attracted me to the subject was that the lecturer was Robert Shiller, one of the people responsible for the main residential property index in the US and an innovator in that area. Also, it was free. :)

I was interested in seeing what the iTunes U learning experience was like, and I was encouraged by what I found. While it was free, given the amount of enjoyment I got out of doing the subject, I think I’d happily have paid around the cost of a paperback book for it. I could see video recordings of all the lectures, or alternatively, read transcripts of them, plus access reading lists and assessment tasks.

The experience wasn’t exactly what you’d get if you sat the subject as a real student at Yale. Aside from the general campus experience, also missing were the tutorial sessions, professional grading of the assessments (available as self-assessment in iTunes U), an ability to borrow set texts from the library, and an official statement of grading and completion at the end. Also, the material dated from April 2011, so wasn’t as current as if I’d been doing the real subject today.

Of these, the only thing I really missed was access to the texts. I suppose I could’ve bought my own copies, but given I was trying this because it was free, I wasn’t really inclined to. Also, for this subject, the main text (priced at over $180) was actually a complementary learning experience with seemingly little overlap with the lectures.

While I tried both the video and transcript forms of the lectures, and while the video recordings were professionally done, in the end I greatly preferred the transcripts. The transcripts didn’t capture blackboard writing/diagrams well, and I sometimes went back and watched the videos to see them, but the lecturer had checked over the transcripts and they had additions and corrections in them that went beyond what was in the video. Also, I could get through a 1hr lecture in a lot less than an hour if I was reading the transcript.

Putting aside the form of delivery, the content of the subject turned out to be much more interesting than I expected at the beginning. Shiller provided a social context for developments in finance through history, explained the relationships between the major American financial organisations, and provided persuasive arguments for the civilising force of financial innovations (e.g. for resource allocation, risk management and incentive creation), positioning finance as an engineering discipline rather than (say) a tool for clever individuals to make buckets of cash under sometimes somewhat dubious circumstances. I’ll never think of tax or financial markets or insurance in quite the same way again.

I will quote a chunk from one of his lectures (Lecture 22) that illustrates his approach, but also talks about how technology changes resulted in the creation of government pension schemes. I like the idea that technology shifts have resulted in the creation of many things that we wouldn’t ordinarily associate with “technology”. By copying his words in here, I’ll be able to find them more easily in the future (since this is a theme I’d like to pick up again).

In any case, while I didn’t find the iTunes U technology to be a good alternative for university education, I think it’s a good alternative to reading a typical e-book on the subject. Of course, both e-books and online education will continue to evolve, and maybe there won’t be a clear distinction in the future. But for now, it’s an enjoyable way to access some non-fiction material in areas of interest.

The German government set up a plan, whereby people would contribute over their working lives to a social security system, and the system would then years later, 30, 40 years later, keep a tab, about how much they’ve contributed, and then pay them a pension for the rest of their lives. So, the Times wondered aloud, are they going to mess this up? They’ve got to keep records for 40 years. They were talking about the government keeping records, and they thought, nobody can really manage to do this, and that it will collapse in ruin. But it didn’t. The Germans managed to do this in the 1880s for the first time, and actually it was an idea that was copied all over the world.

So, why is it that Germany was able to do something like this in the 1880s, when it was not doable anywhere else? It had never been done until that time. I think this has to do ultimately with technology. Technology, particularly information technology, was advancing rapidly in the 19th century. Not as rapidly as in the 20th, but rapidly advancing.

So, what happened in Europe that made it possible to institute these radical new ideas? I just give a list of some things.

Paper. This is information technology, but you don’t think – in the 18th century, paper, ordinary paper was very expensive, because it was made from cloth in those days. They didn’t know how to make paper from wood, and it had to be hand-made. As a result, if you bought a newspaper in, say, 1790, it would be just one page, and it would be printed on the smallest print, because it was just so expensive. It would cost you like $20 in today’s prices to buy one newspaper. Then, they invented the paper machine that made it mechanically, and they made it out of wood pulp, and suddenly the cost of paper went down. …

There was a fundamental economic difference, and so, paper was one of the things.

And you never got a receipt for anything, when you bought something. You go to the store and buy something, you think you get a receipt? Absolutely not, because it’s too – well, they wouldn’t know why, but that’s the ultimate reason – too expensive. And so, they invented paper.

Two, carbon paper. Do you people even know what this is? Anyone here heard of carbon paper? Maybe, I don’t know. It used to be, that, when you wanted to make a copy of something, you didn’t have any copying machines. You would buy this special paper, which was – do you know what – do I have to explain this to you? You know what carbon paper is? You put it between two sheets of paper, and you write on the upper one, and it comes through on the lower one.

This was never invented until the 19th century. Nobody had carbon paper. You couldn’t make copies of anything. There was no way to make a copy. They hadn’t invented photography, yet. They had no way to make a copy. You had to just hand-copy everything. The first copying machine – maybe I mentioned that – didn’t come until the 20th century, and they were photographic.

And the typewriter. That was invented in the 1870s. Now, it may seem like a small thing, but it was a very important thing, because you could make accurate documents, and they were not subject to misinterpretation because of sloppy handwriting. … And you could also make many copies. You could make six copies at once with carbon paper. And they’re all exactly the same. You can file each one in a different filing cabinet.

Four, standardized forms. These were forms that had fill-in-the-blank with a typewriter.

They had filing cabinets.

And finally, bureaucracy developed. They had management school. Particularly in Germany, it was famous for its management schools and its business schools.

Oh, I should add, also, postal service. If you wanted to mail a letter in 1790, you’d have trouble, and it would cost you a lot. Most people in 1790 got maybe one letter a year, or two letters a year. That was it. But in the 19th century, they started setting up post offices all over the world, and the Germans were particularly good at this kind of bureaucratic thing. So, there were post offices in every town, and the social security system operated through the post offices. Because once you have post offices in every town, you would go to make your payments on social security at the post office, and they would give you stamps, and you’d paste them on a card, and that’s how you could show that you had paid.

– Robert Shiller, ECON252 Financial Markets, 2011

Personal and environmental audio – hear hear!

Just before Christmas, a friend brought me a new pair of headphones back from the US. I still haven’t quite decided yet whether they are the future of personal audio or just a step in the right direction, but I am finding them a bit of a revelation.

The headphones are the AfterShokz Sportz M2, which are relatively cheap, bone conduction headphones. Bone conduction means that instead of the headphones sending sound into your ear canal (like in-ear or full size headphones), they sit against the bones of your skull and send vibrations along them to your inner ear. The main advantage is that while listening to audio from these headphones, you can still hear all the environmental sound around you. The main disadvantage is that, of course, you can still hear all the environmental sound around you.

Clearly, this is not desirable for an audiophile. Obviously, you don’t get these sorts of headphones for their audio quality, and while I find them perfectly decent for listening to music or podcasts, the bass is not as good as typical headphones either. That said, if I want to hear the sound better, I can pop a finger in my ear to block out external noise. Sometimes I use the headphones for telephone calls on my mobile when traveling on the tram, and it probably looks a little odd to the other travelers that I am wearing headphones and putting my finger to my ear, but it is very effective.

For the first week or so that I was wearing them, I had strange sensations in my head, very much like when I first get new frames for my glasses. They push on my head in a way that I’m not used to, and it takes a little bit to get used to. The fact that I can hear music playing in my “ears” and yet hear everything around me was also initially a bit surreal – a bit like I was in a movie with a soundtrack – but the strangeness here diminished very quickly and now it is just a delight.

While they are marketed to cyclists or people who need to be able to hear environmental sound for safety reasons (like, well, pedestrians crossing roads, so almost everyone I guess), it’s not the safety angle that really enthuses me. I am delighted by being able to fully participate in the world around me while concurrently having access to digital audio. When the announcer at a train station explains that a train is going to be cancelled, I still hear it. When a barista calls out that my coffee is ready, I still hear it. When my wife asks me a question while I’m doing something on the computer, I still hear it.

A couple of years ago, I yearned for this sort of experience:

For example, if I want to watch a TV program on my laptop, while my wife watches some video on the iPod on the couch next to me, we are going to interfere with each other, making it difficult for either of us to listen to our shows.

Being able to engage with people in my physical environment and yet access audio content without interfering with others is very liberating. I had hoped that highly directional speakers were the solution, but bone conduction headphones are a possible alternative.

Initially I had tried headphones that sat in only one ear, leaving the other one free. They were also very light and comfortable. One issue was that these were Bluetooth headphones and had trouble staying paired with several of the devices I had. However, and more importantly, I looked a bit like a real estate agent when I wore them, and was extremely self-conscious. Even trying to go overboard and wear them constantly for a month wasn’t enough to rid me of the sense of embarrassment I felt. Additionally, others would make a similar association and always seemed to assume that I must be on a phone call. If I did interact with others, I always had to explain first that I wasn’t on a call. What should’ve been a highly convenient solution turned out to be quite inconvenient.

The AfterShokz have none of these issues. I did try coupling them with a Bluetooth adaptor, but it had similar Bluetooth pairing issues. I see that AfterShokz have since released headphones with Bluetooth built in, but I haven’t tested these.

One potential new issue with the AfterShokz that I should discuss relates to the ability for others to hear what I’m listening to – this had been mentioned by some other online reviewers. While at higher volumes others can hear sounds coming from the headphones (although this is not unique to AfterShokz’ headphones), at lower volumes it is actually very private. In any case, I’ve got a niggling sense of a higher risk of damage to my inner ear from listening to music at higher volumes: bone conduction headphones presumably need to send sound-waves at higher energy levels than normal headphones, because the signal probably attenuates more through bone than through air, and they also need to be operated at higher levels to be heard over background noise that normal headphones would otherwise block out. So, I try to set them at as low a volume as I can get away with, and block my ear with my finger if I need to hear better. In quiet environments, it’s not an issue.

Perhaps I am worrying about something that isn’t a problem, since I note that some medical professionals who specialise in hearing loss are advocating them. For that matter, the local group that specialises in vision loss is also promoting them. Although, I guess the long term effects of this technology are still unclear.

In any case, I find using this technology to be quite wonderful. I feel that I’ve finally found stereo headphones that aren’t anti-social. I hope if you have the chance to try it, you will also agree.

Technology Forecasting

Several years ago, I bought a book by Richard Feynman about science and the world. The following passage has stuck with me:

Now, another example of  a test of truth, so to speak, that works in the sciences that would probably work in other fields to some extent is that if something is true, really so, if you continue observations and improve the effectiveness of the observations, the effects stand out more obviously. Not less obviously. That is, if there is something really there, and you can’t see good because the glass is foggy, and you polish the glass and look clearer, then it’s more obvious that it’s there, not less.

I love this idea. It’s not just that you test a theory over time and if it hasn’t been disproven then it’s probably true, but that over time a true theory becomes more obviously true.

In forecasting technology trends, this is not necessarily a helpful thing. The more obviously true something is, the less likely it is that other people credit you with having an insight, even if it dates from when it was unclear.

Still, the converse of the idea is definitely helpful. If a theory requires constant tweaking in the face of new evidence, just to maintain the possibility of being true, it most likely isn’t.

I have no trouble coming up with crazy ideas about how technology might develop, but faced with a number of equally crazy ideas, it is difficult to know which are the ones with some merit and which are false. Happily, the above approach gives me a process to help sort them: giving them time. The ideas that are reinforced by various later developments are worth hanging on to, while those that fail to gain any supporting evidence over time may need to be jettisoned.

Ideas that I initially supported but have been forced by time to jettison include: Java ME on the mobile, RSS news readers, ubiquitous speech recognition, mobile video calling, and the Internet fridge.

One idea that I’m proud to have hung onto was that of mobile browsing. I saw the potential back in the late 1990s when I was involved in the WAP standards, enabling mobile browsing on devices such as the Nokia 7110, even if it was wracked with problems. Several colleagues, friends and family members dismissed the idea. However, over time, mobile browsing received more evidence that it was credible, with the successes in Japan, the appearance of the Opera browser, and then Safari on the iPhone. Now, I regard Safari on the iPad to be the best web browsing experience of all my devices – PCs included.

While Feynman was a great physicist, and his advice has helped me in forecasting technology trends, there’s no guaranteed way to get it right. The last word should belong to another physicist, Niels Bohr, who is reputed to have said: prediction is very difficult, especially about the future.

Contactless Sport

The other week, I got my first contactless credit card – a Visa payWave. You’ve probably seen the ads for payWave and PayPass cards – the banks have been issuing them for a while now – and I was keen for my old card to expire so that I could get a new card with this feature.

I haven’t yet had the chance to use its contactless capabilities, but that’s not to say I haven’t noticed anything different. The day after I added the new card to my wallet, my Myki travel card stopped working.

The problem is that both my Myki and my new payWave credit card use a wireless standard called ISO/IEC 14443 that operates at 13.56MHz. Myki uses a technology called MIFARE that complies with this standard, while payWave uses contactless EMV technology. However, while they are sisters in the technology domain, neither card pays any attention to the other when in my wallet, and they interfere when I put the wallet near the reader in a station turnstile.

One solution to this is to replace the wallet with a special RF-shielded one, like this, and place the different cards in the right spots so that interference doesn’t occur. However, while I experimented with some strategically-placed aluminium foil in my wallet, in the end all I needed to do was ensure that the EMV and MIFARE cards were distantly separated by a chunk of other plastic cards and a coin pouch (I know my wallet is chunky, but I can still fit it in my pocket!).

While this may be a first world problem, it’s still something that’s going to occur more and more as new contactless cards are added to the wallet. Today, I have just a travel card and a payment card. But in the future, I am likely to have more payment cards, plus a contactless library card, driver’s licence, Medicare card, health insurance card, auto club membership card, frequent flyer card, etc. It won’t be possible to distantly separate all these cards from each other, and they won’t play as nicely with each other as I would like.

One of the great advantages of contactless is that it’s so convenient. For example, I don’t need to take my Myki out of my wallet to get through the station turnstiles. However, in the future scenario above, that sort of convenience might apply to one card, but not to the rest.

As a software guy at heart, I see the logical solution being to turn all of these cards into pieces of software running on a single piece of hardware – that way, the multiple pieces of hardware won’t conflict at the radio level, essentially changing the game. Whether that hardware is a phone, a dongle or just another plastic card, this has got to be the future for contactless.

Old books new looks

I read my first proper e-book 17 years ago. It was 1994, and I enthusiastically devoured Bruce Sterling‘s (free!) digital release of The Hacker Crackdown, scrolling through it line-by-line on a small CRT display. At the modem speeds of the day, it took around 7 mins to download it, and it must’ve taken me a week to read it.

While it was common to write long form content – books, essays, dissertations, and the like – on computers, it is interesting how uncommon it was (for anyone but the author) to read such content on computers. Essentially, computers were a write-only medium when it came to books.

At the time, I knew it was a bit strange to read a whole book this way, but it was a great experience. The book was about the computer culture of the time, and so it was appropriate to be reading it hunched over a computer. Moreover, the process of discovering that a book exists, getting it, and then beginning to read it – all within half an hour – was very satisfying.

Given how long ago I started to read e-books, I’m a little late to the party regarding the modern generation of them. This has now been rectified.

I had to buy our book-club book as an e-book through the Kindle app on our iPad. Not only was it available sooner than buying a physical copy or borrowing it from the library would have been (it took much less than 7 minutes to download), but it was also:

  • cheaper (less than half the price),
  • easier to share between Kate and myself (no risk of losing multiple book-marks),
  • easier to read in bed (self-illuminating, so less distracting to the other person), and
  • took up none of the scarce space on our bookshelves (given that the book turned out to be not-very-good).

And with the latest book-club book, Bill Bryson’s At Home, I found myself wishing it was an e-book rather than hard-cover. While it would’ve been a lot lighter (the iPad 2 is 600g versus the book at 900g), it was more that this book mentions all sorts of interesting things in passing that made me want to look into them in more detail before continuing. It would’ve been much more convenient to jump straight to a web browser as I read about them, rather than having to put the book down and find an Internet-connected device in order to indulge my curiosity.

All the various advantages that e-readers and tablets have over their physical book counterparts remind me of the advantages that digital music players had over CDs, cassettes and records. However, as I’ve written about before, the digital music player succeeded when it was able to offer the proposition of carrying all one’s music, but e-readers cannot yet offer this.

Apple’s iPod supported three sources for acquiring music: (i) importing music from my CDs, (ii) file sharing networks (essentially, everyone else’s CDs), and (iii) purchasing new music from an online shop (the iTunes Music Store). For e-books on my iPad, it’s not easy to access anything like the first two sources, while there are several online shops able to provide new reading material. As a result, the iPad (or any other e-reader) doesn’t really offer any way for me to take my book library with me wherever I go.

Although that would be pretty amazing, it’s honestly not that compelling. I could go back and re-read any book whenever I want, but that’s not actually something I feel I’m missing. I could search across all my non-fiction books whenever I needed to look something up, but really I just use the Internet for that sort of thing.

The conclusion may be that books aren’t enough like music. The experience of consuming music – whether old media or new digital – is sufficiently similar that the way for technology to offer something more is to, literally, provide more of it: thousands of pieces of music. For books, however, the new digital experience may end up being a very different thing to the books of old.

Already, there are books with illustrations that obey gravity and can be interacted with, and books that are like a hybrid with a documentary, letting you dig as deep as you like into the detail – and we’ve only just started. If these are the sorts of books I’m going to have on my iPad in future, why would I want to put my old-school book collection on there instead? I can also imagine publishers seeing the chance to get people to re-purchase favourite books, redone with all the extras for tablets, in the same way that people re-purchased their VHS collections when they got a DVD player.

So, while my book-club e-book experience wasn’t materially different to the one I had 17 years ago, we are going through a re-imagining of the digital book itself. If it took journalists 40 years from the first email in 1971 to officially decide to drop the hyphen from “e-mail”, then the fact that we still commonly have a hyphen in “e-book” suggests it’s not a mature concept yet, despite the progress of a mere 17 years.

Is mobile video-calling a device thing?

Ever since I’ve been involved in the telecoms industry, it seems that people have been proposing video calling as the next big thing that will revolutionize person-to-person calling. Of course, the industry has been proposing it for even longer than that, as this video from 1969 shows.


One thing not anticipated by that video is mobile communication, and video calling was meant to be one of the leading services of 3G mobiles. When 3G arrived in Australia in 2003, the mobile carrier Three sold its 3G phones as pairs so that you’d immediately have someone you could make a mobile video call to.

Needless to say, the introduction of 3G didn’t herald a new golden age of person-to-person video calling in Australia. So, despite all the interest in making such video calling available, why hasn’t it taken off? I’ve heard a number of theories over the years, such as:

  • The quality (video resolution, frame rate, audio rate, etc.) isn’t high enough. Once it’s sufficiently good to easily read subtle expressions or sign language gestures, people will take to it.
  • The size of the picture isn’t big enough. When it is large enough to be close to “actual size”, it will feel like communicating with a person and it will succeed.
  • The camera angle is wrong, e.g. mobile phones tend to shoot the video up the nose, and PC webcams tend to look down on the head. If cameras could be positioned close enough to eye-level, people would feel like they are talking directly to each other, and video calling would take off.
  • People don’t actually want to be visible in a call, for various etiquette-related reasons such as: it prevents them multi-tasking which would otherwise appear rude, or it obliges them to spend time looking nice beforehand in order to not appear rude.

But despite the low level of use of video calling on mobiles, there is one area where it is apparently booming: Skype. According to stats from Skype back in 2010, at least a third of Skype calls made use of video, rising to half of all calls during peak times.

One explanation could be that Skype is now so well known for its ability to get video calling working between computers that when people want to do a video call, they choose Skype. Hence, it’s not so much that a third of the time, Skype users find an opportunity to video call, but that a third of Skype users only use Skype for video. Still, it’s an impressive stat, and also suggests that super-high quality video may not be a requirement.

Certainly, I’ve used Skype for video calling many times. I’ve noticed the expected problems with quality and camera angle, but it hasn’t put me off using it. I find that it’s great for sharing the changes in children across my family who are spread around the world, and otherwise difficult to see regularly. But a tiny fraction of my person-to-person calls are Skype video calls.

However, I’ve ordered an Apple iPad 2 (still waiting for delivery) and one of the main reasons for buying it was because of the front-facing camera and the support for video calling. I am hoping, despite all of the historical evidence to the contrary, that this time, I am going to have a device that I want to make video calls from.

The iPad 2 seems to be a device that will have acceptable quality (640×480 at 30fps), and it is large enough to be close to actual size, but not so large that the camera (mounted at the edge of the screen) is too far away from eye line. So, they may have found the sweet spot for video calling devices.

If you know me, be prepared to take some video calls. I hope that doesn’t seem rude.

Metric of the Moment

Being on the technology-side of the telco industry, it’s interesting to see how all the complexity of technological advances is packaged up and sold to the end user. An approach that I’ve seen used often is reducing everything to a single number – a metric that promises to explain the extent of technological prowess hidden “under the hood” of a device.

I can understand why this is appealing, as it tackles two problems with the steady march of technology. Firstly, all the underlying complexity should not need to be understood by a customer in order for them to make a buying decision – there should be a simple way to compare different devices across a range. And secondly, the retail staff should not need to spend hours learning about the workings of new technology every time a new device is brought into the range.

However, an issue with reducing everything to a single number is that it tends to encourage the industry to work to produce a better score (in order to help gain more sales), even when increasing the number doesn’t relate to any perceptible improvement in the utility of the device. Improvements do tend to track with better scores for a time, but eventually they pass a threshold beyond which better scores don’t result in any great improvement. Reality catches up with such a score after a few months, when the industry as a whole abandons it to focus on another metric. The overall effect is that the industry obsesses over the metric of the moment, and these metrics change from time to time, often long after they have stopped being useful.

Here are some examples of the metrics-of-the-moment that I’ve seen appear in the mobile phone industry:

  • Talk-time / standby-time. Battery types like NiCd and NiMH were initially the norm, and there was great competition to demonstrate the best talk-time or standby-time, which eventually led to the uptake of Li-Ion batteries. It became common to need to charge your phone only once per week, which seemed to be enough for most people.
  • Weight. Increasing talk-time or standby-time could be accomplished by putting larger batteries into devices, but at a cost of weight. A new trend emerged to produce very light handsets (and to even provide weight measurements that didn’t include the battery). The Ericsson T28s came out in 1999 weighing less than 85g, but with a ridiculously small screen and keyboard (an external keyboard was available for purchase separately). Ericsson later came out with the T66 with a better design and which weighed less than 60g, but then the market moved on.
  • Thinness. The Motorola RAZR, announced at the end of 2004, kicked off a trend for thin clamshell phones. It was less than 14mm thick – about 1mm thinner than the T28s. Other manufacturers came out with models, shaving off fractions of millimetres, but it all became a bit silly. Does it really matter if one phone is 0.3mm thicker than another?
  • Camera megapixels. While initially mobile phone cameras had rather feeble resolutions, they have since ramped up impressively. For example, the new Nokia N8 has a 12 megapixel camera on board. Though, it is hard to believe that the quality of the lens would justify capturing all of those pixels.
  • Number of apps. Apple started quoting the number of apps in the app store of its iPhone soon after it launched in 2008, and it became common to compare mobile phone platforms by the number of apps they had. According to 148Apps, there are currently over 285,000 apps available to Apple devices. One might think that we’ve got enough apps available now, and it might be time to look at a different measure.

In considering what the industry might look to for its next metric, I came up with the following three candidates:

  • Processor speed. This has been a favourite in the PC world for some time, and as mobiles are becoming little PCs, it could be a natural one to focus on. Given that in both the mobile and PC worlds, clock speed is becoming less relevant as more cores appear on CPUs and graphics processing is handled elsewhere, perhaps we will see a measure like DMIPS being communicated to end customers.
  • Resolution. The iPhone 4’s 3.5″ Retina display, with 960×640 pixels and a pixel density of 326 pixels per inch, was a main selling point of the device. Recently Ortustech announced a 4.8″ display with 1920×1080 pixels, giving a density of 458 pixels per inch, so perhaps this will be another race.
  • Screen size. The main problem with resolution as a metric is that we may have already passed the point where the human eye can detect any improvement in pixel densities, so screens would have to get larger to provide any benefit from improved resolutions. On the other hand, human hands and pockets aren’t getting any larger, so hardware innovations will be required to enable a significant increase in screen size, e.g. bendable screens.
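As an aside, the pixel-density figures quoted in these bullets can be sanity-checked with a little arithmetic: density is simply the screen diagonal measured in pixels divided by the diagonal measured in inches. A minimal sketch (the `ppi` function name is mine):

```python
import math

def ppi(width_px: int, height_px: int, diagonal_in: float) -> float:
    """Pixel density: diagonal length in pixels divided by diagonal in inches."""
    return math.hypot(width_px, height_px) / diagonal_in

print(round(ppi(960, 640, 3.5)))    # iPhone 4: ~330 (Apple quotes 326)
print(round(ppi(1920, 1080, 4.8)))  # 4.8" full-HD panel: ~459 (quoted as 458)
```

The small discrepancies against the quoted figures most likely come from the advertised diagonal sizes being rounded.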

But, really, who knows? It may be something that relates to a widespread benefit, or it may be a niche, marketing-related property.

The fact that these metrics also drive the industry to innovate and achieve better scores can be a force for good. Moore’s Law, which was an observation about transistor counts present in commodity chips, is essentially a trend relating to such a metric, and has in turn resulted in revolutionary advances in computing power over the last four decades. We haven’t hit the threshold for it yet – fundamental limits in physical properties of chips – so it is still valid while the industry works to maintain it.

However, it is really the market and the end customers that select the next metric. I hope they choose a good one.

If your TV was like a Book

Last month, Apple released their latest device – the iPad. It is capable of many wondrous things, and has many fabulous properties, but of all of them, for now I am interested in just three: its screen, its weight, and its ability to show video.

As various other manufacturers rush to market with devices to compete in the segment that Apple has just legitimised, they will most likely produce things that share those same three properties. However, as it is still early days, we don’t yet know for sure what people will end up doing with these devices. That’s why it’s so much fun to speculate!

The iPad has a 24cm (diagonal) screen, weighs about 700g (WiFi version) and can deliver TV-quality video from the Internet to practically wherever in the house you decide to sit yourself down with it. If you hold it up in front of your face (about 60cm away), it’s as big as if you were watching a 120cm (diagonal) TV from 3m away. And, while lighter than a 120cm TV, it’s going to feel heavy pretty quickly.
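The comparison works because apparent size depends only on the ratio of screen size to viewing distance, and 24/60 is the same ratio as 120/300. A quick sketch to check the subtended angles (the function name is mine):

```python
import math

def apparent_diagonal_deg(diagonal_cm: float, distance_cm: float) -> float:
    """Angle (in degrees) that a screen's diagonal subtends at the eye."""
    return math.degrees(2 * math.atan((diagonal_cm / 2) / distance_cm))

ipad_deg = apparent_diagonal_deg(24, 60)   # iPad held ~60cm from the face
tv_deg = apparent_diagonal_deg(120, 300)   # 120cm TV viewed from 3m
print(round(ipad_deg, 1), round(tv_deg, 1))  # both ~22.6 degrees
```

Equal ratios give equal angles, so the two viewing experiences fill the same portion of your field of view.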

However, 700g is not very heavy if you’re willing to rest it on your lap, and there’s another category of content consumption “device” that is comparable in this regard: the book. I am willing to spend hours intently focused on a book while reading it, and a quick weigh of some of my books (using the handy kitchen scales) suggests the iPad is not unusual…

This provides some legitimisation of a “TV-watching” scenario of a family in their lounge room, with everyone watching a show on their own tablet device. (Assuming that you have overcome issues like individuals’ TV audio interfering with others, and ensured adequate bandwidth for everyone.) However, this scenario feels strange, even anti-social.

I am perhaps conditioned by the ritual of people coming together to share a TV watching experience. And before we had TVs, people came together to share a radio listening experience. But before broadcasting technologies, what did we do? In reality, this sort of broadcasting experience is a relatively recent phenomenon. Before that, presumably we all sat around in the lounge room and read books.

I’ve previously written on the idea that people prefer the personal, and that a personal TV experience will be preferred to a shared TV experience. The iPad and similar devices have the potential to enable this, through becoming as light and portable as books.

“Netbooks” also have similar attributes to the iPad. However, they tend to weigh at least 1 kg and have screens that are smaller. So, while future Netbooks might have the right form factor, it certainly isn’t common yet. The iPad is the first mass-market device that properly fills this niche.

The issue of the scenario feeling anti-social is still a little troubling. While our ancestors might have looked up over their books and engaged in a casual chat, momentarily pausing their reading, this is harder to accomplish with a video experience. Not only are the eyes and ears otherwise engaged, making casual interruption more difficult, but the act of pausing and resuming is not as easy either.

I suspect that while we’re now reaching the point where hardware can fill the personal TV niche, the software is not yet ready. We may need eye-tracking software that pauses the video when the viewer looks away, integration of text-based messaging alongside video-watching, and other adaptations to the traditional video player software.

I’m keen to see what competition in this new segment produces.