Turning up for work as an avatar

I don’t think we’re talking enough about avatars. I don’t mean the James Cameron film or the classic anime series. I’m referring to the computer 3D model that can represent you online, instead of a picture or video of the “real you”.

Due to the Covid-19 pandemic, we’ve had something like 5 years of technology uptake in an accelerated timeframe. Remote working has become much more common, with people regularly joining meetings with colleagues or stakeholders via services like Teams, Webex or Zoom rather than meeting up in person.

While pointing a camera at your face and seeing an array of boxes containing other people’s faces has its merits, it can also have a bunch of downsides. It turns out that many of these can be addressed by attending the meeting as an avatar rather than via camera.

Interacting with others via avatars is the normal way of things in computer games. Many people are familiar with avatars from online social settings like Minecraft, Fortnite or Roblox. I’d guess that many kids today have spent more hours interacting online with others as an avatar than on camera.

So, there may be a generational shift coming as such people come up through our universities and workplaces. But there are also fair reasons for moving to avatars for meetings in any case. Here are five reasons why you should consider turning up for work online as an avatar.

1. It’s less stress

Being on camera can be a bit stressful, since your appearance is broadcast to all the other people in the same meeting, and other people can be a bit judgy. Why should your appearance be the concern of people that don’t need to share the same physical space as you?

If you attend a meeting as an avatar, you

  • Don’t have to shave, brush hair, put on makeup
  • Don’t have to worry about a pimple outbreak, or a bad haircut
  • Don’t have to get out of pyjamas, take off a beanie, or cover up a tattoo
  • Know there’s no chance of something embarrassing happening in the background, like someone wandering past or a pet leaping up in front of you

2. You will appear more engaged

Well, if having the camera on is stressful, why not just turn it off? In some workplaces or schools, it is considered bad etiquette to turn off your camera in a group video call. It is not a great experience to be talking to a screen of black boxes and not seeing anything of your audience. Seeing a participant’s avatar watching back instead of a black box is a definite improvement.

However, sometimes it is a good idea to turn off the camera, such as when eating or having to visit the bathroom. The participant is still engaged in the meeting but for good reasons has turned off the camera. There is no need to do that with an avatar.

An avatar is also able to make eye contact throughout the meeting. Unfortunately, not everyone with a camera can do this, as the camera position might be to the side, above or below the screen that the participant is actually looking at. This tends to make the participant look distracted, because that is how such behaviour would be interpreted in a face-to-face meeting. Avatars don’t have this issue.

3. Avatars are more fun

With Teams, Webex or Zoom, you can replace your background with a virtual background for a bit of fun. With an avatar, you can change everything about your look, and make these changes throughout the day.

You don’t even need to be human, or even a living creature. You might want to stick to an avatar that is at least humanoid and has a face, but there’s a huge creative space to work within.

In some online services, avatars are not limited to being displayed in a box (like your camera feed is), but can interact in a 3D space with other avatars. This also means that stereo audio can be used to help position the avatar in a physical space, making it easier to tell who is speaking by just where the sound is coming from, or distinguish a speaker when someone is talking over the top of them.

4. There may be less risk of health issues

Most group video meeting services show a live feed of your own camera during the call. It’s not exactly natural to spend hours of a day looking at yourself in a mirror, especially if the picture of you is (most likely) badly lit, from an odd or unflattering angle, and with a cheap camera lens. Then, if you couple this with seeing amazing pictures of others online, say on social media, it all appears to be a bit unhealthy.

While it’s not an official condition, there is some discussion about what is being called Zoom dysmorphia, where people struggle to cope due to anxiety about how they appear online. These people may go the plastic surgery route in order to deal with this.

Having a camera on all the time may also be generally unhealthy since it ties people to the desk for the duration of the call. Without this, for some meetings, people might instead take a call while walking the dog or taking a stroll around the block.

5. It works well for hybrid meetings

Hybrid is hard. It’s typically not a level playing field to have some meeting participants together in a room and some joining remotely. Having a camera at the front of a room capturing all of the in-person attendees means it is often difficult for the remote participants to see them.

The main alternative is that all the participants in the room have a device in front of them that allows them to join the meeting as a bunch of remote participants who happen to be in the same place. This usually results in a bunch of cameras pointing up people’s noses, as the cameras in a laptop or tablet are not at eye-level.

If the people in the room join as avatars, they can be shown nicely to the other participants, and the individuals’ cameras are often still adequate for animating their avatars to track their faces and bodies.

However

There are some downsides to using avatars. They can make it more difficult for hard-of-hearing participants, since they can’t rely on lip reading to follow a conversation. There will need to be discussions about avatar etiquette so people aren’t made uncomfortable by certain types of avatar turning up to meetings. And the technology is still evolving, so an avatar can look a bit unnerving if it doesn’t show expected human emotions.

But directionally, avatars solve problems with our current group video meetings, and we can expect to see them become more mainstream over the coming years.

What is a qubit?

I am not a deep expert in quantum computing, but I know several who are. In order to chat to them, I have read quite a few introductory quantum computing articles or online courses. However, I find that these are either pitched at a level where it’s all about the hype, or at a level where you need to have a good background in either mathematics or physics to follow along. So, I have been trying to describe a quantum computer in a useful way to people without the technical background.

This is just such an attempt. If you’re still with me, I hope you find this useful. This is for people that don’t know the difference between Hamiltonians, Hermitians or Hilbert spaces, and aren’t planning to learn.

Let’s start with some definitions. A quantum computer is a type of computing machine that uses qubits to perform its calculations. But this raises the question: what is a qubit?

Digital, or classical, computers use bits to perform their calculations. They run software (applications, operating systems, etc.) that run on hardware (CPUs, disk drives, etc.) that are based on bits, which can be either 0 or 1. The hardware implementation of these bits might be based on magnetised dots on plastic tape, pulses of light, electric current on a wire, or many others.

Qubits are “quantum bits”, and also have a variety of hardware implementations such as photon polarisation, electron spin, or again many others. Any quantum mechanical system that can be in two distinct states might be used to implement a qubit. We can exploit the properties of quantum physics to allow a quantum computer to perform calculations on qubits that aren’t possible on bits.

Before we get to that, it is worth noting that quantum computers are known to be able to perform certain calculations in minutes that even a powerful classical computer could not complete in thousands of years. For these specialised calculations, the incredible speed-up in processing time is why quantum computers are so promising. As a result, quantum computers look to revolutionise many fields from materials engineering to cyber security.

Since a qubit can be made from a variety of two-state quantum systems, let’s consider an analogy where we implement a qubit on something we all have experience with: a coin. (I know this is not an exact analogy since a coin is a classical system not a quantum mechanical system, and it can’t actually implement entanglement or complex amplitudes, but it’s just an analogy so I’m not worried.)

If we consider a coin lying on a table, it can be either heads-up or heads-down (also known as tails). For the purposes of this analogy, let’s call these states 1 and 0. You will recognise that this is like a classical bit.

Maybe this coin has different types of metals on each side, so we could send some kind of electromagnetic pulse at it to cause it to flip over, and this way we could change it from 1 to 0, or vice versa. If there is another coin next to it, we might consider another kind of electromagnetic pulse that reflects off only one of those metals in a way that would flip the adjacent coin if the first coin’s 1 side was up. You might ultimately be able to build a digital computer of sorts on these bits. (You can build a working digital computer within the game of Minecraft, so anything’s possible.)

Let’s now expand our analogy and add a coin flipping robot arm. It is calibrated to send a coin up into the air and land it on the table, such that it always lands with the 0 side up. While the coins are in the air, these are our qubits. When they land on the table, they become bits.

Now we can flip coins into the air, and send electromagnetic pulses at them to change their state. However, unlike bits that can be only either 0 or 1, qubits have probabilities. A pulse at a coin can send it spinning quickly so that when it lands on the table it will be either 0 or 1 with a 50-50 chance. Another pulse might reflect off this spinning coin so that it hits the next coin and spins it only if the pulse happens to hit the 1 side of the first coin. Now when the coins land, they have a 50-50 chance of either being both 0 or both 1.

However, you won’t know this from measuring it just the one time. You will want to perform the coin flips and the same electromagnetic pulses a hundred times or more and measure the number of different results you get. If you do the experiment 200 times, and 100 of those times you get two 0s and the other 100 times you get two 1s, you can be pretty confident that this is what is going on. For more complicated arrangements of pulses, and greater numbers of coins, you might want to do the experiment 1000 times to have a clear idea of what is happening.
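
If it helps, this coin analogy is simple enough to play out in code. Below is a minimal Python sketch of my own (an ordinary classical simulation, nothing like real quantum hardware): the first coin is set spinning so it lands 0 or 1 with equal chance, the second coin is flipped only when the first one shows 1, and the experiment is repeated many times so that the tallied counts reveal the underlying probabilities.

import random
from collections import Counter

def run_experiment():
    # Both coins start 0-side-up, thanks to the robot arm's calibration.
    coin1, coin2 = 0, 0
    # First pulse: set coin 1 spinning, so it lands 0 or 1 with a 50-50 chance.
    coin1 = random.choice([0, 1])
    # Second pulse: flip coin 2 only if coin 1 shows 1 (the conditional pulse).
    if coin1 == 1:
        coin2 = 1 - coin2
    # The coins land on the table: the "qubits" have become bits we can read.
    return (coin1, coin2)

# Repeat the experiment many times and tally the outcomes.
counts = Counter(run_experiment() for _ in range(1000))
print(counts)  # roughly 500 of (0, 0) and 500 of (1, 1), and nothing else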

This is how quantum computing works. You perform manipulations on qubits (coins in the air), these set up different possible results with different probabilities, the qubits become bits (coins on the table) that can then be read and manipulated by a classical computer, and you repeat it all many times so you can determine things about those probabilities.

Wrist Computers

At some point in the last century, a strange thing happened: people took something that they’d been happy to carry around in their pockets for centuries and started to wear it on their wrist. Why?

I have just bought myself a smartwatch, and it’s got me thinking about this. A smart watch is typically what a 1980s calculator watch would be if someone invented it today. Because that’s basically what 99% of them are. Not calculator watches, of course, but stick with me for a bit. Just as in the 1980s, the most computing power an ordinary person could carry around in their pocket was a calculator, so people tried to put a tiny version of it on their wrist. These days, the most computing power an ordinary person can carry around in their pocket is a smartphone, so people are trying to put a tiny version of it on their wrists.

That said, you may not be too surprised to hear that the smartwatch I bought was part of the 1% that aren’t like that. It is a Withings Activité Pop, which is an analog watch that happens to also talk to my smartphone using Bluetooth. Withings isn’t the only maker of this sort of smart watch, e.g. you can also get a Martian watch which takes a similar approach to being “smart”. I expect other watch makers will put chips in their watches and it will become pretty normal soon.

I am really loving my Withings smartwatch. It automatically updates the time when daylight savings changes or when I travel into a different timezone. It has a pedometer inside it, and shows me my progress towards my daily step target on a dial on the face. It also has a bunch of other features, and sometimes gets new ones that appear for free, like tracking swimming strokes. But most of all, it looks good, is light on my wrist, and has a battery life of over 8 months. While these are expected features of a normal watch, they are rather novel in a smartwatch.

As a result, smartwatches haven’t really taken off yet in the way that, say, FitBit fitness trackers have. Is the smartwatch market destined for greatness or niche-ness?

Perhaps the history of the pocket watch has some relevant lessons, for which I will be drawing heavily on Wikipedia. The wearable watch was a 16th century innovation, beginning as a clock-on-a-pendant with only an hour hand. Some 17th century improvements brought the glass-covered face and the minute hand, and they became regularly carried in (waist coat) pockets at this time. It took until late in the 18th century for the pocket watch to move beyond a pure luxury item.

Pocket watches continued to be the dominant form of watch, at least for men, until the late 19th century, when the “wristlet” (we know it better as the wrist watch) came along. The British Army began issuing them to servicemen in 1917, when synchronising the creeping barrage tactic between infantry and artillery was important and pocket watches were impractical. Reading the time at a glance was probably the first “killer app”, and by 1930, the ratio of wrist to pocket watches was 50 to 1. Within a couple of decades, the pocket watch had been completely disrupted.

While it was more convenient to read the time on a wrist watch than a pocket watch, it was also awkward to wear a heavy thing on a wrist, and the wrist watch was considered more of a women’s fashion item. In the end, World War I forced the issue, eliminating the fashion consideration, and the convenience factor overcame the weight problem.

Coming back to the present, UK mobile operator O2 published a report called “All About You” in 2012 that noted 46% of respondents had dispensed with a watch in favour of using their smartphone to check the time. It seems the greater utility of a smartphone has led people to forgo their watches, even if it means that time has gone back into the pocket.

So, there’s an argument that if the smartwatch provided similar utility to the smartphone, people would again shift from the pocket to the wrist. My Withings watch doesn’t in any way substitute for my smartphone, and is really a smartphone accessory. However, something like an LG Urbane Second Edition watch runs Android and has an LTE connection for calls and texting, and is more powerful than even a smartphone of a few years ago. Speech recognition can make up for the lack of keyboard entry, and a Bluetooth headset can enable private conversations.

However, economically a smartphone is actually a games platform, and games dominate the revenues from apps on smartphones. Making the smartwatch a viable games platform may be required for it to replace smartphones. Even in the 1980s, there were attempts to create games for the wrist, but they weren’t enormously successful compared to the Game & (pocket) Watch versions. Admittedly, there are games for modern smartwatches. However, they drain the battery and aren’t the same calibre as smartphone games.

If we measure the period of the smartphone since 2002, when Nokia introduced Series60 handsets, it has been with us for 13 years. The pocket watch, from invention to disruption, lasted 400 years, but declined due to the rise of the wrist watch in the last 50 of those years, i.e. the final eighth of its lifespan. If the smartwatch disrupted the smartphone at the same relative speed, it would need less than 2 years, since an eighth of 13 years is about 1.6 years.

All I can say is: watch this space.

Lessons from NYT on innovation

Whatever the circumstances that led someone at The New York Times to leak their report on Innovation, I am thankful. Published (internally) in March, it is the fruit of a six-month-long deep dive into the business of journalism within a company that has been a leader in that industry for over a century, and it provides an intimate and honest study of how an incumbent can be disrupted. It is 97 pages long, and worth reading for anyone who is interested in innovation or the future of media.

The report was leaked in full in May, and I’ve been reading bits of it in my spare time. Just recently I completed it, and felt it was worth summarising some of the lessons that are highlighted by the people at the Times. As it is with such things, my summary is going to be subjective and – by nature – highly selective, so if this piques your interest, I encourage you to read the whole thing.

(My summary ended up being longer than I’d originally intended, so apologies in advance.)

Organisational Division

Because of the principle of editorial independence, the Times has clear boundaries between the journalists in the newsroom and those who operate “the business” part of the newspaper, which has been traditionally about selling advertising. This separation is even known as “church and state” within the organisation, and affects everything from who is allowed to meet with whom (even during brown-bag lunch style meetings) to the language used to communicate concepts. This has worked well in the past, allowing the journalism to be kept at the highest quality, without fear of being compromised by commercial considerations.

However, the part of the organisation that has been developing new software tools and reader applications sits within “the business” (its staff not being journalists), and has hence been disconnected from the newsroom. As a result, new software is not developed to support the changing style of journalism, and where it is, it is done as one-off projects. Other media organisations are utilising developers more strategically, resulting in better tools for the journalists and a better experience for the readers.

Lesson: Technology capability needs to be at the heart of an innovation organisation, rather than kept at arms-length.

Changing Customers

For a very long time, the main customer of the Times has been advertisers. However, print media is facing a future where advertisers will not pay enough to keep the organisation running. Online advertising pays less than print advertising, and mobile advertising even less again. Coupled with declining circulation due to increased digital readership, the advertising business looks pretty sick. But there’s a new type of customer for the digital editions that is growing in importance: the reader.

While advertising revenues had the potential to severely compromise journalism, it’s not so clear that the same threat exists from reader revenues. In theory there is a good alignment: high quality journalism results in more readers. But if consideration of attracting readers is explicitly kept away from the newsroom as part of the “church and state” division, readers may end up being attracted by other media organisations. In fact, this is what is happening at the Times, with declines in most online reader metrics, and none increasing.

In the print world, it was enough to produce a high quality newspaper and it would attract readers. However, in the digital world this strategy is not currently working. Digital readers don’t select a publication and then read the stories in it, they discover individual articles from a variety of sources and then select whether to read them or not. The authors of articles need to take a bigger role in ensuring those articles are discovered.

Lesson: When customers radically change, the business needs to radically change too (many truisms may be true no longer).

Experimentation

The rules for success in digital are different from those of traditional print journalism, although no-one really knows what they are yet. That said, the Times newsroom has an ingrained dislike of risk-taking. Again this made sense for a newsroom that didn’t want to print an incorrect story, and so everything had to be checked before it went public. However, this culture inhibits innovation if applied outside of the news itself.

Not only does a culture of avoiding risks prevent them from experimenting and slow the ability to launch new things, but smart people within the organisation risk getting good at the wrong things. A great quote from the report: “When it takes 20 months to build one thing, your skill set becomes less about innovation and more about navigating bureaucracy.”

Also, the newsroom lacks a dedicated strategy and operations team, so doesn’t know how well readers are responding to experiments, or what is working well for competitors. Given that competitors are no longer only other daily newspapers, it’s not enough to just read the morning’s papers to get insight into the competition. BuzzFeed reformatted stories from the Times and managed to get greater reader numbers than the Times was able to for the same stories.

Lesson: If experimentation is being avoided due to risk, then business risks are not being managed effectively.

Acquiring Talent

It turns out that people experienced in traditional journalism don’t automatically have all the skills to meet the requirements of digital readers. However, the Times has a bias for hiring and promoting people in digital roles based on their achievements as journalists. While this likely worked in the past to create a high quality newspaper, it isn’t working in digital. In general, the New York Times appears to be a print newspaper first, and a digital business second. The daily tempo of article submission and review is oriented around a daily publication to be read in the mornings, rather than supporting the release of stories digitally when they are ready to be published. Performance metrics are still oriented around the number of front page stories published – a measure declining in importance as digital readers cease to discover articles via the home page.

The lack of appreciation for the digital world and digital people in general has resulted in the departure of a number of skilled employees, according to the report. Hiring digital talent is also difficult to justify to management given that demand has pushed salaries higher for skilled people even if those people are relatively young. What could be a virtuous circle, with talent attracting talent, is working in the opposite direction with what appears to be a cultural bias against the very talent that would help the Times.

Lesson: An organisation pays for talent either way, by paying market rates for capable people or by paying the cost in lost opportunities.

Final words

When I first came across the NYT Innovation report, I expected to read about another example of the innovators’ dilemma, where rational business decisions kept them from moving into a new market. Instead, the report is the tale of how the organisation structure, culture and processes that made The New York Times great in the past are actively inhibiting its success in the present. Some of these seem to have become sacred cows and it is difficult for the organisation to get rid of them. It will require courage – and a dedication to innovation – to change the organisation into one that is able to compete effectively.

The Amazing Bitcoin

I am impressed with the novelty and cleverness behind the online phenomenon known as Bitcoin. For those who came in late, bitcoins could be described as digital commodities. People can trade them for actual currency and sometimes real goods. It’s true that we’ve already been using something called money for this purpose, so you may ask why we need Bitcoin at all; the answer is that it has a couple of interesting properties:

  • Trustless: If I engage in a Bitcoin transaction with you, I don’t need to trust you, your bank, your government, or anyone specifically. Once a transaction has completed, it can be verified to have happened as I expected, removing counter-party risk that exists in many markets (for example, a fraudster may pay me in counterfeit bills).
  • Resilient: There is no central operator of the Bitcoin infrastructure, so everyone’s not worried about a particular company staying solvent, or a particular government staying in power or true to their promises in order for the system to keep working.

Up until Bitcoin, no-one had been able to come up with a system with these properties. Either counter-party risk was removed because there was an operator regulating the market (and the market wasn’t resilient in the face of that operator collapsing) or there were markets without central control that required a lot of trust when dealing with others. If the inventors of Bitcoin had not been hiding their identities, I wouldn’t be surprised if they were in the running for a future Nobel Prize in Economics. Bitcoin is no less than a completely decentralised technology for financial contracts allowing for value to be transferred over any means – physical or virtual.

However, I’ve found the way that Bitcoin operates to be a little surprising. It’s not like other systems that I’m used to. Since I haven’t seen these points noted down clearly in the one place, I thought others may be interested as well. (Unless you’re already very familiar with Bitcoin, in which case it’s likely to be old hat.)

1. Miners are both the source of new bitcoins and responsible for documenting all transactions

A miner is just the name for a computing node that works to discover the next block in the Bitcoin blockchain. Every ten minutes (on average), a new block containing all as-yet-undocumented transactions is generated. The first node to generate this block (which requires discovering the solution to a particular computing problem using trial-and-error approaches) also gets 25 bitcoins (BTC) for its trouble. The “winner” here is in part due to luck, and in part due to how much computing power the miner has dedicated to this. The blockchain is the ongoing record of each of these blocks, collectively forming something of a global ledger of all known transactions to date.
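
For the curious, here is a rough Python sketch of what that trial-and-error looks like. This is only a toy of my own making: real Bitcoin mining hashes a specific block header format with double SHA-256 against an astronomically harder difficulty target, but the basic loop of trying nonce after nonce until the hash is small enough is the same idea.

import hashlib

def mine(block_data, difficulty=4):
    # Keep trying nonces until the hash starts with `difficulty` zero hex digits.
    # A tiny difficulty so this finishes quickly; Bitcoin's is vastly higher.
    target_prefix = '0' * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(('%s:%d' % (block_data, nonce)).encode()).hexdigest()
        if digest.startswith(target_prefix):
            return nonce, digest
        nonce += 1

# The "block data" here stands in for the previous block's hash plus the
# pending transactions that the miner wants to document.
nonce, digest = mine('previous-block-hash + undocumented transactions')
print(nonce, digest)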

In theory, transactions can contain something akin to a tip, representing a fee to the (winning) miner, and these are in addition to the 25 BTC for each ten minutes’ work (with a single BTC worth something between US$40 and US$1140 over the last year, and currently around US$580). However, such transaction fees are relatively minor at the moment, with miners currently earning less than 20 BTC per day in total. The 25 BTC figure used to be 50 BTC in the early days, and it reduces predictably over time, halving again to 12.5 BTC by about the year 2017.

2. Transactions are not real-time and take around an hour before they are considered certain

Prospective transactions are broadcast around between all the various miners using a peer-to-peer network, who each check them for validity before including them in the current block that’s being worked on. Since a new block comes along every ten minutes (on average), there may be a wait of up to ten minutes for a new transaction to appear in the blockchain, and hence the receiver of BTC can read it and will know that they are going to get some coins.

Except miners may not include your transaction in the next block because there were already too many transactions in it, or perhaps the miner that “won” the block that time decided not to include any transactions at all, so you will need to wait for the next block. And even then it appears that there is a risk that a Bitcoin sender could “double spend” the BTC if two conflicting transactions were sent to different miners, so it’s considered prudent to wait until six blocks have been generated (including the first one with the relevant transaction) to get transaction certainty.

While this is fine for some types of transactions, such as a book order, it is not so fine for other types of transactions where goods are delivered immediately such as an app download or when at a Bitcoin ATM dispensing hard currency. Any solutions to this problem will sit outside of the standard Bitcoin infrastructure, e.g. merchant insurance, but in a world where transaction times are getting shorter and shorter, this may limit Bitcoin’s long term use in the general economy.

3. Bitcoins are not held in Bitcoin wallets

A Bitcoin wallet is technically just a public-private key pair (or multiple such pairs). This provides the means of generating a public address (from the public key, for others to send bitcoins to your wallet) and for generating new transactions (using the private key, when sending bitcoins to other people’s wallets). The bitcoins themselves are not held anywhere, but proof of ownership of them can be established from the records in the blockchain.
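
As a rough sketch of what that means in practice, the snippet below generates a key pair with the third-party ecdsa package (my choice for illustration; real wallet software does considerably more). Note that a real Bitcoin address is derived from the public key via SHA-256, RIPEMD-160 and Base58Check encoding, which I’ve collapsed into a single SHA-256 here just to show the shape of the process.

import hashlib
import ecdsa  # third-party package: pip install ecdsa

# A wallet is essentially one or more key pairs like this one.
signing_key = ecdsa.SigningKey.generate(curve=ecdsa.SECP256k1)  # private key: kept secret, signs transactions
verifying_key = signing_key.get_verifying_key()                 # public key: used to derive an address

# Simplified stand-in for address derivation (the real scheme uses
# RIPEMD-160 over SHA-256 plus Base58Check, not a bare SHA-256).
public_key_bytes = b'\x04' + verifying_key.to_string()
address_like = hashlib.sha256(public_key_bytes).hexdigest()[:34]
print('give this out to receive coins:', address_like)

# Spending coins means signing a new transaction with the private key;
# anyone can check the signature against the public key.
signature = signing_key.sign(b'toy transaction data')
assert verifying_key.verify(signature, b'toy transaction data')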

Given that everyone can see exactly how many bitcoins belong to every Bitcoin wallet, it’s considered good practice to use a different public address (and hence public-private key pair) for each transaction. A single transaction can take bitcoins from multiple wallets and send them out to multiple wallets, making this all a bit easier to manage.

4. Bitcoin transactions can be complex contracts

Since bitcoins themselves are not actually moved around and bitcoin balances are not kept within the Bitcoin infrastructure, each transaction sending some bitcoins refers to previous transactions where those bitcoins were “received”. At a minimum a single sending transaction needs to refer back to a single receiving transaction. As part of validating that this pair of transactions should be allowed, miners actually run a small script embedded within the sending transaction followed by another one embedded in the receiving transaction. The scripting language is pretty extensive.
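
To give a feel for how that validation works, here is a toy stack machine of my own, only loosely inspired by Bitcoin’s scripting language (the real opcodes, encoding and rules differ). The script from the sending transaction runs first and leaves data on a stack, then the script from the earlier receiving transaction runs and decides whether the spend is allowed. This particular example locks coins to the hash of a secret and unlocks them by revealing it.

import hashlib

def sha256_hex(text):
    return hashlib.sha256(text.encode()).hexdigest()

def run_scripts(sending_script, receiving_script):
    # Run both scripts over a shared stack; the spend is valid if the top of
    # the stack is True at the end. A toy, not the real Bitcoin Script rules.
    stack = []
    for op in sending_script + receiving_script:
        if op == 'OP_HASH':
            stack.append(sha256_hex(stack.pop()))
        elif op == 'OP_EQUAL':
            stack.append(stack.pop() == stack.pop())
        else:
            stack.append(op)  # anything else is data to push
    return bool(stack) and stack[-1] is True

secret = 'correct horse battery staple'
receiving_script = ['OP_HASH', sha256_hex(secret), 'OP_EQUAL']  # locks coins to the secret's hash
sending_script = [secret]                                       # unlocks them by revealing the secret
print(run_scripts(sending_script, receiving_script))            # True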

Also, because Bitcoin transactions are just a series of bytes and can be sent directly to others, e.g. over email, instead of broadcasting them to the miners, complex contracts can be created. You can use Bitcoin to pay someone, but only if a third party also approves the transaction. Or you can use Bitcoin to pay a deposit / bond where the money comes back to you after an agreed period but the other party can’t spend it in the meantime. Or you can use Bitcoin to contribute towards a transaction that will go ahead only if enough other people contribute towards it for it to reach a specified sum. Some are using Bitcoin to run a provably-fair lottery. Some are even looking to use Bitcoin to allow for electronic voting.

Concluding remarks

Bitcoin is still relatively new for a payment technology, and I would not pretend that using it is risk-free. Regulation of Bitcoin is still nascent and inconsistent between geographies, it operates in a legally grey area with perhaps half of all Bitcoin transactions being made with gambling services, and Bitcoin-based marketplaces seem to be regularly collapsing.

Even if Bitcoin itself is replaced by one of the other newer “cryptocurrencies” such as Litecoin, Ripple or Dogecoin, I suspect that its invention has opened the door for amazing new ways to transact online.

Pi, Python and I (part 1)

I’ve been on Facebook for almost six years now, and active for almost five. This is a long time in Internet time.

Facebook has, captured within it, the majority of my interactions with my friends. Many of them have stopped blogging and just share via Facebook, now. (Although, at least two have started blogging actively in the last year or so, and perhaps all is not lost.) At the start, I wasn’t completely convinced it would still be around – these things tended to grow and then fade within just a few years. So, I wasn’t too concerned about all the *stuff* that Facebook would accumulate and control. I don’t expect them to do anything nefarious with it, but I don’t expect them to look after it, either.

However, I’ve had a slowly building sense that I should do something about it. What if Facebook glitched, and accidentally deleted everything? There’s nothing essential in there, but there are plenty of memories I’d like to preserve. I really wanted my own backup of my interactions with my friends, in the same way I have my own copies of emails that I’ve exchanged with people over the years. (Although, fewer people seem to email these days, and again they just share via Facebook.)

The trigger to finally do something about this was when every geek I knew seemed to have got themselves a Raspberry Pi. I tried to think of an excuse to get one myself, and didn’t have to think too hard. I could finally sort out this Facebook backup issue.

The terms of my web host say that I can’t run any “robots” – it’s purely meant to handle incoming web requests. Also, none of the computers at home are on all the time, as we only have tablets, laptops and phones. I didn’t have a server that I could run backup software on, but a Raspberry Pi could be that server.

For those who came in late, the Raspberry Pi is a tiny, single-board computer that came out last year, is designed and built in the UK, and (above all) is really, really cheap. I ordered mine from the local distributor, Element14, whose prices start at just under $30 for the Model A. To make it work, you need to at least provide a micro-USB power supply ($5 if you just want to plug it into your car, but more like $20 if you want to plug it into the wall) and a Micro SD card ($5-$10) to provide the disk, so it’s close to $60, unless you already have those to hand. You can get the Model B, which is about $12 more and gets you both more memory and an Ethernet port, which is what I did. You’ll need to find an Ethernet cable as well, in that case ($4).

When a computer comes that cheap, you can afford to get one for projects that would otherwise be too expensive to justify. You can give them to kids to tinker with and there’s no huge financial loss if they brick them. Also, while cheap, they can do decent graphics through an HDMI port, and have been compared to a Microsoft Xbox. No wonder they managed to sell a million units in their first year. Really, I’m a bit slow on the uptake with the Raspberry Pi, but I got there in the end.

While you can run other operating systems on it, if you get a pre-configured SD card, it comes with a form of Linux called Raspbian and has a programming language called Python set up ready to go. Hence, I figured as well as getting my Facebook backup going, I could use this as an excuse to teach myself Python. I’d looked at it briefly a few years back, but this would be the first time I’d used it in anger. I’ll document here the steps I went through to implement my project, in case anyone else wants to do something similar or just wants to learn from this (if only to learn how simple it is).

The first thing to do is to head over to developers.facebook.com and create a new “App” that will have the permissions that I’ll use to read my Facebook  feed. Once I logged in, I chose “Apps” from the toolbar at the top and clicked on “Create New App”. I gave my app a cool name (like “Awesome Backup Thing”) and clicked on “Continue”, passed the security check to keep out robots, and the app was created. The App ID and App secret are important and should be recorded somewhere for later.

Now I just needed to give it the right permissions. Under the Settings menu, I clicked on “Permissions”, then added in the ones needed into the relevant fields. For what I want, I needed: user_about_me, user_status, friends_about_me, friends_status, and read_stream. “Save Changes” and this step is done. Actually, I’m not sure if this is technically needed, given the next step.

Now I needed to get a token that can be used by the software on the server to query Facebook from time to time. The easiest way is to go to the Graph API Explorer, accessible under the “Tools” menu in the toolbar.

I changed the Application specified in the top right corner to Awesome Backup Thing (insert your name here), then clicked on “Get access token”. Now I need to specify the same permissions as before, across the three tabs of User Data Permissions (user_about_me, user_status), Friends Data Permissions (friends_about_me, friends_status) and Extended Permissions (read_stream). Lastly, I clicked on “Get Access Token”, clicked “OK” to the Facebook confirmation page that appeared, and returned to the Graph API explorer where there was a new token waiting for me in the “Access token” textbox. It’ll be needed later, but it’s valid for about two hours. If you need to generate another one, just click “Get access token” again.

Now it’s time to return to the Pi. Once I logged in, I needed to set up some additional Python packages like this:

$ sudo pip install facepy
$ sudo pip install python-dateutil
$ sudo pip install python-crontab

And then I was ready to write some code. The first thing was to write the code that will keep my access token valid. The one that Facebook provides via the Graph API Explorer expires too quickly and can’t be renewed, so it needs to be turned into a renewable access token with a longer life. This new token then needs to be recorded somewhere so that we can use it for the backing-up. Luckily, this is pretty easy to do with those Python packages. The code looks like this (you’ll need to put in the App ID, App Secret, and Access Token that Facebook gave you):

# Write a long-lived Facebook token to a file and setup cron job to maintain it
import facepy
from crontab import CronTab
import datetime

APP_ID = '1234567890' # Replace with yours
APP_SECRET = 'abcdef123456' # Replace with yours

try:
  with open("fbtoken.txt", "r") as f:
    old_token = f.read()
except IOError:
  old_token = ''
if '' == old_token:
  # Need to get old_token from https://developers.facebook.com/tools/explorer/
  old_token = 'FooBarBaz' # Replace with yours

new_token, expires_on = facepy.utils.get_extended_access_token(old_token, APP_ID, APP_SECRET)

with open("fbtoken.txt", "w") as f:
  f.write(new_token)

cron = CronTab(user=True) # get crontab for the current user
for oldjob in cron.find_comment("fbtokenrenew"):
  cron.remove(oldjob)
job = cron.new(command="python ~/setupfbtoken.py", comment="fbtokenrenew")
renew_date = expires_on - datetime.timedelta(1)
job.minute.on(0)
job.hour.on(1) # 1:00am
job.dom.on(renew_date.day)
job.month.on(renew_date.month) # on the day before it's meant to expire
cron.write()

Apologies for the pretty rudimentary Python coding, but it was my first program. The only other things to explain are that the program sits in the home directory as the file “setupfbtoken.py” and when it runs, it writes the long-lived token to “fbtoken.txt” then sets up a cron-job to refresh the token before it expires, by running itself again.

I’ll finish off the rest of the code in the next post.

Technology, Finance and Education

I have been trying out iTunes U by doing the Open Yale subject ECON252 Financial Markets. What attracted me to the subject was that the lecturer was Robert Shiller, one of the people responsible for the main residential property index in the US and an innovator in that area. Also, it was free. :)

I was interested in seeing what the iTunes U learning experience was like, and I was encouraged by what I found. While it was free, given the amount of enjoyment I got out of doing the subject, I think I’d happily have paid around the cost of a paperback book for it. I could see video recordings of all the lectures, or alternatively, read transcripts of them, plus access reading lists and assessment tasks.

The experience wasn’t exactly what you’d get if you sat the subject as a real student at Yale. Aside from the general campus experience, also missing were the tutorial sessions, professional grading of the assessments (available as self-assessment in iTunes U), an ability to borrow set texts from the library, and an official statement of grading and completion at the end. Also, the material dated from April 2011, so wasn’t as current as if I’d been doing the real subject today.

Of these, the only thing I really missed was access to the texts. I suppose I could’ve bought my own copies, but given I was trying this because it was free, I wasn’t really inclined to. Also, for this subject, the main text (priced at over $180) was actually a complementary learning experience with seemingly little overlap with the lectures.

While I tried both the video and transcript forms of the lectures, and while the video recordings were professionally done, in the end I greatly preferred the transcripts. The transcripts didn’t capture blackboard writing/diagrams well, and I sometimes went back and watched the videos to see them, but the lecturer had checked over the transcripts and they had additions and corrections in them that went beyond what was in the video. Also, I could get through a 1hr lecture in a lot less than an hour if I was reading the transcript.

Putting aside the form of delivery, the content of the subject turned out to be much more interesting than I expected at the beginning. Shiller provided a social context for developments in finance through history, explained the relationships between the major American financial organisations, and provided persuasive arguments for the civilising force of financial innovations (e.g. for resource allocation, risk management and incentive creation), positioning finance as an engineering discipline rather than (say) a tool for clever individuals to make buckets of cash under sometimes somewhat dubious circumstances. I’ll never think of tax or financial markets or insurance in quite the same way again.

I will quote a chunk from one of his lectures (Lecture 22) that illustrates his approach, but also talks about how technology changes resulted in the creation of government pension schemes. I like the idea that technology shifts have resulted in the creation of many things that we wouldn’t ordinarily associate with “technology”. By copying his words in here, I’ll be able to find them more easily in the future (since this is a theme I’d like to pick up again).

In any case, while I didn’t find the iTunes U technology to be a good alternative for university education, I think it’s a good alternative to reading a typical e-book on the subject. Of course, both e-books and online education will continue to evolve, and maybe there won’t be a clear distinction in the future. But for now, it’s an enjoyable way to access some non-fiction material in areas of interest.

The German government set up a plan, whereby people would contribute over their working lives to a social security system, and the system would then years later, 30, 40 years later, keep a tab, about how much they’ve contributed, and then pay them a pension for the rest of their lives. So, the Times wondered aloud, are they going to mess this up? They’ve got to keep records for 40 years. They were talking about the government keeping records, and they thought, nobody can really manage to do this, and that it will collapse in ruin. But it didn’t. The Germans managed to do this in the 1880s for the first time, and actually it was an idea that was copied all over the world.

So, why is it that Germany was able to do something like this in the 1880s, when it was not doable anywhere else? It had never been done until that time. I think this has to do ultimately with technology. Technology, particularly information technology, was advancing rapidly in the 19th century. Not as rapidly as in the 20th, but rapidly advancing.

So, what happened in Europe that made it possible to institute these radical new ideas? I just give a list of some things.

Paper. This is information technology, but you don’t think – in the 18th century, paper, ordinary paper was very expensive, because it was made from cloth in those days. They didn’t know how to make paper from wood, and it had to be hand-made. As a result, if you bought a newspaper in, say, 1790, it would be just one page, and it would be printed on the smallest print, because it was just so expensive. It would cost you like $20 in today’s prices to buy one newspaper. Then, they invented the paper machine that made it mechanically, and they made it out of wood pulp, and suddenly the cost of paper went down. …

There was a fundamental economic difference, and so, paper was one of the things.

And you never got a receipt for anything, when you bought something. You go to the store and buy something, you think you get a receipt? Absolutely not, because it’s too – well, they wouldn’t know why, but that’s the ultimate reason – too expensive. And so, they invented paper.

Two, carbon paper. Do you people even know what this is? Anyone here heard of carbon paper? Maybe, I don’t know. It used to be, that, when you wanted to make a copy of something, you didn’t have any copying machines. You would buy this special paper, which was – do you know what – do I have to explain this to you? You know what carbon paper is? You put it between two sheets of paper, and you write on the upper one, and it comes through on the lower one.

This was never invented until the 19th century. Nobody had carbon paper. You couldn’t make copies of anything. There was no way to make a copy. They hadn’t invented photography, yet. They had no way to make a copy. You had to just hand-copy everything. The first copying machine – maybe I mentioned that – didn’t come until the 20th century, and they were photographic.

And the typewriter. That was invented in the 1870s. Now, it may seem like a small thing, but it was a very important thing, because you could make accurate documents, and they were not subject to misinterpretation because of sloppy handwriting. … And you could also make many copies. You could make six copies at once with carbon paper. And they’re all exactly the same. You can file each one in a different filing cabinet.

Four, standardized forms. These were forms that had fill-in-the-blank with a typewriter.

They had filing cabinets.

And finally, bureaucracy developed. They had management school. Particularly in Germany, it was famous for its management schools and its business schools.

Oh, I should add, also, postal service. If you wanted to mail a letter in 1790, you’d have trouble, and it would cost you a lot. Most people in 1790 got maybe one letter a year, or two letters a year. That was it. But in the 19th century, they started setting up post offices all over the world, and the Germans were particularly good at this kind of bureaucratic thing. So, there were post offices in every town, and the social security system operated through the post offices. Because once you have post offices in every town, you would go to make your payments on social security at the post office, and they would give you stamps, and you’d paste them on a card, and that’s how you could show that you had paid.

– Robert Shiller, ECON252 Financial Markets, 2011

Personal and environmental audio – hear hear!

Just before Christmas, a friend brought me a new pair of headphones back from the US. I still haven’t quite decided yet whether they are the future of personal audio or just a step in the right direction, but I am finding them a bit of a revelation.

The headphones are the AfterShokz Sportz M2, which are relatively cheap, bone conduction headphones. Bone conduction means that instead of the headphones sending sound into your ear canal (like in-ear or full size headphones), they sit against the bones of your skull and send vibrations along them to your inner ear. The main advantage is that while listening to audio from these headphones, you can still hear all the environmental sound around you. The main disadvantage is that, of course, you can still hear all the environmental sound around you.

Clearly, this is not desirable for an audiophile. Obviously, you don’t get these sorts of headphones for their audio quality, and while I find them perfectly decent for listening to music or podcasts, the bass is not as good as typical headphones either. That said, if I want to hear the sound better, I can pop a finger in my ear to block out external noise. Sometimes I use the headphones for telephone calls on my mobile when traveling on the tram, and it probably looks a little odd to the other travelers that I am wearing headphones and putting my finger to my ear, but it is very effective.

For the first week or so that I was wearing them, I had strange sensations in my head, very much like when I first get new frames for my glasses. They push on my head in a way that I’m not used to, and it takes a little bit to get used to. The fact that I can hear music playing in my “ears” and yet hear everything around me was also initially a bit surreal – a bit like I was in a movie with a soundtrack – but the strangeness here diminished very quickly and now it is just a delight.

While they are marketed to cyclists or people who need to be able to hear environmental sound for safety reasons (like, well, pedestrians crossing roads, so almost everyone I guess), it’s not the safety angle that really enthuses me. I am delighted by being able to fully participate in the world around me while concurrently having access to digital audio. When the announcer at a train station explains that a train is going to be cancelled, I still hear it. When a barista calls out that my coffee is ready, I still hear it. When my wife asks me a question while I’m doing something on the computer, I still hear it.

A couple of years ago, I yearned for this sort of experience:

For example, if I want to watch a TV program on my laptop, while my wife watches some video on the iPod on the couch next to me, we are going to interfere with each other, making it difficult for either of us to listen to our shows.

Being able to engage with people in my physical environment and yet access audio content without interfering with others is very liberating. I had hoped that highly directional speakers were the solution, but bone conduction headphones are a possible alternative.

Initially I had tried headphones that sat in only one ear, leaving the other one free. They were also very light and comfortable. One issue was that these were Bluetooth headphones and had trouble staying paired with several of the devices I had. However, and more importantly, I looked a bit like a real estate agent when I wore them, and was extremely self-conscious. Even trying to go overboard and wear them constantly for a month wasn’t enough to rid me of the sense of embarrassment I felt. Additionally, others would make a similar association and always seemed to assume that I must be on a phone call. If I did interact with others, I always had to explain first that I wasn’t on a call. What should’ve been a highly convenient solution turned out to be quite inconvenient.

The AfterShokz have none of these issues. I did try coupling them with a Bluetooth adaptor, but it had similar Bluetooth pairing issues. I see that AfterShokz have since released headphones with Bluetooth built in, but I haven’t tested these.

One potential new issue with the AfterShokz that I should discuss relates to the ability of others to hear what I’m listening to – something mentioned by some other online reviewers. At higher volumes, others can hear sounds coming from the headphones (although this is not unique to AfterShokz’ headphones); at lower volumes it is actually very private. In any case, I’ve got a niggling sense of a higher risk of damage to my inner ear from listening at higher volumes: bone conduction headphones presumably need to send sound waves at higher energy levels than normal headphones, since the signal probably attenuates more through bone than through air, and they also need to be operated at higher levels to be heard over the background noise that normal headphones would otherwise block out. So, I try to set as low a volume as I can get away with, and block my ear with my finger if I need to hear better. In quiet environments, it’s not an issue.

Perhaps I am worrying about something that isn’t a problem, since I note that some medical professionals who specialise in hearing loss are advocating them. For that matter, the local group that specialises in vision loss is also promoting them. Although, I guess the long term effects of this technology are still unclear.

In any case, I find using this technology to be quite wonderful. I feel that I’ve finally found stereo headphones that aren’t anti-social. I hope if you have the chance to try it, you will also agree.

Technology Forecasting

Several years ago, I bought a book by Richard Feynman about science and the world. The following passage has stuck with me:

Now, another example of  a test of truth, so to speak, that works in the sciences that would probably work in other fields to some extent is that if something is true, really so, if you continue observations and improve the effectiveness of the observations, the effects stand out more obviously. Not less obviously. That is, if there is something really there, and you can’t see good because the glass is foggy, and you polish the glass and look clearer, then it’s more obvious that it’s there, not less.

I love this idea. It’s not just that you test a theory over time and if it hasn’t been disproven then it’s probably true, but that over time a true theory becomes more obviously true.

In forecasting technology trends, this is not necessarily a helpful thing. The more obviously true something is, the less likely it is that other people credit you with having an insight, even if it dates from when it was unclear.

Still, the converse of the idea is definitely helpful. If a theory requires constant tweaking in the face of new evidence, just to maintain the possibility of being true, it most likely isn’t.

I have no trouble coming up with crazy ideas about how technology might develop, but faced with a number of equally crazy ideas, it is difficult to know which are the ones with some merit and which are false. Happily, the above approach gives me a process to help sort them: giving them time. The ideas that are reinforced by various later developments are worth hanging on to, while those that fail to gain any supporting evidence  over time may need to be jettisoned.

Ideas that I initially supported but have been forced by time to jettison include: Java ME on the mobile, RSS news readers, ubiquitous speech recognition, mobile video calling, and the Internet fridge.

One idea that I’m proud to have hung onto was that of mobile browsing. I saw the potential back in the late 1990s when I was involved in the WAP standards, enabling mobile browsing on devices such as the Nokia 7110, even if it was wracked with problems. Several colleagues, friends and family members dismissed the idea. However, over time, mobile browsing received more evidence that it was credible, with the successes in Japan, the appearance of the Opera browser, and then Safari on the iPhone. Now, I regard Safari on the iPad to be the best web browsing experience of all my devices – PCs included.

While Feynman was a great physicist, and his advice has helped me in forecasting technology trends, there’s no guaranteed way to get it right. The last word should belong to another physicist, Niels Bohr, who is reputed to have said: prediction is very difficult, especially about the future.

Contactless Sport

The other week, I got my first contactless credit card – a Visa payWave. You’ve probably seen the ads for payWave and PayPass cards – the banks have been issuing them for a while now – and I was keen for my old card to expire so that I could get a new card with this feature.

That said, I haven’t gotten the chance to use its contactless capabilities yet, but that’s not to say I haven’t noticed anything different. The day after I added the new card to my wallet, my Myki travel card stopped working.

The problem is that both my Myki and my new payWave credit card use a wireless standard called ISO/IEC 14443 that operates at 13.56MHz. Myki uses a technology called MIFARE that complies with this standard, while payWave uses contactless EMV technology. However, while they are sisters in the technology domain, neither card pays any attention to the other when in my wallet, and they interfere when I put the wallet near the reader in a station turnstile.

One solution to this is to replace the wallet with a special RF-shielded one, like this, and place the different cards in the right spots so that interference doesn’t occur. However, while I experimented with some strategically-placed aluminium foil in my wallet, in the end all I needed to do was ensure that the EMV and MIFARE cards were distantly separated by a chunk of other plastic cards and a coin pouch (I know my wallet is chunky, but I can still fit it in my pocket!).

While this may be a first world problem, it’s still something that’s going to occur more and more as new contactless cards are added to the wallet. Today, I have just a travel card and a payment card. But in the future, I am likely to have more payment cards, plus a contactless library card, driver’s licence, Medicare card, health insurance card, auto club membership card, frequent flyer card, etc. It won’t be possible to distantly separate all these cards from each other, and they won’t play as nicely with each other as I would like.

One of the great advantages of contactless is that it’s so convenient. For example, I don’t need to take my Myki out of my wallet to get through the station turnstiles. However, in the future scenario above, that sort of convenience might apply to one card, but not to the rest.

As a software guy at heart, I see the logical solution being to turn all of these cards into pieces of software running on a single piece of hardware – that way, the multiple pieces of hardware won’t conflict at the radio level, which essentially changes the game. Whether that hardware is a phone, a dongle or just another plastic card, this has got to be the future for contactless.