Old books new looks

I read my first proper e-book 17 years ago. It was 1994, and I enthusiastically devoured Bruce Sterling's (free!) digital release of The Hacker Crackdown, scrolling through it line by line on a small CRT display. At the modem speeds of the day, it took around 7 minutes to download, and it must've taken me a week to read.

While it was common to write long-form content – books, essays, dissertations, and the like – on computers, it's interesting how uncommon it was (for anyone but the author) to read such content on computers. Essentially, computers were a write-only medium when it came to books.

At the time, I knew it was a bit strange to read a whole book this way, but it was a great experience. The book was about the computer culture of the time, so it felt appropriate to be reading it hunched over a computer. More than that, the process of discovering that a book exists, getting it, and then beginning to read it – all within half an hour – was very satisfying.

Given how long ago I started to read e-books, I’m a little late to the party regarding the modern generation of them. This has now been rectified.

I had to buy our book-club book as an e-book through the Kindle app on our iPad. Not only was it available in time for us to read it, unlike buying it for real or borrowing it from the library (it took much less than 7 minutes to download), but it was also:

  • cheaper (less than half the price),
  • easier to share between Kate and me (no risk of losing multiple bookmarks),
  • easier to read in bed (self-illuminating, so less distracting to the other person), and
  • no burden at all on the scarce space on our bookshelves (given that the book turned out to be not very good).

And with the latest book-club book, Bill Bryson's At Home, I found myself wishing it were an e-book rather than a hardcover. While it would've been a lot lighter (the iPad 2 is 600g versus the book at 900g), it was more that this book mentions all sorts of interesting things in passing that made me want to look into them in more detail before continuing. It would've been much more convenient to jump straight to a web browser as I read about them, rather than having to put the book down and find an Internet-connected device in order to indulge my curiosity.

All the various advantages that e-readers and tablets have over their physical book counterparts remind me of the advantages that digital music players had over CDs, cassettes and records. However, as I’ve written about before, the digital music player succeeded when it was able to offer the proposition of carrying all one’s music, but e-readers cannot yet offer this.

Apple's iPod supported three sources for acquiring music: (i) importing music from my CDs, (ii) file-sharing networks (essentially, everyone else's CDs), and (iii) purchasing new music from an online shop (the iTunes Music Store). For e-books on my iPad, it's not easy to access anything like the first two sources, while there are several online shops able to provide new reading material. As a result, the iPad (or any other e-reader) doesn't really offer a way for me to take my book library with me wherever I go.

Although that would be pretty amazing, it's honestly not that compelling. I could go back and re-read any book whenever I wanted, but that's not something I actually feel I'm missing. I could search across all my non-fiction books whenever I needed to look something up, but really I just use the Internet for that sort of thing.

The conclusion may be that books aren't enough like music. The experience of consuming music – whether old media or new digital – is sufficiently similar that the way for technology to offer something more is to, literally, provide more of it: thousands of pieces of music. For books, however, the new digital experience may end up being a very different thing to the books of old.

Already, there are books with illustrations that obey gravity and can be interacted with, books that are a hybrid of book and documentary and let you dig as deep as you like into the detail, and we've only just started. If these are the sort of books that I'm going to have on my iPad in future, why would I want to put my old-school book collection on there instead? I can also imagine publishers seeing the chance to get people to re-purchase favourite books, redone with all the extras for tablets, in the same way that people re-purchased their VHS collections when they got a DVD player.

So, while my book-club e-book experience wasn't materially different to the one I had 17 years ago, we are going through a re-imagining of the digital book itself. If it took journalists 40 years from the first email in 1971 to officially decide to drop the hyphen from “e-mail”, then the fact that we still commonly write “e-book” with a hyphen suggests it's not a mature concept yet, despite 17 years of progress.

Is mobile video-calling a device thing?

Ever since I’ve been involved in the telecoms industry, it seems that people have been proposing video calling as the next big thing that will revolutionize person-to-person calling. Of course, the industry has been proposing it for even longer than that, as this video from 1969 shows.

[Embedded video from 1969]

One thing not anticipated by that video is mobile communication, and video calling was meant to be one of the leading services of 3G mobiles. When 3G arrived in Australia in 2003, the mobile carrier Three sold its 3G phones as pairs so that you’d immediately have someone you could make a mobile video call to.

Needless to say, the introduction of 3G didn’t herald a new golden age of person-to-person video calling in Australia. So, despite all the interest in making such video calling available, why hasn’t it taken off? I’ve heard a number of theories over the years, such as:

  • The quality (video resolution, frame rate, audio rate, etc.) isn't high enough. Once it's good enough to easily read subtle expressions or sign-language gestures, people will take to it.
  • The size of the picture isn’t big enough. When it is large enough to be close to “actual size”, it will feel like communicating with a person and it will succeed.
  • The camera angle is wrong, e.g. mobile phones tend to shoot the video up the nose, and PC webcams tend to look down on the head. If cameras could be positioned close enough to eye level, people would feel like they are talking directly to each other, and video calling would take off.
  • People don’t actually want to be visible in a call, for various etiquette-related reasons such as: it prevents them multi-tasking which would otherwise appear rude, or it obliges them to spend time looking nice beforehand in order to not appear rude.

But despite the low level of use of video calling on mobiles, there is one area where it is apparently booming: Skype. According to stats from Skype back in 2010, at least a third of Skype calls made use of video, rising to half of all calls during peak times.

One explanation could be that Skype is now so well known for its ability to get video calling working between computers that when people want to make a video call, they choose Skype. Hence, it's not so much that Skype users find an opportunity to video call a third of the time, but that perhaps a third of Skype users only use Skype for video. Still, it's an impressive stat, and it also suggests that super-high-quality video may not be a requirement.

Certainly, I’ve used Skype for video calling many times. I’ve noticed the expected problems with quality and camera angle, but it hasn’t put me off using it. I find that it’s great for sharing the changes in children across my family who are spread around the world, and otherwise difficult to see regularly. But a tiny fraction of my person-to-person calls are Skype video calls.

However, I’ve ordered an Apple iPad 2 (still waiting for delivery) and one of the main reasons for buying it was because of the front-facing camera and the support for video calling. I am hoping, despite all of the historical evidence to the contrary, that this time, I am going to have a device that I want to make video calls from.

The iPad 2 seems to be a device that will have acceptable quality (640×480 at 30fps), and it is large enough to be close to actual size, but not so large that the camera (mounted at the edge of the screen) is too far away from eye line. So, they may have found the sweet spot for video calling devices.

If you know me, be prepared to take some video calls. I hope that doesn’t seem rude.

Metric of the Moment

Being on the technology side of the telco industry, it's interesting to see how all the complexity of technological advances is packaged up and sold to the end user. An approach that I've seen used often is reducing everything to a single number – a metric that promises to explain the extent of technological prowess hidden “under the hood” of a device.

I can understand why this is appealing, as it tackles two problems with the steady march of technology. Firstly, all the underlying complexity should not need to be understood by a customer in order for them to make a buying decision – there should be a simple way to compare different devices across a range. And secondly, the retail staff should not need to spend hours learning about the workings of new technology every time a new device is brought into the range.

However, an issue with reducing everything to a single number is that it tends to encourage the industry to work to produce a better score (in order to help gain more sales), even when increasing the number doesn't necessarily relate to any perceptible improvement in the utility of the device. Improvements do tend to track with better scores for a time, but eventually they pass a threshold beyond which better scores don't result in any great improvement. Reality catches up with such a score after a few months, when the industry as a whole abandons it to focus on another metric. The overall effect is that the industry is obsessed with the metric of the moment, and these metrics change from time to time, long after they have stopped being useful.

Here are some examples of the metrics-of-the-moment that I’ve seen appear in the mobile phone industry:

  • Talk-time / standby-time. Battery types like NiCd and NiMH were initially the norm, and there was great competition to demonstrate the best talk-time or standby-time, which eventually led to the uptake of Li-Ion batteries. It became common to need to charge your phone only once per week, which seemed to be enough for most people.
  • Weight. Increasing talk-time or standby-time could be accomplished by putting larger batteries into devices, but at a cost in weight. A new trend emerged to produce very light handsets (and even to quote weight measurements that didn't include the battery). The Ericsson T28s came out in 1999 weighing less than 85g, but with a ridiculously small screen and keyboard (an external keyboard was available for purchase separately). Ericsson later came out with the T66, which had a better design and weighed less than 60g, but then the market moved on.
  • Thinness. The Motorola RAZR, announced at the end of 2004, kicked off a trend for thin clamshell phones. It was less than 14mm thick (about 1mm thinner than the T28s). Other manufacturers came out with models shaving off fractions of a millimetre, but it all became a bit silly. Does it really matter if one phone is 0.3mm thicker than another?
  • Camera megapixels. While mobile phone cameras initially had rather feeble resolutions, they have since ramped up impressively. For example, the new Nokia N8 has a 12-megapixel camera on board, though it is hard to believe that the quality of the lens would justify capturing all of those pixels.
  • Number of apps. Apple started quoting the number of apps in its iPhone App Store soon after it launched in 2008, and it became common to compare mobile phone platforms by the number of apps they had. According to 148Apps, there are currently over 285,000 apps available for Apple devices. One might think that we've got enough apps available now, and it might be time to look at a different measure.

In considering what the industry might look to for its next metric, I came up with the following three candidates:

  • Processor speed. This has been a favourite in the PC world for some time, and as mobiles are becoming little PCs, it could be a natural one to focus on. Given that in both the mobile and PC worlds, clock speed is becoming less relevant as more cores appear on CPUs and graphics processing is handled elsewhere, perhaps we will see a measure like DMIPS being communicated to end customers.
  • Resolution. The iPhone 4's 3.5″ Retina display, with 960×640 pixels and a pixel density of 326 pixels per inch, was a main selling point of the device. Recently Orustech announced a 4.8″ display with 1920×1080 pixels, giving a density of 458 pixels per inch, so perhaps this will be another race (the density arithmetic is sketched just after this list).
  • Screen size. The main problem with resolution as a metric is that we may have already passed the point where the human eye can detect any improvement in pixel densities, so screens would have to get larger to provide benefit from improved resolutions. On the other hand, human hands and pockets aren’t getting any larger, so hardware innovations will be required to enable a significant increase in screen size, eg. bendable screens.
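
As a rough check of those density figures, here is a minimal sketch of the pixel-density arithmetic (it uses only the resolutions and diagonal sizes quoted above; the function name is my own):

```python
import math

def pixels_per_inch(width_px, height_px, diagonal_inches):
    """Pixel density: the diagonal resolution in pixels divided by the diagonal size in inches."""
    return math.hypot(width_px, height_px) / diagonal_inches

# iPhone 4 Retina display: 960x640 pixels on a 3.5-inch diagonal
print(round(pixels_per_inch(960, 640, 3.5)))   # ~330 (Apple quotes 326)

# The 4.8-inch 1920x1080 panel mentioned above
print(round(pixels_per_inch(1920, 1080, 4.8))) # ~459 (quoted as 458)
```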

But, really, who knows? It may be something that relates to a widespread benefit, or it may be a niche, marketing-related property.

The fact that these metrics also drive the industry to innovate and achieve better scores can be a force for good. Moore's Law, which was an observation about the transistor counts of commodity chips, is essentially a trend relating to such a metric, and it has in turn resulted in revolutionary advances in computing power over the last four decades. We haven't hit its threshold yet – the fundamental limits of the physical properties of chips – so it remains valid while the industry works to maintain it.
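
To put a rough number on the scale of that trend (a back-of-the-envelope sketch, assuming the commonly quoted doubling period of about two years):

```python
# Transistor counts doubling roughly every two years, over four decades
doublings = 40 / 2
growth_factor = 2 ** doublings
print(f"{growth_factor:,.0f}x")  # about a million-fold increase
```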

However, it is really the market and the end customers that select the next metric. I hope they choose a good one.

If your TV was like a Book

Last month, Apple released their latest device – the iPad. It is capable of many wondrous things, and has many fabulous properties, but of all of them, for now I am interested in just three: its screen, its weight, and its ability to show video.

As various other manufacturers rush to market with devices to compete in the segment that Apple has just legitimised, they will most likely produce things that share those same three properties. However, as it is still early days, we don’t yet know for sure what people will end up doing with these devices. That’s why it’s so much fun to speculate!

The iPad has a 24cm (diagonal) screen, weighs about 700g (WiFi version) and can deliver TV-quality video from the Internet to practically wherever in the house you decide to sit yourself down with it. If you hold it up in front of your face (about 60cm away), it appears as big as a 120cm (diagonal) TV watched from 3m away. And, while lighter than a 120cm TV, it's going to feel heavy pretty quickly.
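
That comparison comes down to the ratio of screen size to viewing distance; here is a minimal sketch of the arithmetic, using only the figures above:

```python
def equivalent_diagonal(screen_cm, viewing_distance_cm, reference_distance_cm):
    """Diagonal a screen would need at the reference distance to look the same
    size (i.e. subtend the same visual angle) as the given screen at the given distance."""
    return screen_cm * reference_distance_cm / viewing_distance_cm

# A 24cm iPad held 60cm from your face, compared with a TV watched from 3m away
print(equivalent_diagonal(24, 60, 300))  # 120.0 cm
```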

However, 700g is not very heavy if you’re willing to rest it on your lap, and there’s another category of content consumption “device” that is comparable in this regard: the book. I am willing to spend hours intently focused on a book while reading it, and a quick weigh of some of my books (using the handy kitchen scales) suggests the iPad is not unusual…

This provides some legitimisation of a “TV-watching” scenario of a family in their lounge room, with everyone watching a show on their own tablet device. (Assuming that you have overcome issues like each individual's TV audio interfering with the others, and ensured adequate bandwidth for everyone.) However, this scenario feels strange, even anti-social.

I am perhaps conditioned by the ritual of people coming together to share a TV watching experience. And before we had TVs, people came together to share a radio listening experience. But before broadcasting technologies, what did we do? In reality, this sort of broadcasting experience is a relatively recent phenomenon. Before that, presumably we all sat around in the lounge room and read books.

I’ve previously written on the idea that people prefer the personal, and that a personal TV experience will be preferred to a shared TV experience. The iPad and similar devices have the potential to enable this, through becoming as light and portable as books.

“Netbooks” also have similar attributes to the iPad. However, they tend to weigh at least 1kg and have smaller screens. So, while future netbooks might have the right form factor, it certainly isn't common yet. The iPad is the first mass-market device that properly fills this niche.

The issue of the scenario feeling anti-social is still a little troubling. While our ancestors might have looked up over their books and engaged in a casual chat, momentarily pausing their reading, this is harder to accomplish with a video experience. Not only are the eyes and ears otherwise engaged, making casual interruption more difficult, but the act of pausing and resuming is not as easy either.

I suspect that while we’re now reaching the point where hardware can fill the personal TV niche, the software is not yet ready. We may need eye-tracking software that pauses the video when the viewer looks away, integration of text-based messaging alongside video-watching, and other adaptations to the traditional video player software.
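
To make the eye-tracking idea a little more concrete, here is a minimal sketch of the pause-and-resume logic. The player object and the gaze_is_on_screen() function are hypothetical stand-ins for whatever video-player and gaze-detection APIs a real implementation would use:

```python
import time

def auto_pause_loop(player, gaze_is_on_screen, poll_interval_s=0.2, grace_s=1.0):
    """Pause playback when the viewer looks away for longer than a short grace
    period, and resume when their gaze returns to the screen."""
    looked_away_at = None
    while player.is_open():
        if gaze_is_on_screen():
            looked_away_at = None
            if player.is_paused():
                player.resume()
        else:
            if looked_away_at is None:
                looked_away_at = time.monotonic()
            elif not player.is_paused() and time.monotonic() - looked_away_at > grace_s:
                player.pause()
        time.sleep(poll_interval_s)
```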

I’m keen to see what competition in this new segment produces.

Communications Technologies

After I'd written the previous post on Communications Industries, I worried that the two properties that I'd used as the axes to obtain the set of four industries were perhaps the wrong ones. I suspect that for any 2 x 2 matrix, there is probably an infinite set of alternative axes that produce a given set of contents. For example, instead of the public/private distinction, I might have chosen asymmetric/symmetric. But that said, the properties “feel” like good ones to me, so I'll stick with them for now, while allowing myself the luxury of throwing them away later if they turn out not to work.

Anyway, to continue my musing…

Taking the 2 x 2 matrix from last time, and populating it with examples of the products and businesses within those industries is helpful in building up a picture of the characteristics of each quadrant:

One of the first observations is that all of these examples of communications require some sort of network. The different quadrants clearly have different types of networks. I'm interested in seeing whether there are some common characteristics shared by the examples in, say, the public or the discrete categories.

Another observation is that the older examples make use of analogue or manual methods, while the newer examples are digital or electronic. Analogue technologies are probably better aligned with a continuous approach to communicating, whereas digital technologies are, almost by definition, discrete approaches. Bits are bundled together into packets and sent across the network in discrete chunks. However, each chunk may carry something perceptually very small, and so can effectively emulate a continuous channel.
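
As a toy illustration of that point (a sketch only, not any particular network protocol): a “continuous” audio stream is really just a run of samples carved into packets, each covering so little time that the receiver can reassemble something perceptually continuous.

```python
def packetise(samples, sample_rate_hz=8000, packet_ms=20):
    """Carve a continuous run of audio samples into discrete packets,
    each carrying a perceptually tiny slice of time (20ms here)."""
    per_packet = sample_rate_hz * packet_ms // 1000
    return [samples[i:i + per_packet] for i in range(0, len(samples), per_packet)]

# One second of "audio" becomes 50 packets of 160 samples each
packets = packetise(list(range(8000)))
print(len(packets), len(packets[0]))  # 50 160
```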

It seems that in gathering up examples, I have found comparatively fewer examples of continuous communication than of discrete communication. An explanation could be that discrete communication has historically been based around a particular physical medium (a letter, a pneumatic capsule, a book, a film reel, a CD, etc.) which was the means of expressing the communication. The large variety of possible physical media has resulted in different products and businesses based around them, and the characteristics of those products have been carried forward into the digital world. (Although, without the conservative force of those physical constraints, we are seeing some of them merge.)

On the other hand, continuous communication networks developed after their discrete cousins, when technology was finally able to capture and transmit two of the senses we use to experience performances. Aside from the relationship to a particular sense, there was no physical medium to constrain the communication, and hence the communications networks were more versatile. Fewer types of continuous communications networks were needed to accommodate the range of things people wanted to communicate. Any sort of performance at a venue (theatre, concert hall, sporting ground, office, or home) could be conveyed to somewhere else.

That said, there are definitely constraints of various kinds on continuous communications networks. The other senses that might be used if you were physically present are not accommodated. In terms of vision, you are provided with only a window into the other end, rather than the whole vista. In terms of sound, you are provided with a limited frequency and dynamic range, as well as a loss of some of the spatial characteristics. But, unlike some of the discrete communications options, there is also a much reduced need for literacy, as the experience aims to provide a “natural interface” by conveying key human senses.

I think I’ll leave this discussion there for now, and pick up the differences between public and private later.


Sixth Sense

Although Pattie Maes doesn't know of me, I have known of her for over a decade now, since I started my career looking at intelligent agents in the AI group at the Telstra Research Labs. I haven't been paying any attention to her work recently, which turns out to be a bit of an oversight, as it is just so incredibly cool.

Her Fluid Interfaces Group has produced a prototype of something they call SixthSense. There’s a demo of it in the video below, from when she spoke at the recent TED conference.

It does pose the question though – if we could pick another sense to add to our existing five, what would it be? I think I’d like to have an electromagnetic sense, able to detect the direction of north, and avoid hammering into power cables in the walls. Or maybe I’d pick “spider sense”, if I didn’t have to wear the costume that came with it.

SixthSense is not really another sense, but more of an augmented reality tool, that supplements the world around us with information we wish was there.

People prefer the personal

Following up on my last post, the reason that the big TV on the living room wall is going to become less relevant is because it’s a shared device. The way of the future is personal devices.

It’s sad but true – we prefer to have our own personal versions of things rather than share them with others. Maybe this is a particularly Western trait, but I suspect not. For example, despite the additional cost, most people prefer to travel in their own car rather than use a taxi or use public transport. Car sales are booming in China, showing it’s not just something that happens here.

When it comes to video devices like TVs, pretty much all actors in the economy benefit from the move from selling household video devices to selling individual video devices: the screen manufacturers, the content providers, the telcos, and most of all, the viewers. It's part of a larger trend. Initially, all households in a city got pretty much the same video content at the same time, broadcast from TV stations. Then, with the uptake of VCRs, DVDs, PVRs, and so on, different households were able to get different video content at the same time. Now, with PCs and iPods, individuals within households are getting different content at the same time.

We saw the same thing happen with audio devices. The Consumer Electronics Association in America published this year in their Digital America 2008 report that

U.S. factory-level dollar sales of portable audio products, consisting overwhelmingly of MP3/portable media players (PMPs), exceeded the combined sales of the home audio and aftermarket car audio industries for the first time in history in 2005, and again in 2006 and 2007, according to CEA statistics.

Another aspect to consider is that portable media players and PCs are increasingly connected to the Internet, and support communication as well as media consumption. More and more of the triggers to watch video content will arrive over those communication channels (such as friends sending you email, IM, or messages via Twitter or Facebook), and given a desire for immediate gratification, people will not want to wait for a shared device to become free, so they will watch the video content on their personal devices, even if the quality of experience is lower.

I don't think shared video devices, like the expensive LCD or Plasma set that takes pride of place on the wall, will ever become completely redundant. They will simply evolve to niche uses where it is more convenient or appropriate to use a shared device, such as hosting a video or games party with friends, or showing a loop of video in the background.

Wireless Power

Today, The Age is running a story (from AFP) on Intel’s recent demo of wireless power. It’s a great story, but it’s actually a year old. The original story is from June 2007 and was MIT’s demo of wireless power.

The demo involves lighting a 60W light globe across 2m, with 40% efficiency. If the technology could be improved to longer ranges, the applications are phenomenal. For example, you could distribute power throughout your home, and avoid needing batteries in any of your appliances or their remote controls. Or, you could distribute power along with radio communications, so you wouldn't need batteries in mobile phones. Or, you could distribute power across car-parks, so people who leave their lights on could still start their cars. Or, you could distribute power across an office, and people could work with laptops anywhere, even if there were no power points. Sod that, you could distribute power into parks, where there are never going to be any power points.
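
A quick sanity check on what that efficiency figure implies, using only the numbers quoted in the story:

```python
def required_transmit_power(load_watts, efficiency):
    """Power the transmitter has to put out to deliver the stated load."""
    return load_watts / efficiency

# Lighting a 60W globe at 40% efficiency means drawing 150W at the transmitter
print(required_transmit_power(60, 0.40))  # 150.0
```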

The power cord is the “last cord”, as Intel says. I can't wait until we can cut it. Safely, of course.