My main insight from SXSW Sydney

Last week, I attended the inaugural SXSW Sydney, the first SXSW to be held outside of Texas. It was different to the regular tech conferences that I’ve attended – it was much more diverse, with the games/film/music streams attracting a broader crowd. The sessions that I made it into were stimulating and sparked a range of ideas.

Of course, topics like AI (particularly Generative AI) and the Future of Work featured heavily in many presentations, and this led me to a realisation that I hadn’t had before, one that I feel is likely to be the biggest impact of GenAI in the medium term. Rather than keep it to myself, I am sharing it here so that I can hear from others whether it makes sense to them also.

Specifically, GenAI will bring about a huge disruption to the professional workforce and education system, not necessarily because humans will be replaced, but because humans who have been excluded from participation will now face fewer barriers to entry. Proficiency in the English language has long been used as a justification for keeping certain people out of certain fields, and GenAI allows anyone from a non-English-speaking background to be as creative, smart, and persuasive in English as they are in their native tongue.

Our current GenAI systems are largely based on the Transformer machine learning architecture, which showed up early in online language translation tools like Google Translate. However, the GPT (the T stands for Transformer) systems, particularly ChatGPT, have shown us that a few words of broken English can be turned into paragraphs of perfect English, or the reverse, where paragraphs are summarised down to a few points in another language. University-level English spelling, grammar, and comprehension are no longer the exclusive domain of the English-fluent.

There’s a fun TV series called Kim’s Convenience about a Korean couple who move to Canada to raise their family. The couple were teachers in Korea but, rather than teach in their new country, they open a convenience store in Toronto. Presumably their lack of English or French fluency would have been a limitation in getting teaching jobs. However, less than two months ago, OpenAI published their guide for teachers on using ChatGPT, and it included the use case of “Reducing friction for non-English speakers”. In the guide, this was about helping non-English-speaking students, but many of the suggestions could help non-English-speaking teachers too.

About 6% of the world’s population are native English speakers, and 75% do not speak English at all. And yet, about a third of the world’s GDP comes from countries where English fluency is required for success. If English is no longer a barrier to success in those markets, it will be a significant disruption.

The spread of remote working technologies due to the pandemic has changed the ways of working for many jobs. Many white-collar jobs will likely still have an element of face-to-face contact, even if only to come together for celebrations or training. However, where workers can be fully remote, removing English fluency as a barrier will enable many countries to export their talent without it leaving their shores.

Before the pandemic hit, over a quarter of University revenues in Australia came from international students. This gives international students some influence over University policies, and currently they face English language proficiency tests as part of their enrolment and visa processes. In the near future, GenAI looks set to be considered a generally-available tool in the workplace, like a calculator or laptop. If prospective students could make use of such a tool to address any gaps in their English language skills post-graduation, is it fair to prevent them from using it before graduation?

Traditionally, people with limited English in countries like Australia, the UK or the USA have been resigned to taking jobs as “unskilled” workers. There are already concerns that the number of people willing to do this type of work might not be enough to meet future industry demands. What might happen to wages if a good proportion of these people were able to move out of the unskilled workforce? How readily can the creative and information worker industries expand to take on new talent? What new barriers might unions and professional organisations create to limit a flood of new workers into their industries?

GenAI has been making headlines claiming that AI is taking many people’s creative jobs. After hearing from several panels at SXSW on AI, Long-term Forecasting, Work of the Future, and Education, my conclusion is that a plausible and perhaps more relevant headline would be that GenAI will allow many more people to take on creative jobs.

Why Indigenous Australians are special

In Australia, we are about to vote in a referendum to change the constitution, to add an “Aboriginal and Torres Strait Islander Voice” to the list of government entities. We’ll get to vote Yes or No on 14 October, and it will be the first time in over 20 years that we’ve had the opportunity to do something like that.

I’ve had many discussions with people here about the Voice, and I will probably vote Yes, given that a majority of Indigenous Australians want it. The idea for it came out of the 2017 First Nations National Constitutional Convention, and was preceded by many years of discussion about how to recognise Indigenous Australians in the constitution. The “Uluru Statement from the Heart” summarises the majority position of the large number of Elders at this convention, and includes the statement “We call for the establishment of a First Nations Voice enshrined in the Constitution”.

I am not going to present here an argument or evidence for why this should be supported. There are good analyses elsewhere. However, one of the things that has come up when I’ve discussed the Voice with others is that if the Voice is seen as a way of addressing disadvantage (which it is intended to be), and if Indigenous Australians are a significantly disadvantaged group (which they are), why should they get a Voice in the constitution ahead of other disadvantaged groups, e.g. refugees? Why should we call out a particular population in the constitution? In other words, why are Indigenous Australians special?

I may not be qualified to answer this. My school education in Australia was at a time when Indigenous Australians were not well covered in the curriculum. I do not have lived experience when it comes to Indigenous Australian communities. However, I have tried to educate myself. I’ve read all six books in the First Knowledges series, books by Stan Grant, Bruce Pascoe, and Bill Gammage, and even Indigenous Australia for Dummies. I have listened to the 2022 Boyer Lectures by Noel Pearson, and I’ve visited many parts of Australia with Indigenous tour guides, trying to listen.

Despite that, I haven’t seen an answer to this question in the copious material flying around the Internet on the Voice referendum. Since the question seems central to the No case’s claim that the proposed constitutional change will create an unwelcome new division in our society, I’m going to give it a crack.

A first response is that this question is an example of whataboutism: raising the disadvantage of other groups doesn’t somehow disprove the need for Indigenous Australians to get better outcomes than they’ve gotten historically. Presumably all groups should get the support they need to address their disadvantage; it’s not an either-or, and we should do better by all of them. However, I’ll take on the question as if it were asked sincerely.

Another response is that the question is backwards: it is Indigenous Australians that make Australia so special. The roughly 60,000 years spent shaping and learning about the flora, fauna and geography of this country have helped make us what we are today. After European settlement, Indigenous people played a role in the success of early settlers, explorers and farmers. My grandmother was helped into the world by an Indigenous midwife, for example. While this is a valid response, I feel it doesn’t treat the question seriously.

I’ve come across two arguments for why First Australians are special enough to merit their own constitutionally-endorsed organisation: a legal one, and a moral one.

The legal one is essentially that they have unique rights that no-one else in Australia has, rights both recognised by the High Court and covered in Commonwealth legislation, but this uniqueness is ignored by the constitution. What is known as the Mabo Case was a claim of “native title” rights to the Murray Islands – part of the Torres Strait Islands, off the coast of Queensland – by Eddie Mabo and others. The claim succeeded because the people there had continued their traditional activities since before European settlement, and because the traditional laws and society that underpinned these were recognised. While no other population that has arrived in Australia since European settlement can claim this, it is not a unique situation internationally; in Canada, for example, it is also recognised that Indigenous peoples have rights that pre-existed colonisation. Importantly, these rights don’t result simply from genetic lineage or “race”, but from being part of a society that has continued to exist in Australia for thousands of years.

The moral one is that Australian governments (both state and federal) have consistently passed laws to the detriment of Indigenous Australians, and are able to continue to do so because of an imbalance of power between the governments of the day and Indigenous populations. Until Indigenous people have more say over what is done to them, the situation risks continuing. The Commonwealth government’s history of actions targeting Indigenous Australians is well documented; the 2007 Northern Territory Intervention, which required the suspension of the Racial Discrimination Act, is perhaps the best-known example.

Additionally, one legal expert has claimed that “Australia is the only industrialised nation that allows its parliament to make special detrimental laws for the Indigenous peoples of the land.” If so, Australia is not covering itself in glory here.

Guaranteeing Indigenous Australians a say about the stream of measures and laws that are targeted at them by the Commonwealth government requires something that is not entirely subject to the Commonwealth government. Previous entities that represented Indigenous interests (the NACC, ADC and ATSIC) each survived for some years before being abolished by the Commonwealth. Having a new entity established by the constitution provides more balance and continuity in the relationship.

In conclusion, there is no new division here. Indigenous Australians are set apart from other Australians due to access to unique rights, and due to being uniquely and repeatedly targeted by Commonwealth government activities and laws. If the referendum succeeds, this will not change. But we can hope that other things change for the better.

Making a VRM avatar from Ready Player Me

When I went looking to create an avatar, I discovered that there were a lot of options. There are 2D avatars that look like animated illustrations and 3D avatars that look like video game characters. There are full-body avatars, and half-body avatars (the top half, if you’re wondering). There are avatars tied to a particular app or service, and avatars that use an interoperable standard. There are many standards.

I decided that I wanted a full-body 3D avatar, since this seems to be the way things are headed. If I was using a Windows PC, I would be able to use something like Animaze and have my avatar track my gestures and expressions. However, I am currently using a Mac, and there are fewer options, especially in English. I was able to find the browser-based FaceVTuber service and the application 3tene, though. 3tene requires avatars in the VRM standard, so that made my decision for me.

The easiest way to create a VRM avatar seems to be to use the VRoid Studio application, although the resulting avatars look like anime characters. I wanted to create a more realistic-looking 3D avatar, and a service like ReadyPlayer.Me would be perfect, as it quickly creates an avatar based on a photo. The catch is that ReadyPlayer.Me does not yet export a VRM version of its avatars. But there is a way to do it, if you’re willing to jump through some hoops.

This is a guide that I’ve put together based on trial and error, and heavily inspired by ReadyPlayer.Me’s instructions on exporting to a GLB file for Unity and Mada Craiz’s video on converting a ReadyPlayer.Me GLB file into a VRM file.

Firstly, you will need to have downloaded Blender and Unity / Unity Hub. For Unity, you will probably need to also set up an account. This guide was based on using Blender v3.2.1 and Unity 2020.3.39f1 Intel.

You will also need to download the UniVRM package for Unity. I used v0.103.2, which was the latest version at the time. Make sure you download the file named something like UniVRM-0.xxx.x_xxx.unitypackage. You don’t need the other files.

How to create a VRM file from a Ready Player Me avatar

  1. Create a folder that you’re going to store all the avatar assets in, let’s call it vrm_assets.
  2. Create an account on ReadyPlayer.Me, and build an avatar for yourself. It’s pretty fun.
  3. Click on “My Avatars”. You may need to click on Enter Hub to see this menu option.
  4. Click on the 3-dots icon on your avatar, and select “Download avatar .glb”, and store it in vrm_assets (or whatever you called that folder before).
    screenshot of page within Ready Player Me showing the menu to download a GLB file
  5. Open Blender, and start a New File of the General type.
  6. In the Scene Collection menu, right-click the Collection and choose Delete Hierarchy, to get rid of everything in the scene.
  7. Then select File > Import > glTF 2.0 (.glb/.gltf) menu option, pick the avatar GLB file that you downloaded from ReadyPlayer.Me and stored in vrm_assets, and click “Import glTF 2.0”.
  8. If you’re worried that all of the colours and textures are missing, you can get them to appear by pressing “Z” and selecting Material preview, but you can skip this step.
  9. Select Texture Paint on the top menu bar to enter the Texture Paint workspace.
  10. Change the “Paint” mode to the “View” mode in the menu in the top left of the Texture Paint workspace screen.
    screenshot of Blender showing where the View menu is
  11. Then use the texture drop-down in the menu bar at the top to select each texture (Image_0, Image_1, etc.) in turn.
  12. For each texture, select the Image > Save As menu option to save it as an individual image in your vrm_assets folder. Some of the textures may be JPG files while others are PNG files. Don’t worry about that. Just make sure you save all the images, though you can ignore “Viewer Node” and “Render Result”.
  13. Now select File > Export > FBX (.fbx) and before you save, change the “Path Mode” to “Copy” and click on the button next to it to “Embed Textures”. Then click the “Export FBX” button to save it into vrm_assets as well.
    Screenshot in Blender showing where to set Path Mode to Copy
  14. Close down Blender, and open up Unity Hub.
  15. Create a New Project, selecting an Editor Version that begins with 2020.3 and the 3D Core template. Give the project a name that works for you; I will use “VRM init”. Click “Create project”.
  16. Wait a little while for it to start up, then a blank project will appear. The first thing to do is bring in the UniVRM unitypackage file, so drag that from the file system into the Assets window. You will be shown an import window, with everything selected. Just click Import to bring it all in. After it’s done, UniGLTF, VRM and VRMShaders will be added to the Assets window.
    Screenshot of Unity showing the import of the unitypackage
  17. Create a new folder in the Assets window called Materials. Open the Materials folder, then drag all the texture files from vrm_assets over into it.
    Screenshot of Unity showing the textures in the Materials folder
  18. Go back out of the Materials folder to the top level of Assets, and drag the FBX file that you exported from Blender into the same Assets window. The model will appear there after a little while.
  19. If at any point you get an error message like “A Material is using the texture as a normal map”, just click “Fix now”.
  20. Click on the model, then in the Inspector window, click on Rig. Choose Animation Type to be “Humanoid”. Click Apply.
  21. Staying in the Inspector window, click on Materials. Choose Material Creation Mode to be “Standard (Legacy)”, choose Location to be “Use External Materials (Legacy)”, and leave the other options at their defaults (Naming as “By Base Texture Name” and Search as “Recursive-Up”). Click Apply.
  22. Drag the model from Assets into the Scene.
  23. If your model is meant to look like an anime figure, do this step, but otherwise (e.g. for more realistic avatars) skip it. Expand the newly created avatar in the Hierarchy window, and for each Material listed (which should be everything but Armature), click on it, then scroll down in the Inspector to the Shader. Click on the Shader drop-down (it may say something like “Standard”) and change it to VRM > MToon. Do this for all the materials in the model.
    Screenshot of Unity showing where to change the material Shader
  24. You can also make other tweaks to the materials at this point. I find Unity makes the textures look a little grey; this can be corrected by going into each Material as described in the previous step, opening up the Shader, and changing the colour next to Albedo to hexadecimal FFFFFF (instead of CCCCCC). This is completely optional though.
  25. Click on the avatar in the Hierarchy window, and then in the VRM0 top-level menu of Unity, select Export to VRM 0.x. The export window will pop up.
    Screenshot of Unity showing the VRM export window
  26. Click on “Make T-Pose”. Scroll down a bit and enter a Title (i.e. the name of your avatar), a Version (e.g. 1.0) and the Author (i.e. your name). Then click Export. Choose a name like “avatar” and save the VRM file into your vrm_assets folder.
  27. Delete the avatar that you just exported from the Scene by right-clicking it in the Hierarchy and choosing Delete. This just keeps the Scene neat for later.
  28. Now, drag the newly-saved VRM file into the Assets window of your Unity project. It is time to configure the lip synch and facial expressions.
  29. Double-click on the BlendShapes asset (if you had saved the VRM file as avatar.vrm, this asset will be called avatar.BlendShapes) to show all the expressions that can be configured. Clicking on BlendShape will allow you to easily see and configure them in one place.
    Screenshot of Unity showing the configuration of Blend Shape
    Configuring the vowels will allow lip synch to work with your avatar, but you should configure all of them to ensure your avatar doesn’t look too wooden. Note that the vowels are in the Japanese order: A, I, U, E, O. Here are the settings that I used, but different avatars will need different values.
    • A:
      • Wolf3D_Head.viseme_aa 100
      • Wolf3D_Teeth.viseme_aa 100
    • I:
      • Wolf3D_Head.viseme_I 100
    • U:
      • Wolf3D_Head.viseme_U 100
    • E:
      • Wolf3D_Head.viseme_E 100
      • Wolf3D_Teeth.viseme_E 30
    • O:
      • Wolf3D_Head.viseme_O 100
      • Wolf3D_Teeth.viseme_O 100
      • Wolf3D_Teeth.mouthOpen 15
    • Blink:
      • Wolf3D_Head.eyesClosed 100
    • Joy:
      • Wolf3D_Head.mouthOpen 60
      • Wolf3D_Head.mouthSmile 48
      • Wolf3D_Head.browInnerUp 11
    • Angry:
      • Wolf3D_Head.mouthFrownLeft 65
      • Wolf3D_Head.mouthFrownRight 65
      • Wolf3D_Head.browDownLeft 20
      • Wolf3D_Head.browDownRight 20
    • Sorrow:
      • Wolf3D_Head.mouthOpen 60
      • Wolf3D_Head.mouthFrownLeft 50
      • Wolf3D_Head.mouthFrownRight 50
      • Wolf3D_Teeth.mouthOpen 30
    • Fun:
      • Wolf3D_Head.mouthSmile 50
    • LookUp:
      • EyeLeft.eyesLookUp 36
      • EyeRight.eyesLookUp 36
      • Wolf3D_Head.eyeLookUpLeft 75
      • Wolf3D_Head.eyeLookUpRight 75
    • LookDown:
      • EyeLeft.eyesLookDown 40
      • EyeRight.eyesLookDown 40
      • Wolf3D_Head.eyeLookDownLeft 20
      • Wolf3D_Head.eyeLookDownRight 20
    • LookLeft:
      • EyeLeft.eyeLookOutLeft 67
      • EyeRight.eyeLookInRight 41
    • LookRight:
      • EyeLeft.eyeLookInLeft 41
      • EyeRight.eyeLookOutRight 67
    • Blink_L:
      • Wolf3D_Head.eyeBlinkLeft 100
    • Blink_R:
      • Wolf3D_Head.eyeBlinkRight 100
  30. Now go back to the top level of the Assets window and scroll down to the avatar VRM model, then drag it into the Scene.
  31. Just as before, in the VRM0 top-level menu of Unity, select Export to VRM 0.x. You can leave the fields as they are, or update them. Click on Export. Save your VRM file into your vrm_assets folder with a new name to reflect that it now has the expressions configured.
  32. Save and quit Unity, in case you want to come back and make further tweaks. You now have a VRM model.

Test out the VRM file in the avatar application of your choice! Good luck.

Turning up for work as an avatar

I don’t think we’re talking enough about avatars. I don’t mean the James Cameron film or the classic anime series. I’m referring to the 3D computer model that can represent you online, instead of a picture or video of the “real you”.

Due to the Covid-19 pandemic, we’ve had something like 5 years of technology uptake in an accelerated timeframe. Remote working has become much more common, with people regularly joining meetings with colleagues or stakeholders via services like Teams, Webex or Zoom rather than meeting up in person.

While pointing a camera at your face and seeing an array of boxes containing other people’s faces has its merits, it also has a bunch of downsides. It turns out that many of these can be addressed by attending the meeting as an avatar rather than via camera.

Interacting with others via avatars is the normal way of things when it comes to computer games. Many people are familiar with avatars from online social settings like Minecraft, Fortnite or Roblox. I’d guess that many kids today have spent more hours interacting online with others as an avatar than on camera.

So, there may be a generational shift coming as such people come up through our Universities and workplaces. But there are also fair reasons for moving to avatars for meetings in any case. Here are five reasons why you should consider turning up for work online as an avatar.

1. It’s less stressful

Being on camera can be a bit stressful, since your appearance is broadcast to all the other people in the same meeting, and other people can be a bit judgy. Why should your appearance be the concern of people that don’t need to share the same physical space as you?

If you attend a meeting as an avatar, you

  • Don’t have to shave, brush hair, put on makeup
  • Don’t have to worry about a pimple outbreak, or a bad haircut
  • Don’t have to get out of pyjamas, take off a beanie, or cover up a tattoo
  • Know there’s no chance of someone embarrassing wandering past in the background or a pet leaping up in front of you

2. You will appear more engaged

Well, if having the camera on is stressful, why not just turn it off? In some workplaces or schools, it is considered bad etiquette to turn off your camera in a group video call. It is not a great experience to be talking to a screen of black boxes and not seeing anything of your audience. Seeing a participant’s avatar watching back instead of a black box is a definite improvement.

However, sometimes it is a good idea to turn off the camera, such as when eating or visiting the bathroom. The participant is still engaged in the meeting, but has turned off the camera for good reasons. There is no need to do that with an avatar.

An avatar is also able to make eye contact throughout the meeting. Unfortunately, not everyone with a camera can do this, as the camera might be positioned to the side of, above, or below the screen that the participant is actually looking at. This tends to make the participant look distracted, since that is how such behaviour would be interpreted in a face-to-face meeting. Avatars don’t have this issue.

3. Avatars are more fun

With Teams, Webex or Zoom, you can replace your background with a virtual background for a bit of fun. With an avatar, you can change everything about your look, and make these changes throughout the day.

You don’t even need to be human, or even a living creature. You might want to stick to an avatar that is at least humanoid and has a face, but there’s a huge creative space to work within.

In some online services, avatars are not limited to being displayed in a box (like your camera feed is), but can interact in a 3D space with other avatars. This also means that stereo audio can be used to position each avatar in that space, making it easier to tell who is speaking just by where the sound is coming from, or to distinguish a speaker when someone is talking over the top of them.

4. There may be less risk of health issues

Most group video meeting services show a live feed of your own camera during the call. It’s not exactly natural to spend hours a day looking at yourself in a mirror, especially if the picture of you is (most likely) badly lit, shot from an odd or unflattering angle, and captured with a cheap camera lens. Couple this with seeing amazing pictures of others online, say on social media, and it all starts to look a bit unhealthy.

While it’s not an official condition, there is some discussion about what is being called Zoom dysmorphia, where people struggle to cope due to anxiety about how they appear online. Some even go down the plastic surgery route to deal with this.

Having a camera on all the time may also be generally unhealthy since it ties people to the desk for the duration of the call. Without this, for some meetings, people might instead take a call while walking the dog or taking a stroll around the block.

5. It works well for hybrid meetings

Hybrid is hard. It’s typically not a level playing field to have some meeting participants together in a room and some joining remotely. Having a camera at the front of a room capturing all of the in-person attendees means it is often difficult for the remote participants to see them.

The main alternative is that all the participants in the room have a device in front of them that allows them to join the meeting as a bunch of remote participants who happen to be in the same place. This usually results in a bunch of cameras pointing up people’s noses, as the cameras in a laptop or tablet are not at eye-level.

If the people in the room join as avatars, they can be shown nicely to the other participants, and the individuals’ cameras are often still adequate for animating their avatars to track their faces and bodies.

However

There are some downsides to using avatars. They can make things more difficult for hard-of-hearing participants, since they can’t rely on lip reading to follow a conversation. There will need to be discussions about avatar etiquette, so people aren’t made uncomfortable by certain types of avatar turning up to meetings. And the technology is still evolving, so it can look a bit unnerving if an avatar doesn’t show expected human emotions.

But directionally, avatars solve problems with our current group video meetings, and we can expect to see them become more mainstream over the coming years.

What is a qubit?

I am not a deep expert in quantum computing, but I know several people who are. In order to chat to them, I have read quite a few introductory quantum computing articles and online courses. However, I find that these are either pitched at a level where it’s all about the hype, or at a level where you need a good background in mathematics or physics to follow along. So, I have been trying to find a way to usefully describe a quantum computer to people without the technical background.

This is just such an attempt. If you’re still with me, I hope you find this useful. This is for people that don’t know the difference between Hamiltonians, Hermitians or Hilbert spaces, and aren’t planning to learn.

Let’s start with some definitions. A quantum computer is a type of computing machine that uses qubits to perform its calculations. But this raises the question: what is a qubit?

Digital, or classical, computers use bits to perform their calculations. They run software (applications, operating systems, etc.) that run on hardware (CPUs, disk drives, etc.) that are based on bits, which can be either 0 or 1. The hardware implementation of these bits might be based on magnetised dots on plastic tape, pulses of light, electric current on a wire, or many others.

Qubits are “quantum bits”, and also have a variety of hardware implementations such as photon polarisation, electron spin, or again many others. Any quantum mechanical system that can be in two distinct states might be used to implement a qubit. We can exploit the properties of quantum physics to allow a quantum computer to perform calculations on qubits that aren’t possible on bits.

Before we get to that, it is worth noting that quantum computers have been shown to perform certain specialised calculations in minutes that even a powerful classical computer could not complete in thousands of years. For these specialised calculations, the incredible speed-up in processing time is why quantum computers are so promising. As a result, quantum computers look set to revolutionise many fields, from materials engineering to cyber security.

Since a qubit can be made from a variety of two-state quantum systems, let’s consider an analogy where we implement a qubit on something we all have experience with: a coin. (I know this is not an exact analogy since a coin is a classical system not a quantum mechanical system, and it can’t actually implement entanglement or complex amplitudes, but it’s just an analogy so I’m not worried.)

If we consider a coin lying on a table, it can be either heads-up or heads-down (also known as tails). For the purposes of this analogy, let’s call these states 1 and 0. You will recognise that this is like a classical bit.

Maybe this coin has a different type of metal on each side, so we could send some kind of electromagnetic pulse at it to cause it to flip over, and this way we could change it from 1 to 0, or vice versa. If there is another coin next to it, we might consider another kind of electromagnetic pulse that reflects off only one of those metals, in a way that would flip the adjacent coin only if the first coin’s 1 side was up. You might ultimately be able to build a digital computer of sorts on these bits. (You can build a working digital computer within the game of Minecraft, so anything’s possible.)

Let’s now expand our analogy and add a coin flipping robot arm. It is calibrated to send a coin up into the air and land it on the table, such that it always lands with the 0 side up. While the coins are in the air, these are our qubits. When they land on the table, they become bits.

Now we can flip coins into the air, and send electromagnetic pulses at them to change their state. However, unlike bits that can be only either 0 or 1, qubits have probabilities. A pulse at a coin can send it spinning quickly so that when it lands on the table it will be either 0 or 1 with a 50-50 chance. Another pulse might reflect off this spinning coin so that it hits the next coin and spins it only if the pulse happens to hit the 1 side of the first coin. Now when the coins land, they have a 50-50 chance of either being both 0 or both 1.

However, you won’t know this from measuring just once. You will want to perform the same coin flips and electromagnetic pulses a hundred times or more, and count how often you get each of the different results. If you do the experiment 200 times, and roughly 100 of those times you get two 0s while the other times you get two 1s, you can be pretty confident that this is what is going on. For more complicated arrangements of pulses, and greater numbers of coins, you might want to do the experiment 1,000 times to get a clear idea of what is happening.
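To make the statistics concrete, the classical bookkeeping of this analogy is easy to simulate in a shell. This is just a toy sketch, not a quantum simulation: each run flips one fair coin to decide whether the correlated pair lands as two 0s or as two 1s, and then we count the outcomes. A typical run prints counts close to 100 each, though the exact numbers will vary:

% n00=0; n11=0
% for i in {1..200}
for> do
for> if (( RANDOM % 2 )); then (( n11++ )); else (( n00++ )); fi
for> done
% echo "both 0: $n00, both 1: $n11"
both 0: 103, both 1: 97
%

If the pulses had instead set up a different split, or left the two coins uncorrelated, the counts over many runs would reveal that too, which is exactly the point of repeating the experiment.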

This is how quantum computing works. You perform manipulations on qubits (coins in the air), these set up different possible results with different probabilities, the qubits become bits (coins on the table) that can then be read and manipulated by a classical computer, and you repeat it all many times so you can determine things about those probabilities.

Pandemic Life

Over the past couple of years, we’ve all experienced the impacts of pandemic-related restrictions. These changes to how we live, learn and work have been with the goal of protecting society, but they have been severe at times. Here in Melbourne, where I live, we had perhaps the longest time in lockdown experienced anywhere.

Now that we have sufficient vaccinations, tests and treatments to manage Covid-19, it looks like we might be coming out of the pandemic. Before I forget what the last couple of years were like, I wanted to record here some of what our daily experience was. In particular, what we did in order to get through those long lockdown months.

I didn’t want to share this earlier, as I’m aware that many people were just trying to get through the days. Having a list of what we did in our household might have added pressure to others. There was no one way of doing lockdown right. Whatever got you through to the end of the day, and to the end of the week, was sufficient.

Work

My wife and I were both working remotely in lockdown. We immediately converted the spare bedroom / junk room into a study and set up desks for each of us there. This very quickly became frustrating with both of us trying to do video meetings out of the same space at the same time. My wife moved into one of the living areas, converting half of it into an office, and this worked a lot better.

Having a separate space for work and non-work was helpful for when trying to “switch off” at the end of the work day. In addition, I continued to wear work clothes for work, and casual clothes for when work was done, aiding with compartmentalisation. However, I soon switched from trousers to comfy jeans. If it isn’t on camera, it doesn’t count.

Part of the daily start-of-work routine was collecting a coffee from the local coffee shop. While it was just a small spend, we felt we were also doing a little bit to help a local business get through lockdown. A benefit was that I ended up getting to know a bunch of the staff there by name, and I continue to go there still.

I set up a networked print server for our old laser printer so everyone could print what they needed from whatever device they were using, as well as an Internet monitor display that could show when the Internet connection was down or behaving poorly. There were a lot of shouted questions of “is your Internet still working?” while we were in lockdown.

Health

Lockdown wasn’t exactly healthy for anyone. I had been doing Body Pump classes at a local gym, and in lockdown there wasn’t even the exercise of walking to a train station or walking between meeting rooms. I got a floor mat and some hand weights, and ended up doing Body Pump-style exercises to random Spotify music 2-3 times a week in the mornings. Even now that lockdown is over, I’ve continued this practice.

As a family, we tried to find exercise we could do together (within our 5 km limit). Initially we looked at the Joe Wicks videos but we didn’t really have the space to do it, and the kids launched a protest as well. So, we ended up doing family bike rides at lunch time. Unfortunately, the association with lockdown has tainted family bike rides in the neighbourhood since then. Still, the kids became really decent riders.

Another nice thing about the rides was that we got to know the neighbourhood better. We’d moved to the area just a couple of months before lockdown, so there was plenty to explore. Also, during one of the lockdown periods, people would put teddy bears in their windows and it was fun to spot them. We also had a bit of a bear arrangement for a while on our verandah.

A downside to the bike rides was that we had to leave the dog behind, which she didn’t like. She was a bit of an escape artist, and I had an ongoing project to fit things to our fence so she wouldn’t be able to climb over it. A complication was that we were renting, so we couldn’t attach anything permanently. In the end, I attached some planks to the top of the fence with wire, and this was sufficient to prevent her going up and over.

When it came to mental health, the Headspace app got heavy use and we signed up for a family plan. It was the first time I’d stuck with a meditation program, and it was very useful in managing stress levels.

Education

The kids were both at primary school in the first year, then one went up to high school in the second year. Remote learning in general worked pretty well.

When both were in primary school, they would typically finish off all their learning by the morning, and then amuse themselves in the afternoon, outside of any specialist class meetings. They shared different ends of the dining table, and this was also good for Wi-Fi connectivity. We insisted that they have cameras on for their video meetings, and it seems this was a bit unusual. It did give us an argument for why they needed to be dressed by 9am though.

Their schools did a good job in implementing remote learning for lockdown, but remote socialising was not a focus for schools. We had a virtual substitute for the classroom but not for the playground. Our kids could play with each other a bit, but when the eldest went up to high school, this no longer worked.

Eventually, the parents of the kids in the primary school class managed to get everyone onto Discord, and this became the means for them to stay in social contact. It would have been better to have a more age-appropriate solution, but this was the best we could arrange, and the benefits of ongoing social contact outweighed the disadvantages.

The schools used Compass to communicate with parents, and it was a big step up from the level of communication we had before lockdown. Unfortunately, Compass has a number of very annoying quirks, and I ended up developing a script to process the Compass email alerts and turn them into a readable message instead of a message to click a link to a message that later disappears.

As well as the individual classes, we also got into a routine of watching BTN and Science Max together as a family. There was also a little bit of Mark Rober thrown in for good measure.

The school lunch break was able to coincide with the parents’ lunch break, and so we tried to all spend time together at that point, if only for 30 mins. We ended up designating one bedroom as the “lunch room”, since it was a different space from the ones we were working and learning in.

Friends and Family

Many regular evening and weekend activities couldn’t work as normal under lockdown. My local orchestra’s rehearsals and performances couldn’t go ahead, and it switched to a fortnightly online orchestra social get-together instead, with a mix of quiz nights, celebrity interviews and Acapella app selfie-performances.

My monthly friendly dinner party club shifted to a mode where we agreed on a theme and then all cooked it at home for our families and households, but ate together via Zoom, Webex, or Teams. Not quite the same, and having to cook food that the kids would also eat meant it was less adventurous, but still a bit of fun.

The book clubs I was involved in also went online, but there was a bit of a drop-off in attendance. There was definitely some Zoom fatigue going on, and it was hard to be motivated to read serious books when there were enough other serious things to worry about.

The Melbourne-based part of our extended family was unfortunately outside our 5 km limit, but we kept in touch with them through weekly video sessions. Plus there were other regular catch-ups with friends.

A new weekly tradition was joining a couple of friends virtually for Locked Down Trivia, which raised money for good causes, and gave us a good excuse to try out a variety of cocktails. Some people there got into dress-ups and group challenges, but we were there for the laughs. And possibly to test our ability to confirm our knowledge via Google.

Hobbies

Just like everyone else it seems, we started doing jigsaw puzzles. There was generally a puzzle set up somewhere, and anyone could come past and work on it for a bit if they needed to distract themselves or reset.

We started a weekly tradition of big Sunday lunches. Initially it was Sunday roasts, but it didn’t take too long to widen it to a broader set of cuisines. I remember we did a big dish of lasagne a couple of times and also crepes one day.

I was one lockdown behind the trend in some of my hobbies, and after it boomed in Lockdown 1, I took up sourdough baking in Lockdown 2. I’ve posted before on this blog about my adventures in gluten-free sourdough, and I’m pleased to say that my sourdough starter is still alive!

Also on food, we ordered a few minor luxuries to treat ourselves from time to time. We bought nice tea from Tea Leaves in Sassafras, nice chocolate from Haigh’s, and nice gin from all over the place. Occasionally, we’d order a nice dinner to be delivered at the same time as some friends ordered theirs, so we could have a virtual dinner party together.

The kids found their own ways to cope, and the tough times resulted in an unexpected burst of creativity. There was a lot of Lego building, and we went through a lot of craft kits. In addition, for a few months they made a weekly newspaper called Big House News that chronicled the more dramatic events in the house, as well as poking fun at their parents. There was also a series of stop motion animation videos produced and shared with remote family members. Looking back, some of the videos had rather dark humour, but they were at least all humorous.

We also experimented with playing Dungeons & Dragons. I got out my old AD&D 2nd Edition books and over successive weekends, we ran through a short campaign. It all went a bit silly and we had lots of laughs.

I’d be remiss not to mention the heavy use that the PlayStation 4 got during lockdown, and then the PlayStation 5 (once we could get our hands on one). I think we now own every expansion pack for The Sims 4, and I spent a lot of time in action RPGs like The Witcher 3, God of War, and Assassin’s Creed Valhalla.

Other Stuff

We were forced to switch to ordering our shopping online for home delivery. We had tried this a few years back and stopped after having issues like missing items or strange substitutions. Apparently these are still issues.

With widespread panic buying affecting supermarket shopping, we switched to buying toilet paper on subscription. Happily, gluten-free varieties of products tended to be less affected by panic buying. For some reason, gluten-free pasta and gluten-free flour are not what doomsday preppers want to keep in their stash.

Although, one prepper move we made was to ensure we had at least half a tank of fuel in the car, and at least half a bottle of gas for the BBQ on hand. Pandemic restrictions were randomly hitting different industries, and it was hard to predict which supply chain would be the next to be disrupted.

But let’s hope we don’t have to do all this again.

First word of Wordle

In the last week, I have started playing the online word game Wordle by Josh Wardle. I was lured in after getting curious about some strange Twitter status updates that showed rows of green, grey and yellow blocks. It turns out it’s a fun game, too.

The basic idea is to try to guess a five-letter word, and you get six guesses. Each day there is a new word, and everyone gets to guess the same one. After each guess (which must be an actual word), you get some information on how close it was: each letter of the guess is shown as green (correct letter in the correct position), yellow (correct letter in the wrong position) or grey (letter not in the word). After you’ve finished guessing, you can share a status update that shows how well you went, in a way that doesn’t give away any information about the word. That’s what I was seeing on Twitter.

I’ve done it four times now, and a natural question is what word should be the first guess. At that point in time, there is no information about the daily word, so it makes sense to me that the first guess should be the same each day. However, what is the best word to use for that first guess?

The conclusion I’ve reached is that the best first word should have five different letters, which together are the five letters most likely to appear in a word, i.e. maximising the chance of getting yellows. Additionally, each of those letters should ideally be in the position where it is most likely to match, i.e. maximising the chance of getting greens.

To figure this out properly, I would need to know the word list being used by Wordle, which unfortunately I don’t. In fact, there may be two word lists: one used to validate guesses, and one used to pick the daily word. So, I’ll make a big assumption and use the Collins Scrabble Words list from July 2019.

My tool of choice is going to be zsh on my MacBook Air. It doesn’t require anything sophisticated. Also, I’ve removed any extra headers from my word list, and run it through dos2unix to ensure proper end-of-line treatment.

First job is to extract just the 5 letter words:

% grep '^.....$' words.txt > words5.txt
%

Now we need to figure out how many words each letter of the alphabet appears in:

% for letter in {A..Z}
for> do
for> echo $letter:`grep -c -i $letter words5.txt`
for> done | sort -t : -k 2 -n -r | head -n 10
S:5936
E:5705
A:5330
O:3911
R:3909
I:3589
L:3114
T:3033
N:2787
U:2436
%

That wasn’t very efficient, but it doesn’t need to be. We have our answer – the most popular letters are S, E, A, O and R. Putting these letters into a free online anagram tool, it turns out that there are three words made up from exactly these letters: AEROS, AROSE and SOARE.
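As a quick check, you don’t even need the anagram tool, since the word list can be searched directly. The first grep below keeps the 5-letter words built only from those five letters, and the rest require each letter to be present, which for a five-letter word means each appears exactly once:

% grep -i '^[aeros][aeros][aeros][aeros][aeros]$' words5.txt | grep -i a | grep -i e | grep -i r | grep -i o | grep -i s
AEROS
AROSE
SOARE
%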

Okay, so while only one of these is a word that you’d actually use, it turns out that Wordle accepts them all. It looks like Wordle might use the Scrabble word list for its guesses.

In any case, this looks like a pretty good set of letters, as the words in the word list are highly likely to have one of these letters:

% grep -c . words5.txt
12972
% grep -c -i -e A -e R -e O -e S -e E words5.txt
12395
%

Of the 12,972 words in the word list, 12,395 (96%) will have at least one letter match!

The next job is to figure out which of these three words is most likely to have letters in the same position as other words in the word list.

% grep -c -e A.... -e .E... -e ..R.. -e ...O. -e ....S words5.txt 
6578
% grep -c -e A.... -e .R... -e ..O.. -e ...S. -e ....E words5.txt
3742
% grep -c -e S.... -e .O... -e ..A.. -e ...R. -e ....E words5.txt
5726
%

We have a winner! A letter in AEROS is in the right position for 6,578 words (51%).
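The three greps above can also be rolled into a loop, which makes it easy to score any other candidate first word in the same way. This is a zsh sketch, where the ${(s::)word} expansion splits a word into an array of its letters:

% for word in AEROS AROSE SOARE
for> do
for> l=(${(s::)word})
for> echo $word: `grep -c -e "$l[1]...." -e ".$l[2]..." -e "..$l[3].." -e "...$l[4]." -e "....$l[5]" words5.txt`
for> done
AEROS: 6578
AROSE: 3742
SOARE: 5726
%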

So, it looks like using AEROS as your first guess in Wordle is a pretty good choice. Just don’t tell anyone that’s what you’re doing, or if you share the standard Wordle status update, it will actually contain spoilers.

Messing around with DWeb

You may have heard something about NFTs recently. They are the technology concept that underpins the ability to sell an authoritative version of a piece of digital art, sometimes for millions of dollars. It is a bit like selling a signed print for more than the unsigned print sells for, except the unsigned print is free while the signed print is worth $69M. But that’s not really what I wanted to talk about.

If you are the sort of person who pays that much for a bunch of electrons somewhere, you don’t want to wake up tomorrow to find them gone. Many well-known websites have, at various times, been brought down by DDoS attacks or merely defacement attacks, and content has gone missing. A website is a surprisingly brittle thing, and relies on domain name registrars, nameservers, web hosts, ISPs and other parties to all come together to deliver the content that you’re expecting. Since a buyer may expect their newly acquired, expensive digital artwork to be as long-lasting as a statue or painting, traditional web infrastructure is not really the solution.

So, NFTs are now making use of decentralised Web or DWeb technology, where the content delivery has no single points of failure. A lot of the thinking behind this is motivated by free speech ideals and resisting government control, but it can just as easily be put to the service of capitalist art speculators. Or, in my case, blog authors.

I was curious to explore what was involved in putting my humble WordPress blog onto the DWeb, or as it is sometimes called, Web 3.0. It wasn’t too hard, but the material I found explaining it was a little esoteric. Follow along if you would like to do this too!

There are basically two things that I needed to do: host the content somewhere (equivalent to using a web host, or perhaps a CDN) and register a name that could point to that content (equivalent to registering and hosting a domain name). In theory, you don’t need the name, but the address for the content then will not be human-readable or memorable.

IPFS (or InterPlanetary File System) is a technology for hosting content in a decentralised fashion, a bit like peer-to-peer file sharing. The main catch is that the files are all static, which means that they can’t run a platform like WordPress. I had to begin by creating a static mirror of my website as plain HTML, CSS, JavaScript and images, without any dynamic content. If I wanted to do it properly, I’d also have replaced the backend system that allows people to leave comments, but instead that feature will simply be disabled.

At the Terminal prompt of my Mac (which suffices for a Unix shell), I used these commands in an empty directory (the wget options convert links for local browsing, recurse through the whole site, fetch page requisites like images and CSS, add .html extensions, and skip the dynamic feed URLs):

% wget -k -r -p -N -E -w 0.5 -nH -e robots=off -R "*\?feed=*" -R "*\?rest_route=*" -R "*&*" https://www.aes.id.au
% git init
% git add *
% git commit -m "Initial commit"
% git remote add origin https://github.com/aesidau/www.git
% git push -u origin master

and within a couple of hours, I had a static copy of my website stored in GitHub. This is a necessary first step for making use of Fleek, which handily takes a GitHub repo and deploys it to IPFS. It is also free to use for personal purposes if you use less than 3GB of storage!
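As an aside, Fleek isn’t strictly required. If you install the IPFS command-line tools (the go-ipfs CLI, now known as Kubo), you can add the static mirror to IPFS yourself. A sketch, run from the directory containing the mirror:

% ipfs init
% ipfs add -r .
% ipfs daemon

The final “added” line printed by ipfs add is the content hash of the whole directory, which plays the same role as the IPFS Hash that Fleek reports. The catch is that the content is only retrievable while your node (or a pinning service that has pinned that hash) is online, which is why a hosted service like Fleek is convenient.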

At this stage, my blog was now available at https://ipfs.fleek.co/ipfs/Qmahm66pomdqppz71abMixDnHWr9b1HmqXhv1iTGrEWjb2/ which is a bit of a mouthful. That last part is the IPFS Hash that is used to uniquely refer to my blog content. Ideally, I could share something short like https://aes.id.au so the next step was registering a suitable name.

There are a few contenders for the name service of the DWeb, including Handshake and Namecoin, but currently the most popular one seems to be ENS, which uses the Ethereum blockchain. To buy a name via ENS, you’ll need some Ether and a supported wallet to store it in – MetaMask, Portis, Authereum, Torus, WalletConnect and MEW are the various options at the moment. I chose the option of using the Chrome browser together with the MetaMask extension. The amount of Ether you need to buy will fluctuate based on the price of Ether and exchange rates, but it will probably be in the tens of dollars. Also, if you want to buy a name that is only 3 or 4 characters long, it will be a lot more expensive. Additionally, every time you update the ENS name record, it will cost some more Ether in transaction fees.

After I’d installed MetaMask, set up a wallet with it, and put in some Ether, I needed to go to the ENS App site and click on “Connect”. Then it’s just a matter of following the instructions to register a name. Once the name is registered, click on the option to manage it, then click on the option to edit the record. I also updated the entries for the ETH and BCN addresses, since the changes would all be covered by the same fee, but the main field to edit is “Content”. I put “ipfs://Qmahm…” here, with the full IPFS hash, and saved the record.

That’s it. So, now I can refer to that static mirror of my blog by ipns://aesid.eth (in the Brave browser) or aesid.eth/ (in the Chrome browser with MetaMask installed) or https://aesid.eth.link (which uses an IPFS gateway and should work in every browser). Unfortunately, while it is now protected against my WordPress blog disappearing, it is already out-of-date, as this blog post isn’t there!

Working without the commute

This post originally appeared over on Medium.com

The regular way of (white-collar / office) working in the coming years will be quite different to that of the last decade, but also quite different to what most people are predicting.

This is going to be one of those articles where the author talks about what may happen in the future, after some of the Covid-19-related restrictions ease. Why add to the steaming pile? In my case, I want to put forward a view that is different from most of what I am seeing, and it is also good to document a prediction so that it can be tested in the future for how well it tracked reality.

The majority of articles I’ve read about future ways of working seem to predict that the future will be like the present. Principally, that we are currently working from home and that this will continue. Bloomberg reports that Google has seen the equivalent of $1b a year in savings from not having people working in and travelling between offices. Additionally, around two thirds of employees in US companies would rather work from home than get a $30,000 raise. So, it seems there are benefits to both employers and employees in letting the current situation continue.

I’ve noticed that in my own experience, there has been a significant productivity improvement in working remotely. Firstly, I work longer, as some of the time I would have spent commuting is spent working instead. Secondly, unproductive time at work spent in travelling between meeting rooms, and waiting for meeting rooms to be vacated, has been eliminated when using online meetings. Lastly, I am able to multi-task more effectively during remote meetings, as I can triage and process emails in a way that would have been more difficult to do in person. So, even putting costs like property rental, cleaning and energy for lighting/heating/cooling aside, I expect many employers will experience a productivity hit if all their workers return to the office full-time.

However, these benefits to employers exist regardless of where the remote employee is working from. Significantly, many employees do not have ideal conditions for remote work at home, for example if multiple people are trying to work remotely from the one dwelling and there aren’t enough working spaces to share, or if the dwelling is too small to have a working space that is ergonomically set up for healthy long-term working — I have seen some people working from small bedrooms. The reason such people are still working from home is largely due to restrictions relating to Covid-19, and as these restrictions ease, it’s fair to ask: where would they prefer to work?

The answer seems to be that they’d prefer to work somewhere without a commute. Even before Covid-19, research showed that people hate commuting. One study referenced by Forbes back in 2016 equated removal of a commute with a $40,000 raise, which is an interesting correlation with the work-from-home survey result above.

Additionally, people do value and benefit from the social interaction that they receive in a workspace. This is often at odds with remote working, where online social interaction may need to be a scheduled activity rather than happening informally as part of bumping into people in corridors or kitchens. Meals are a traditional social occasion, but food and drink can’t be readily shared over a video call.

If you put together the value of remote working, the desire for a minimal commute, and the social benefits from working alongside other people, the natural conclusion is that we will see a surge in interest in co-working spaces near people’s homes, once Covid-19 backs off. In Australia at least, many of the co-working spaces have been in the same geographical areas that major corporate offices have been, as the proposition has been in providing flexible offices near to corporates. However, we can expect the proposition to shift to providing flexible offices near to employees.

While co-working businesses have taken a big hit during the Covid-19 lockdowns, up until 2020 there had been a strong trend of growth in the adoption of co-working spaces. A March 2020 study by Coworking Resources showed 17% growth in both the number of users and the number of spaces over the previous two years.

Additionally, there are spaces similar to co-working spaces that will likely support this demand. Many local libraries supply internet connectivity and bookable desks. Some cafes are also happy for locals to work from their premises and use their Wi-Fi as long as they are buying food and drink. In a world where remote working is normalised, arrangements like these might be made more official.

Of course, such spaces do not offer a free alternative to employers providing offices. The costs will need to be borne by someone. Perhaps savings on commuting (vehicles, fuel, parking, tickets, etc.) and on home running costs (energy, internet, etc.) could offset the expense for employees. Perhaps employers will see the cost savings and productivity gains compared with offices and provide financial support, or even get into the co-working space business themselves.

There are also questions about how to maintain business confidentiality in a space where employees from competing organisations may also be working. Co-working space design can help mitigate this, as can suitable IT solutions, but it will remain a risk to be managed. It is not dissimilar to working from an airport lounge or having work discussions in a taxi or cafe, so it shouldn’t be considered a new risk.

I hope that we will see more forecasts about the future ways of working that go beyond working from home or even hybrid working. While many people and businesses want to retain the benefits of remote working, working from home is not going to be the only solution. Shared working spaces, close to people’s homes, will almost certainly be part of it.

Can we talk about the back button?

I am one of those people who is not loyal to a particular smartphone platform. Plenty of people say this, but I truly do switch between an iOS-based phone and an Android-based phone every couple of years. I feel it is my professional obligation to ensure I am aware of the trends relating to smartphones in general, and so I switch.

I have recently switched to using the iPhone 12 Mini after using Android devices for the last couple of years. I love that Apple has added a smaller phone again to their current range, as I like to be able to fully use a phone one-handed as I walk along. It is great that this phone supports 5G. Unfortunately, I am also deeply missing having a back button on the device.

It is a little bizarre to me that I need to explain this, as I’ve come to realise that some people who have exclusively used Apple iOS devices for their entire lives don’t even realise that Android devices have a back button. This is an on-screen, virtual button (it used to be a physical button) that you can tap to take you to the previous screen you were on, and can keep tapping it until you get back to the home screen. It is conceptually the same as the back button in a web browser. Now, I am aware that some recent Android devices have started to do away with the back button also, but I am choosing to believe that this is just a short-lived fad.

The Android back button is such a simple user interface element that it is only when it goes missing that I realise how much navigational heavy-lifting it provides. At any point, in any app, you know exactly where to tap to exit the screen you’ve ended up in. There is no need to figure it out based on visual cues that an app might choose to show. Just like in a word processor (or really any application that allows you to create things), you know you can always Undo, and it’s always the same mechanism. There’s not a different way to Undo a typo compared with an accidental deletion or a formatting glitch.
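
To make that concrete, here is a minimal sketch of the back-stack model in Swift, purely for illustration (the BackStack type and the screen names are hypothetical, not anything from Android itself): opening any screen pushes onto one stack, and “back” is a single universal operation that pops it.

// A minimal sketch of the back-stack model, with hypothetical screen names.
struct BackStack {
    private var screens = ["Home"]

    // Opening any screen, in any app, pushes onto the same stack.
    mutating func open(_ screen: String) {
        screens.append(screen)
    }

    // "Back" is one universal operation: pop the most recent screen.
    // Tapping it repeatedly always bottoms out at the home screen.
    mutating func back() -> String {
        if screens.count > 1 {
            screens.removeLast()
        }
        return screens.last!
    }
}

var nav = BackStack()
nav.open("Messages")
nav.open("Conversation")
print(nav.back()) // "Messages"
print(nav.back()) // "Home"
print(nav.back()) // still "Home": back never strands you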

On the iPhone, the way to leave a given screen is up to the app and can be quite inconsistent. The emerging approach is to use the left-to-right swipe gesture, which is quite elegant, although there is no visual indicator that it will work, so you need to be told about it, and you also need to be prepared for it not to work at all. It would be great if it simply worked all the time, the way the Android back button does. So, this post is also a little bit of a plea for something like that to happen.
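
As an aside, the inconsistency has a visible mechanical cause in UIKit: the left-to-right swipe is supplied by UINavigationController’s interactivePopGestureRecognizer, so it only exists on screens presented through a navigation controller, and any app can simply switch it off. Here is a hypothetical sketch of how easily that happens (the view controller name is made up; the API calls are real UIKit):

import UIKit

// Illustrative only: the system swipe-back gesture belongs to
// UINavigationController, so it exists only inside a navigation stack.
final class DetailViewController: UIViewController {
    override func viewDidAppear(_ animated: Bool) {
        super.viewDidAppear(animated)
        // One line opts this screen out of swipe-back entirely,
        // and nothing on screen tells the user it has happened.
        navigationController?.interactivePopGestureRecognizer?.isEnabled = false
    }
}

Screens presented modally, outside a navigation stack, never get the gesture in the first place, which is why you have to be prepared for it not to work.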

I suspect that people who are regular Android users don’t need to be convinced, so my audience is really iPhone users who don’t realise how inelegant the user experience is. Hence, the rest of this post is a set of actual examples showing what I’m talking about, using screenshots from my current iPhone.

a screenshot of the Messages app on the iPhone

Above is a screen within the Messages app. There is a “<” symbol in the top-left corner, so we know we can go back to the previous screen in the app by tapping it. We can also do a left-to-right swipe to achieve the same thing. So far, so good.

However, say we arrived at the Messages app by searching for the app rather than tapping on its icon in the home screen…

a screenshot of the Messages app on the iPhone

In this case, there is now also a little label “◀ Search” which, if we tap on it, takes us back to the search box. Tapping the “<” takes us to a different screen in the Messages app, and so does a left-to-right swipe. So, it’s a little bit messier, but at least there’s a convention that the “going back” options are in the top-left corner, and left-to-right swipe does the same as “<”. Or maybe not.

a screenshot of the Photos app on the iPhone

This is a screen within the Photos app, displaying a cute pic of my parents’ dog. There is a “<” in the top-left corner to take us back to the photo Library within Photos. However, doing a left-to-right swipe doesn’t do the same thing. Instead, it scrolls to the photo immediately to the left of the displayed one. So, the swipe gesture isn’t reliable, but is the position of the “going back” option in the top-left of the screen reliable?

a screenshot of the Safari app on the iPhone

Well, this screenshot is from the Safari app, where the “<” symbol is shown at the bottom-left. However, this little bar of symbols disappears as we scroll down a page, and reappears only when we scroll back up. In this case, at least, the left-to-right swipe does perform the same action.

Now, tapping on the rightmost icon to show the open tabs…

a screenshot of the Safari app on the iPhone

This takes us to a visual display of the open tabs, but to exit this and return to the previous browser screen, we need to tap “Done” in the bottom-right corner. Additionally, left-to-right swipe doesn’t navigate us anywhere, and risks closing one of the open tabs if we’re not careful. We’ve now found exit prompts in three of the four corners, but can we find one in the top-right corner as well? Why, yes.

a screenshot of the App Store app on the iPhone

This is a screenshot from within the App Store app. If you are on the Search screen, and have searched for something of interest, but then change your mind, the only way to navigate back to the main Search screen is to tap “Cancel” in the top-right corner. Left-to-right swipe doesn’t do it either, unfortunately.

There are other examples we could look at where there is instead an “X” symbol or the word “Done” in the top-left corner, and the left-to-right swipe doesn’t work in these cases either. I hope you’ve gotten the idea.

There is no consistency around which corner the “no, I want to stop and go back to where I was before” symbol or word appears, or even what the symbol or word should be. Sometimes the left-to-right swipe works, sometimes it doesn’t, and sometimes it could scroll within the content or even delete it. There is actually an alternative that provides a single, consistent mechanism, and it’s called a back button.

Long ago, in 1987, Apple introduced something called HyperCard, which was software for the Apple Mac computers of the time. HyperCard was a huge thing and has influenced many aspects of computing we still use today, including web browsers. Instead of screens or pages, HyperCard displayed “cards”, and the cards were arranged into what it called “stacks” (although we would call them apps or web sites). Most relevant to our discussion, looking at the HyperCard user manual from 1987, there is this interesting snippet on page 5:

You can always go back: Another way to see the previous card is to press the Tilde key. Stacks in HyperCard often link to each other (a concept you’ll learn more about later). While the left arrow brings you to the previous card in the stack you’re looking at, the Tilde key brings you to the last card you saw, no matter what stack it was in.

So, yes, Apple pioneered the concept of a back button. It is time to bring it back.