Apple Vision Pro – down but not out

There’s currently a spate of negative articles about the Apple Vision Pro. I think these overstate any “failure” of this device, and I’m going to tell you why. Assuming you read on. 😀

Zuckerberg got in early with his own negative take, but recently we’ve had some others. You might have seen articles like the one on the Disconnect blog, or the one from Forbes. It seems to be time for a pile-on.

It’s tempting to think back to the launch of the original iPhone 📱, which was a 2G feature-phone without copy-and-paste functionality or an app store. It lacked many features that were already standard on smartphones at the time. However, it was generally received very positively by a customer base hungry for what even this limited phone delivered.

What is different this time is that the AVP 🕶 delivers very well against a use case that not many people want. Specifically, the use case of watching movies on a big, high-def screen no matter where you are. It has great resolution and sound, and the pass-through capabilities allow you to virtually hang a big screen in your environment wherever you want. If that’s what you’re after, it’s truly amazing.

Unfortunately, that isn’t one of the three areas that I’ve previously written about where VR headsets have a strong advantage. These are in fitness applications, virtual office environments, and immersive training. And, no, I don’t consider general games/entertainment to be an area where VR is particularly strong at the moment.

This initial version of the AVP is a bit heavy to wear on the head, has a UI optimised for consumption (eye-tracking based), lacks controllers, doesn’t support multiple monitor extensions for a laptop/desktop, has detectable motion blur when moving your head, and each AVP needs to be personally tailored for the individual who will wear it, making it harder for a single unit to be shared by multiple people. (I’m going to ignore the issues around the high price, external battery pack, and the odd EyeSight feature that makes eyes visible through the headset, as these aren’t relevant to my argument.)

Together, these design decisions make it hard to have a great fitness application or virtual office environment. Immersive training is still possible for the situation where the user is downloading their own training apps, but the custom fit procedure makes it harder for a training organisation to use them.

Given the lack of fit between the product and the strongest VR use cases, I’m not surprised that the first version of the AVP hasn’t been a runaway success. However, I don’t think this means Apple has failed. The AVP is an amazing technical achievement, and many of the issues can be corrected in a future version.

It appears that Zuckerberg hasn’t written off Apple either, as he recently announced Meta Horizon OS, where the operating system for the Meta Quest headsets is being licensed to other hardware makers for their own headsets. It’s a clear play to position Meta as the “open” alternative to Apple in the VR headset space, similar to the Android ecosystem versus the iPhone, and it makes sense only if Meta feels a competitive threat.

The next version of AVP should ensure it supports some of the three areas of VR strength. For instance,

  • To better support virtual office environments, provide an alternative to the eye-tracking based UI, fix the motion blur, and allow multiple virtual monitors to be extended into the AVP (there are rumours that this is coming).
  • To better support fitness applications, make the unit on the head lighter (e.g. move more weight to the battery pack, remove the components for EyeSight), fix the motion blur, and add support for controllers.
  • To better support immersive training, make the unit on the head lighter, fix motion blur, and add support for controllers, but most of all, remove the need for custom fit components (i.e. different Light Seals, Light Seal Cushions and Optical Inserts).

The first of these is probably the simplest, as it might be addressed through software updates. The others will require hardware changes. Still, the possibility of better supporting the virtual office environment use case through software changes alone gives me confidence that Apple will get there.

So, I wouldn’t call the AVP a failure. Perhaps a misfire? The initial version of Apple TV wasn’t a roaring success either. Like Zuckerberg, I believe Apple has proven their technical capabilities, and will evolve their product into something compelling.

5 Basic Prompt Engineering Learnings

Writing prompts for Generative AI systems will be a skill as critical as writing good web search queries has been over the past two decades. If you aim to be effective in making use of GenAI tools, “prompt engineering” is a skill that you should develop. Just as a search box has become ubiquitous in modern online applications, it is becoming common for applications to offer a way to prompt a GenAI system as well.

The ways that applications take this prompt are evolving, and are currently inconsistent across different types of application, e.g. image generation prompts for MidJourney can be very different to code generation prompts for GitHub CoPilot. I’m going to focus here on text chat services like ChatGPT, Gemini, Meta AI, and Claude as these are widely-used given their free access and broad applicability.

There are plenty of good guides out there that provide tips on prompt engineering. It’s a good idea to take a look at a range of these, as they are pretty good, quick to read, and cover different perspectives. For example, here are guides from OpenAI, Google, Microsoft, IBM, DigitalOcean and Cohere. You might also like to do an introductory prompt engineering course, such as Coursera’s one (or just read the paper it is based on).

I’m not going to duplicate these, but instead give a different perspective based on these guides and courses, and my own experiences in following them. The following are five things that I’ve learned, and while they are simple, they are basic principles that can guide your own personal skill-building in prompt engineering.

I’m interested in hearing what other people have found in their own prompt engineering journeys.

1. Prompts do not travel

If you’ve looked at a lot of the prompt engineering guides out there, you’ll have noticed that most of them focus on versions of ChatGPT. However, different GenAI chat services can respond very differently to the same prompt.

As shown in the screenshots above, in response to the simple prompt “to be or not to be, that”, ChatGPT 3.5 has carried on quoting Shakespeare, while Claude 3 Sonnet asserts that it is unable to do so.

Similarly, the same prompt can behave differently between different versions of the same chat service, e.g. ChatGPT 3.5 vs ChatGPT 4 Turbo, or with the same version but at different times.

The first screenshot above is from the Coursera prompt engineering course by Jules White, and used the ChatGPT 3.5 chat service from March 2023. The second screenshot is one that I recently took of the ChatGPT 3.5 chat service in April 2024. ChatGPT is being asked to solve a problem, and comes up with the answer “YES” in the first case, but the answer “NO” in the second case (which is the correct answer). The second response also doesn’t follow the formatting of the examples given, and breaks down its reasoning in the course of solving the problem. In the year since the course was recorded, OpenAI has updated the model and system prompt, and it behaves differently as a result.

Conclusion: The take-away from all this is that you shouldn’t assume a prompt that works today on one service will work identically tomorrow, or work identically on a different service. If you are hard-wiring a prompt into a software application, or putting it into a guide for people, and it will have a life of more than a couple of months, you should take this into account. Add a monitoring function to your software application, or put an explanatory note into your guide, so that when things change, people are not too surprised. Also, note that developing a prompt that works on only a single Gen AI service may be locking you into that service.

2. Responses are not predictable

Each word in the response from a Gen AI chat service is generated from a random choice across a probability distribution based on all the words seen up to that point. Given it is a random choice, the response to a given prompt can be different each time the prompt is given.
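As a toy illustration of this mechanism (my own sketch of the sampling step, not how any particular service is implemented), here is how choosing the next word from a probability distribution might look:

import numpy as np

# Toy illustration only: real services use learned models over huge
# vocabularies, but the sampling step looks conceptually like this.
def sample_next_word(logits, temperature=1.0):
    scaled = np.array(logits, dtype=float) / temperature
    probs = np.exp(scaled - np.max(scaled))  # softmax (numerically stable)
    probs /= probs.sum()
    return np.random.choice(len(probs), p=probs)

# Three hypothetical candidate words with made-up model scores
vocab = ["movies", "films", "cinema"]
logits = [2.0, 1.5, 0.3]

# Repeated calls can pick different words from the same distribution
print([vocab[sample_next_word(logits)] for _ in range(5)])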

These screenshots show Gemini providing a different response to the same prompt of “What is the best flavour of Coca Cola?”. While they have many similarities, the first response includes Coca-Cola Spiced, which isn’t mentioned in the second. In theory, no matter how many times you give the same prompt, there’s a chance that the Gen AI chat service can give you something you haven’t seen before.

This lack of determinism is a bit strange if you’re used to writing functions in Excel, or a software application, and have an expectation that computers do what you tell them to do. A spell checker in Word will give the same suggestions each time, but this isn’t guaranteed with Gen AI. It’s not a bug, it’s a feature!

Some Gen AI tools allow a “temperature” parameter to be set to zero, or a particular random seed to be chosen, which can limit the variability when repeating the same prompt. However, these may not be available on all tools or on the free tiers, or may be available only if you use APIs to interact with them. They might also not remove all randomness, if different prompt requests are load-balanced across different servers which, in turn, may generate different random numbers due to their different internal states.
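For example, here is a sketch using the OpenAI Python client (parameter support varies by service and model, and OpenAI documents its seed parameter as best-effort only):

from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user",
               "content": "What is the best flavour of Coca Cola?"}],
    temperature=0,  # reduce (but not necessarily eliminate) randomness
    seed=42,        # best-effort reproducibility
)
print(response.choices[0].message.content)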

Conclusion: If repeatability is important (and often it is), the debugging process of prompt engineering involves presenting the same prompt multiple times to ensure it does give the desired result. Using parameters or flags to reduce randomness will also be valuable, assuming you have access to them, but may not be reliable. A software application that is using Gen AI may benefit from having checks to ensure that the response is in the expected form. (See also the OpenAI Evals framework.)

3. Structure gives more useful sessions

While prompts are made up of words, they can also be made up of symbols, or follow a structured pattern. For example, all of the Gen AI chat services I have used understand Markdown, which is a standardised way to add formatting to text documents.

This is particularly useful where a prompt specifies that a response should be in a particular format, since that particular format can include Markdown styling. It can highlight information in the result that is interesting, or just be used to make the response look nicer.

The second example shows the use of a template, where a field to populate is specified with angled brackets, e.g. <Country_name>. There is nothing special about using angled brackets, and a field could be specified in a variety of ways, including other punctuation, using capital letters, or using the word “field”. If it would be recognisable to a human as a field, the Gen AI chat service will probably pick up that it should be replaced by content in the output.

As shown above, it is helpful to clearly delineate between instructional and non-instructional material in the prompt. For example, if the prompt is to find an answer in a block of text, or to summarise a block of text, that block of text should be clearly defined as different to any prompt instructions. You might wrap the block of text in triple quotes (“””), or put a clear barrier between the instructions and the other material such as a blank line or triple hashes (###).
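Putting these together, an illustrative prompt (my own, not one from the screenshots) that combines Markdown output formatting, an angled-bracket template, and triple-quote delimiters might look like this:

Summarise the text delimited by triple quotes as a Markdown table, using this template for each row:

| <Person_name> | <Role_description> |

"""
Ada Lovelace wrote what is considered the first published computer program, for the Analytical Engine, a machine that Charles Babbage designed but never completed.
"""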

Lastly, if the output is going to be long and there may be more interactions to come in the chat session, it can be useful to specify that the structure of the output should include repeating some of the context. This can minimise the chance of the Gen AI chat session drifting away from the purpose of the chat as more tokens (words) are used. It might work well where the Gen AI prompt involves asking questions of the user. For example, adding something like “After doing this, restate why you have written this.”

Conclusion: Familiarise yourself with Markdown, as it can be handy in helping format the responses in a Gen AI chat. Additionally, consider how structure in your prompt and in the response can assist with having more reliable prompts and chat sessions.

4. Words encapsulate meanings

Every prompt engineering guide seems to have a comment about making sure your prompt is very clear. However, this understates how important the selection of words is. Each word is an opportunity to steer the Gen AI towards the right behaviour. For example, some words are used in only particular academic, business or social contexts, so choosing these words will shift the Gen AI into that context.

A common example is where the Gen AI is instructed to “act as” or to be a particular persona. In the screenshot above, the prompt included “Act as a creative brainstormer”. You can also work technical jargon into the prompt to encourage it to use jargon in its response rather than give a generalist answer.

The screenshots above show the same prompt with one significant word difference. The first screenshot asks about a “headache”, while the second asks about “cephalagia”. By using medical jargon in the prompt, the Gen AI session has responded with information about prescription medications, and combined some treatment options under a new category of lifestyle modifications.

If using templates, as shown previously, the words used for the field names can help the Gen AI with the task. Instead of vague or ambiguous words for placeholders, e.g. “output” or “result”, use words that have a clear meaning, e.g. “capital city name” or “task completion time”. These will be replaced by the Gen AI in the response, so it’s ok if they are a bit verbose.

Conclusion: In developing a prompt, try a range of different words, and in particular try words that have specific definitions in domains that relate to how you want the Gen AI to respond. You can also ask the Gen AI to provide wording suggestions, and try those out also.

5. In the absence of the right words, examples are good

In the parlance of machine learning, a system that can complete a new task without any additional training examples is called “zero shot learning”. Similarly, if you provide one example, that’s “one shot learning”, or just a few examples is “few shot learning”. Providing examples in a prompt to a Gen AI is not learning in the same way that is meant by those terms, but the terms have come to stand in for a similar approach.

Sometimes it’s easier just to give examples to the Gen AI for what you’d like it to do rather than experiment with a range of prompts until you hit upon the perfect one. Frequently, one example is not enough, so a few examples are required. You may also need to combine it with a short prompt to make clear that the Gen AI is to follow the examples as a template for how to respond.
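For instance, a made-up few-shot prompt for a classification task might look like this:

Classify each customer comment, following the examples.

Input: The checkout process was quick and painless.
Output: POSITIVE

Input: My order arrived two weeks late.
Output: NEGATIVE

Input: The app crashes every time I open it.
Output: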

Since the Gen AI first has to guess what to do before it can do it, there’s a risk it will guess the wrong thing. For example, the first screenshot required around five attempts before the Gen AI tool gave the correct response. Another possible approach is to use some examples to have the Gen AI generate the instructions to use in a prompt. This way, the first guessing step can be skipped in future interactions.

It may take a bit of experimentation to have the Gen AI provide a repeatable set of instructions that does what you want.

Conclusion: The ability of current Gen AI tools to infer an action from examples is rather impressive, but using examples can increase the risk that the prompt is not consistently followed. Providing structure, and descriptive words in any field names and introductory instructions, can help. Also, perhaps examples can be used to generate a new prompt rather than be used directly.

Exciting times for EdTech

I deliberated over putting a question mark at the end of that title. Education Technology is a topic of contradictions, and has a history of hype, so it is clearly bold of me to be asserting This Time It is Different. But, maybe this time it is different.

The recent State of Australian Startup Funding 2023 report showed that EdTech is currently out of favour with local Venture Capital firms and Angel Investors. By the end of 2023, the EdTech/Training category had fallen out of the top 10 most exciting areas to watch, behind Deep Tech, Legaltech, and Design/Publish/Collab solutions (see page 25). Additionally, EdTech startups raised $108m in 2023, again outside of the top 10, a reduction from 2022, and less than was raised by cryptocurrency startups (page 24). Angel investors were perhaps slightly more optimistic about EdTech, with the sector reaching position 10 of the top 10 most exciting sectors for 2024 when male angel investors were polled, and yet female angel investors didn’t have it in their top 10 (page 96).

Despite this lack of interest, Australian EdTech-related startups have been doing pretty well recently. Brisbane-based Go1 aggregates workforce learning and development content, and was valued at $3b last year. In 2021, online training startup A Cloud Guru achieved a $2b exit. While not dedicated to EdTech, SafetyCulture includes a training component, and was valued at $2.7b last year. Similarly, cybersecurity firm Secure Code Warrior includes training, and raised US$50m last year, and international student recruitment platform Adventus.io raised $22m. Outside of startups, ASX-listed international student placement and English language testing company IDP Education is valued at around $5.5b. And while it deserves more than a postscript, it’s worth mentioning the Australia-born learning management system (LMS) Moodle, which has 377m users and 45m courses, and is one of the top 5 LMSs in North America.

Education is Australia’s largest services export industry, and the fourth largest export industry overall, behind the resources industries of Coal, Iron Ore and Natural Gas. The sector brought in $36b in the most recent year, already back to the level it was before the Covid pandemic. Australia also has prominence in global University rankings. On the current Times Higher Education world University rankings, Australia has 6 Universities listed in the global 100, which is 1/6th of what the USA has, while having less than 1/10th of its population. It’s also more than any of Canada, France, Japan, or South Korea, despite Australia having a smaller population than each of them. (Although, special call-out to Singapore, with 2 listings in the top 100 with only 6m people.) If you look at the current QS Top Universities rankings, Australia does even better, with 9 in the top 100.

So, on one hand Australia is a success story, with an ecosystem able to produce wins in both international EdTech companies and selling education itself, but on the other hand, there is relatively little investor interest in backing emerging EdTech companies at the moment. What might make this change?

While there are ongoing trends that have underpinned interest in EdTech for a while, e.g. see this 2001 article from Archangel Ventures, I see two near-term demand-side forces and two supply-side forces that will drive the creation of new technology solutions in Education.

On the demand side, there is an urgent need to reskill people into the tech sector. In May 2023, the Tech Council of Australia forecast that around 295,000 people would need to reskill into tech jobs, to meet expected demand for 1.2m Australian tech workers by 2030. To put it in perspective, this is approximately twice the number of people they expect to come into entry level roles in the tech industry via Universities. A traditional University pathway, taking 3 or more years out of the workforce to obtain a qualification, will not work for many people. What is needed are faster, high volume methods of reskilling that companies can rely on to produce capable tech workers in areas like data science/analysis, software engineering and product management.

Another driver of demand is the recent rise of Generative AI. Most forms of training and education include an assessment component, and Generative AI has been shown to pass a range of academic tests as well as people do. In early 2023, one study found 89% of adult students had used ChatGPT on their homework. By this point, I expect it’s higher. Previously solid tech tools for detecting students copying work from elsewhere are no longer reliable, and can penalise people who aren’t native speakers. It’s not feasible to have everyone go back to doing assignments in person without access to the Internet, either. There is a pressing need for a better way, or the benefit of having qualifications will quickly erode.

On the supply side, coming out of the Covid pandemic lockdowns, a generation of students has experienced the delivery of education mediated entirely by technology. While for many years a portion of students have experienced distance/remote education, this time it was the whole cohort. Australia, and in particular the states of NSW and Victoria, had some of the lengthiest and/or most restrictive lockdowns in the world. For instance, Victorian lockdowns over 2020 and 2021 covered 262 days, and while this caused many issues for students, parents and teachers (some of them ongoing), it also trained every student and educator on what works and what does not. Those people who were receiving education at the time have been coming into the workforce for the past 3 years. In addition, there are all the educators who had been there and done that. There’s nothing like first-hand experience with a problem to help design good solutions. This set of talent will be invaluable in producing new EdTech startups.

In addition to talent is a new supply of capital. Dan at Square Peg Capital has written about how startup success breeds startup success: an executive or founder at one business learns the recipe for success, makes some money, and then goes on to help form a new startup. One model is that 15% of outstanding shares may be used in compensating top employees, with shares vesting over four years. Atlassian’s IPO made many early employees millionaires, and there have been a bunch of startups formed off the back of this. Similarly, Canva’s secondary share sale, delayed until this year, is expected to make many employees wealthy, and result in new startups being formed in the visual media space. A Cloud Guru’s 2021 exit will have brought wealth to the founders and early employees. While less structured, both Go1 and SafetyCulture appear to support secondary share sales by employees. As capital is put in the hands of people with a passion for EdTech, it will catalyse the EdTech startup ecosystem.

While I didn’t begin with a question, I think I’ve answered it anyway. Australia is being pushed and pulled in the direction of more EdTech startup success, and it’s going to be an exciting ride in the coming years.

A Simple AI Strategy

Artificial Intelligence (or AI) has meant different things at different times, all through my career. I started working in AI back in the 1990s, when the most prominent use of a neural network was to decode hand-written post code (zip code) digits on letters, and if an organisation was using AI, they had probably implemented an expert system.

This was during an AI winter, when the hype of AI had overtaken what it could actually deliver, and calling something AI was not considered a positive. Things like the discipline of data science and the technology of speech recognition emerged from this period without being explicitly labelled as AI, and organisations stopped talking about “using AI”.

I worked on implementing intelligent, autonomous agents, and then speech recognition-based personal assistant services. Think of a rudimentary Siri that could arrange meetings by calling people up and asking simple questions about their availability. I also developed a speech-based recommender system that would match people to local restaurants. It didn’t end up going anywhere though.

But AI itself came back in a big way, and organisations started talking about “using AI” when deep learning burst onto the scene in the 2010s. This use of multi-layer neural networks, trained on huge amounts of data with readily-available GPUs, was able to produce results that met or exceeded the results of humans. Seemingly overnight, AI had been redefined to mean deep learning, and all of the data scientists had to wearily explain why their statistical methods should be considered AI too.

My teams used this new AI for a range of novel applications, including training smart cameras on drones to find people lost in the wilderness, detecting when car doors were being opened in front of cyclists, and counting the number of desks in an office that were in use during the day. Additionally, we explored the ethical implications of these new AI capabilities and how an organisation can use them responsibly.

Now it seems AI has been redefined all over again, and generative AI is what people mean when I talk to them about AI. Which is a lot at the moment. Almost every professional conversation seems to turn to AI at some point. It’s a very exciting time, and there seem to be revolutionary announcements every month concerning generative AI.

Of course, this hasn’t escaped the notice of Boards and CEOs, who are asking their people to come up with an AI strategy for their organisations. Key suppliers are also putting pressure on these organisations to adopt their AI-enabled products and services, often with additional fees involved, and no CEO wants to fall behind competitors who are presumably “using AI” in everything.

It reminds me of the quip about teenagers and sex – and there are similar incentives here to talk about doing it, even if you’re not sure about it, and in fact aren’t doing it at all.

Actually, most organisations don’t need to get too worked up about it. It will be an evolutionary technology adoption for them rather than a revolutionary one, assuming they are already on the AI journey (AI meaning data science and deep learning).

This post is an outline of what a simple AI strategy can be for many organisations. Essentially, if an organisation is (i) not building software itself that appears in the user interface of its products and services, and (ii) has already adopted best practices for the previous generation of AI, it can likely keep things simple.

What’s new?

Generative AI can be considered an application of deep learning where new content is created, specifically audio, imagery or text that is similar to what a human would create. The recent AI boom has been brought about through a technology called a transformer architecture – the T in GPT stands for Transformer. Even before the excitement around OpenAI’s DALL-E 2 or ChatGPT services, you may have unknowingly used this technology in Google’s Translate service or Grammarly’s authoring tool.

While previous AI technology has been used in enterprises in decision-making tools, Gen AI has obvious application in creative tools. In a real way, the latest form of AI simply brings AI-enabled features with human-level capability to a new set of enterprise tools. The insight is that you can treat this latest AI revolution as an update in enterprise tools. It may even be less disruptive than the time when enterprise tools moved to the cloud to be provided under SaaS (Software as a Service) arrangements.

When I say creative tools and decision-making tools, here’s what I mean:

  • Creative tools are not just tools used by “creatives” but any tool used by people to create something new for the organisation. They include software development tools, word processing tools, graphical design tools and inter-personal messaging tools.
  • Decision-making tools are any tool that provides data and insights that aid in making a business decision, such as to find the correct policy document, highlight the best applicants for a role, or report on monthly financial figures. The enterprise document repository, timesheeting system, or monthly dashboard are decision-making tools.

There are also some tools that are a mix of these two, for example Microsoft Excel allows people to create new financial models for their organisation that aid in making business decisions. That said, this hybrid category can be practically treated as a subset of decision-making tools.

In this discussion, I am assuming that the organisation in question has already done the usual things for the previous generation of AI. For example,

  • evolved the data warehouse into a data lake that is able to store both structured and unstructured data ingested from operational and customer-facing platforms,
  • established data governance processes and data management/ownership policies consistent with a relevant responsible AI framework (e.g. the Australian government ethical AI framework), and
  • provided training around privacy, data sovereignty, and cyber security practices to people who handle business and customer data, or develop and test applications using it.

It is likely that the responsibility for doing all those things was with a part of the organisation that also had responsibility for the decision-making tools used in the enterprise, namely the IT team. Understandably, the IT team is probably where the Board and CEO are looking to get the AI strategy from.

Before we continue, let’s be clear about what AI will bring to creative tools. The following table provides examples of AI-enabled features used in different types of enterprise tools:

Type of tool           Example AI-enabled features
Decision-making tool   Forecasting, Classification, Recommendation, Search, Anomaly detection, Clustering
Creative tool          Summarisation, Translation, Transcription, Composition

What a particular feature does in a particular tool will be very tool-dependent. For example, in Adobe Suite, a composition feature might in-fill a region of an image to seamlessly replace the part that has been removed, while in Microsoft Powerpoint, a composition feature might provide initial layout of text and images on a slide. However, the high-level user experience is the same in both cases: the user provides a text prompt and receives new content in response.

Some decision-making tools are gaining a creative layer on top of their existing AI-enabled features, such as summarisation being added to search tools to save the user having to click on results, or language translation being added to recommendations to support a wider user base. However, existing AI policies and procedures that have focused on decision-making tools will likely have picked up these cases, and those tools that are a hybrid of decision-making and creative tools as well.

So what?

Organisations that produce creative tools will already have had to include Gen AI features in their products, driven by the customer/market demand for these and competitive pressures. These organisations will have had to skill-up in Gen AI already and have a good handle on the technologies and issues. This post is not for them.

Additionally, organisations that develop customer-facing software outside of creative tools will be considering how and whether AI-enhanced features like summarisation and translation could be incorporated in their user interfaces. The speed of innovation in this area is daunting. A year ago Meta’s foundation Gen AI model called Llama was leaked, initiating widespread development of such models in the research and startup communities, and now alternative models are beating OpenAI’s own models on public leaderboards (see here or here). There are also many complex factors to be considered. At the very least, such organisations should be upskilling their people in this area and have a Gen AI sandpit environment for experiments. Given the speed of change in the marketplace, most organisations will need extremely quick ROI on any Gen AI projects or risk wasting their investments. Due to all of that, this post is not for these organisations either.

If an organisation doesn’t build software that appears in the user interface of its products and services then, given that Gen AI-created text, imagery or audio will appear in user interfaces, it will be a consumer of Gen AI rather than a producer of it. I contend that the most common way for such organisations to consume Gen AI will be via tools that embed Gen AI, avoiding the costs and risks of building their own custom tools. Hence, Gen AI technology adoption becomes a question of tool adoption and migration, and if an organisation has already tackled the question of AI before, it will have covered decision-making tools, leaving only creative tools to be dealt with in its plans.

Focusing on AI-enabled creative tools, these will have a number of common issues that an organisation will need to consider as part of adopting them:

  1. Copyright. New content is covered by copyright laws, which are similar around the world, but are not identical, and AI tends to play in the parts that are not globally consistent or well-defined, such as the concept of “fair use”. The data that has been used to train Gen AI models might turn out to have legal issues in some countries, impacting the use or cost of a model. The output of a Gen AI model may not be copyrightable, and hence others will be able to copy it without breaching copyright. This may limit how such AI models are able to be used in an organisation.
  2. New users. While the IT team has had its arm around the enterprise data warehouse and data lake, when it comes to creative tools, the IT team may not have been so involved, and adopted more of a light touch approach. The users of creative tools may not have received the previous round of data training, and may not be enrolled in data access systems intended to comply with data sovereignty controls, etc. From the point of view of AI, a Word document or corporate video is just as much “data” as the feed from the CRM.
  3. Data leakage. The latest Gen AI features in creative tools currently do not typically run on the desktop or on a smartphone, but are a SaaS feature that involves sending content to the cloud, and possibly off-shore. This is in many ways a standard SaaS issue rather than something new, but the nature of AI models is that they improve through training on data, so many tool providers seek to use what might be confidential content in the training of their models in order to continue to stay competitive. For example, Zoom modified their terms of service so that if a meeting host opts-in, the other participants in a meeting may have their meeting summary data used for training. Organisations are having to implement measures to manage this risk, such as Samsung choosing to restrict the use of ChatGPT after employees leaked confidential data to the tool last year.
  4. Misrepresentation. AI-enhanced creative tools might be used to produce content that others mistakenly think was produced by people or was otherwise authentic content. In the worst case, “deepfakes” may be created of an organisation’s public figures in order to dupe investors, customers or employees into bad actions. Scammers used this technique to trick a Hong Kong employee into transferring HK$200M. Still, a simpler case is where a chatbot on the Air Canada website made a mistake in summarising a company policy, a customer relied on this, and Air Canada was liable. Some organisations are taking care to carefully distinguish AI content from human-created content to help limit risks here.

Despite these issues, there is some optimism that AI-enhanced creative tools will bring a productivity boost to their users. The finger-in-the-air number is typically something like a 20% improvement. Microsoft’s recent New Future of Work Report (always very interesting!) includes some findings that Microsoft hopes will lead to uptake of their new AI-enhanced tools called Copilot:

  • Copilot reduces the effort required. Effects on quality are mostly neutral.
  • New or low-skilled workers benefit the most.
  • As people get better at communicating with [AI tools], they are getting better results.

The Wall Street Journal covered some scepticism about the benefits of AI, highlighting that errors in AI output take additional effort to catch and correct, and there was a 20% drop in usage of some AI tools after the initial month of enthusiasm. This indicates that early adopters need to go into this with their eyes open.

Now what?

For organisations not building software that surfaces in the user interface of its products and services, the main impact of Gen AI will be on how and when to migrate to AI-enabled creative tools that their employees will use. Since the previous AI boom will have resulted in foundational AI procedures and governance in the organisation that can be reused for Gen AI, a simple AI strategy is to treat this shift to a new toolset as a change management exercise.

Further, instead of treating the migration of each tool as a separate exercise, it is worth managing this in a single program. There is a lot that will be common around managing the issues and conducting the training, so it will be more efficient to do it together.

An organisation will typically have a standard or preferred change management approach or blueprint for implementing technology change. This can be re-used for driving the migration to AI-enabled creative tools. No need to reinvent the wheel. (As an example, see the Bonus Content below for how the Kotter 8-step process might be tailored for this.) Note that the existing data governance processes will need to be leveraged in this exercise. Additionally, the IT team will be fundamental in driving good adoption of Gen AI.

In tackling the issues mentioned above, here are some questions to help work through the right path:

  1. Copyright. Which legal jurisdictions does the organisation and its creative tool suppliers operate in, and how do copyright laws vary (particularly the concept of “fair use”)? How important is having copyright over the output of creative tools, and are there other IP protection measures (e.g. trademarks) that mitigate any risks?
  2. New users. To what degree is an organisation’s creative work done within the organisation, or done using external agencies/firms? How well do the legal agreements covering this work (whether employment or agency agreements) anticipate the issues of Gen AI? Is there consistency between how creative tools and decision-making tools are treated and managed in the organisation?
  3. Data leakage. Do people in the organisation understand how prompts and images given to Gen AI tools can leak out? What regulatory data compliance rules apply to data shared with or generated by these tools? How well do either “fine tuning” or “RAG” approaches to AI model customisation sit within an organisation’s risk appetite?
  4. Misrepresentation. How well do the official communications channels used by the organisation provide authentication? Are human and AI generated watermarking standards in use, e.g. Adobe Content Credentials or IPTC Photo Metadata standards? To what extent are misrepresentations of people at the organisation tracked and detected on social media? Which Gen AI web-scraping tools are blocked from ingesting the organisation’s public content?

You don’t need to over-bake it. For many organisations, the adoption of Gen AI will be through its enterprise tools, so it can be treated like a migration exercise. Just keep it simple.

(Thanks to Sami Makelainen, who provided comments on an earlier version of this post.)

Bonus content – Kotter 8-step process example

Here’s an example of how you might include activities within the Kotter 8-step change management process to help an organisation migrate to AI-enabled creative tools:

  1. Create a sense of urgency. Identify how the use of Gen AI tools links to the organisational strategy (improve staff experience, greater productivity, etc.) and an answer to “why now” (CEO directive, culture of leadership, existing strategic program, etc.).
  2. Build a guiding coalition. Ensure senior stakeholders have bought in to this rationale, with a single influential stakeholder willing to represent the activity. Ensure parts of the organisation outside of IT are represented, such as vendor management, legal, and parts of the organisation that use creative tools, e.g. anyone with “manager” in their title. Ensure the working group is suitably trained about Generative AI technology and its emerging issues, such as those outlined above.
  3. Form a strategic vision. With the stakeholder group, develop a view of how the organisation will be different once it has migrated to new AI-enabled tools, e.g. include use cases. This should be tangible and time-bounded, so should ideally be informed by previous tool migration exercises.
  4. Enlist a volunteer army. Leverage internal organisational communications tools to promote the vision and build a cross-organisation community of supporters. People are generally pretty excited about this new application of AI. The stakeholders and community can together help expand the community so it is truly cross-organisational. Task them to identify the creative tools that are used across the organisation (including “free” tools), which ones already have AI-enabled features, what types of data are consumed and generated by these tools, which suppliers provide them, and where the data is processed. Identify simple metrics that would highlight whether the features of these tools successfully bring the expected organisational benefits.
  5. Enable action by removing barriers. Ensure the community gets training about the issues relating to AI-enabled creative tools. Leverage the community to consider the risks of different uses of these tools in their different parts of the organisation, determine what constraints should be applied around the use of these tools, e.g. when can confidential information be shared with the tool. If the constraints are onerous, identify if alternative tools exist that could have fewer constraints.
  6. Generate short-term wins. Focus on one or two tools, prioritising those with the most benefit and easiest to migrate. It may be that it is easiest to start with something like GitHub Copilot and some software engineering teams, or maybe it will be easiest to use something like Microsoft 365 Copilot and some people with “manager” in their titles. Gain agreement to migrate these initial tools and learn from them. Ensure the users of these tools are trained to use the tools under the constraints, and specifically on writing good prompts. People who are already using AI-enhanced tools in the community may be a good source of training information.
  7. Sustain acceleration. Track the metrics to see where the migration to AI-enhanced tools has brought the expected benefit. Use the learnings to build a business case for migrating more tools and leverage the stakeholders to drive the wider adoption of AI-enabled creative tools.
  8. Institute change. Not everything will have gone smoothly. Update policies and procurement practices to accommodate learnings. Provide organisation-wide training on Generative AI technology, and use the community, stakeholders and metric data to bring the rest of the organisation up to speed on the new tools.

Finally, VR has arrived

While we’ve been talking about Virtual Reality for ages, this year it looks to have finally arrived. The term “virtual reality” dates from the 1980s, and the first mainstream VR headset in the form of Google Cardboard arrived in 2014, so it’s hardly a new thing for many people. However, it’s been languishing in a state of unfulfilled expectations. But that may now be about to change.

The above chart shows a Google Trends analysis of global web search interest in the topics: Virtual reality (blue), Augmented reality (yellow), and other related topics that rate as insignificant in comparison (Mixed reality, Extended reality, and Spatial computing). Looking at the period since Facebook rebranded as Meta and announced it would double-down on this area, there doesn’t seem to be a greater level of excitement, and if anything, interest has declined to about half of the peak.

This is despite strong progress in this area that has clarified technology direction as well as the value of particular use cases. Hence the sector is primed for Apple to come in and make a splash with their new Vision Pro headset that will start arriving in customers’ hands in February. Apple has a good track record of entering a consumer electronics category and massively increasing its size, e.g. portable music players (iPod), smart phones (iPhone), tablets (iPad), smart watches (Apple Watch), Bluetooth headphones (AirPods), and Bluetooth trackers (AirTag). Whether this has been due to clever market timing, market power, or bringing unique innovations to market is not important to this analysis, but suffice to say, they have solid form, despite some examples to the contrary that weren’t immediate category disruptors, e.g. in set-top-boxes (Apple TV) or smart speakers (HomePod).

While the initial product from Apple is priced at a premium (US$3,500 to preorder the basic model), it is the usual approach by the company to start high and then bring prices down over time with later product releases. This approach hasn’t harmed their earlier successes, so there is reason enough to feel Apple will be able to repeat their previous examples of significantly growing a category that they enter.

This is, of course, assuming that the category offers real value to customers and end users – a point around which there is still some scepticism. I mentioned before that both technology direction and use cases have been clarified, which is why I am optimistic about success.

A key technology question was around “augmented reality” (AR) versus “virtual reality” (VR). While these can both be considered points on a spectrum of what might be called “mixed reality” (MR) or “extended reality” (XR), in practice, the key question was whether the screen in the device was transparent or not. However, those products based on transparent screens suffered significant user experience issues, e.g. Microsoft Hololens, Magic Leap, and the original Meta (not the Facebook one). It has now become clear that the same cameras that allow a VR headset to perform inside-out tracking can also be used to provide passthrough of the camera feed to the headset screen, and enable the wearer to see something of the outside world. Unless and until there’s an incredible new technology discovery, the VR world has “won” and AR is now just a feature that can be delivered on a VR device.

The applications for the current generation of VR headsets have shown where the valuable use cases are. I have been a user of the Meta Quest 2 (previously Oculus Quest 2) VR headset, and have found it a lot of fun. At an entry level price of about US$300, the Quest 2 has become the market leader. It had sold 10 million units by early 2023, giving it a 75% market share, reached 18 million units by mid 2023, and continues to sell strongly. So, while my experience of current applications is skewed to what this device has enabled, it is highly representative of the general experience.

“Gaming” is sometimes stated as the main application for these headsets, but I find this is too high-level to explain where the value proposition is. I see three key areas where VR headsets have a strong advantage compared to other platforms:

  1. Fitness applications. Just like the Nintendo Wii enabled games to be created that got people off the couch, resulting in a burst of mainstream adoption for fitness purposes, VR headsets like the Quest 2 or the Vision Pro are not tethered to any other devices, support “six degrees of freedom”, and so enable the user to move around while using the device. Given this, exercise games like Beat Saber and Supernatural are very popular, and even exercise brands like LesMills have apps that provide a familiar workout based around their Bodycombat program. This sort of experience can’t easily be achieved with devices like smartphones, laptops or game consoles, and provides a solid reason to use a VR headset.
  2. Virtual office environments. Many people wish they had another monitor connected to their computer, but cost or desk space considerations prevent it. Additional monitors provide screen real-estate that results in better productivity, with less need to flip between windows or scroll around a screen. In a virtual office, the real-world constraints go away, and you can have almost as many additional monitors as you wish. Meta allows this use case in their Horizon Workrooms app, but I recommend the Immersed app which does this very well.
  3. Immersive training. Just as jet pilots train to fly by using immersive flight simulators, VR enables unique training experiences for those situations where emotions and senses might be overwhelmed. This is shown by the success of Virtual Reality Exposure Therapy to treat Post Traumatic Stress Disorder. Similarly, I’ve seen how VR is effective at training people to respond to disasters in underground mines, fire-fighters to attend burning buildings, and the like. Perhaps in the future, people will get credit for hours that a car or truck is driven on virtual roads given that VR can ensure exposure to situations like night or bad weather. For now, you can try out a Car Parking Simulator. The ability to take over all of the vision of the wearer enables training experiences that are possible only with VR.

In conclusion, I am convinced that VR has valuable use cases and that the technology to deliver them is feasible. Subsequent generations of technology now will continue to improve on the experience through things like increased resolution and frame-rate, improved comfort (including better support for corrective lenses), accuracy of hand-tracking, headset weight, battery life, and so on. However, the sales numbers for Quest 2 devices have shown that technology that is good enough has already arrived. The introduction of Apple to this device category will lift it to a new level of maturity. Finally.

Solving a real problem on quantum computers

This is the final of a series of four articles based on my Jupyter Notebooks exploring quantum computing as a tool for generating random number distributions.

Generating random numbers from a variety of specific probability distributions is interesting, and so is implementing digital computer operations on a quantum computer so that multiple operations are performed simultaneously. However, neither of these is enough to justify the hype around quantum computers. Let’s now take a look at an example of something where quantum computers can significantly outperform digital computers. It’s known as Grover’s algorithm, and it allows a quantum computer to take a function that might be performed on digital computers, and wrap it in some quantum goodness that quickly solves for it. “Solving for it” in this context means creating a probability distribution that is skewed in a controlled way so that the right answer comes up most often when measuring the qubits at the end.

We will solve for a function of the type that has one (or just a couple) of solutions, and where the approach to solve for it on a digital computer would be to “brute force” the answer by trying every possible solution to check if it works. On a quantum computer, there are some tricks to try the function fewer times, or even just once, and yet still figure out the solution. This demonstrates quantum advantage for using a quantum computer to solve problems of this type.

Functions of the type we’re interested in, that have one specific solution, exist all over the place, so this is a potentially highly useful application for quantum computers. For example: a function that checks a possible set of numbers against a particular Sudoku puzzle to verify if it is a correct solution, a function that checks a password to see if it matches an encrypted password entry for a user, or a function that confirms the colour of a particular pixel in an image is correct for a given 3D scene with a particular set of objects and lighting. Many different problems can be rewritten in terms of a function that checks if a given answer is correct.

However, before seeing how to do this on a quantum computer, we need to introduce a couple of new operations.

Z operation

The Z operation works on all pairs of rows in the state vector associated with outcomes where only a particular qubit differs in value, and flips the sign on the second row of each pair (the row where that qubit is |1>). We can call it the “flip” operation. Let’s have a quick look at an example.

In a two qubit scenario, if we start with an H(0) operator and an H(1) operator, as we did in the first notebook, we have the same value on each row of the state vector. If we then do a Z(0) followed by a Z(1), you can see the signs flip but the numbers otherwise stay the same.

Qubits   Initial state vector   H(0)   H(1)   Z(0)   Z(1)
|00>     1.0                    1/√2   1/2     1/2    1/2
|01>     0.0                    1/√2   1/2    -1/2   -1/2
|10>     0.0                    0.0    1/2     1/2   -1/2
|11>     0.0                    0.0    1/2    -1/2    1/2
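If you want to check this yourself, this short Qiskit snippet (using the same old-style Qiskit API as the code later in this article) reproduces the final column:

import numpy as np
from qiskit import QuantumCircuit, QuantumRegister
from qiskit.quantum_info import Statevector

algo = QuantumCircuit(QuantumRegister(2))
algo.h(0)  # spread the 1.0 across the qubit 0 pairs
algo.h(1)  # ... and then across the qubit 1 pairs
algo.z(0)  # flip the sign of rows where qubit 0 is |1>
algo.z(1)  # flip the sign of rows where qubit 1 is |1>

print(np.real_if_close(Statevector(algo).data))
# Prints [ 0.5 -0.5 -0.5  0.5], matching the Z(1) column above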

We saw negative probabilities in the article where we introduced the RY operation, and here they are again. They are the key to how Grover’s algorithm works.

Note that we could have created a Z operation out of the operations that we already have, using a neat trick. The Z operation produces the same result as using the H, X, and H operations in sequence. If you remember, H takes a pair of rows with values a and b, and turns them into (a+b)/√2 and (a-b)/√2. X then swaps these, so performing H again results in the pair of rows becoming 2a/(√2 x √2) and -2b/(√2 x √2) – which is just a and -b. However, Z is a common enough thing to want to do that it is useful to have it as a standalone operation rather than do H, X and H each time.
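This equivalence is easy to check numerically with the standard single-qubit matrices for H, X and Z:

import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])

# Applying H, then X, then H is the same operation as Z
print(np.allclose(H @ X @ H, Z))  # True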

CCZ operation

Similarly to CCX, the CCZ operation is “doubly constrained”. In this case, it is a “doubly constrained flip” operation. Constrained to just those rows where the two specified qubits are |1>, it flips the sign of the second row of all pairs where the third qubit is the only one changing. Since the second row of these pairs is also the row where the third qubit is |1>, another way to think about this operation is flipping the sign of all rows where the three specified qubits are |1>.

Here’s an example of CCZ in practice:

Qubits        Initial state vector   H(0)   H(1)   H(2)    CCZ(0, 1, 2)
|000> (|0>)   1.0                    1/√2   1/2    1/√8     1/√8
|001> (|1>)   0.0                    1/√2   1/2    1/√8     1/√8
|010> (|2>)   0.0                    0.0    1/2    1/√8     1/√8
|011> (|3>)   0.0                    0.0    1/2    1/√8     1/√8
|100> (|4>)   0.0                    0.0    0.0    1/√8     1/√8
|101> (|5>)   0.0                    0.0    0.0    1/√8     1/√8
|110> (|6>)   0.0                    0.0    0.0    1/√8     1/√8
|111> (|7>)   0.0                    0.0    0.0    1/√8    -1/√8
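We can reproduce that final column in Qiskit. Recent Qiskit versions provide a ccz method directly, but we can also build the operation from those we already have, by putting H operations around a CCX on the third qubit – the same H, X, H equivalence described above, and the same trick used for the verifier below:

import numpy as np
from qiskit import QuantumCircuit, QuantumRegister
from qiskit.quantum_info import Statevector

algo = QuantumCircuit(QuantumRegister(3))
algo.h(0)
algo.h(1)
algo.h(2)  # all eight rows now hold 1/√8

# CCZ(0, 1, 2) built as H, CCX, H on qubit 2
algo.h(2)
algo.ccx(0, 1, 2)
algo.h(2)

print(np.real_if_close(Statevector(algo).data))
# All rows are 1/√8 (~0.354) except |111>, which is -1/√8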

Implementing a verifier

The other thing that Grover’s algorithm needs is a function that verifies whether a value is a valid solution to some problem. All it needs to do is take a potential solution, and tell us “yes” or “no”.

We can do this by considering some of the qubits in the outcome to represent a proposed solution, and one other qubit to represent “yes” if it is |1> or “no” if it is |0>. In the example implemented here, qubits 0 and 1 will represent potential solutions, and qubit 2 will represent the result of validating it.

On a digital computer, we would think about this as bits. We would implement some logical operations that take two bits representing potential solutions and return another bit with the validation result. As we saw in the last notebook, we can implement the deterministic operations of a digital computer on a quantum computer by using X, CX, CCX, etc. operations.

Let’s say we want our verifier function to take state vectors where one of the |000>, |001>, |010>, or |011> rows has the value 1.0 (100%), and only if it’s the “right” one will the state vector be changed so that the corresponding row where qubit 2 is |1> becomes 1.0. For example, if |011> is the right solution, this would be implemented simply with the function CCX(0, 1, 2), which would swap the 1.0 value from |011> over to |111>.

Firstly, let’s use X operations to encode the value 3 into the state vector, by putting the value 1.0 in the |011> (|3>) row. (You can grab the complete Python script from here, or just type in the code below.)

import numpy as np
from qiskit import QuantumCircuit, QuantumRegister, ClassicalRegister, execute, BasicAer
from qiskit.visualization import plot_histogram
from qiskit.quantum_info import Statevector
import matplotlib.pyplot as plt
backend = BasicAer.get_backend('qasm_simulator')

q = QuantumRegister(3)    # We want 3 qubits
algo1 = QuantumCircuit(q) # Construct an algorithm on a quantum computer

# Start in the |3> row
algo1.x(0)
algo1.x(1)

v1 = Statevector(algo1)
print(np.real_if_close(v1.data))

$$\begin{bmatrix}
0.0 \\
0.0 \\
0.0 \\
1.0 \\
0.0 \\
0.0 \\
0.0 \\
0.0
\end{bmatrix}$$

Now if we perform CCX(0, 1, 2), the values in |011> (|3>) and |111> (|7>) will be swapped, moving the 1.0 value to the final row, where qubit 2 has a value of |1>. Since we know that CCX is constrained to work only on these rows, we know that only where the |011> potential solution is given the 1.0 value will the state vector be changed to have 1.0 on a row where qubit 2 is |1>. The other three potential solutions will result in no change.

# Apply CCX operation, constrained to rows where qubits 0 and 1 are |1>,
# swapping qubit 2's rows
algo1.ccx(0, 1, 2) 
v2 = Statevector(algo1)
print(np.real_if_close(v2.data))

$$\begin{bmatrix}
0.0 \\
0.0 \\
0.0 \\
0.0 \\
0.0 \\
0.0 \\
0.0 \\
1.0
\end{bmatrix}$$

To create different verifier functions, we can use X operations and specify either qubit 0 or qubit 1. For example, to create a function that will answer “yes” for the potential solution |01> and “no” for the other three potential solutions, we simply do X(1) before doing CCX(0, 1, 2). We will also do X(1) again after the CCX to “undo” the first X, and ensure the state vector has 1.0 in the |101> (|5>) row:

Qubits | Initial state vector | X(1) | CCX(0, 1, 2) | X(1)
|000> (|0>) | 0.0 | 0.0 | 0.0 | 0.0
|001> (|1>) | 1.0 | 0.0 | 0.0 | 0.0
|010> (|2>) | 0.0 | 0.0 | 0.0 | 0.0
|011> (|3>) | 0.0 | 1.0 | 0.0 | 0.0
|100> (|4>) | 0.0 | 0.0 | 0.0 | 0.0
|101> (|5>) | 0.0 | 0.0 | 0.0 | 1.0
|110> (|6>) | 0.0 | 0.0 | 0.0 | 0.0
|111> (|7>) | 0.0 | 0.0 | 1.0 | 0.0

Implementing this in Qiskit:

# Verifies that a proposed solution is correct only when it is |01>
def add_verify(algo):
    algo.x(1)
    algo.ccx(0, 1, 2)
    algo.x(1)

algo2 = QuantumCircuit(q) # Construct an algorithm on a quantum computer

# Ensure the state vector has 100% in the |001> row
algo2.x(0) 

add_verify(algo2) # Add the verify function to the algorithm
v3 = Statevector(algo2)
print(np.real_if_close(v3.data))

$$\begin{bmatrix}
0.0 \\
0.0 \\
0.0 \\
0.0 \\
0.0 \\
1.0 \\
0.0 \\
0.0
\end{bmatrix}$$

However, we need to modify the verifier function a little before we use it in Grover’s algorithm. We are going to apply the H operation on the result qubit (qubit 2) before running the function, and then again afterwards.

This is a little trick that turns an X operation into a Z operation. So, the CCX operation effectively becomes like a CCZ operation. And yes, the verifier function could have just been written with a CCZ instead of a CCX and we could skip the H operations, but digital computer operations don’t use Z type operations, and this way the algorithm is more general.

# Flips the sign of the row corresponding to the outcome that 
# the verify function would indicate is correct
def add_verify_with_h(algo):
    algo.h(2)
    add_verify(algo)
    algo.h(2)
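
To convince yourself that the H operations really do turn the CCX into a CCZ, here’s a small check you could run (the circuit names here are new, and Operator is another qiskit.quantum_info helper that we haven’t otherwise used in this series; its equiv method checks whether two circuits perform the same operation):

from qiskit.quantum_info import Operator

test_a = QuantumCircuit(q)
test_a.h(2)
test_a.ccx(0, 1, 2)
test_a.h(2)          # H, then CCX, then H on the result qubit...

test_b = QuantumCircuit(q)
test_b.ccz(0, 1, 2)  # ...performs the same operation as a single CCZ

print(Operator(test_a).equiv(Operator(test_b)))  # Prints True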

This version of the function will now flip the sign of the state vector row with the answer, so if the state vector was fully populated with positive values, the solution will be revealed as the one that’s negative. Unfortunately, we can’t stop here with the job done, because in practice we can’t read the state vector out of the quantum computer. All we can do is take measurements of the qubits, and while we can have a negative value in a row of a state vector, we won’t see a negative probability appear in measurements.

Grover’s algorithm is about amplifying the negative row so it will have a higher probability in the measurements.

Grover’s algorithm

Normally the verification function will be quite complicated, and it will be hard to work out the right answer just by looking at it. Our verification function is simple, but that’s fine for learning how Grover’s algorithm works.

The basic strategy for using Grover’s algorithm is to:

  1. Prepare the state vector so it has the same value on every row, i.e. no row has a zero value.
  2. Apply the verification function, which will flip the sign of the row corresponding to the right answer.
  3. Amplify the negative rows compared to the non-negative rows.

Then we measure the qubits, and the most likely result should be the right answer. For larger numbers of qubits, steps 2 and 3 will typically be repeated to make the right answer clearer, but we shouldn’t need to do that for our example.

We’ve already defined the verification function, but here’s the state preparation function:

# Creates a uniform probability distribution across the state vector
def add_prepare(algo):
    algo.h(0)
    algo.h(1)
    algo.h(2)

It is just the approach to creating a uniform probability distribution that we saw in the first notebook.

We can see how the verification function just flips the sign of the answer row |101> when given a state vector with 1/√8 values in all of its rows:

algo3 = QuantumCircuit(q) # Construct an algorithm on a quantum computer
add_prepare(algo3)        # Add the operations to prepare the state vector
add_verify_with_h(algo3)  # Add the sign-flipping version of verify 
v4 = Statevector(algo3)
print(np.real_if_close(v4.data))

$$\begin{bmatrix}
\frac{1}{\sqrt{8}} \\
\frac{1}{\sqrt{8}} \\
\frac{1}{\sqrt{8}} \\
\frac{1}{\sqrt{8}} \\
\frac{1}{\sqrt{8}}\\
-\frac{1}{\sqrt{8}} \\
\frac{1}{\sqrt{8}} \\
\frac{1}{\sqrt{8}}
\end{bmatrix}$$

Step 3 – the amplification function – requires a bit of explanation.

The idea now is to make everything a bit more negative, and because one row is already negative, that row becomes much more negative than the other rows. As the probability of a measurement producing a given outcome is equal to the square of the value of that row, it doesn’t matter that the values are negative. The row that is more negative than the others will end up being a more likely outcome.

The workhorse of the procedure is the H operation. As we discussed in the first notebook, it is like a “half” operation: it works on all pairs of rows that differ only by a specific qubit, and turns the first of each pair into the sum of the pair divided by √2, and the second into the difference of the pair divided by √2.

There are two observations worth noting here. Firstly, it puts the sum of each pair into the first row, i.e. the row where the specific qubit has a |0> outcome. Secondly, it is its own inverse, i.e. if you perform two identical H operations in sequence, the second operation undoes the first one.

Using these two observations, the amplification function applies the H operation for each of the qubits in turn, resulting in all rows being summed into the first row of the state vector, i.e. the row corresponding to the |000> outcome. This sum is divided by √8, the result of dividing by √2 in the calculations three times, but the row still ends up holding a large, positive value compared to the others.

Then the amplification function flips the sign on the |000> row, making it a large, negative value. Lastly, the H operation is applied for each qubit in turn, reversing the earlier H operations, but spreading the amount “taken” from the |000> row evenly across all of the rows.

Let’s see it in action. Firstly, let’s apply H for each qubit. We can do this by reusing the prepare function:

add_prepare(algo3)
v5 = Statevector(algo3)
print(np.real_if_close(v5.data))

$$\begin{bmatrix}
3/4 \\
1/4 \\
-1/4 \\
1/4 \\
1/4 \\
-1/4 \\
1/4 \\
-1/4
\end{bmatrix}$$

After the verify function was performed, all rows were 1/√8, except for the solution row which was negative. The sum of all rows is 6/√8 and this value divided by √8 is 3/4, which is what we’ve ended up with in row |000> after the first part of the amplification procedure.

Next we flip the sign on that row so it becomes negative. We have an operation – CCZ – that flips the sign on the |111> row, but not one for the |000> row. Still, we can do this by first using the X operation for each qubit, to reverse the order of the state vector. On a digital computer, to reverse a vector like this, you’d need to perform an operation for each row in the first half of the vector, swapping it with its counterpart row in the second half of the vector. Quantum computers are much more efficient at this.

X(0) swaps groups of rows separated by one row, X(1) swaps groups of rows separated by two rows, and X(2) swaps groups of rows separated by four rows. Once we’ve performed each of these, the state vector has been reversed:

# Reverses the rows of the state vector
def add_reverse(algo):
    algo.x(0)
    algo.x(1)
    algo.x(2)

Using this reverse routine, we can follow it with a CCZ to flip the sign on row |111>, then reverse the state vector again to put it back in the original order.

add_reverse(algo3) # Add the operations to reverse the state vector
algo3.ccz(0, 1, 2) # Apply the CCZ operation to flip the sign on row |111>
add_reverse(algo3) # Add the operations to reverse the state vector again
v6 = Statevector(algo3)
print(np.real_if_close(v6.data))

$$\begin{bmatrix}
-3/4 \\
1/4 \\
-1/4 \\
1/4 \\
1/4 \\
-1/4 \\
1/4 \\
-1/4
\end{bmatrix}$$

Note that in flipping the sign on the 3/4 value in row |000>, we have effectively deducted an amount equal to 6/4 (or 3/2) from this row. This reduction will now be spread back across all the rows by using the H operation for each qubit again.

add_prepare(algo3)
v7 = Statevector(algo3)
print(np.real_if_close(v7.data))

$$\begin{bmatrix}
-\frac{1}{\sqrt{32}} \\
-\frac{1}{\sqrt{32}} \\
-\frac{1}{\sqrt{32}} \\
-\frac{1}{\sqrt{32}} \\
-\frac{1}{\sqrt{32}} \\
-\frac{5}{\sqrt{32}} \\
-\frac{1}{\sqrt{32}} \\
-\frac{1}{\sqrt{32}}
\end{bmatrix}$$

The reduction by 3/2 has been divided by √8 again, so the difference between these values and the ones after the verify function is just 3/√32. All of the rows that were 1/√8 are now -1/√32, and the single row that was -1/√8 is now -5/√32. If you’re following along with Python yourself, the output won’t show these as fractions; it will just show -0.17678 (or similar) for all rows except one that shows -0.88388 (or similar). Squaring these values gives the probabilities we should see in measurements: the solution row now has a probability of 25/32 (about 78%), while each of the other seven rows has 1/32 (about 3%).

Now that we’ve worked through the operation of the amplify function, we can define it as a Python function:

# Amplifies the row with a negative value to become more negative
def add_amplify(algo):
    add_prepare(algo)
    add_reverse(algo)
    algo.ccz(0, 1, 2)
    add_reverse(algo)
    add_prepare(algo)

Let’s now see the whole thing in action, and what gets measured at the end. (You can grab a simplified version of the Python script from here that does only this bit, or just type in the code below.)

c = ClassicalRegister(2)     # The solution at the end has only 2 bits
algo4 = QuantumCircuit(q, c) # Construct an algorithm on a quantum computer

add_prepare(algo4)           # Step 1 of Grover's: prepare the state vector
add_verify_with_h(algo4)     # Step 2 of Grover's: flip the solution row 
add_amplify(algo4)           # Step 3 of Grover's: amplify negative rows

algo4.measure(q[0:2], c)     # Measure the two qubits 0 and 1, get some bits
result = execute(algo4, backend, shots=1000).result() # Run this all 1,000 x
plot_histogram(result.get_counts(algo4))              # Show a histogram 
plt.show()

You can go back and set up the verify function differently, and you’ll see that the algorithm will still reveal the correct solution in the measurements.

In this way, the quantum computer hasn’t needed to brute force a solution by trying the verification function on each potential solution, one after another, until it finds the right one. The fact that the verification function can be used to make a row in the state vector negative was enough to allow this negative value to be amplified, and set up a probability distribution that makes the answer pop out more often in the measurements.

As I mentioned at the start, it may be that the

add_verify_with_h(algo4)
add_amplify(algo4)

steps need to be repeated as the number of qubits increases. However, it won’t need to be done as frequently as once per qubit, so it will continue to be more efficient than the brute force approach that a digital computer has to use.
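
As a rough guide (this is a standard result about Grover’s algorithm, rather than something we derive in these notebooks), the number of times the verify-and-amplify pair needs to be run for a solution space of n qubits is approximately

$$\frac{\pi}{4}\sqrt{2^{n}}$$

so the number of repetitions grows with the square root of the number of rows, rather than with the number of rows themselves as in the brute force approach. For our 2-qubit solution space, that works out to roughly one pass, which is all we needed.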

In conclusion

We added another two operations to our set, and have seen how to use them on a quantum computer to quickly find the solution accepted by a digital computing function that verifies solutions to a problem. Here is the complete set of operations over these four articles:

Operation | Short-hand description | Specified by | Detailed description
H | “half” | 1 qubit | For all pairs of rows that differ only by the value of a specific qubit in the outcome, replace the first row value with a new value that is the sum of the original values divided by √2, and the second row value with the difference between the original values divided by √2.
CX | “constrained swap” | 2 qubits | For all pairs of rows where the first qubit specified is in the |1> state in the outcome, and where otherwise the rows differ only by the value of the second qubit specified, swap the rows in the pair.
RY | “relative swap” | 1 angle and 1 qubit | For all pairs of rows that differ only by the value of a specific qubit in the outcome, swap a fraction “f” of the value from the first row to the second, and bring the opposite fraction (i.e. 1-f) from the second row but with the sign flipped, where “f” is specified as the angle 2 x arcsin(√f). If “f” is 1.0, the angle will be 𝜋.
X | “swap” | 1 qubit | For all pairs of rows that differ only by the value of a specific qubit in the outcome, swap the values in the pair.
CCX | “doubly constrained swap” | 3 qubits | For all pairs of rows where both the first and second qubit specified are in the |1> state in the outcome, and where otherwise the rows differ only by the value of the third qubit specified, swap the rows in the pair.
Z | “flip” | 1 qubit | For all pairs of rows that differ only by the value of a specific qubit in the outcome, flip the sign on the second row of each pair.
CCZ | “doubly constrained flip” | 3 qubits | For all pairs of rows where both the first and second qubit specified are in the |1> state in the outcome, and where otherwise the rows differ only by the value of the third qubit specified, flip the sign on the second row of each pair.

That’s all for now. I hope you’ve enjoyed working along with me in seeing how quantum computers can perform computation by changing the probabilities of the different possible outcomes of their qubits, and how this approach often allows quantum computers to solve a problem more efficiently than a digital computer.

Digital operations on quantum computers

This is the third in a series of four articles based on my Jupyter Notebooks exploring quantum computing as a tool for generating random number distributions.

Generating random numbers from a variety of specific probability distributions shows us how the quantum state vector reflects the desired probability distribution, and the previous article showed how a variety of such distributions could be achieved. However, quantum computers can also simulate a digital computer. Even though bits are certain and qubits are uncertain, computing on a digital computer can be thought of as working with a special kind of probability distribution: one where a single row of the state vector has a 100% probability, and all the rest are zero. This reflects how digital computers are deterministic.

Let’s look at how we might perform digital computing operations on a quantum computer, sticking with high-school level maths. First, we need to introduce some new operations.

X operation

We have already seen the CX, or “constrained swap”, operation. There is a simpler one called the X operation which does a swap within all pairs of rows in the state vector where the only difference is in a specific qubit. So, where the CX operation required specifying two qubits to determine the rows it affects, the X operation requires specifying just one qubit. Where you might think of CX as a “constrained swap”, you can think of X as just a “swap”.

To clarify the X operation, here is an example of how it might be used:

Qubits | Initial state vector | X(0) | X(1)
|00> | 1.0 | 0.0 | 0.0
|01> | 0.0 | 1.0 | 0.0
|10> | 0.0 | 0.0 | 0.0
|11> | 0.0 | 0.0 | 1.0

The first X swaps the first two rows, as these differ only in qubit 0 (the rightmost qubit), and while it also swaps the last two rows, those hold the same value, so we don’t see a difference there. The second X swaps rows |01> and |11>, as these differ only in qubit 1 (the leftmost qubit), and while it also swaps rows |00> and |10>, again these hold the same value, so we don’t see any difference after the operation.
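
Here’s a minimal self-contained sketch reproducing this table (the variable names are just for this example):

import numpy as np
from qiskit import QuantumCircuit, QuantumRegister
from qiskit.quantum_info import Statevector

qx = QuantumRegister(2)
algo_x = QuantumCircuit(qx)
algo_x.x(0)  # Swaps pairs differing in qubit 0: the 1.0 moves from |00> to |01>
print(np.real_if_close(Statevector(algo_x).data))  # [0. 1. 0. 0.]
algo_x.x(1)  # Swaps pairs differing in qubit 1: the 1.0 moves from |01> to |11>
print(np.real_if_close(Statevector(algo_x).data))  # [0. 0. 0. 1.]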

CCX operation

Now that we know about X and CX, you might be wondering if there are more constraints that can be added to X. Yes, a common operation is a “doubly constrained” version of X, sometimes known as a Toffoli operation.

The CCX operation is constrained to operate only on pairs of rows where two specified qubits are |1>, and it swaps pairs of rows where only a third qubit changes, i.e. a “doubly constrained swap” operation. Here’s what some CCX operations look like on a state vector consisting of three qubits:

Qubits | Initial state vector | X(1) | CCX(0, 1, 2) | X(0) | CCX(0, 1, 2)
|000> (|0>) | 1.0 | 0.0 | 0.0 | 0.0 | 0.0
|001> (|1>) | 0.0 | 0.0 | 0.0 | 0.0 | 0.0
|010> (|2>) | 0.0 | 1.0 | 1.0 | 0.0 | 0.0
|011> (|3>) | 0.0 | 0.0 | 0.0 | 1.0 | 0.0
|100> (|4>) | 0.0 | 0.0 | 0.0 | 0.0 | 0.0
|101> (|5>) | 0.0 | 0.0 | 0.0 | 0.0 | 0.0
|110> (|6>) | 0.0 | 0.0 | 0.0 | 0.0 | 0.0
|111> (|7>) | 0.0 | 0.0 | 0.0 | 0.0 | 1.0

Since our examples use Qiskit, qubits are numbered from the right. Qubit 0 is the rightmost one, qubit 1 is in the middle, and qubit 2 is the leftmost one. As the table is starting to get long, I’ve also written the row number in brackets next to the qubit identifier for each row. The qubit identifier is a binary number, and corresponds to a decimal number, which is the row number, e.g. “011” is the binary number for 3, so I’ve written this as |011> (|3>).

In this example, the CCX(0, 1, 2) operation swaps pairs of rows where qubits 0 (rightmost) and 1 (middle) are |1>, i.e. those rows ending in |11>: rows |3> and |7>. The first time this operation is performed, both of those rows are 0.0, so it looks like nothing happens, but the second time, we see the effect of the swap performed.
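
The CCX table can be checked the same way (a sketch reusing the imports from the X sketch above):

qc3 = QuantumRegister(3)
algo_ccx = QuantumCircuit(qc3)
algo_ccx.x(1)          # The 1.0 moves from |000> to |010>
algo_ccx.ccx(0, 1, 2)  # No visible effect: rows |011> and |111> are both 0.0
algo_ccx.x(0)          # The 1.0 moves from |010> to |011>
algo_ccx.ccx(0, 1, 2)  # Swaps rows |011> and |111>, putting the 1.0 in |111>
print(np.real_if_close(Statevector(algo_ccx).data))  # 1.0 in the last row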

Incrementing a 3-bit number

A very common operation on a digital computer is incrementing a number, or in other words, adding one to it. Incrementing 3 results in 4, incrementing 6 results in 7, and so on.

Each row of the state vector represents a different number, i.e. the decimal number corresponding to the binary number for that arrangement of qubits. For a state vector that represents 3 qubits, row |100> is row |4>, while row |110> is row |6>. Incrementing a number can be thought of as taking a state vector with a specific number encoded in it – the row with 100% probability – and turning it into a state vector with a new number encoded in it, specifically the original number plus one. For example, if we start with a state vector with row |4> with 100% probability, incrementing this would result in a new state vector with row |5> having 100% probability.

To implement this sort of algorithm, where a row has 100% probability, and we make another row 100% probability, we simply need to use variants of the X operation. The X, CX and CCX operations only swap rows around, so will always leave the state vector having a single row with 100% probability. In this case, they can simulate the deterministic operations of a digital computer.

Incrementing a number encoded on the state vector using variants of the X operation is quite straightforward, but we need to think about it in binary notation. If we add one to a number ending in |0>, it becomes |1>. If we add one to a number ending in |1>, it becomes |0> and carries a one to the next place. To achieve this, we can use X to swap the |0> and |1> rows for the last qubit, and a CX to manage the carrying of the one. Similarly, we can use a CCX to manage the carrying of the one to the final place.

Implementing in Qiskit

Let’s create this increment operation as a Python function. (You can grab the complete Python script from here, or just type in the code below.)

import numpy as np
from qiskit import QuantumCircuit, QuantumRegister, ClassicalRegister
from qiskit.quantum_info import Statevector

# Add the operations to an algorithm that increments the number 
# encoded on a 3 qubit state vector
def add_increment(algo):
    algo.ccx(0, 1, 2) # Carry the one to qubit 2, when qubits 0 and 1 are |11>
    algo.cx(0, 1)     # Carry the one to qubit 1, when qubit 0 is |1>
    algo.x(0)         # Add one to qubit 0

Now we can test it out.

q = QuantumRegister(3)    # We want 3 qubits
algo1 = QuantumCircuit(q) # Construct an algorithm on a quantum computer

# Start in the |2> row
algo1.x(1)
v1 = Statevector(algo1)
print(np.real_if_close(v1.data))

$$\begin{bmatrix}
0.0 \\
0.0 \\
1.0 \\
0.0 \\
0.0 \\
0.0 \\
0.0 \\
0.0
\end{bmatrix}$$

# Increment the number encoded in the state vector
add_increment(algo1)
v2 = Statevector(algo1)
print(np.real_if_close(v2.data))

$$\begin{bmatrix}
0.0 \\
0.0 \\
0.0 \\
1.0 \\
0.0 \\
0.0 \\
0.0 \\
0.0
\end{bmatrix}$$

# Increment the number once more
add_increment(algo1)
v3 = Statevector(algo1)
print(np.real_if_close(v3.data))

$$\begin{bmatrix}
0.0 \\
0.0 \\
0.0 \\
0.0 \\
1.0 \\
0.0 \\
0.0 \\
0.0
\end{bmatrix}$$

We are successfully incrementing the number encoded in the state vector each time.

Doing multiple increments simultaneously

What if multiple numbers were encoded in the state vector? Actually, the same algorithm will continue to work.

Let’s start by encoding two numbers, so rather than one row having a 100% probability, the state vector will have two rows each with the value 1/√2. Remember that we square this to get the probability, which will be 1/2 or 50%.

In Qiskit, we will encode both |0> and |4>, and apply the increment operation.

algo2 = QuantumCircuit(q) # Construct an algorithm on a quantum computer

# Start with |0> and |4> rows having equal probability
algo2.h(2)
v4 = Statevector(algo2)
print(np.real_if_close(v4.data))

$$\begin{bmatrix}
\frac{1}{\sqrt{2}} \\
0.0 \\
0.0 \\
0.0 \\
\frac{1}{\sqrt{2}} \\
0.0 \\
0.0 \\
0.0
\end{bmatrix}$$

# Increment the numbers encoded in the state vector
add_increment(algo2)

v5 = Statevector(algo2)
print(np.real_if_close(v5.data))

$$\begin{bmatrix}
0.0 \\
\frac{1}{\sqrt{2}} \\
0.0 \\
0.0 \\
0.0 \\
\frac{1}{\sqrt{2}} \\
0.0 \\
0.0
\end{bmatrix}$$

Now it has rows |1> and |5> with equal probability. Two increments have been performed simultaneously, without changing the increment operation at all. (In fact, the increment operation can also be thought of as a rotation operation, where the values are rotated through all of the rows of the state vector, a row at a time.)
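
One nice consequence of the rotation view is wrap-around: incrementing |7> rolls over to |0>. Here’s a quick check you could run, reusing q and add_increment from above:

algo3 = QuantumCircuit(q)
algo3.x(0)
algo3.x(1)
algo3.x(2)            # Encode |111> (|7>)
add_increment(algo3)  # Incrementing |7> wraps around to |0>
v6 = Statevector(algo3)
print(np.real_if_close(v6.data))  # [1. 0. 0. 0. 0. 0. 0. 0.]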

It is this sort of capability that highlights the power of quantum computers to rapidly speed up some types of computation.

In conclusion

We have added another two operations to our set, and seen how to use them on a quantum computer to perform a traditional digital computing function (incrementing a number). We’ve also seen how quantum computers can enhance digital functions, like performing multiple increments at once. Here is the set of operations we’ve talked about so far:

Operation | Short-hand description | Specified by | Detailed description
H | “half” | 1 qubit | For all pairs of rows that differ only by the value of a specific qubit in the outcome, replace the first row value with a new value that is the sum of the original values divided by √2, and the second row value with the difference between the original values divided by √2.
CX | “constrained swap” | 2 qubits | For all pairs of rows where the first qubit specified is in the |1> state in the outcome, and where otherwise the rows differ only by the value of the second qubit specified, swap the rows in the pair.
RY | “relative swap” | 1 angle and 1 qubit | For all pairs of rows that differ only by the value of a specific qubit in the outcome, swap a fraction “f” of the value from the first row to the second, and bring the opposite fraction (i.e. 1-f) from the second row but with the sign flipped, where “f” is specified as the angle 2 x arcsin(√f). If “f” is 1.0, the angle will be 𝜋.
X | “swap” | 1 qubit | For all pairs of rows that differ only by the value of a specific qubit in the outcome, swap the values in the pair.
CCX | “doubly constrained swap” | 3 qubits | For all pairs of rows where both the first and second qubit specified are in the |1> state in the outcome, and where otherwise the rows differ only by the value of the third qubit specified, swap the rows in the pair.

The next article will look at a well-known algorithm that performs a task that is complex on a digital computer but is very efficient on a quantum computer.

Further adventures in quantum randomness

This is the second in a series of four articles based on my Jupyter Notebooks exploring quantum computing as a tool for generating random number distributions.

The first article showed how a quantum computer could be programmed to generate a uniform random distribution of two bits using operations on qubits. It was a pretty trivial algorithm, and compared with the complexity of generating pseudo-random numbers on a digital computer, showed the advantage of using quantum computers for this application. However, given that I discussed how quantum computers can manipulate probabilities, it’s natural to consider how other, non-uniform, random number distributions might be calculated using a quantum computer. As with the first article, I’m sticking with high-school level maths.

Bell state

A special type of quantum state is known as the Bell state. There are actually four Bell states, but for simplicity, we’ll just pick one. To put a two qubit quantum computer into a Bell state, we will manipulate it to have the state vector

$$\begin{bmatrix}
\frac{1}{\sqrt{2}} \\
0.0 \\
0.0 \\
\frac{1}{\sqrt{2}}
\end{bmatrix}$$

which means that a measurement will get either the |00> or |11> outcomes with equal probability, but the |01> and |10> outcomes won’t appear at all. Another way to think of this is flipping two coins, and having them always end up heads-heads or tails-tails, but never getting a heads-tails result.

To get this state vector, it’s not enough to use the H operation, but we need something called the CX operation.

CX operation

The CX operation can be thought of as a “constrained swap” operation which affects pairs of rows in the state vector specified by the states of two qubits (rather than specified by just one qubit, like we saw with the H operation). It will cause the values of those pairs of rows to swap, constrained to those pairs of possible outcomes where the first qubit specified is in the |1> state and that otherwise differ only by the value of the second qubit.

For example, if we start with the usual initial state vector for two qubits:

Qubits | Initial state vector
|00> | 1.0
|01> | 0.0
|10> | 0.0
|11> | 0.0

where the |00> outcome has a 100% probability, and now apply the CX operation against the right-most qubit then the left-most qubit, or CX(0,1) to use the Qiskit numbering for qubits, the state vector wouldn’t change at all, since the pair of rows where the right-most qubit is |1> are both the same, i.e. 0.0, so swapping doesn’t change anything.

However, if we firstly use the H operator on rows associated with the right-most qubit, or an H(0) operation, and then perform the same CX(0,1) operation, we get a more interesting result:

Qubits | Initial state vector | Working out H(0) | Result of H(0) | Working out CX(0,1) | Result of CX(0,1)
|00> | 1.0 | =(1.0+0.0)/√2 | 1/√2 | unchanged | 1/√2
|01> | 0.0 | =(1.0-0.0)/√2 | 1/√2 | =0.0 | 0.0
|10> | 0.0 | =(0.0+0.0)/√2 | 0.0 | unchanged | 0.0
|11> | 0.0 | =(0.0-0.0)/√2 | 0.0 | =1/√2 | 1/√2

Swapping the rows made a change this time, and we have ended up with the Bell state that we were talking about above.

Implementing this on Qiskit

Now, let’s create a histogram of the results we get from performing this on a (simulated) quantum computer, and check that it does what we expect. We’ll use the same approach with Qiskit as we did last time. (You can grab the complete Python script from here, or just type in the code below.)

import numpy as np
from qiskit import QuantumCircuit, QuantumRegister, ClassicalRegister, execute, BasicAer
from qiskit.visualization import plot_histogram
import matplotlib.pyplot as plt
backend = BasicAer.get_backend('qasm_simulator')

q = QuantumRegister(2)   # We want to use 2 qubits
algo = QuantumCircuit(q) # Readies us to construct an algorithm to run on the quantum computer

algo.h(0)          # Apply H operation on pairs of rows related to qubit 0
algo.cx(0,1)       # Apply CX operation, constrained where qubit 0 is |1>
algo.measure_all() # Measure the qubits and get some bits

result = execute(algo, backend, shots=1000).result()
plot_histogram(result.get_counts(algo))
plt.show()

Yes, this is the random distribution we were hoping to get. It is just “00” and “11” with no “01” or “10” results.

RY operation

We’ve achieved a non-uniform distribution, but it’s not a very interesting one. It’s a 50-50 outcome, and we could have achieved that with 1 qubit. We didn’t really need 2 qubits. To create more interesting distributions, we will need another operation. Let’s take a look at the RY operation.

RY adjusts the pairs of state vector rows applying to a specified qubit, and adjusts them by a specified “angle”. If the angle is pi (𝜋), which is an amount in radians equivalent to 180 degrees, the adjustment results in a swap of values, with the sign flipped on the value that ends up in the first row (we’ll come back to this). But the swap is modified relative to the angle, so we can think of it like a “relative swap” operation.

Let’s have a look at how it would work on the standard initial state vector, with the specific qubit being the right-most one (or, qubit 0), and for some different angles:

Qubits | Initial state vector | RY(0.0, 0) | RY(𝜋, 0) | RY(𝜋, 0) again | RY(𝜋/2, 0)
|00> | 1.0 | 1.0 | 0.0 | -1.0 | -1/√2
|01> | 0.0 | 0.0 | 1.0 | 0.0 | -1/√2
|10> | 0.0 | 0.0 | 0.0 | 0.0 | 0.0
|11> | 0.0 | 0.0 | 0.0 | 0.0 | 0.0

The first time the RY operation is used, it is given a specified angle of 0.0, and it does absolutely nothing to the state vector. This is correct – with an angle of 0.0, RY will not change anything.

Next, we can see that when the RY(𝜋, 0) operation happens, it swaps the values in each pair of rows that differ only in the right-most qubit (qubit 0), i.e. the first and second rows, and the third and fourth rows. In addition, it flips the sign on the value that ends up in the first row of each pair. The first time RY happens, it simply moves the 100% outcome from |00> to |01>. The second time RY happens, it moves this outcome back to |00> and flips the sign to negative.

What does -100% mean? How can this be a probability? Well, each row of the state vector is a probability amplitude rather than a probability. If a probability amplitude is a real number, i.e. no imaginary component, you can turn it into its corresponding probability by just squaring it. -1.0 x -1.0 is 1.0, so -100% as a probability amplitude is equivalent to a 100% probability. Note that this isn’t just some oddity, but actually part of what makes quantum computers powerful.

The final application of the RY operation in the table is with a specified angle that is 𝜋/2 which corresponds to 90 degrees. It’s mid-way between 0.0 and 𝜋, and produces a result that is also mid-way between the previous results. Where the 0.0 angle didn’t move any of the probability amplitude values between the pairs, and the 𝜋 angle moved all of the probability amplitude values to the alternate row in each pair, the 𝜋/2 angle is halfway between those angles and it moved half the probability amplitude, in the same way the H operator did in the previous notebook.

In fact, we can pick an angle to give to the RY operation that will move a desired fraction of the probability amplitude value between the rows. To swap a fraction “f” of the value from the first row to the second, and bring the opposite fraction (i.e. 1-f) from the second row but with the sign flipped, you use the angle calculated by 2 x arcsin(√f). For our final application of RY above, it had the fraction f=1/2, and it turns out that 2 x arcsin(√(1/2)) = 𝜋/2 which is the angle used in the operation.
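
Here’s a quick sanity check of that angle formula in plain Python (just numpy, no quantum computer needed):

import numpy as np

print(2 * np.arcsin(np.sqrt(1.0/2.0)), np.pi / 2)  # f=1/2 gives 1.5707..., i.e. 𝜋/2
print(2 * np.arcsin(np.sqrt(1.0)), np.pi)          # f=1.0 gives 3.1415..., i.e. 𝜋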

We can now use this knowledge to create a range of specific probability distributions for our random bits. The set of operations we have talked about so far – H, CX and RY – should allow us to create any probability distribution. For example, if we want to create a probability distribution where it is equally likely that any of the first three outcomes (|00>, |01>, and |10>) happen and yet the last outcome (|11>) shouldn’t happen, the state vector we’d want to create is:

$$\begin{bmatrix}
\sqrt{\frac{1}{3}} \\
\sqrt{\frac{1}{3}} \\
\sqrt{\frac{1}{3}} \\
0.0
\end{bmatrix}$$

A way to get this is to recognise that if we look at the state vector as two pairs of rows, the first pair of outcomes are twice as likely in total as the second pair of outcomes. We can use the RY operation to swap (the square root) of a third of the overall probability to the second pair. We can then use a sequence of H, RY, CX and RY operations to spread the probabilities within each pair. This looks like:

Qubits | Initial state vector | RY(2 x arcsin(√(1/3)), 1) | H(0) | RY(𝜋/4, 0) | CX(1, 0) | RY(-𝜋/4, 0)
|00> | 1.0 | √(2/3) | √(1/3) | 0.3125 | 0.3125 | √(1/3)
|01> | 0.0 | 0.0 | √(1/3) | 0.7543 | 0.7543 | √(1/3)
|10> | 0.0 | √(1/3) | √(1/6) | 0.2209 | 0.5334 | √(1/3)
|11> | 0.0 | 0.0 | √(1/6) | 0.5334 | 0.2209 | 0.0

You can see here that after the H(0) operation, the first two rows have the values we want, but the final two rows had the desired values before the H(0). The operations following the H(0) have the effect of undoing the H(0) on the final two rows while leaving the first two rows alone. Note that the final two RY operations have opposite angles, so on their own they would cancel each other out, but a CX(1,0) operation has been done in the middle. This CX operation, in swapping the final two rows, has the effect of making it as if the first of these RY operations also had a negative angle for those rows. So instead of cancelling out (as happened on the first two rows), the two RY operations on those rows add together as if a single RY operation of -𝜋/2 had been applied. As we saw above, an RY operation with the angle 𝜋/2 is similar to an H operation, and with the negative angle, the RY operation acts to reverse the H.

Don’t worry if you didn’t fully follow that. This sort of procedure is called “amplitude embedding” or “state preparation”, and there are various algorithms to do this, many of which get quite esoteric. The above procedure was inspired by a paper by Mottonen, Vartiainen, Bergholm, and Salomaa. The main thing to note is that quantum computers allow arbitrary non-uniform distributions to be constructed.

Implementing this on Qiskit

Let’s test the above procedure and see if it does what we expect. (You can grab the complete Python script from here, or just type in the code below.)

import numpy as np
from qiskit import QuantumCircuit, QuantumRegister, ClassicalRegister, execute, BasicAer
from qiskit.visualization import plot_histogram
import matplotlib.pyplot as plt
backend = BasicAer.get_backend('qasm_simulator')

q = QuantumRegister(2)   # We want to use 2 qubits

angle1 = 2 * np.arcsin(np.sqrt(1.0/3.0))
angle2 = np.pi / 4

algo = QuantumCircuit(q) # Readies us to construct an algorithm to run on the quantum computer

algo.ry(angle1, 1)       # Apply RY operation to swap 1/3 of qubit 1's value 
algo.h(0)                # Apply H operation on pairs of rows related to qubit 0
algo.ry(angle2, 0)       # Apply RY operation to perform a half of H on qubit 0
algo.cx(1,0)             # Apply CX operation, constrained to where qubit 1=|1>
algo.ry(-angle2, 0)      # Apply RY operation to undo half of H on qubit 0

algo.measure_all()       # Measure the qubits and get some bits

result = execute(algo, backend, shots=1000).result()
plot_histogram(result.get_counts(algo))              
plt.show()

This is exactly what we were hoping to see. It is “00”, “01” and “10” split three ways, and with no “11” results.

In conclusion

We have added two more operations to our set, and seen how to use them on a quantum computer to create a variety of random distributions, such as the Bell state:

Operation | Short-hand description | Specified by | Detailed description
H | “half” | 1 qubit | For all pairs of rows that differ only by the value of a specific qubit in the outcome, replace the first row value with a new value that is the sum of the original values divided by √2, and the second row value with the difference between the original values divided by √2.
CX | “constrained swap” | 2 qubits | For all pairs of rows where the first qubit specified is in the |1> state in the outcome, and where otherwise the rows differ only by the value of the second qubit specified, swap the rows in the pair.
RY | “relative swap” | 1 angle and 1 qubit | For all pairs of rows that differ only by the value of a specific qubit in the outcome, swap a fraction “f” of the value from the first row to the second, and bring the opposite fraction (i.e. 1-f) from the second row but with the sign flipped, where “f” is specified as the angle 2 x arcsin(√f). If “f” is 1.0, the angle will be 𝜋.

The next article will look at how to implement digital computing functions through operations on the state vector.

A Quantum Computer is a random number generator

This is the first in a series of four articles based on my Jupyter Notebooks exploring quantum computing as a tool for generating random number distributions.

Many of the introductory quantum computing articles and courses out there are not quite right. They either quickly head deep into details that require a University-level physics or mathematics background, or sit at a high level based on analogies that are out of step with how quantum computers actually work. I want to try something different, and introduce some useful quantum computing algorithms using high-school level maths. I will avoid much (but maybe not all) of the jargon, and show how the algorithms can be implemented on the commonly available Qiskit platform.

In my earlier article on quantum computing, I introduced an analogy to describe quantum computers, which are based on qubits rather than bits. The analogy was of a coin-flipping robot arm that is flipping a coin that lands on a table. A qubit is like the coin when it is in the air, and a bit is like the coin when it has ended up on the table. When it’s in the air, the coin is in a kind of probabilistic state where it may end up heads or tails, but once it’s on the table, it’s in a certain state where it is definitely one of either heads or tails. Quantum computers work in the realm of probabilities, and can manipulate the coin while it’s still spinning in the air. The spinning coin in the air is the qubit. But at some point it will land on the table and be measured as either heads or tails. At that point it becomes a bit.

To write about quantum states, a notation is used where the name of the state is put between a vertical bar and an angle bracket. Just like a single bit can be in the “0” state or the “1” state, for a single qubit, we might say it can be in the |0> state or the |1> state. Our hypothetical robot-arm is well-calibrated, so it consistently flips a coin that lands with the same side facing up, and the resulting coin is like having a qubit in one of these states. The coin is in a probabilistic state, but the probability of it having a particular result is 100%. Similarly, if a qubit is in the |0> state, when it is measured, you will get a “0” result 100% of the time.

However, a quantum computer can manipulate the probabilities of the qubit, so even if the qubit started in the |0> state, after manipulation it enters a new state where, if the qubit is measured, it will give a “0” outcome 50% of the time (and a “1” outcome the other 50% of the time, of course). This is done using a Hadamard operation, usually just written as H. We will use this operation to create truly random numbers.

Creating truly random numbers

Mostly when you have a computer give you a random number, such as using the RAND function in Microsoft Excel or when you’re playing a computer game and the enemy does something unexpected, the computer is actually producing a pseudo-random number. If you could create a perfect snapshot of everything in your computer, then get it to do something “random”, and return to that snapshot again, it would do exactly the same random thing each time. So, it’s not actually random, but it looks random unless you peer too closely.

For most applications, that’s fine. But if you are doing cryptography, having truly random numbers is important. You want to generate a secret key that no-one else can guess. Ideally, even if someone could take a snapshot of your computer, they still couldn’t predict what secret key is generated. There are special hardware random number generators that can create truly random numbers (Cloudflare uses lava lamps!), and quantum computers create truly random numbers too.

Let’s say we are going to generate a 2 bit random number. We’ll use 2 qubits, and the starting state of the qubits will be |00>, which means the outcome of measuring them both as “0” is 100%. We’ll use a quantum computer to manipulate the qubits so that all four possible outcomes |00>, |01>, |10>, or |11> are equally likely. Then when the qubits are measured, we will have some truly random bits.

We can write the four possibilities as a vector, with each row holding a value related to the probability of that outcome. Quantum computers perform their calculations using complex numbers rather than real numbers, because complex numbers are needed to accurately describe how things work at the quantum level. We can simplify things and just use real numbers, but we will need to calculate probabilities by squaring the values in each row of the vector.

We call this vector the quantum state vector (or just state vector), and it starts with being

$$\begin{bmatrix}
1.0 \\
0.0 \\
0.0 \\
0.0
\end{bmatrix}$$

Each row of the state vector corresponds to a different outcome, with the outcomes for two qubits being |00>, |01>, |10>, and |11> as we go down the vector. So this state vector represents a 100% probability of getting the |00> outcome.

We want each outcome to have a 25% probability, so we want to change the state vector to be:

$$\begin{bmatrix}
\frac{1}{2} \\
\frac{1}{2} \\
\frac{1}{2} \\
\frac{1}{2}
\end{bmatrix}$$

Of course, when you square 1/2, you get 1/4, or 25%.

The H operation is a standard operation on quantum computers, and works on all pairs of rows of the quantum state vector where that outcome differs only by the value of a specific qubit, e.g. where one outcome has the |0> for that qubit and the other row has |1>. For each pair, it turns the first value into a new value that is the sum of the original values divided by \(\sqrt{2}\), and the second value into the difference between the original values divided by \(\sqrt{2}\). While it is a division by \(\sqrt{2}\) rather than a division by 2, you can think of H like a “half” operation, where it calculates half the sum and half the difference and is scaled by a normalisation constant so that when the values are squared, the probabilities add up to 1.0. Written out mathematically, if the first row value is \(a\) and the second row value is \(b\), the first row value becomes \(\frac{a+b}{\sqrt{2}}\) and the second row value becomes \(\frac{a-b}{\sqrt{2}}\).
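
As a small illustration, here is that calculation written out in plain numpy (this is just the arithmetic from the paragraph above, not a quantum computer):

import numpy as np

a, b = 1.0, 0.0               # A pair of row values
first = (a + b) / np.sqrt(2)  # The sum divided by √2
second = (a - b) / np.sqrt(2) # The difference divided by √2
print(first, second)          # 0.7071067811865475 0.7071067811865475
print(first**2 + second**2)   # The squares still add up to (almost exactly) 1.0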

To get the desired final state vector from the initial state vector, we can apply H first to the pair of rows associated with a difference in the right-most qubit, then apply H to the pair of rows associated with a difference in the left-most qubit. Here’s how it would go:

Qubits | Initial state vector | Working out first H | Result of first H | Working out second H | Result of second H
|00> | 1.0 | =\(\frac{1.0+0.0}{\sqrt{2}}\) | \(\frac{1}{\sqrt{2}}\) | =\(\frac{\frac{1}{\sqrt{2}}+0.0}{\sqrt{2}}\) | \(\frac{1}{2}\)
|01> | 0.0 | =\(\frac{1.0-0.0}{\sqrt{2}}\) | \(\frac{1}{\sqrt{2}}\) | =\(\frac{\frac{1}{\sqrt{2}}+0.0}{\sqrt{2}}\) | \(\frac{1}{2}\)
|10> | 0.0 | =\(\frac{0.0+0.0}{\sqrt{2}}\) | 0.0 | =\(\frac{\frac{1}{\sqrt{2}}-0.0}{\sqrt{2}}\) | \(\frac{1}{2}\)
|11> | 0.0 | =\(\frac{0.0-0.0}{\sqrt{2}}\) | 0.0 | =\(\frac{\frac{1}{\sqrt{2}}-0.0}{\sqrt{2}}\) | \(\frac{1}{2}\)

Now that we’ve covered the process, let’s look at how this would be written programmatically using the Qiskit library from IBM.

Implementing this on Qiskit

We’re going to set up a (simulated) quantum computer with 2 qubits. (You can grab the complete Python script from here, or just type in the code below.)

import numpy as np
from qiskit import QuantumCircuit, QuantumRegister, ClassicalRegister, execute, BasicAer
from qiskit.quantum_info import Statevector
from qiskit.visualization import plot_histogram
import matplotlib.pyplot as plt
backend = BasicAer.get_backend('qasm_simulator')

q = QuantumRegister(2)   # We want to use 2 qubits
algo = QuantumCircuit(q) # Readies us to construct an algorithm to run on the quantum computer

By convention, all qubits begin in the lowest-energy state, so without doing anything, the qubits of our quantum computer should be set to |00>. We can check the state vector and see.

v1 = Statevector(algo)
print(np.real_if_close(v1.data))

Which will print “[1. 0. 0. 0.]” and shows the |00> row is 1.0 and the other possible outcomes are 0.0.

Qiskit numbers the right-most qubit as qubit 0, and the one to the left of it as qubit 1, with the next as qubit 2, and so on. You may have come across this as being called little-endian. Let’s start by using the H operator on pairs of rows associated with the |0> and |1> outcomes on qubit 0 (the right-most qubit).

algo.h(0)  # Apply H operation on pairs of rows related to qubit 0
v2 = Statevector(algo)
print(np.real_if_close(v2.data))

Which will print “[0.70710678 0.70710678 0. 0. ]”, and given 0.70710678 is \(\frac{1}{\sqrt{2}}\), it is what we were expecting. Now to do the H operation on the pairs of rows associated with the other qubit (qubit 1).

algo.h(1)  # Apply H operation on pairs of rows related to qubit 1
v3 = Statevector(algo)
print(np.real_if_close(v3.data))

Which will print “[0.5 0.5 0.5 0.5]”. The application of the H operations has set up the state vector so that the quantum computer should give us different randomly generated 2 bit values with uniform distribution. Let’s add a measurement to the end of our algorithm, and have the quantum computer do this 1,000 times and see what we get.

algo.measure_all()  # Measure the qubits and get some bits
result = execute(algo, backend, shots=1000).result()  # Run this all 1,000 times
plot_histogram(result.get_counts(algo))
plt.show()
[Chart with four columns of similar height, labelled 00, 01, 10 and 11]

This shows that of the 1,000 times this was performed (1,000 “shots”), the different 2-bit results occurred approximately the same number of times. It is what you would expect of a uniform distribution, noting that it is unlikely for every possibility to occur exactly the same number of times.

You can extend this process to as many random bits as you want, by having a qubit for each bit and applying the H operation in turn to each qubit. Quantum computers are still not very big, so you’ll run out of available qubits quickly. Or, you may want to just re-run this process and get another two random bits each time.
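
For instance, here’s a sketch of a 4-bit version (the names here are new, but backend and the imports come from the script above):

n_bits = 4
qn = QuantumRegister(n_bits)
algo_n = QuantumCircuit(qn)
for i in range(n_bits):
    algo_n.h(i)       # One H per qubit gives a uniform distribution
algo_n.measure_all()  # Measure the qubits and get some bits

result = execute(algo_n, backend, shots=1).result()
print(result.get_counts(algo_n))  # e.g. {'0110': 1} - four random bits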

We used a quantum computer simulator here, so it’s still a pseudo-random result. To use an actual quantum computer, you would need to set up an account on IBM Quantum, get an API key, and change the backend to point at an instance of a quantum computer in their cloud. This is easy enough to do, but is an unnecessary complication for this article.

You can then access true random bits that can be fed into any software that needs them. With all that, you have seen how to create a simple quantum algorithm and make it do something useful that is not easily done on a digital computer.

Please let me know… Were you able to follow this description of quantum computation? Do you feel confident that you could get this working on a real quantum computer? Would you prefer if there was more linear algebra, matrices and complex numbers in this article?

My main insight from SXSW Sydney

Last week, I attended the inaugural SXSW Sydney, and the first SXSW outside of Texas. It was different to the regular tech conferences that I’ve attended – it was much more diverse, with the games/film/music streams attracting a broader crowd. The sessions that I made it into were stimulating and sparked a range of ideas.

Of course, topics like AI (particularly Generative AI) and the Future of Work featured heavily in many presentations, and this led me to a realisation that I hadn’t had before, and I feel is likely to be the biggest impact from GenAI in the medium term. Rather than keep it to myself, I am sharing it here so that I can hear from others if it makes sense to them also.

Specifically, GenAI will bring about a huge disruption to the professional workforce and education system, not necessarily because humans will be replaced, but because humans who have been excluded from participation will now have fewer barriers to entry. Proficiency in the English language has been used as a justification for keeping certain people out of certain fields, and GenAI allows anyone from a non-English background to be as creative, smart, and persuasive as they are in their native tongues.

Our current GenAI systems are largely based on the Transformer machine learning architecture, which showed up early in online language translation tools like Google Translate. However, the GPT (T stands for Transformer) systems, particularly ChatGPT, have shown us that a few words in broken English can be turned into paragraphs of perfect English, or even the reverse, where paragraphs are summarised down to a few points in another language. University-level English spelling, grammar, and comprehension are no longer the exclusive domain of the English fluent.

There’s a fun TV series called Kim’s Convenience about a Korean couple who move to Canada to raise their family. The couple were teachers in Korea, but instead of doing that, they open a convenience store in Toronto. Presumably their lack of English or French language fluency would have been a limitation in getting teaching jobs. However, less than two months ago, OpenAI published their guide for teachers around ChatGPT, and it included the use case of “Reducing friction for non-English speakers”. In this guide, it was to help non-English students, but many of the suggestions could help non-English teachers also.

About 6% of the world’s population are native English speakers, and 75% do not speak English at all. And yet, about a third of the world’s GDP comes from countries where English fluency is required for success. If English is no longer a barrier to success in that market, it will be a significant disruption.

The spread of remote working technologies due to the pandemic has changed the ways of working for many jobs. Many white-collar jobs will likely still have an element of face-to-face contact, even if to come together for celebrations or training. However, where workers can be fully remote, the lack of English fluency as a barrier will enable many countries to export their talent without it leaving their shores.

Before the pandemic hit, over a quarter of University revenues in Australia came from international students. This gives international students some influence over University policies, and currently they face English language proficiency tests as part of their enrolment and visa processes. In the near future, GenAI looks set to be considered a generally-available tool in the workplace, like a calculator or laptop. If prospective students could make use of such a tool to address any gaps in their English language skills post-graduation, is it fair to prevent them from using it before graduation?

Traditionally, people with limited English in countries like Australia, the UK or the USA have been resigned to taking jobs as “unskilled” workers. There are already concerns that the number of people willing to do this type of work might not be enough to meet future industry demands. What might happen to wages if a good proportion of these people were able to move out of the unskilled workforce? How readily can the creative and information worker industries expand to take on new talent? What new barriers might be created by unions and professional organisations to limit a flood of new workers into their industries?

GenAI has been making headlines that AI is taking many people’s creative jobs. After hearing from several panels at SXSW on AI, Long-term Forecasting, Work of the Future, and Education, my conclusion is that a plausible and perhaps more relevant headline would be that GenAI will allow many more people to take on creative jobs.