mod04lec22 - Quantum Generative Adversarial Networks (QGANs)

NPTEL-NOC IITM
11 Oct 202226:29

Summary

TLDRThe video script delves into quantum machine learning, specifically focusing on quantum generative adversarial networks (GANs) applied to option pricing in finance. It explains the concept of GANs with a generator and discriminator, aiming to produce data indistinguishable from real samples. The script discusses a quantum approach to address data loading challenges, using a quantum generator and classical discriminator. It showcases how this framework is applied to European call option pricing, demonstrating faster convergence compared to classical Monte Carlo simulations. The script also includes a demo of quantum GANs for finance and a simulation of molecules using the Variational Quantum Eigensolver (VQE), emphasizing the practical application and potential of quantum computing in various fields.

Takeaways

  • 🧠 The script discusses the application of Quantum Generative Adversarial Networks (GANs) in finance, specifically for option pricing.
  • 🤖 It explains the concept of GANs, which involve two neural networks: a generator that creates data samples and a discriminator that tries to distinguish between real and generated samples.
  • 📈 The goal of the generator is to produce samples that are indistinguishable from real data, thus 'fooling' the discriminator.
  • 💡 The script highlights a paper that addresses the 'data loading problem' in quantum computing, aiming to mimic data distributions without needing to load the exact classical data into quantum states.
  • 🌐 It introduces a quantum framework where the generator is quantum (using variational quantum circuits) and the discriminator remains classical.
  • 💼 The application demonstrated is European call option pricing, where the quantum approach is used to simulate the distribution of spot prices and calculate expected payoffs.
  • 📊 The script shows a comparison between quantum computing and Monte Carlo simulations, with quantum showing faster convergence and lower estimation errors.
  • 🔬 The demo includes a practical example of using the quantum GAN framework to price options, with the potential for significant efficiency gains over classical methods.
  • 🔄 The script also touches on the broader context of quantum computing, including the NISQ (Noisy Intermediate-Scale Quantum) era and the importance of variational quantum algorithms.
  • 🚀 Lastly, it emphasizes the rapid evolution of quantum hardware and the active research in areas like cost function optimization, Hamiltonian mapping, and quantum-aware classical optimizers.

Q & A

  • What is the main concept behind Generative Adversarial Networks (GANs)?

    -The main concept behind GANs is to have two neural networks, a generator and a discriminator, competing against each other. The generator creates data samples, while the discriminator tries to distinguish between these generated samples and real data samples. The generator's goal is to produce samples that are indistinguishable from real data, effectively 'fooling' the discriminator.

  • How does the quantum version of a GAN differ from the classical one?

    -In the quantum version of a GAN, the generator is a quantum circuit that produces quantum states approximating the distribution of the training data, while the discriminator remains a classical neural network. The quantum generator uses parameterized quantum circuits to generate states that encode the probability distribution of the data.

  • What is the significance of the data loading problem in quantum computing?

    -The data loading problem refers to the challenge of efficiently translating classical data into quantum states. It's significant because even if quantum algorithms can provide exponential speedups, if loading classical data into quantum states requires an exponential number of gates, it negates the potential speedup. The paper discussed in the script addresses this problem by aiming to mimic the data distribution rather than loading the exact data.

  • How does the quantum GAN framework handle the data loading problem?

    -The quantum GAN framework handles the data loading problem by focusing on mimicking the distribution of the data rather than loading the exact classical data into quantum states. This approach allows for a more efficient translation of data into a quantum context, sidestepping the need for complex gate operations.

  • What is the role of the variational quantum circuit in the quantum GAN?

    -The variational quantum circuit in the quantum GAN serves as the quantum generator. It is a parameterized quantum circuit that is trained to generate quantum states whose probability distributions closely resemble the training data. This circuit is essential for encoding the data distribution into quantum amplitudes.

  • How is the payoff in a European call option calculated?

    -The payoff in a European call option is calculated as the maximum of the difference between the spot price at maturity (S_t) and the strike price (K), or zero. If the spot price at maturity is higher than the strike price, the payoff is positive; otherwise, it is zero.

  • What is the advantage of using a quantum approach over classical Monte Carlo simulations for option pricing?

    -The quantum approach has a convergence rate that scales polynomially faster than the Monte Carlo method, which scales with 1/√n. This means that the quantum approach can achieve the same level of accuracy with significantly fewer samples, offering a potential quantum advantage in computational efficiency.

  • What is the key takeaway from the paper on quantum GANs for finance?

    -The key takeaway is that the quantum GAN framework can effectively mimic the distribution of financial data, allowing for more efficient computation of expected payoffs in option pricing. This approach demonstrates a potential quantum advantage over classical methods, particularly in the context of complex financial simulations.

  • How does the quantum GAN framework encode the spot price information?

    -The quantum GAN framework encodes the spot price information into the quantum state's basis states, where each basis state represents a number in the domain of the data. The probability distribution of the spot price is then embedded into the amplitudes of these states, allowing for the generation of a distribution that can be used for financial calculations.

  • What are some of the active research areas in variational quantum algorithms?

    -Active research areas in variational quantum algorithms include defining cost functions, mapping Hamiltonians, exploring useful ansatze, utilizing gradients, and developing quantum-aware classical optimizers. These areas are crucial for improving the efficiency and applicability of variational quantum algorithms.

Outlines

00:00

🧠 Introduction to Quantum Generative Adversarial Networks in Finance

The paragraph introduces the application of quantum generative adversarial networks (GANs) in the context of finance, specifically for option pricing. It explains the concept of GANs where a generator network creates data samples, and a discriminator network tries to distinguish between real and fake samples. The goal is for the generator to produce samples so realistic that the discriminator cannot differentiate them from actual data. This framework is then applied to finance, aiming to mimic data distributions without needing to load the exact classical data into quantum states, addressing the data loading problem that plagues quantum algorithms. The paper referenced discusses a quantum generator and a classical discriminator, using variational quantum circuits to generate data distributions that are then used in financial modeling.

05:00

🔄 Quantum Circuit Design for Generative Models

This section delves into the specifics of the quantum circuit used in the generative model. It describes a parameterized quantum circuit with rotations and entangling gates, which is repeated multiple times to refine the output. The circuit's output, represented by a trial state, is then measured to obtain a probability distribution. This distribution is encoded in the amplitudes of quantum states, allowing for the generation of data samples that mimic a given distribution. The paragraph also discusses how this quantum-generated distribution is applied to European call option pricing, where the spot price is encoded into the quantum state, and the expected payoff is calculated using amplitude estimation techniques. The results show that the quantum approach can converge faster than classical Monte Carlo simulations, offering a potential quantum advantage.

10:01

📈 Demonstrating Quantum Advantage in Option Pricing

The paragraph showcases a demonstration of using the quantum generative model for option pricing. It presents a graph that compares the payoff and probability distribution of the spot price between quantum and Monte Carlo simulations. The quantum simulation is shown to converge much faster to the expected value with fewer samples, highlighting the potential efficiency of quantum computing in financial applications. The paragraph emphasizes the practical implications of this faster convergence, suggesting that quantum computing could significantly reduce the computational resources needed for financial modeling and decision-making.

15:02

🌐 Simulating Molecules Using Variational Quantum Eigensolver (VQE)

This section shifts focus to another application of quantum computing: simulating molecules using the Variational Quantum Eigensolver (VQE) algorithm. It provides a brief overview of the process, including initializing the quantum circuit, encoding the molecular information, and running the VQE algorithm to calculate the energy of the molecule at different distances. The paragraph also touches on the challenges and considerations in running VQE, such as choosing the type of entanglement and dealing with noise in quantum hardware. The narrative encourages hands-on experimentation with VQE through provided links to Jupyter notebooks, allowing users to run simulations and explore the practical aspects of quantum chemistry.

20:04

🔬 The Evolution and Challenges of Variational Quantum Algorithms

The final paragraph summarizes the current state and future prospects of variational quantum algorithms (VQAs). It acknowledges the rapid evolution of quantum hardware and the ongoing research into VQAs, which are crucial for handling noise and errors in the current noisy intermediate-scale quantum (NISQ) era. The paragraph outlines active research areas such as cost function design, Hamiltonian mapping, and quantum-aware classical optimizers. It also discusses the challenges faced by VQAs, including trainability, measurement efficiency, and accuracy. The speaker expresses optimism about the potential of VQAs to revolutionize various fields and encourages further exploration and career development in quantum computing.

Mindmap

Keywords

💡Quantum Generative Adversarial Networks (QGANs)

Quantum Generative Adversarial Networks (QGANs) are a type of quantum machine learning model inspired by classical GANs. In the video, QGANs are used to mimic complex data distributions, which is crucial for applications like option pricing in finance. The script discusses how QGANs consist of a quantum generator and a classical discriminator, where the generator's role is to produce data samples that are indistinguishable from real data, as per the adversarial training process.

💡Generator

In the context of GANs, the 'generator' is a neural network that creates new data samples. In the video, the generator is adapted to a quantum framework, suggesting a quantum version that can generate data samples with distributions close to real-world data. This is showcased in the application of QGANs to finance, where the quantum generator learns to mimic financial data distributions.

💡Discriminator

The 'discriminator' is another neural network in GANs that evaluates data samples, distinguishing between those generated by the generator and real data. In the video, the discriminator remains a classical neural network, tasked with identifying whether data samples are real or fake, thus providing feedback to the quantum generator.

💡Data Loading Problem

The 'data loading problem' refers to the challenge of efficiently translating classical data into a quantum state. The script highlights this as a significant issue in quantum computing, especially when trying to leverage quantum speedups. The paper discussed in the video addresses this problem by focusing on mimicking data distributions rather than exact data loading.

💡Variational Quantum Algorithms (VQAs)

Variational Quantum Algorithms are a class of quantum algorithms that use parameterized quantum circuits to find solutions to problems. In the video, VQAs are central to the quantum generator's operation in QGANs, where they are used to create a quantum state that encodes the data distribution. VQAs are highlighted as a practical approach in the current noisy intermediate-scale quantum (NISQ) era.

💡European Call Option

A 'European Call Option' is a type of financial derivative that gives the holder the right, but not the obligation, to buy an asset at a specified price (the strike price) on or before a certain date. In the video, the application of QGANs to finance involves pricing this type of option, where the quantum approach is used to model the underlying asset's price distribution and calculate the option's expected payoff.

💡Amplitude Encoding

In quantum computing, 'amplitude encoding' is a method of representing classical data in quantum states through the amplitudes of quantum bits (qubits). The video describes how amplitude encoding is used in the quantum generator to embed the probability distribution of financial data into the quantum state, which is then measured to sample from the learned distribution.

💡Quantum Advantage

The term 'quantum advantage' refers to the ability of quantum computers to solve certain problems more efficiently than classical computers. The video script discusses the potential quantum advantage in the context of option pricing, where the quantum approach is shown to converge faster than classical Monte Carlo simulations, indicating a polynomial speedup.

💡Convergence Rate

The 'convergence rate' in the context of algorithms is the speed at which a solution or output approaches the true or desired value. The video emphasizes the superior convergence rate of the quantum approach compared to classical Monte Carlo methods, showcasing how quantum algorithms can achieve more accurate results with fewer samples or iterations.

💡Quantum Hardware

The script mentions 'quantum hardware' as the physical devices that implement quantum computing, such as quantum processors. The video demonstrates how these devices can be used to run quantum algorithms, like those for option pricing, potentially offering advantages over classical computing hardware due to their ability to perform certain calculations more efficiently.

💡Noisy Intermediate-Scale Quantum (NISQ) Era

The 'NISQ era' refers to the current phase of quantum computing development, characterized by devices that are noisy and have a limited number of qubits. The video discusses how variational quantum algorithms are particularly suited for this era, as they can operate with the shallow circuits necessary to mitigate the effects of noise in current quantum hardware.

Highlights

Overview of quantum generative adversarial networks applied to option pricing and finance.

Introduction to generative adversarial networks (GANs) with two neural networks: generator and discriminator.

The generator's goal to produce samples indistinguishable from real data.

The discriminator's role to identify real from fake data samples.

Loss functions for the generator and discriminator in the GAN framework.

Quantum approach to address the data loading problem in quantum computing.

Quantum GAN framework with a quantum generator and a classical discriminator.

Variational quantum circuit design for the quantum generator.

Encoding the probability distribution into the quantum state amplitudes.

Application of the quantum GAN framework to European call option pricing.

Demonstration of faster convergence in quantum simulations compared to Monte Carlo methods.

Quantum advantage in finance through polynomially faster convergence rates.

Practical demonstration of quantum GANs for finance with IBM Quantum Lab.

Variational Quantum Eigensolver (VQE) for simulating molecules.

Programming quantum circuits in Python for VQE.

Challenges and active research areas in variational quantum algorithms.

Potential use cases of variational quantum algorithms across various industries.

The current state of practical quantum computing and its future trajectory.

Transcripts

play00:01

[Music]

play00:14

now that we have seen the variation of

play00:16

quantum migrant solver in detail we

play00:18

cannot do a quick overview of quantum

play00:20

generate generative adverse serial

play00:22

networks as applied to option pricing

play00:24

and finance this will be a very high

play00:26

level cursor review uh the idea being

play00:29

that we want to give a flavor of how

play00:31

this gets applied in an application

play00:33

context in finance

play00:36

so as a background um the idea of

play00:39

generative advice serial networks is uh

play00:42

is to have two neural networks naturally

play00:44

this comes from the machine learning

play00:46

space you have two neural two neural

play00:48

networks called the generator and the

play00:50

discriminator

play00:52

generator generates the data samples the

play00:55

discriminator

play00:56

takes the data samples generated by the

play00:58

generator along with the training or

play01:00

real data samples and it's supposed to

play01:02

be able to distinguish between the data

play01:04

samples that it got

play01:05

mark it real or fake

play01:07

what generator wants to do is ultimately

play01:10

generate the samples such that the

play01:12

distribution is as close to the real

play01:14

data samples and so the discriminator is

play01:16

not able to tell whether it is a real

play01:18

data or a fake data

play01:20

and so this goes about

play01:22

in you know in a it's much like a two

play01:25

player game in game theory and uh

play01:28

eventually the generator learns the

play01:30

distribution and is able to

play01:33

make the training data as close as

play01:35

possible

play01:36

so the loss functions defined here um

play01:38

our

play01:40

the expectation value of the prior uh

play01:42

where

play01:43

z or z

play01:44

is from the prior distribution this is

play01:46

the generator's loss function uh this is

play01:49

in the non-saturating loss kind of

play01:52

framework

play01:53

where theta

play01:55

is for the generator distribution

play01:58

and then the results the data samples

play02:00

generated then goes into the

play02:01

discriminator and what

play02:05

what this wants to do is

play02:07

maximize the chance of the discriminator

play02:10

tagging it as real which means that it's

play02:12

able to fool the discriminator saying

play02:14

that the data samples are actually real

play02:17

in a sense the discriminator is not able

play02:20

to tell

play02:21

the two distributions apart and you want

play02:23

to maximize that

play02:25

from a generator standpoint and

play02:27

discriminator standpoint is the other

play02:29

one so we want to be able to identify

play02:31

the fake ones as fake and the real ones

play02:33

as real so now you have a conflicting

play02:36

thing so you have a push and pull coming

play02:38

in and which is what mimics the

play02:40

two-player

play02:41

game from game theory and it is

play02:44

equivalent to the national equilibrium

play02:46

where the two of these players are

play02:48

trying to maximize

play02:49

their objectives and eventually the

play02:51

result

play02:52

will be such that the generator is able

play02:55

to generate samples so that it's as

play02:57

close to the training data as possible

play03:01

so the paper that we are referring to is

play03:03

uh

play03:04

we have the snapshot on the top left in

play03:06

the orange box that's the actual paper

play03:09

um so the idea the key takeaways

play03:12

is the context is very important to

play03:14

understand to this paper this is our the

play03:16

context is slightly different from the

play03:17

application itself

play03:19

there are many algorithms that have come

play03:22

before that

play03:23

involve what is called as a data loading

play03:25

problem

play03:27

some of the algorithms like hl

play03:29

have exponential speed up compared to

play03:32

the classical counterpart but there is a

play03:34

big hole in that argument that being

play03:36

that data loading so when you have the

play03:38

classical data

play03:40

can you load that into a quantum state

play03:42

efficiently or not what was shown later

play03:44

was that that loading particular problem

play03:46

uh takes exponentially many number of

play03:48

gates to translate the classical data

play03:51

into quantum and thus um

play03:53

eventually

play03:54

negating the exponential speed up that

play03:56

potentially some of these algorithms

play03:57

like etcetera

play04:00

so this data loading problem is an open

play04:03

problem and this particular paper

play04:05

addresses that particular problem at

play04:07

hand um so what it's trying to do with

play04:09

the quantum grants framework is uh the

play04:11

data set that we have at hand can i

play04:14

mimic it at least in the distribution

play04:16

sense as close as possible rather than

play04:18

having to actually load the real data

play04:21

in in exact form so at least can i mimic

play04:23

the distribution as close as pos

play04:25

possible and the framework that they

play04:27

adopted in this paper was

play04:29

the same generated discriminatory type

play04:31

framework um

play04:33

here the generator is

play04:36

quantum

play04:37

that discriminator remains classical

play04:39

neural network so the classical neural

play04:41

network discriminator remains but the

play04:43

generator is replaced by a quantum

play04:45

framework so you can see at the bottom

play04:47

left um a quantum generator uh you can

play04:51

see that it's a very

play04:53

by now um a very common variational

play04:56

quantum algorithm that you would have

play04:58

seen you have these

play05:00

rotation the parameter parameterized

play05:02

quantum circuit with rotations and they

play05:04

use the ry rotation and then you have

play05:06

the entangling gate followed by the r

play05:08

way rotation and there are n number of

play05:11

times that you do it so um you have the

play05:13

initial rotation followed by the

play05:15

sandwich of these entangling followed by

play05:18

single qubit gates

play05:20

k times and that's what this one is this

play05:22

is what we had seen in the previous

play05:24

sections as well

play05:25

and you get the

play05:27

g of theta which is the trial state at

play05:29

the end and in order for you to get the

play05:32

result so you just do the measurement so

play05:34

the way it is encoded is each basis

play05:36

state so um if you look at the

play05:39

indices here it goes from 0 to 2 power n

play05:41

minus 1. so this particular state

play05:42

encodes

play05:44

the natural domain value that it encodes

play05:46

goes is assumed to go from 0 to 2 power

play05:48

n

play05:49

minus 1 it directly encodes that p theta

play05:52

of j is essentially telling you for that

play05:54

particular basis where each of them

play05:56

represents

play05:57

each basis state represents a number for

play05:59

example

play06:00

in the sampling x um

play06:04

sample data from 0 to 2 power n minus 1

play06:06

each basis states corresponds to a

play06:08

particular number there and p of theta

play06:10

of j represents the probability of the

play06:13

distribution of that particular value in

play06:15

it so that's how it gets um the

play06:17

distribution part gets embedded into the

play06:19

amplitude of that particular state and

play06:21

therefore the state gets encoded here

play06:23

and then use when you do measurement you

play06:25

get the right probability distribution

play06:27

for the values

play06:29

that you had at hand

play06:31

so what they did then is that once you

play06:34

have this generator framework and

play06:35

eventually it gets trained and you you

play06:37

generate the distribution uh you know

play06:40

finally after training you get the g of

play06:42

theta the distribution what they did was

play06:44

they applied this in the european call

play06:46

option

play06:47

spot pricing contacts what it is is less

play06:50

important i have a link here that you

play06:51

can go read up on what this call option

play06:53

is all about but the idea there is there

play06:56

is a

play06:57

in the financial market sense

play07:00

there is uh

play07:02

you you

play07:04

price k is what you

play07:05

buy the spot option you are not

play07:07

necessarily um

play07:10

and the value of that option can go up

play07:12

in time the maturity time is t

play07:15

so the value of it at the maturity st

play07:18

the difference of it means

play07:20

the

play07:20

the payoff

play07:22

if it is

play07:23

more the sft is more than what you are

play07:26

buying for then you get a payoff which

play07:28

is positive and if it is negative you

play07:30

you are guaranteed that you don't

play07:32

necessarily have to buy that one so it

play07:34

is

play07:34

zero here so

play07:36

what this payoff is is essentially

play07:38

maximize um the spot price at maturity

play07:42

which is s of t to the current price

play07:44

that you are quoting and the difference

play07:46

of it and you want to maximize that

play07:47

that's what this particular option

play07:49

pricing is about

play07:50

what they did was to encode

play07:53

the spot rising

play07:54

part of it in g of theta remember i

play07:57

mentioned that the actual data the

play07:59

domain of it goes from 0 to 2 power n

play08:02

minus 1 that goes directly encoded into

play08:03

the state the basis state

play08:06

and note that the

play08:08

n qubits means that the basis state has

play08:10

2 power n combinations so you can sort

play08:13

of each state represent a particular

play08:14

number so you can sort of encode it into

play08:16

it

play08:17

and they had a prior work which

play08:19

basically can do based on amplitude

play08:21

estimation approaches um go from

play08:25

the spot price value and they and then

play08:27

you get the results which is essentially

play08:29

the expected payoff uh which is what you

play08:31

get at the end so what they did is they

play08:34

use the data loading problem what they

play08:36

solved here

play08:37

they applied the data loading in this

play08:39

case they loaded the spot price

play08:41

information into the earlier work that

play08:43

they had done where

play08:45

they had shown how to do the computation

play08:47

of expected payoff

play08:49

this is the circuit that is based on the

play08:51

amplitude estimation approach

play08:54

so they encoded they get the payoff and

play08:56

then they showed that you could then

play08:57

simulate the results to see

play08:59

what is the value of payoff

play09:01

the key important value proposition is

play09:04

that the earlier work that they did

play09:06

here uh all these analysis uh invariably

play09:10

involves when the distribution is

play09:11

becomes fairly complex the only way to

play09:14

do that in the classical sense is to do

play09:16

a multicordless simulation

play09:19

we know from

play09:20

our prior knowledge the monte carlo

play09:21

simulation the convergence goes like 1

play09:23

over square root of n

play09:25

in this work um

play09:27

the prior work that i shown here they

play09:30

showed that

play09:31

using this quantum approach the

play09:33

convergence goes more like 1 over n

play09:36

which means that the convergence is

play09:38

polynomially faster

play09:40

than the monte carlo approach

play09:42

which is the key value proposition of

play09:44

all of this so

play09:46

so now you have an end to end thing

play09:49

where you can load the data from the

play09:51

classical data into the quantum at least

play09:53

in the distribution sense and then

play09:55

perform your expectation value

play09:56

calculation in the quantum sense and

play09:58

generate the result and we know from the

play10:01

prior work that this computation is

play10:03

polynomially faster than um the monte

play10:07

carlo which means that the convergence

play10:09

is much sooner and you're likely to get

play10:11

to the results with less number of

play10:12

sampling than you would have to do with

play10:14

the monte carlo approach uh we will see

play10:16

uh

play10:17

couple of demonstrations uh this

play10:19

demonstration for this uh shortly uh

play10:21

very intuitive one uh the picture here

play10:24

is showing on the uh

play10:26

the second um y-axis is the payoff which

play10:29

is the blue line and this is the

play10:31

probability distribution for the spot

play10:34

price which is the x axis here

play10:36

the in this particular example the

play10:38

initial spot price was fixed at 2

play10:41

and then you have the expected gain and

play10:44

as you can see it grows up so in the

play10:46

simulation uh i will show you uh

play10:49

this particular simulation and also it

play10:51

shows uh the convergence how quickly it

play10:54

converges uh to the value of interest

play10:58

and thus demonstrating the value

play10:59

proposition

play11:00

or a potential quantum advantage that

play11:02

you can get

play11:03

using the quantum hardware to solve this

play11:05

particular problem

play11:08

now we're going to go into the demo part

play11:09

of it

play11:10

i'm going to cover a couple of them um

play11:12

so i'm going to first talk about

play11:16

the

play11:17

the quantum gangs since we just covered

play11:19

it um i will show the demo of this you

play11:22

can find this in this link and then i

play11:23

will show you the vqa

play11:26

code uh just to indicate um that it is

play11:29

highly programmable and it's easy to go

play11:32

play with so i encourage you all to go

play11:34

take a look at that

play11:36

so i've opened the link that i had in

play11:38

that chart here this indicates uh this

play11:40

is uh

play11:41

for the work for the quantum grants work

play11:43

that i just talked about so we're going

play11:45

to cover that first the finance one

play11:48

um so the example um

play11:50

we talked about in this in this case is

play11:52

you can choose the spot price and what

play11:55

you are doing is in the future when the

play11:57

price increases you're gonna get profit

play11:59

but if the price reduces you're not

play12:01

gonna you have the option of not selling

play12:03

it for that particular price which means

play12:05

that you don't lose it and that's the

play12:07

idea of this call european call option

play12:09

so here for example um the spot price is

play12:12

set at 1.9

play12:14

dollar 1.9

play12:16

so let's run the

play12:18

test circuit

play12:20

so

play12:21

this one is the payoff payoff is the

play12:23

expected

play12:26

gain of doing that

play12:28

the important part is

play12:30

the white line is the

play12:31

quantum and the gray line is monte carlo

play12:34

and

play12:35

since these numbers these simulations

play12:37

are for very small set of values we can

play12:39

do an exact calculation too which is the

play12:41

pink reference line

play12:42

so here you can see the

play12:44

quantum

play12:45

calculation converges pretty quickly the

play12:48

monte carlo goes up and down a little

play12:50

bit and then converges

play12:52

on the right is the estimation error

play12:54

which is more relevant on the x axis in

play12:56

this particular plot is the number of

play12:57

samples

play12:59

what you can see is the white line is

play13:00

the quantum and the gray line is the

play13:01

monte carlo simulation

play13:04

so you can see that the the estimation

play13:07

error from quantum drops quite rapidly

play13:09

compared to the monte carlo i told you

play13:11

that the earlier result had demonstrated

play13:13

the convergence rate of mono rain for

play13:15

quantum as against one over square root

play13:17

of n for monte carlo so that is evident

play13:21

in this particular example here so for

play13:23

example if your error threshold i don't

play13:25

know if you can see this number

play13:27

were

play13:29

0.009

play13:31

i think that's what this one is

play13:33

let's say you had that as the threshold

play13:36

you would get that hit the threshold in

play13:38

this particular example with the

play13:40

sampling of 256

play13:42

and you would get to that

play13:44

error threshold only in the classical

play13:46

monte carlo case only when you sample

play13:49

2048

play13:50

samples so this shows a big difference

play13:54

this could be

play13:55

really important in the context of

play13:58

finance sector so

play14:01

so based on the threshold and the

play14:03

convergence is much faster compared to

play14:04

the

play14:05

classical monte carlo approach

play14:07

you can go play with different values

play14:09

of

play14:10

the spot price and give it a run and you

play14:13

will see a consistent behavior in terms

play14:15

of the convergence

play14:17

in this example you can see that the

play14:19

quantum converges soon

play14:21

the variance in the classical is much

play14:23

broader here

play14:24

and you can see that it achieves the

play14:27

lower threshold

play14:29

we run it for 2048 samples in both but

play14:31

you can see if we had a threshold

play14:33

value for example in this horizontal you

play14:36

get it in 256 but to get to that in

play14:38

monte carlo actual monte carlo you have

play14:40

to sample 2048

play14:43

samples

play14:44

the structure of the algorithm there is

play14:47

a broader

play14:48

structure you have the payoff circuit

play14:50

that i was telling you about from before

play14:51

remember the amplitude distribution

play14:53

based approach and then you have the

play14:56

distribution

play14:57

technique which is what we talked about

play14:59

in quantum gas to get the data in the

play15:02

data loading problem portion which is

play15:04

what this one is and then you have the

play15:06

payoff calculation portion which is the

play15:07

second part

play15:09

now i'm going to go into

play15:10

the simulation simulating

play15:13

molecules using vqe we talked about this

play15:16

in quite detail

play15:17

in our presentation i will just go over

play15:20

this particular

play15:23

thing you can open this as a jupiter

play15:25

notebook and actually run them so all

play15:27

you have to do is click it and it opens

play15:30

in ibm quantum lab if you haven't uh if

play15:32

you create an id with

play15:34

quantum computing

play15:35

ibm.com um it's as simple as logging

play15:39

into it and then

play15:40

you you can go run them i'm not going to

play15:43

do that now but i just will show you

play15:45

walk you through this particular

play15:46

textbook page we have covered all this

play15:48

in quite detail so i'm not going to talk

play15:50

about these technical part of it as much

play15:53

but i want to show the programming part

play15:55

of it a little bit

play15:57

um so here is where the quantum circuit

play15:59

gets initialized these are all one time

play16:01

thing these are all abstracted as you

play16:03

can see in the

play16:05

in a function

play16:08

so this is the variational form that we

play16:10

talked about you can program it is the

play16:11

quantum and this is the classical

play16:13

register this is the quantum circuit

play16:14

that you are defining and these are the

play16:16

parameters these are the u3 parameters

play16:18

and you measure them

play16:20

remember um

play16:22

the big level picture is you have the

play16:24

classical optimizer and that's what this

play16:26

one is here they are using cobala as the

play16:28

optimizer um

play16:31

you get the variational form this is the

play16:32

quantum circuit that you want to run you

play16:34

then compile it and assemble it and you

play16:36

set up

play16:37

the background on the back end that you

play16:39

want to run it in

play16:41

and then you get the distribution

play16:43

okay

play16:44

so note that

play16:46

you had the variation part and then the

play16:49

entangling variation and dangling and so

play16:51

forth the parameter is part and then the

play16:53

entangling part the parameter is part

play16:54

and tangling part so there are two

play16:56

different kinds of entangling that you

play16:58

can do a linear or a full and this is

play17:00

something that you can play with

play17:02

so linear means that uh as you can see

play17:05

you have in this example they are

play17:07

showing c naught so you have the c

play17:08

naught um with in this example they are

play17:11

showing four qubits uh so you have c

play17:13

naught uh one two zero two one one two

play17:16

two two two three full means all

play17:19

combinations of us not so

play17:21

one two two one two three one two four

play17:23

two two three two two four two two so

play17:25

this is what the full ways uh naturally

play17:28

the linear uh has lesser number of c

play17:30

naught which means it's less erroneous

play17:33

however the full entanglement

play17:35

will take you to much more complex

play17:38

entanglement and as you can see with

play17:40

when you do these

play17:41

mini c naught in frontal entanglement it

play17:43

gets much more deeper the number of

play17:45

layers of quantum circuit increases

play17:48

we want to keep it shallow as well so

play17:50

there is a trade-off

play17:51

between these things

play17:53

if your hardware is

play17:55

more

play17:56

has a more longer lifetime you can

play17:59

potentially look at full entanglement if

play18:00

you have a shorter lifetime you would

play18:02

want to make do with linear entanglement

play18:04

these are the parameters that you need

play18:06

to tweak and play with

play18:09

then

play18:11

this is where you

play18:13

encode your circuit so this one is

play18:15

showing the lithium hydride molecule

play18:18

from the paper

play18:20

so what this one is showing is this is

play18:22

the lithium atom

play18:23

and then this is the hydrogen atom

play18:26

lithium is in origin 0 0 0 think of this

play18:29

is x y and z axis and hydrogen is the

play18:32

one that is moving so it's fixed in the

play18:34

x and y plane

play18:36

but moves only in one direction which is

play18:38

the z plane and that's the

play18:40

difference remember the in vqa we showed

play18:43

the um energy profile so if you have

play18:46

atoms closer you have higher energy and

play18:49

then at some point it starts dropping

play18:51

and when it moves away farther out then

play18:53

the energy also flattens out

play18:56

so what they do is they play it in they

play18:59

change the distance in one direction the

play19:01

z direction

play19:03

all these are details here which i'm not

play19:04

going to talk about

play19:06

but basically here is the fermionic

play19:08

operator

play19:09

that we talked about remember

play19:12

we have to go from the fermionic

play19:13

hamiltonian to the qubit hamiltonian so

play19:16

here is where you get the fermionic

play19:17

operator

play19:18

and then

play19:20

you will have to go do the mapping

play19:22

so here is where the mapping happens

play19:25

remember

play19:26

there are many mappings available there

play19:28

is jordan wigner mapping bravikit and

play19:31

here they are using parity this is where

play19:34

the fermionic operator becomes a cubit

play19:36

hamiltonian and that's what this one is

play19:39

so that's how the encoding happens

play19:41

and then um

play19:43

you have the this is the main loop this

play19:46

is the loop where you have the classical

play19:48

quantum back and forth happening um so

play19:51

you have the sl sls qp optimizer so

play19:54

cobella was used for the exact solution

play19:56

um that for the reference and this is

play19:59

the actual optimizer used

play20:01

for the vqe

play20:03

for different distances remember the

play20:05

x-axis is the distance we are trying to

play20:07

compute the

play20:08

energy

play20:10

you run it for each of the distances

play20:13

you calculate the value

play20:16

uccd is the onsites which is coming from

play20:19

quantum chemistry it's a very famous

play20:21

answer from quantum chemistry uh so we

play20:23

are leveraging that particular answers

play20:26

um for our calculations and then we just

play20:28

call vqe um send the qubit hamiltonian

play20:31

the variational form which is the uccd

play20:34

and the optimizer the classical

play20:35

optimizer so this is the main loop that

play20:37

we are talking about here

play20:39

and then you calculate the results and

play20:41

for different distances what is the

play20:42

energy and what is the exact energy

play20:44

remember the cobala was used to

play20:45

calculate the exact energy because this

play20:47

is a very small molecule which is

play20:49

simulatable efficiently in classical

play20:52

this is just for reference and when you

play20:54

generate that

play20:55

we get the plot very similar to what we

play20:58

had

play20:59

shown in in our presentation this is the

play21:02

exact energy and the vq energy much

play21:04

quite exactly so it's indistinguishable

play21:07

in this particular plot

play21:09

so you have

play21:11

so this is

play21:13

i think i believe this one did not have

play21:14

the noise i believe they run it with

play21:16

noise as the next step so you can play

play21:19

with the

play21:20

different simulator you can also run it

play21:22

when actual hardware so here they are

play21:24

running it in simulator you can choose

play21:26

to run it in actual hardware and see the

play21:28

results

play21:29

so it will be interesting for you to see

play21:30

how exact the results get

play21:33

here as i mentioned in the particular

play21:35

presentation they are using spsa

play21:37

and this is the number of iterations you

play21:39

can play with all of them the type of

play21:41

entanglement that you want to use

play21:43

here is the vqe as i was telling you

play21:46

about and you can run it and generate

play21:47

the results

play21:48

so it's fairly intuitive it's

play21:51

programmable in python

play21:54

and it's fairly high level very

play21:57

there is a portion where you need some

play21:59

knowledge but there is also a main loop

play22:02

which is fairly independent of any uh

play22:04

significant knowledge of quantum

play22:07

that you need to do in order to run this

play22:10

so now that we are done with both these

play22:12

demos i want to finally summarize

play22:14

um

play22:16

we are an interesting uh

play22:18

point in the journey

play22:20

uh programming a quantum computer now is

play22:22

a reality um

play22:24

and it's fairly advanced already

play22:26

um and then we are in a generation that

play22:30

is beyond just experiments now we are in

play22:32

what is called as the broadly called as

play22:34

a niskira this will be there for some

play22:36

time to come uh till such time the

play22:38

errors are fairly small we are still

play22:41

orders of magnitude of a

play22:44

and till such time that the fault

play22:46

tolerant

play22:47

quantum computing takes shape we are

play22:49

going to be in this era and also the

play22:51

knowledge that we are going to gain

play22:52

along the way is likely to continue

play22:54

beyond the niskera 2.

play22:57

what also needs to be mentioned is

play22:58

hardware is evolving quite rapidly

play23:00

exponentially fast we saw that early

play23:03

the lifetime is increasing quite bad the

play23:05

noise profiles are reducing quite a bit

play23:07

all the trend lines are good but still

play23:09

it's in the regime that will be called

play23:11

as noisy it's not still at a fault

play23:13

tolerant threshold we are still as i

play23:14

mentioned orders of magnitude away

play23:17

so how do we do computation efficiently

play23:19

in this particular context is what is

play23:21

the challenge

play23:22

traditional algorithms will not be

play23:24

practical

play23:26

because it doesn't deal with noise and

play23:28

the way it does the computation assumes

play23:30

that the qubits are clean that's not so

play23:32

in reality at this point in time so they

play23:35

don't map well so variational quantum

play23:37

algorithms are the mainstay in this

play23:38

particular era and possibly beyond as

play23:41

well and all the efficiencies that we're

play23:43

going to develop is going to likely to

play23:45

carry over too

play23:46

and all the new knowledge is that we're

play23:48

learning

play23:49

the variational quantum algorithms are

play23:52

a hybrid algorithm it's a classical

play23:55

quantum algorithm

play23:56

with

play23:57

shallow depth circuit it's an important

play24:00

piece the shallow depth part

play24:02

given that it's noisy we can't afford to

play24:03

be very deep quantum circuit

play24:07

there are a lot of areas of active

play24:08

research in this area including cost

play24:11

function how do you define the cost

play24:12

function um there is an active research

play24:15

there how do you map the hamiltonian

play24:17

that's another active area of research

play24:19

what kind of handsets will be useful

play24:22

how to use the gradients what kind of

play24:24

classical optimizers are quantum aware

play24:26

are sensitive to the quantum

play24:27

requirements that's another active area

play24:29

of research there is lot of

play24:31

things that are being looked at and

play24:33

conjectured and also proven

play24:36

and there are many many challenges also

play24:38

along the way as i described there is a

play24:40

trainability part

play24:41

how do you what are the

play24:44

potential barriers to entry the parent

play24:46

plateau being one significant portion of

play24:48

them efficiency how well can you do the

play24:51

measurement um and then the accuracy

play24:54

part

play24:55

all these are challenges that

play24:57

variational quantum algorithms needs to

play24:59

come about

play25:00

what has happened in the last few years

play25:02

is there are many flavors of these

play25:04

algorithms that have come about based on

play25:05

the particular requirement of a problem

play25:07

and many more are being explored

play25:09

as we speak

play25:11

potential use cases has exploded in the

play25:13

last few years

play25:15

spanned areas of finance natural

play25:17

sciences l senses manufacturing and this

play25:19

by no means exhaustive there are many

play25:21

many more explorations being happen to

play25:23

look at the problem context how quantum

play25:26

affects a particular vertical a

play25:28

particular problem

play25:30

naturally you would want to go look at

play25:32

problems that are hard to solve

play25:34

classically and see if you can solve it

play25:36

in quantum mechanical in the quantum

play25:38

computing context and within the quantum

play25:41

computing context is there a variational

play25:43

form because variational form is what is

play25:46

more practical at this time and so

play25:48

figuring out that kind of a dynamic

play25:51

is what is currently

play25:53

the state of the art and there are many

play25:55

more explorations happening at this

play25:57

uh i hope uh this particular

play26:02

introduction to this variation

play26:03

algorithms gave a flavor for what is

play26:05

happening in uh in quantum computing the

play26:07

practical side of quantum computing at

play26:09

this time i hope many of you find this

play26:11

useful and i hope many of you take this

play26:13

journey in

play26:14

in exploring this space and hopefully

play26:16

make a career in this area as well i

play26:19

wish you all the very best and hope

play26:22

we get to have more such conversations

play26:24

in the future thank you

Rate This

5.0 / 5 (0 votes)

Ähnliche Tags
Quantum ComputingFinanceMachine LearningGenerative NetworksQuantum AlgorithmsMonte CarloVariational QuantumOption PricingQuantum FinanceAlgorithm Demos
Benötigen Sie eine Zusammenfassung auf Englisch?