🚧 Simple Qubit with Constraints

This example is just like the previous one: a single qubit interacting with I and Q microwave drives, but this time we add constraints. Very little needs to change; the only actual change is in the creation of the GrapePulse object

👉 Imports

import sys
sys.path.insert(1, r'..')  # Path to the grape code; not needed if grape was installed via conda
import grape  # This is mine :)
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
import qutip as qt  # qutip is used only to easily define basic quantum operators and turn them into numpy ndarrays
import scipy.ndimage as ndi

plt.style.use("ggplot")

👉 Ladder operators

Setting up qubit ladder operators, $\hat{q}$ and $\hat{q}^\dagger$

q = qt.destroy(2)
qd = q.dag()
# Turn qutip objects to numpy
q = np.array(q)
qd = np.array(qd)

👉 Hamiltonians ⚡

In this example we’ll consider a very simple qubit, the Hamiltonian of which is given by:

\[\hat{H}_0 = \omega\hat{q}^\dagger \hat{q}\]

We use two microwave pulses to control the qubit, I and Q. The Hamiltonians describing the interaction between the qubit and the pulses are:

\[\hat{H}_I = (\hat{q} + \hat{q}^\dagger)\] \[\hat{H}_Q = (\hat{q} - \hat{q}^\dagger)\cdot i\]

The amplitude of the microwave pulses varies with time (this is how we control the system). The amplitude of the I pulse is given by $\epsilon_I(t)$ and of the Q by $\epsilon_Q(t)$. The total Hamiltonian of the system is the sum:

\[\hat{H} = \hat{H}_0 + \epsilon_I(t) \hat{H}_I + \epsilon_Q(t) \hat{H}_Q\]

We seek to find optimal $\epsilon_I (t)$ and $\epsilon_Q (t)$

# Constants
omega = 1
H_0 = omega*qd@q
H_I = qd + q
H_Q = (qd - q)*1j
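As a quick sanity check (not part of the original notebook), we can verify that all three operators are Hermitian, so the total Hamiltonian is Hermitian for any real drive amplitudes. The snippet rebuilds the operators with plain numpy so it is self-contained:

```python
import numpy as np

# Rebuild the operators with plain numpy so the snippet is self-contained
q = np.array([[0, 1], [0, 0]], dtype=complex)   # annihilation operator
qd = q.conj().T                                  # creation operator

omega = 1
H_0 = omega * qd @ q
H_I = qd + q
H_Q = (qd - q) * 1j

# Each term must be Hermitian so that H(t) is Hermitian for real drives
for name, H in [("H_0", H_0), ("H_I", H_I), ("H_Q", H_Q)]:
    assert np.allclose(H, H.conj().T), f"{name} is not Hermitian"
print("all Hermitian")
```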

👉 Time variables ⏰

T is the total time, Ns is the number of time steps, and times is an array of all the time steps

T     = 25
Ns    = 200
times = np.linspace(0.0, T, Ns)

👉 Initial and target states

Setting initial and target states. $\lvert\psi_{initial}\rangle\ =\ \lvert0\rangle$ and $\lvert\psi_{target}\rangle\ =\ \lvert1\rangle$

psi_initial = np.array(qt.basis(2, 0))
psi_target  = np.array(qt.basis(2, 1))
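For reference, the quantity GRAPE maximizes is the state-transfer fidelity $\lvert\langle\psi_{target}\vert\psi(T)\rangle\rvert^2$. A minimal sketch of that definition (the exact form used inside grape may differ); for the states above it starts at zero, since $\lvert0\rangle$ and $\lvert1\rangle$ are orthogonal:

```python
import numpy as np

psi_initial = np.array([[1], [0]], dtype=complex)  # |0>
psi_target = np.array([[0], [1]], dtype=complex)   # |1>

def fidelity(psi_a, psi_b):
    """State-transfer fidelity |<a|b>|^2."""
    return np.abs(psi_a.conj().T @ psi_b).item() ** 2

print(fidelity(psi_target, psi_initial))  # 0.0 -- the states are orthogonal
```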

👉 Initial pulses guess

We treat the drive amplitudes $\epsilon_I(t)$ and $\epsilon_Q(t)$ as piecewise-constant functions, with a step size of $\frac{T}{Ns}$ and a total of $Ns$ steps. This is why, in code, each drive amplitude is a numpy ndarray of length Ns.

This time we are going to be smarter about our initial guess. In example one we simply guessed a random wave, which is problematic for several reasons. The main problem with a random initial guess is that it is not at all smooth. We want an initial guess that is close to 0, fairly random, and somewhat smooth; this can be achieved by convolving a Gaussian window with a random array.

def gaussian(size, sigma, amp, graph=False):
    gaussian_window = np.zeros(size)
    for x in range(size):
        gaussian_window[x] = amp * np.exp(-((x - size / 2) ** 2) / sigma ** 2)
    if graph:
        plt.figure()
        plt.plot(times[0:size], gaussian_window)
    return gaussian_window
gaussian_window = gaussian(int(Ns / 10), Ns / 50, 1)

rand_amp_Q = 1/3000
rand_amp_I = 1/3000

conv_I = (ndi.convolve((np.random.random(Ns) - 0.5) *
                       2 * rand_amp_I, gaussian_window, mode='wrap'))
conv_Q = (ndi.convolve((np.random.random(Ns) - 0.5) *
                       2 * rand_amp_Q, gaussian_window, mode='wrap'))

e_I = conv_I
e_Q = conv_Q
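To verify that the convolution actually smooths the guess, one can compare the spectrum of the raw random array with that of the smoothed one; the high-frequency bins should be strongly suppressed. A self-contained sketch (it rebuilds the window and guess locally, with a fixed seed for reproducibility):

```python
import numpy as np
import scipy.ndimage as ndi

rng = np.random.default_rng(0)
Ns = 200
rand_amp = 1 / 3000
raw = (rng.random(Ns) - 0.5) * 2 * rand_amp

# Gaussian window of the same kind as above, centred at size / 2
size, sigma = Ns // 10, Ns / 50
x = np.arange(size)
window = np.exp(-((x - size / 2) ** 2) / sigma ** 2)

smooth = ndi.convolve(raw, window, mode='wrap')

# Compare the high-frequency content before and after smoothing
spec_raw = np.abs(np.fft.rfft(raw))
spec_smooth = np.abs(np.fft.rfft(smooth))
ratio = spec_smooth[-10:].sum() / spec_raw[-10:].sum()
print(ratio < 0.5)  # True: the top frequency bins are heavily damped
```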

👉 Constraints

We’ll now define the strength of the constraints (the Lagrange multipliers, $\lambda$’s). In this example we will only have soft constraints (as opposed to hard constraints such as amplitude and frequency cut-offs). We’ll only define constraints on the amplitude and the bandwidth, $\lambda_{amp}$ and $\lambda_{band}$.

lambda_amp = 0
lambda_band = 0.1
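To build intuition for what these multipliers do: soft constraints of this kind are commonly quadratic penalties added to the cost function, one term proportional to the pulse power and one to its rate of change. The exact form used inside GrapePulse may differ; this is only an illustrative sketch:

```python
import numpy as np

def soft_constraint_cost(pulses, lambda_amp, lambda_band):
    """Illustrative quadratic soft-constraint penalties.

    amplitude: lambda_amp * sum of e(t)^2              (keeps pulses weak)
    bandwidth: lambda_band * sum of step differences^2 (keeps pulses smooth)
    """
    amp_cost = lambda_amp * np.sum(pulses ** 2)
    band_cost = lambda_band * np.sum(np.diff(pulses, axis=1) ** 2)
    return amp_cost + band_cost

pulses = np.vstack([np.sin(np.linspace(0, np.pi, 200)),
                    np.cos(np.linspace(0, np.pi, 200))])
print(soft_constraint_cost(pulses, lambda_amp=0.0, lambda_band=0.1))
```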

We need to gather the multiple drive Hamiltonians into one variable, and likewise for the multiple drive amplitudes. We set a variable, drive_hamiltonians, which is a list of all the drive Hamiltonians, and a variable, pulses, which is a list of the drive amplitudes. Notice that the two lists need to be the same length and that the $i^{th}$ drive Hamiltonian in the list corresponds to the $i^{th}$ drive amplitude in the list.

drive_hamiltonians = [H_I, H_Q]
pulses    = np.array([e_I, e_Q])
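A quick shape check can catch a mismatch between the two lists early (the variables here are hypothetical stand-ins for the ones defined above):

```python
import numpy as np

Ns = 200
# Hypothetical stand-ins for the drive amplitudes and Hamiltonians above
e_I = np.zeros(Ns)
e_Q = np.zeros(Ns)
H_I = np.array([[0, 1], [1, 0]], dtype=complex)
H_Q = np.array([[0, -1j], [1j, 0]], dtype=complex)

drive_hamiltonians = [H_I, H_Q]
pulses = np.array([e_I, e_Q])

# The i-th Hamiltonian must pair with the i-th amplitude array
assert len(drive_hamiltonians) == pulses.shape[0]
assert pulses.shape == (len(drive_hamiltonians), Ns)
```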

👉 Creating GRAPE object 🍇

We now create the GrapePulse object, which contains all the data on the pulse and can be used for the optimization

grape_pulse = grape.GrapePulse(psi_initial, psi_target, T, Ns, H_0, drive_hamiltonians,
                               pulses, constraints=True, lambda_amp_lin=lambda_amp, lambda_band_lin=lambda_band)

👉 Optimizing 🛠

We now run the GRAPE optimization. We can choose to graph the probabilities over the duration of the pulse before and after the optimization

results = grape_pulse.optimize(graph_prob=True)

(figure: state probabilities over the pulse, before and after optimization)

We get back a dictionary with all the results we need: the optimized drive amplitudes, the final fidelity, the Hamiltonians (which are already known if you wrote this script, but useful if you want to save the results to a file), the Hilbert space dimensions, and some comments you can add. The dictionary has the form:

  • “pulses”: ndarray of shape (number of drive amplitudes$\ \times\ $number of time steps)
  • “fidelity”: float of the final fidelity
  • “Hamiltonians”:
    • “constant”: ndarray matrix of the constant Hamiltonian
    • “drives”: list of ndarray matrices of the drive Hamiltonians
  • “dimensions”: ndarray of the original Hilbert space dimensions
  • “comments”: comments you can add (again, only useful when you save the results to a file and want to understand what they mean)
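Since the dictionary is mainly useful when saved to a file, here is one possible way to serialize it with numpy. np.savez cannot store nested dicts, so the Hamiltonians are flattened into separate keys; all the values here are hypothetical stand-ins:

```python
import os
import tempfile
import numpy as np

# Hypothetical stand-in for the dictionary returned by optimize()
results = {
    "pulses": np.zeros((2, 200)),
    "fidelity": 0.999,
    "Hamiltonians": {"constant": np.eye(2), "drives": [np.eye(2), np.eye(2)]},
    "dimensions": np.array([2]),
    "comments": "pi pulse on a single qubit",
}

# Flatten the nested Hamiltonians dict into top-level keys
path = os.path.join(tempfile.gettempdir(), "grape_results.npz")
np.savez(path,
         pulses=results["pulses"],
         fidelity=results["fidelity"],
         H_constant=results["Hamiltonians"]["constant"],
         H_drives=np.array(results["Hamiltonians"]["drives"]),
         dimensions=results["dimensions"],
         comments=results["comments"])

loaded = np.load(path)
print(float(loaded["fidelity"]))  # 0.999
```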

👉 Graphing the results 📈

pulses = results["pulses"]

fig, axes = plt.subplots(2)
# --- Initial ---
axes[0].set_title("Initial pulse")
axes[0].plot(times, e_I)
axes[0].plot(times, e_Q)
axes[0].legend([r'$\epsilon_I$', r'$\epsilon_Q$'])
# --- Final ---
axes[1].set_title("Final pulse")
axes[1].plot(times,  pulses[0])
axes[1].plot(times,  pulses[1])
axes[1].legend([r'$\epsilon_I$', r'$\epsilon_Q$'])

(figure: initial and final pulses)

Now we get smooth results, as expected

👉 Checking final fidelity

One last thing we need to check is that the final fidelity is actually close to $1$; let’s check that now

print("A fidelity of", results["fidelity"], "was achieved by the optimization")
A fidelity of 0.9988314976817199 was achieved by the optimization