Adding Efrat's edits 2/4
Merge branch 'backends' of https://github.com/mawolf2023/cuda-quantum into backends
mawolf2023 committed Feb 4, 2025
2 parents 390213d + 3c66feb commit af50c0d
Showing 2 changed files with 5 additions and 5 deletions.
4 changes: 2 additions & 2 deletions docs/sphinx/using/backends/sims/photonics.rst
@@ -202,7 +202,7 @@ Hong-Ou-Mandel effect.

Executing Photonics Kernels
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-In order to execute a photonics kernel, you need to specific a photonics simulator backend like :code:`orca-photonics` used in the example below.
+In order to execute a photonics kernel, you need to specify a photonics simulator backend like :code:`orca-photonics`, used in the example below.
There are two ways to execute photonics kernels: :code:`sample` and :code:`get_state`.
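The relationship between the two execution modes can be illustrated without any photonics backend: :code:`get_state` conceptually returns the amplitudes of the state vector, while :code:`sample` draws shot counts from the probability distribution those amplitudes imply. A minimal NumPy sketch (the amplitudes below are hypothetical placeholders, not output of the CUDA-Q API):

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Hypothetical 4-amplitude state vector for illustration:
# an equal superposition of |00> and |11>.
state = np.array([1 / np.sqrt(2), 0, 0, 1 / np.sqrt(2)], dtype=complex)

# What `get_state` conceptually provides: the amplitudes themselves,
# from which outcome probabilities follow as |amplitude|^2.
probabilities = np.abs(state) ** 2

# What `sample` conceptually provides: shot counts drawn from those
# probabilities over a finite number of shots.
shots = 1000
outcomes = rng.choice(len(state), size=shots, p=probabilities)
counts = {format(i, "02b"): int((outcomes == i).sum()) for i in range(len(state))}
print(counts)  # only "00" and "11" appear; roughly 500 shots each
```

With infinitely many shots the sampled frequencies converge to the probabilities; with a finite shot count they fluctuate around them, which is why statevector access is useful for debugging small kernels.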


@@ -282,7 +282,7 @@ The :code:`get_state` command can be used to generate statistics about the quant
 # Compute the statevector of the kernel
 result = cudaq.get_state(kernel, qumode_count)
-print(np.array(result))k
+print(np.array(result))
.. parsed-literal::
6 changes: 3 additions & 3 deletions docs/sphinx/using/examples/multi_gpu_workflows.rst
@@ -28,7 +28,7 @@ You can run a state vector simulation using your CPU with the :code:`qpp-cpu` ba
{ 00:475 11:525 }
-As the number of qubits increases to even modest size, the CPU simulation will become impractically slow. By switching to the :code:`nvidia` backend, you can accelerate the same code on a single GPU and achieve a speedup of up to **2500x**. If you have a GPU available, this the default backend to ensure maximum productivity.
+As the number of qubits increases to even a modest size, the CPU simulation will become impractically slow. By switching to the :code:`nvidia` backend, you can accelerate the same code on a single GPU and achieve a speedup of up to **425x**. If you have a GPU available, this is the default backend to ensure maximum productivity.

.. literalinclude:: ../../snippets/python/using/examples/multi_gpu_workflows/multiple_targets.py
:language: python
@@ -69,7 +69,7 @@ Parallel execution over multiple QPUs (`mqpu`)
Batching Hamiltonian Terms
^^^^^^^^^^^^^^^^^^^^^^^^^^^

-Multiple GPUs can also come in handy for cases where applications might benefit from multiple QPUs running asynchronously. The `mqpu` backend uses multiple GPUs to simulate each QPU so you can test and accelerate quantum applications with parallelization.
+Multiple GPUs can also come in handy for cases where applications might benefit from multiple QPUs running in parallel. The `mqpu` backend uses multiple GPUs to simulate QPUs so you can accelerate quantum applications with parallelization.
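The batching pattern behind this section can be sketched in plain Python (a conceptual analogy using the standard library, not the CUDA-Q asynchronous-observe API; the term values and helper names are hypothetical): split the Hamiltonian terms into one batch per simulated QPU, evaluate the batches in parallel, then reduce the partial expectation values.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical per-term expectation values; with CUDA-Q these would come
# from asynchronous observe calls, one batch per GPU-backed QPU.
term_values = [0.25, -0.5, 0.125, 0.75, -0.125, 0.5]

num_qpus = 2  # number of simulated QPUs (one GPU each)

# Round-robin split of the Hamiltonian terms into one batch per QPU.
batches = [term_values[i::num_qpus] for i in range(num_qpus)]

def evaluate_batch(batch):
    # Stand-in for evaluating the expectation value of one batch of
    # Hamiltonian terms on a single simulated QPU.
    return sum(batch)

# Launch the batch evaluations in parallel and reduce the partial sums.
with ThreadPoolExecutor(max_workers=num_qpus) as pool:
    partial_sums = list(pool.map(evaluate_batch, batches))

total_expectation = sum(partial_sums)
print(total_expectation)  # 1.0
```

Because the Hamiltonian expectation value is a sum over terms, the reduction at the end is exact; the parallelism only changes where each batch is evaluated, not the result.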


.. image:: images/mqpu.png
@@ -152,7 +152,7 @@ Multi-QPU + Other Backends (`remote-mqpu`)
-------------------------------------------


-The `mqpu` backend can be extended so that each parallel simulated QPU can be simulated with backends other than :code:`nvidia`. This provides a way to simulate larger scale circuits and execute parallel algorithms. This accomplished by launching remotes servers which each simulated a QPU.
+The `mqpu` backend can be extended so that each parallel simulated QPU can run a backend other than :code:`nvidia`. This provides a way to simulate larger-scale circuits and execute parallel algorithms. This is accomplished by launching remote servers, each of which simulates a QPU.
The code example below demonstrates this using the :code:`tensornet-mps` backend, which allows sampling of a 40-qubit circuit too large for state vector simulation. In this case, the target is specified as :code:`remote-mqpu`, while an additional :code:`backend` is specified for the simulator used for each QPU.

The default approach uses one GPU per QPU and can both launch and close each server automatically. This is accomplished by specifying :code:`auto_launch` and :code:`url` within :code:`cudaq.set_target`. Running the script below will then sample the 40-qubit circuit using two QPUs, each running :code:`tensornet-mps`.
