Wrapping of legacy device automatically in various device creation/qnode/execute functions #6046

Merged: 117 commits from ad/facade-wrapper into master, Aug 21, 2024. The diff below shows changes from 22 of the 117 commits.
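
In practice, the change means a legacy-API device is wrapped in a `LegacyDeviceFacade` wherever it enters the workflow (device creation, QNode construction, `qml.execute`), so everything downstream can assume the new device API. A minimal sketch of the intended behavior, assuming `default.mixed` still implements the legacy API at this point:

```python
import pennylane as qml

# Legacy-API devices are wrapped automatically at creation, so the rest of
# the workflow (QNode, qml.execute, ...) only ever sees the new Device API.
dev = qml.device("default.mixed", wires=1)

@qml.qnode(dev)
def circuit(x):
    qml.RX(x, wires=0)
    return qml.expval(qml.PauliZ(0))

print(circuit(0.5))  # executes through the facade transparently
```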
Commits (117)
ab601c8
adding legacydevicefacade class
albi3ro Apr 1, 2024
b6daa90
more tests
albi3ro Apr 1, 2024
7b20c99
adding to interfaces tests
albi3ro Apr 2, 2024
71db075
finish up torch tests
albi3ro Apr 2, 2024
2e17b76
merge master
albi3ro Jun 4, 2024
583f526
fixing up interface tests
albi3ro Jun 4, 2024
34e0856
starting on testing
albi3ro Jun 10, 2024
6f12302
starting on testing
albi3ro Jun 11, 2024
d32f7f9
Merge branch 'master' into legacy-device-facade-class
albi3ro Jul 2, 2024
900fa35
fixing up tests
albi3ro Jul 2, 2024
0771623
adding some more test coverage [skip-ci]
albi3ro Jul 2, 2024
4dbd068
adding some more tests and coverage
albi3ro Jul 4, 2024
bb6612e
more tests and some docs
albi3ro Jul 8, 2024
f998016
Merge branch 'master' into legacy-device-facade-class
albi3ro Jul 9, 2024
cf46f57
Merge branch 'master' into legacy-device-facade-class
Shiro-Raven Jul 24, 2024
f654249
pass along postselect mode
albi3ro Jul 24, 2024
4e9cc8d
Merge branch 'master' into legacy-device-facade-class
Shiro-Raven Jul 24, 2024
9b7605a
Update pennylane/devices/legacy_facade.py
albi3ro Jul 25, 2024
ebb8f44
resolving merges
albi3ro Jul 25, 2024
30029db
Merge branch 'legacy-device-facade-class' of https://github.com/Penny…
albi3ro Jul 25, 2024
0d11cd2
revert merge problems
albi3ro Jul 26, 2024
0021a54
Update pennylane/devices/legacy_facade.py
albi3ro Jul 26, 2024
7f2e199
Update tests/devices/test_legacy_facade.py
albi3ro Jul 26, 2024
0c7f247
Merge branch 'master' into legacy-device-facade-class
albi3ro Jul 26, 2024
64e59f7
deprecation of backprop device switching
albi3ro Jul 26, 2024
a3d0032
Update pennylane/devices/legacy_facade.py
albi3ro Jul 26, 2024
2c2a764
added facade wrappers, some tests still failing
Shiro-Raven Jul 26, 2024
d71419f
Merge branch 'legacy-device-facade-class' into ad/facade-wrapper
Shiro-Raven Jul 26, 2024
f6b7a52
Merge branch 'master' into ad/facade-wrapper
Shiro-Raven Jul 26, 2024
c49a816
more test fixes and renaming of dq2 test file
Shiro-Raven Jul 26, 2024
575c5c3
fix remaining tests
Shiro-Raven Jul 26, 2024
dbcb7a9
Merge branch 'master' into ad/facade-wrapper
Shiro-Raven Jul 26, 2024
067efa2
Update pennylane/workflow/qnode.py
Shiro-Raven Jul 26, 2024
d006c87
Update pennylane/workflow/qnode.py
Shiro-Raven Jul 26, 2024
78b47d0
Update pennylane/workflow/qnode.py
Shiro-Raven Jul 26, 2024
ec72210
changelog update
Shiro-Raven Jul 26, 2024
1b85542
more test fixes and merge mashups fixes
Shiro-Raven Jul 26, 2024
68bf9e4
Merge branch 'master' into ad/facade-wrapper
Shiro-Raven Jul 26, 2024
10bf1e1
fix snapshot tests
Shiro-Raven Jul 29, 2024
18b855f
fix failing test_gates test
Shiro-Raven Jul 29, 2024
b5b15d8
Merge branch 'master' into ad/facade-wrapper
Shiro-Raven Jul 29, 2024
7e95b50
fixed more tests and added metaclass for legacy device API for facade…
Shiro-Raven Jul 29, 2024
5e5dc4c
Merge branch 'master' into ad/facade-wrapper
Shiro-Raven Jul 29, 2024
d3fc25d
fix to `TransformedDevice`
Shiro-Raven Jul 29, 2024
e0e7079
Merge branch 'master' into ad/facade-wrapper
Shiro-Raven Jul 29, 2024
cb841a1
fix jacobian bug
Shiro-Raven Jul 29, 2024
b94bbb7
more test fixes
Shiro-Raven Jul 30, 2024
6762273
more fixes
Shiro-Raven Jul 30, 2024
1fbd7fe
Merge branch 'master' into ad/facade-wrapper
Shiro-Raven Jul 30, 2024
3d198e8
fix for tensorflow test
Shiro-Raven Jul 30, 2024
9d3cc5d
Merge branch 'master' into ad/facade-wrapper
Shiro-Raven Jul 30, 2024
28cacd4
more test fixes
Shiro-Raven Jul 30, 2024
c01f3d8
jax tests fixes
Shiro-Raven Jul 30, 2024
bd43ed2
more test fixes
Shiro-Raven Jul 31, 2024
4869235
Merge branch 'master' into ad/facade-wrapper
Shiro-Raven Jul 31, 2024
2375c4a
more test fixes
Shiro-Raven Jul 31, 2024
f32c10a
Merge branch 'master' into ad/facade-wrapper
Shiro-Raven Jul 31, 2024
af525b5
revert error message
Shiro-Raven Jul 31, 2024
2a0fced
Merge branch 'master' into ad/facade-wrapper
Shiro-Raven Aug 1, 2024
3d0cbba
more test fixes
Shiro-Raven Aug 1, 2024
f4b4142
more fixes
Shiro-Raven Aug 1, 2024
b7c7033
Merge branch 'master' into ad/facade-wrapper
Shiro-Raven Aug 1, 2024
b8d2125
more tests
Shiro-Raven Aug 1, 2024
04ddf49
weeeeeee
Shiro-Raven Aug 1, 2024
4833934
Merge branch 'master' into ad/facade-wrapper
Shiro-Raven Aug 1, 2024
d9712b7
Update pennylane/workflow/jacobian_products.py
Shiro-Raven Aug 2, 2024
fb43e42
Merge branch 'master' into ad/facade-wrapper
Shiro-Raven Aug 2, 2024
4a9fba1
fix minor logging test fail
Shiro-Raven Aug 1, 2024
ebdd00a
fix parameterized evolution test
Shiro-Raven Aug 2, 2024
edca13d
fixes
Shiro-Raven Aug 2, 2024
441f0fc
fix
Shiro-Raven Aug 2, 2024
57e8800
hopefully last fix
Shiro-Raven Aug 2, 2024
f812961
Merge branch 'master' into ad/facade-wrapper
Shiro-Raven Aug 5, 2024
33e4508
revert unnecessary changes
Shiro-Raven Aug 5, 2024
d18f695
delete unneeded attribute from QNode class
Shiro-Raven Aug 5, 2024
7016cd5
codecov fixes
Shiro-Raven Aug 5, 2024
f1a39cb
Merge branch 'master' into ad/facade-wrapper
Shiro-Raven Aug 5, 2024
4b136f3
type hints fix
Shiro-Raven Aug 5, 2024
4fc5671
minor redundancy removal
Shiro-Raven Aug 5, 2024
0aa4a20
Merge branch 'master' into ad/facade-wrapper
Shiro-Raven Aug 5, 2024
565ced6
add seed to TF test
Shiro-Raven Aug 5, 2024
b0f0772
revert default gradient method and remove `gradient_kwargs` from jaco…
Shiro-Raven Aug 5, 2024
ee6c610
remove erroneous xfails
Shiro-Raven Aug 5, 2024
90820a9
fix in facade tests
Shiro-Raven Aug 5, 2024
971b92a
no adjoint for non-expvals
albi3ro Aug 6, 2024
7e1c605
[no ci] bump nightly version
ringo-but-quantum Aug 6, 2024
e3cdff1
Fix `qml.center` with linear combinations (#6049)
dwierichs Aug 6, 2024
a281d80
revert repr of QNode to old logic
Shiro-Raven Aug 7, 2024
e2d0969
Merge branch 'master' into ad/facade-wrapper
Shiro-Raven Aug 7, 2024
3b07de5
Merge branch 'master' into ad/facade-wrapper
Shiro-Raven Aug 8, 2024
ca97469
fix tf legacy drawing test
Shiro-Raven Aug 8, 2024
954689f
generalize adjoint request in facade
Shiro-Raven Aug 8, 2024
5d0253b
Update tests/test_debugging.py
Shiro-Raven Aug 8, 2024
41f115f
fix recursion limit exceeded problem
Shiro-Raven Aug 8, 2024
e501e71
Revert "fix recursion limit exceeded problem"
Shiro-Raven Aug 8, 2024
9db314c
Merge branch 'master' into ad/facade-wrapper
Shiro-Raven Aug 8, 2024
fb35f1d
change adjoint ops definition
albi3ro Aug 9, 2024
3956ed4
adjoint is never best
albi3ro Aug 12, 2024
01a9ae9
adjoint allows non-trainable qubit unitary
albi3ro Aug 12, 2024
0b662d3
oops, is_trainable is in operation
albi3ro Aug 12, 2024
5879c18
ignore observable validation
Shiro-Raven Aug 12, 2024
8b91e04
address codecov missed lines
Shiro-Raven Aug 12, 2024
aa8e98b
address final feedback concerns and add codecov no cover
Shiro-Raven Aug 13, 2024
631a291
Merge branch 'master' into ad/facade-wrapper
Shiro-Raven Aug 13, 2024
a07dfff
Merge branch 'master' into ad/facade-wrapper
Shiro-Raven Aug 13, 2024
54d97d9
address codecov misses
Shiro-Raven Aug 14, 2024
e1cfde3
Merge branch 'master' into ad/facade-wrapper
Shiro-Raven Aug 14, 2024
8720103
more codecov fixes and minor code movement
Shiro-Raven Aug 14, 2024
3df7558
Merge branch 'master' into ad/facade-wrapper
Shiro-Raven Aug 14, 2024
7a5085c
remove __new__ method from facade class, added error for re-wrapping …
Shiro-Raven Aug 15, 2024
8f75e51
Merge branch 'master' into ad/facade-wrapper
Shiro-Raven Aug 15, 2024
7ed7a0b
add copy operations to facade class for catalyst use case
Shiro-Raven Aug 19, 2024
030715d
Merge branch 'master' into ad/facade-wrapper
Shiro-Raven Aug 19, 2024
b99804c
trigger CI
Shiro-Raven Aug 19, 2024
b59bb38
Merge branch 'master' into ad/facade-wrapper
Shiro-Raven Aug 20, 2024
d7e4438
Merge branch 'master' into ad/facade-wrapper
Shiro-Raven Aug 20, 2024
3a97f79
Merge branch 'master' into ad/facade-wrapper
Shiro-Raven Aug 21, 2024
Files changed
45 changes: 23 additions & 22 deletions pennylane/devices/legacy_facade.py
@@ -22,10 +22,9 @@
from dataclasses import replace

import pennylane as qml
from pennylane.measurements import Shots
from pennylane.measurements import MidMeasureMP, Shots
from pennylane.transforms.core.transform_program import TransformProgram

from .default_qubit import adjoint_observables, adjoint_ops
from .device_api import Device
from .execution_config import DefaultExecutionConfig
from .modifiers import single_tape_support
@@ -34,10 +33,16 @@
no_sampling,
validate_adjoint_trainable_params,
validate_measurements,
validate_observables,
)


def _requests_adjoint(execution_config):
return execution_config.gradient_method == "adjoint" or (
execution_config.gradient_method == "device"
and execution_config.gradient_keyword_arguments.get("method", None) == "adjoint_jacobian"
)


@contextmanager
def _set_shots(device, shots):
"""Context manager to temporarily change the shots
@@ -98,6 +103,15 @@ def legacy_device_batch_transform(tape, device):
return _set_shots(device, tape.shots)(device.batch_transform)(tape)


def adjoint_ops(op: qml.operation.Operator) -> bool:
"""Specify whether or not an Operator is supported by adjoint differentiation."""
if isinstance(op, qml.QubitUnitary) and not qml.operation.is_trainable(op):
return True
return not isinstance(op, MidMeasureMP) and (
op.num_params == 0 or (op.num_params == 1 and op.has_generator)
)


def _add_adjoint_transforms(program: TransformProgram, name="adjoint"):
"""Add the adjoint specific transforms to the transform program."""
program.add_transform(no_sampling, name=name)
@@ -106,7 +120,6 @@ def _add_adjoint_transforms(program: TransformProgram, name="adjoint"):
stopping_condition=adjoint_ops,
name=name,
)
program.add_transform(validate_observables, adjoint_observables, name=name)

def accepted_adjoint_measurements(mp):
return isinstance(mp, qml.measurements.ExpectationMP)
@@ -144,11 +157,11 @@ class LegacyDeviceFacade(Device):

"""

def __new__(cls, device: "qml.devices.LegacyDevice", *args, **kwargs):
return device if isinstance(device, cls) else super().__new__(cls)

# pylint: disable=super-init-not-called
def __init__(self, device: "qml.devices.LegacyDevice"):
if isinstance(device, type(self)):
raise RuntimeError("An already-facaded device can not be wrapped in a facade again.")

if not isinstance(device, qml.devices.LegacyDevice):
raise ValueError(
"The LegacyDeviceFacade only accepts a device of type qml.devices.LegacyDevice."
@@ -208,11 +221,7 @@ def preprocess(self, execution_config=DefaultExecutionConfig):
program.add_transform(legacy_device_batch_transform, device=self._device)
program.add_transform(legacy_device_expand_fn, device=self._device)

if execution_config.gradient_method == "adjoint" or (
execution_config.gradient_method == "device"
and execution_config.gradient_keyword_arguments.get("method", None)
== "adjoint_jacobian"
):
if _requests_adjoint(execution_config):
_add_adjoint_transforms(program, name=f"{self.name} + adjoint")

if self._device.capabilities().get("supports_mid_measure", False):
@@ -249,9 +258,6 @@ def _setup_adjoint_config(self, execution_config):
return replace(execution_config, **updated_values)

def _setup_device_config(self, execution_config):
if execution_config.gradient_keyword_arguments.get("method", None) == "adjoint_jacobian":
return self._setup_adjoint_config(execution_config)

tape = qml.tape.QuantumScript([], [])

if not self._validate_device_method(tape):
@@ -276,13 +282,9 @@ def _setup_execution_config(self, execution_config):
config = replace(execution_config, gradient_method="backprop")
return self._setup_backprop_config(config)

if self._validate_adjoint_method(tape):
config = replace(execution_config, gradient_method="adjoint")
return self._setup_adjoint_config(config)

if execution_config.gradient_method == "backprop":
return self._setup_backprop_config(execution_config)
if execution_config.gradient_method == "adjoint":
if _requests_adjoint(execution_config):
return self._setup_adjoint_config(execution_config)
if execution_config.gradient_method == "device":
return self._setup_device_config(execution_config)
@@ -295,14 +297,13 @@ def supports_derivatives(self, execution_config=None, circuit=None) -> bool:
if execution_config is None or execution_config.gradient_method == "best":
validation_methods = (
self._validate_backprop_method,
self._validate_adjoint_method,
self._validate_device_method,
)
return any(validate(circuit) for validate in validation_methods)

if execution_config.gradient_method == "backprop":
return self._validate_backprop_method(circuit)
if execution_config.gradient_method == "adjoint":
if _requests_adjoint(execution_config):
return self._validate_adjoint_method(circuit)
if execution_config.gradient_method == "device":
return self._validate_device_method(circuit)
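
Two details of this file are worth calling out: the new `_requests_adjoint` helper recognizes both spellings of an adjoint request, and the local `adjoint_ops` stopping condition now admits non-trainable `QubitUnitary` operations. A short illustration, using names from the diff (a sketch, not a full API demo):

```python
import pennylane as qml
from pennylane.devices import ExecutionConfig
from pennylane.devices.legacy_facade import adjoint_ops

# Both of these configurations are treated as adjoint requests by preprocess():
direct = ExecutionConfig(gradient_method="adjoint")
via_device = ExecutionConfig(
    gradient_method="device",
    gradient_keyword_arguments={"method": "adjoint_jacobian"},
)

# The relaxed stopping condition: a QubitUnitary with non-trainable parameters
# is now accepted for adjoint differentiation.
U = qml.QubitUnitary([[1, 0], [0, 1]], wires=0)
print(adjoint_ops(U))  # True, since U is not trainable
```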
12 changes: 4 additions & 8 deletions pennylane/optimize/qnspsa.py
@@ -442,15 +442,11 @@ def _apply_blocking(self, cost, args, kwargs, params_next):
cost.construct(params_next, kwargs)
tape_loss_next = cost.tape.copy(copy_operations=True)

if isinstance(cost.device, qml.devices.Device):
program, _ = cost.device.preprocess()

loss_curr, loss_next = qml.execute(
[tape_loss_curr, tape_loss_next], cost.device, None, transform_program=program
)
program, _ = cost.device.preprocess()

else:
loss_curr, loss_next = qml.execute([tape_loss_curr, tape_loss_next], cost.device, None)
loss_curr, loss_next = qml.execute(
[tape_loss_curr, tape_loss_next], cost.device, None, transform_program=program
)

# self.k has been updated earlier
ind = (self.k - 2) % self.last_n_steps.size
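
With the facade guaranteeing the new API, `_apply_blocking` no longer branches on device type; the device's transform program is always applied. A hedged usage sketch of the optimizer this touches:

```python
import pennylane as qml
from pennylane import numpy as np

dev = qml.device("default.qubit", wires=2)

@qml.qnode(dev)
def cost(params):
    qml.RX(params[0], wires=0)
    qml.RY(params[1], wires=1)
    return qml.expval(qml.PauliZ(0))

# blocking=True exercises the _apply_blocking path simplified above
opt = qml.QNSPSAOptimizer(stepsize=5e-2, blocking=True)
params = np.array([0.4, 0.8], requires_grad=True)
params, loss = opt.step_and_cost(cost, params)
print(loss)
```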
8 changes: 2 additions & 6 deletions pennylane/optimize/spsa.py
@@ -265,12 +265,8 @@ def compute_grad(self, objective_fn, args, kwargs):
try:
# pylint: disable=protected-access
dev_shots = objective_fn.device.shots
if isinstance(dev_shots, Shots):
shots = dev_shots if dev_shots.has_partitioned_shots else Shots(None)
elif objective_fn.device.shot_vector is not None:
shots = Shots(objective_fn.device._raw_shot_sequence) # pragma: no cover
else:
shots = Shots(None)

shots = dev_shots if dev_shots.has_partitioned_shots else Shots(None)

if np.prod(objective_fn.func(*args, **kwargs).shape(objective_fn.device, shots)) > 1:
raise ValueError(
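
The simplification relies on the facade exposing shots uniformly: `device.shots` is now always a `Shots` container, so the legacy `shot_vector` branches can go. A quick illustration of the container's behavior:

```python
from pennylane.measurements import Shots

print(Shots(100).has_partitioned_shots)           # False: a single shot value
print(Shots((10, 10, 50)).has_partitioned_shots)  # True: a shot vector
print(Shots(None).total_shots)                    # None: analytic execution
```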
4 changes: 1 addition & 3 deletions pennylane/transforms/core/transform_dispatcher.py
@@ -306,9 +306,7 @@ def original_device(self):
"""Return the original device."""
return self._original_device

new_dev = TransformedDevice(original_device, self._transform)

return new_dev
return TransformedDevice(original_device, self._transform)

def _batch_transform(self, original_batch, targs, tkwargs):
"""Apply the transform on a batch of tapes."""
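
The `TransformedDevice` returned here keeps a handle to the device it wraps via the `original_device` property shown above. A sketch of the dispatch behavior, assuming transform-over-device dispatch applies to `cancel_inverses`:

```python
import pennylane as qml

dev = qml.device("default.qubit", wires=1)

# Dispatching a transform over a device yields a TransformedDevice that
# remembers the original device it wraps.
transformed = qml.transforms.cancel_inverses(dev)
print(transformed.original_device is dev)  # True
```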
10 changes: 2 additions & 8 deletions pennylane/workflow/construct_batch.py
@@ -65,14 +65,8 @@ def _get_full_transform_program(qnode: QNode) -> "qml.transforms.core.TransformProgram":
**qnode.gradient_kwargs,
)

if isinstance(qnode.device, qml.devices.Device):
config = _make_execution_config(qnode, qnode.gradient_fn)
return program + qnode.device.preprocess(config)[0]

program.add_transform(qml.transform(qnode.device.batch_transform))
program.add_transform(expand_fn_transform(qnode.device.expand_fn))

return program
config = _make_execution_config(qnode, qnode.gradient_fn)
return program + qnode.device.preprocess(config)[0]


def get_transform_program(qnode: "QNode", level=None) -> "qml.transforms.core.TransformProgram":
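
Since every device now implements `preprocess`, `_get_full_transform_program` can unconditionally append the device's program, legacy or not. A hedged sketch of inspecting the result (assuming `get_transform_program` is exposed under `qml.workflow`):

```python
import pennylane as qml

dev = qml.device("default.mixed", wires=2)  # legacy device, auto-wrapped

@qml.qnode(dev)
def circuit(x):
    qml.RX(x, wires=0)
    return qml.expval(qml.PauliZ(0))

# The program now ends with the facade's preprocessing transforms rather than
# the old batch_transform/expand_fn pair.
print(qml.workflow.get_transform_program(circuit))
```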
26 changes: 9 additions & 17 deletions pennylane/workflow/execution.py
@@ -663,7 +663,7 @@ def inner_execute_with_empty_jac(tapes, **_):
device_vjp
and getattr(device, "short_name", "") in ("lightning.gpu", "lightning.kokkos")
and interface in jpc_interfaces
):
): # pragma: no cover
if INTERFACE_MAP[interface] == "jax" and "use_device_state" in gradient_kwargs:
gradient_kwargs["use_device_state"] = False

@@ -781,26 +781,18 @@ def _make_transform_programs(
):
"""helper function to make the transform programs."""

if isinstance(device, qml.devices.Device):

# If gradient_fn is a gradient transform, device preprocessing should happen in
# inner execute (inside the ml boundary).
if is_gradient_transform:
if inner_transform is None:
inner_transform = device.preprocess(config)[0]
if transform_program is None:
transform_program = qml.transforms.core.TransformProgram()
else:
if inner_transform is None:
inner_transform = qml.transforms.core.TransformProgram()
if transform_program is None:
transform_program = device.preprocess(config)[0]

else:
# If gradient_fn is a gradient transform, device preprocessing should happen in
# inner execute (inside the ml boundary).
if is_gradient_transform:
if inner_transform is None:
inner_transform = device.preprocess(config)[0]
if transform_program is None:
transform_program = qml.transforms.core.TransformProgram()
else:
if inner_transform is None:
inner_transform = qml.transforms.core.TransformProgram()
if transform_program is None:
transform_program = device.preprocess(config)[0]

return transform_program, inner_transform

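
What survives is the placement rule itself: when the gradient method is a gradient transform, device preprocessing must run in the inner execute (inside the ML boundary); otherwise it runs once, outside. A condensed restatement of that rule as a sketch, not the shipped helper:

```python
import pennylane as qml

def _split_programs(device, config, is_gradient_transform):
    """Sketch: where device preprocessing lands relative to the ML boundary.

    Returns (outer_program, inner_program)."""
    empty = qml.transforms.core.TransformProgram()
    if is_gradient_transform:
        # gradient transforms differentiate raw tapes, so device preprocessing
        # happens in the inner execute, inside the ML boundary
        return empty, device.preprocess(config)[0]
    # otherwise preprocessing runs once, up front, outside the ML boundary
    return device.preprocess(config)[0], empty
```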
51 changes: 28 additions & 23 deletions pennylane/workflow/qnode.py
@@ -20,10 +20,13 @@
import inspect
import logging
import warnings
from collections.abc import Sequence
from typing import Optional
from collections.abc import Callable, Sequence
from typing import Any, Literal, Optional, Union, get_args

from cachetools import Cache

import pennylane as qml
from pennylane import Device
from pennylane.debugging import pldb_device_manager
from pennylane.logging import debug_logger
from pennylane.measurements import CountsMP, MidMeasureMP
Expand All @@ -47,6 +50,8 @@
"spsa",
]

SupportedDeviceAPIs = Union[Device, "qml.devices.Device"]


def _convert_to_interface(res, interface):
"""
@@ -454,19 +459,19 @@ def circuit_unpacking(x):

def __init__(
self,
func,
device: "qml.devices.Device",
interface="auto",
diff_method="best",
expansion_strategy=None,
max_expansion=None,
grad_on_execution="best",
cache="auto",
cachesize=10000,
max_diff=1,
device_vjp=False,
postselect_mode=None,
mcm_method=None,
func: Callable,
device: SupportedDeviceAPIs,
interface: SupportedInterfaceUserInput = "auto",
diff_method: Union[TransformDispatcher, SupportedDiffMethods] = "best",
expansion_strategy: Literal[None, "device", "gradient"] = None,
max_expansion: Optional[int] = None,
grad_on_execution: Literal[True, False, "best"] = "best",
cache: Union[Cache, Literal["auto", True, False]] = "auto",
cachesize: int = 10000,
max_diff: int = 1,
device_vjp: Union[None, bool] = False,
postselect_mode: Literal[None, "hw-like", "fill-shots"] = None,
mcm_method: Literal[None, "deferred", "one-shot", "tree-traversal"] = None,
**gradient_kwargs,
):
# Moving it here since the old default value is checked on debugging
@@ -508,9 +513,6 @@ def __init__(
gradient_kwargs,
)

if not isinstance(device, qml.devices.Device):
device = qml.devices.LegacyDeviceFacade(device)

if interface not in SUPPORTED_INTERFACES:
raise qml.QuantumFunctionError(
f"Unknown interface {interface}. Interface must be "
Expand All @@ -522,6 +524,9 @@ def __init__(
"Invalid device. Device must be a valid PennyLane device."
)

if not isinstance(device, qml.devices.Device):
device = qml.devices.LegacyDeviceFacade(device)

if "shots" in inspect.signature(func).parameters:
warnings.warn(
"Detected 'shots' as an argument to the given quantum function. "
@@ -668,7 +673,7 @@ def _update_gradient_fn(self, shots=None, tape: Optional["qml.tape.QuantumTape"]
@staticmethod
@debug_logger
def get_gradient_fn(
device: Union[Device, "qml.devices.Device"],
device: SupportedDeviceAPIs,
interface,
diff_method: Union[TransformDispatcher, SupportedDiffMethods] = "best",
tape: Optional["qml.tape.QuantumTape"] = None,
@@ -734,13 +739,13 @@ def get_gradient_fn(
@staticmethod
@debug_logger
def get_best_method(
device: Union[Device, "qml.devices.Device"],
interface,
device: SupportedDeviceAPIs,
interface: SupportedInterfaceUserInput,
tape: Optional["qml.tape.QuantumTape"] = None,
) -> tuple[
Union[TransformDispatcher, Literal["device", "backprop", "parameter-shift", "finite-diff"]],
dict[str, Any],
Union[Device, "qml.devices.Device"],
SupportedDeviceAPIs,
]:
"""Returns the 'best' differentiation method
for a particular device and interface combination.
@@ -782,7 +787,7 @@ def get_best_method(

@staticmethod
@debug_logger
def best_method_str(device: Union[Device, "qml.devices.Device"], interface) -> str:
def best_method_str(device: SupportedDeviceAPIs, interface: SupportedInterfaceUserInput) -> str:
"""Similar to :meth:`~.get_best_method`, except return the
'best' differentiation method in human-readable format.

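
Beyond the full type hints, the behavioral change here is that the QNode normalizes its device after validation: an old-API device is wrapped in the facade, so the rest of the workflow can rely on `qml.devices.Device`. A hedged sketch of what this looks like from user code:

```python
import pennylane as qml

legacy_dev = qml.device("default.mixed", wires=1)

@qml.qnode(legacy_dev, diff_method="parameter-shift", cache="auto")
def circuit(x):
    qml.RX(x, wires=0)
    return qml.expval(qml.PauliZ(0))

# Whatever was passed in, the QNode holds a new-API device.
assert isinstance(circuit.device, qml.devices.Device)
print(circuit(0.5))
```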
15 changes: 12 additions & 3 deletions tests/devices/test_legacy_facade.py
@@ -53,8 +53,16 @@ def expval(self, observable, wires, par):
return 0.0


def test_double_facade_raises_error():
"""Test that a RuntimeError is raised if a facaded device is passed to constructor"""
dev = qml.device("default.mixed", wires=1)

with pytest.raises(RuntimeError, match="already-facaded device can not be wrapped"):
qml.devices.LegacyDeviceFacade(dev)


def test_error_if_not_legacy_device():
"""Test that a ValueError is raiuised if the target is not a legacy device."""
"""Test that a ValueError is raised if the target is not a legacy device."""

target = qml.devices.DefaultQubit()
with pytest.raises(ValueError, match="The LegacyDeviceFacade only accepts"):
@@ -282,10 +290,11 @@ def test_no_derivatives_case(self):
with pytest.raises(qml.DeviceError):
dev.preprocess(ExecutionConfig(gradient_method="backprop"))

@pytest.mark.parametrize("gradient_method", ("best", "adjoint"))
def test_adjoint_support(self, gradient_method):
def test_adjoint_support(self):
"""Test that the facade can handle devices that support adjoint."""

gradient_method = "adjoint"

# pylint: disable=unnecessary-lambda-assignment
class AdjointDev(DummyDevice):
"""A dummy device that supports adjoint diff"""
(changed file: name not captured)
@@ -157,10 +157,10 @@ def circuit(p1, p2=y, **kwargs):
qml.RY(p2[0] * p2[1], wires=1)
qml.RX(kwargs["p3"], wires=0)
qml.CNOT(wires=[0, 1])
return qml.state()
return qml.expval(qml.PauliZ(0)), qml.expval(qml.PauliZ(1))

result = qml.draw(circuit)(p1=x, p3=z)
expected = "0: ──RX(0.10)──RX(0.40)─╭●─┤ State\n1: ──RY(0.06)───────────╰X─┤ State"
expected = "0: ──RX(0.10)──RX(0.40)─╭●─┤ <Z>\n1: ──RY(0.06)───────────╰X─┤ <Z>"
assert result == expected

def test_jacobian(self, dev_name, diff_method, grad_on_execution, tol, interface):
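
This last visible hunk updates a drawing test: the circuit now returns expectation values instead of `qml.state()`, consistent with the facade's adjoint pipeline accepting only expectation measurements (see `accepted_adjoint_measurements` above). A sketch of the new expected drawing (output spacing approximate):

```python
import pennylane as qml

dev = qml.device("default.qubit", wires=2)

@qml.qnode(dev)
def circuit(x):
    qml.RX(x, wires=0)
    qml.CNOT(wires=[0, 1])
    return qml.expval(qml.PauliZ(0)), qml.expval(qml.PauliZ(1))

print(qml.draw(circuit)(0.1))
# 0: ──RX(0.10)─╭●─┤ <Z>
# 1: ───────────╰X─┤ <Z>
```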