skqulacs.qnn.regressor module#

class skqulacs.qnn.regressor.QNNRegressor(circuit: skqulacs.circuit.circuit.LearningCircuit, solver: skqulacs.qnn.solver.Solver, cost: typing_extensions.Literal[mse] = 'mse', do_x_scale: bool = True, do_y_scale: bool = True, x_norm_range: float = 1.0, y_norm_range: float = 0.7, observables_str: typing.List[str] = <factory>, n_outputs: int = 1)[source]#

Bases: object

Class to solve regression problems with quantum neural networks. The output is taken as the expectation value of the Pauli Z operator acting on the first qubit, i.e., the output is <Z_0>.

Parameters
  • circuit – Circuit to use in the learning.

  • solver – Solver to use (Nelder-Mead is not recommended).

  • n_outputs – Dimensionality of each output sample.

  • cost – Cost function. Only MSE is supported for now; the squared error is computed after normalization (see the sketch after this parameter list).

  • do_x_scale – Whether to scale x.

  • do_y_scale – Whether to scale y.

  • x_norm_range – Normalize x into [-x_norm_range, +x_norm_range].

  • y_norm_range – Normalize y into [-y_norm_range, +y_norm_range]. Setting y_norm_range to 0.7 improves performance.
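
The scaling and cost described above can be read as follows (an illustrative sketch of assumed behaviour, not the exact implementation): inputs and targets are min-max scaled into [-x_norm_range, +x_norm_range] and [-y_norm_range, +y_norm_range] respectively, and the MSE cost is then evaluated between the scaled targets and the scaled circuit outputs.

>>> import numpy as np
>>> from sklearn.preprocessing import MinMaxScaler
>>> # Scale targets into [-y_norm_range, +y_norm_range] before computing the cost (assumed behaviour).
>>> y_raw = np.sin(np.pi * np.linspace(-1.0, 1.0, 50)).reshape(-1, 1)
>>> y_scaler = MinMaxScaler(feature_range=(-0.7, 0.7))       # y_norm_range = 0.7
>>> y_scaled = y_scaler.fit_transform(y_raw)
>>> y_pred_scaled = np.zeros_like(y_scaled)                  # stand-in for the circuit's scaled output
>>> mse = float(np.mean((y_scaled - y_pred_scaled) ** 2))    # assumed 'mse' reduction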

Examples

>>> import numpy as np
>>> from skqulacs.qnn import QNNRegressor
>>> from skqulacs.qnn.solver import Bfgs
>>> from skqulacs.circuit import create_qcl_ansatz
>>> n_qubits = 4
>>> depth = 3
>>> evo_time = 0.5
>>> circuit = create_qcl_ansatz(n_qubits, depth, evo_time)
>>> model = QNNRegressor(circuit, Bfgs())
>>> x_train = np.linspace(-1.0, 1.0, 50).reshape(-1, 1)   # shape (n_sample, n_features)
>>> y_train = np.sin(np.pi * x_train)                     # shape (n_sample, n_output_dims)
>>> opt_loss, opt_theta = model.fit(x_train, y_train, 1000)
>>> x_list = np.arange(-1.0, 1.0, 0.02).reshape(-1, 1)
>>> y_pred = model.predict(x_list)
circuit: skqulacs.circuit.circuit.LearningCircuit#
cost: typing_extensions.Literal[mse] = 'mse'#
cost_func(theta: List[float], x_scaled: numpy.ndarray[Any, numpy.dtype[numpy.float64]], y_scaled: numpy.ndarray[Any, numpy.dtype[numpy.float64]]) float[source]#
do_x_scale: bool = True#
do_y_scale: bool = True#
fit(x_train: numpy.ndarray[Any, numpy.dtype[numpy.float64]], y_train: numpy.ndarray[Any, numpy.dtype[numpy.float64]], maxiter_or_lr: Optional[int] = None) Tuple[float, List[float]][source]#
Parameters
  • x_train – Training data inputs whose shape is (n_sample, n_features).

  • y_train – Training data outputs whose shape is (n_sample, n_output_dims).

  • maxiter_or_lr – The maximum number of iterations to pass to scipy.optimize.minimize, or, for gradient-based solvers, the learning rate (as the parameter name suggests).

Returns

A tuple (loss, theta): the loss after learning and the parameter theta after learning.

Return type

Tuple[float, List[float]]
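
A minimal usage sketch for fit, reusing circuit, x_train, and y_train from the Examples block above and assuming the Bfgs and Adam solvers provided by skqulacs.qnn.solver. How the third argument is interpreted depends on the chosen solver, as the parameter name maxiter_or_lr suggests:

>>> from skqulacs.qnn.solver import Bfgs, Adam
>>> bfgs_model = QNNRegressor(circuit, Bfgs())
>>> loss, theta = bfgs_model.fit(x_train, y_train, 1000)   # iteration cap for the scipy-based solver
>>> adam_model = QNNRegressor(circuit, Adam())
>>> loss, theta = adam_model.fit(x_train, y_train, 0.1)    # interpreted as a learning rate (assumption)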

func_grad(theta: List[float], x_scaled: numpy.ndarray[Any, numpy.dtype[numpy.float64]]) numpy.ndarray[Any, numpy.dtype[numpy.float64]][source]#
n_outputs: int = 1#
n_qubit: int#
observables: List[qulacs_core.Observable]#
observables_str: List[str]#
predict(x_test: numpy.ndarray[Any, numpy.dtype[numpy.float64]]) numpy.ndarray[Any, numpy.dtype[numpy.float64]][source]#

Predict the outcome for each input sample in x_test.

Parameters

x_test – Input data whose shape is (n_samples, n_features).

Returns

y_pred – Predicted outcome for each sample.

Return type

np.ndarray
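
Continuing the Examples block above, a short sketch of predicting a single new point; note that x_test keeps the two-dimensional shape (n_samples, n_features) even for one sample (shape convention taken from the parameter description above):

>>> x_single = np.array([[0.25]])          # a single sample still needs shape (1, n_features)
>>> y_single = model.predict(x_single)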

solver: skqulacs.qnn.solver.Solver#
x_norm_range: float = 1.0#
x_scaler: sklearn.preprocessing._data.MinMaxScaler#
y_norm_range: float = 0.7#
y_scaler: sklearn.preprocessing._data.MinMaxScaler#
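
The scaler attributes are ordinary sklearn MinMaxScaler instances. As an illustrative sketch, and assuming fit() has already been called with the default do_x_scale=True and do_y_scale=True so that both scalers are populated, they can be inspected directly:

>>> model.x_scaler.data_min_, model.x_scaler.data_max_   # per-feature input range seen during fit
>>> model.y_scaler.feature_range                         # assumed to be (-y_norm_range, +y_norm_range)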