skqulacs.qnn.classifier module#

class skqulacs.qnn.classifier.QNNClassifier(circuit: skqulacs.circuit.circuit.LearningCircuit, num_class: int, solver: skqulacs.qnn.solver.Solver, cost: typing_extensions.Literal[log_loss] = 'log_loss', do_x_scale: bool = True, x_norm_range: float = 1.0, y_exp_ratio: float = 2.2, manyclass: bool = False)[source]#

Bases: object

Class to solve classification problems with quantum neural networks. The prediction is a vector approximating the one-hot encoding of the labels, built by: 1. taking the expectation value of the Pauli Z operator on each qubit, <Z_i>; 2. applying the softmax function to the vector (<Z_0>, <Z_1>, …, <Z_{n-1}>).

Parameters
  • circuit – Circuit to use in the learning.

  • num_class – The number of classes; also the number of qubits to measure. Must satisfy n_qubits >= num_class.

  • solver – Solver to use (Nelder-Mead is not recommended).

  • cost – Cost function. Currently only "log_loss" is supported.

  • do_x_scale – Whether to scale x.

  • y_exp_ratio – Coefficient used when applying the softmax function. The output prediction vector is obtained by transforming (<Z_0>, <Z_1>, …, <Z_{n-1}>) into (y_0, y_1, …, y_{n-1}), where y_i = e^{<Z_i> * y_exp_ratio} / (sum_j e^{<Z_j> * y_exp_ratio}).
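The softmax transformation described above can be sketched in plain NumPy. This is an illustrative re-implementation, not the library's internal code; the function name and default are assumptions for the example.

```python
import numpy as np

def softmax_prediction(z_expectations, y_exp_ratio=2.2):
    """Illustrative sketch (not the library implementation):
    turn Pauli-Z expectation values <Z_i> into a probability
    vector via the scaled softmax described above."""
    z = np.asarray(z_expectations, dtype=float) * y_exp_ratio
    e = np.exp(z - z.max())  # subtract max for numerical stability
    return e / e.sum()
```

A larger y_exp_ratio sharpens the output distribution, pushing the prediction vector closer to a one-hot encoding.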

Examples

>>> from skqulacs.qnn import QNNClassifier
>>> from skqulacs.circuit import create_qcl_ansatz
>>> from skqulacs.qnn.solver import Bfgs
>>> import numpy as np
>>> n_qubits = 4
>>> depth = 3
>>> evo_time = 0.5
>>> num_class = 3
>>> solver = Bfgs()
>>> circuit = create_qcl_ansatz(n_qubits, depth, evo_time)
>>> model = QNNClassifier(circuit, num_class, solver)
>>> _, theta = model.fit(x_train, y_train, maxiter=1000)  # x_train, y_train prepared beforehand
>>> x_list = np.arange(x_min, x_max, 0.02)
>>> y_pred = model.predict(x_list)

Explanation of manyclass: when manyclass=True, instead of the expectation values <Z0>, <Z1>, …, the probabilities of measuring each computational basis state [000], [001], [010], … are used as the per-class scores. This allows classification of up to 2^n_qubit classes, though accuracy will likely decrease.
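The manyclass idea can be illustrated directly: the score for class k is the probability of observing the computational basis state |k>, i.e. the squared magnitude of the k-th amplitude of the circuit's output state. This is a sketch of that mapping under the assumptions above, not the library's implementation.

```python
import numpy as np

def basis_state_probabilities(state_vector):
    """Hypothetical illustration of manyclass=True: the score for
    class k is the probability of measuring basis state |k>
    (e.g. [000], [001], ...), i.e. |amplitude_k|^2. A state on
    n qubits thus yields up to 2**n class scores."""
    amps = np.asarray(state_vector, dtype=complex)
    return np.abs(amps) ** 2
```

For a 2-qubit state this yields 4 class scores, versus at most 2 when reading one <Z_i> per qubit.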

circuit: skqulacs.circuit.circuit.LearningCircuit#
cost: typing_extensions.Literal[log_loss] = 'log_loss'#
cost_func(theta: List[float], x_scaled: numpy.ndarray[Any, numpy.dtype[numpy.float64]], y_scaled: numpy.ndarray[Any, numpy.dtype[numpy.int64]]) float[source]#
do_x_scale: bool = True#
fit(x_train: numpy.ndarray[Any, numpy.dtype[numpy.float64]], y_train: numpy.ndarray[Any, numpy.dtype[numpy.int64]], maxiter: Optional[int] = None) Tuple[float, List[float]][source]#
Parameters
  • x_train – List of training data inputs whose shape is (n_sample, n_features).

  • y_train – List of labels to fit. Labels must be integers; shape is (n_samples,).

  • maxiter – The maximum number of iterations, passed to scipy.optimize.minimize.

Returns

A tuple (loss, theta): the loss after learning and the parameter vector theta after learning.

Return type

Tuple[float, List[float]]

fitting_qubit: int#
manyclass: bool = False#
n_qubit: int#
num_class: int#
observables: List[qulacs_core.Observable]#
predict(x_test: numpy.ndarray[Any, numpy.dtype[numpy.float64]]) numpy.ndarray[Any, numpy.dtype[numpy.int64]][source]#

Predict outcome for each input data in x_test.

Parameters

x_test – Input data whose shape is (n_samples, n_features).

Returns

Predicted outcome whose shape is (n_samples,).

Return type

y_pred
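Since the model outputs a probability vector per sample (see the softmax description above), the predicted integer label is naturally the argmax of each row. A minimal sketch of that final step, assuming a standard softmax-classifier readout (the function name is hypothetical):

```python
import numpy as np

def labels_from_probabilities(prob_matrix):
    """Sketch of mapping per-sample class-probability vectors of
    shape (n_samples, num_class) to integer labels of shape
    (n_samples,) by taking the argmax of each row."""
    return np.argmax(np.asarray(prob_matrix), axis=1)
```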

solver: skqulacs.qnn.solver.Solver#
x_norm_range: float = 1.0#
x_scaler: sklearn.preprocessing._data.MinMaxScaler#
y_exp_ratio: float = 2.2#