
Searched full:input_bias (Results 1 – 25 of 27) sorted by relevance


/aosp_15_r20/external/ComputeLibrary/src/core/CL/kernels/
CLFuseBatchNormalizationKernel.cpp
44 … const ITensorInfo *input_bias, const ITensorInfo *bn_beta, const ITensorInfo *bn_gamma, in validate_arguments() argument
53 ARM_COMPUTE_RETURN_ERROR_ON(input_bias == nullptr && fused_bias == nullptr); in validate_arguments()
67 if(input_bias != nullptr) in validate_arguments()
69 ARM_COMPUTE_RETURN_ERROR_ON_MISMATCHING_SHAPES(bn_mean, input_bias); in validate_arguments()
70 ARM_COMPUTE_RETURN_ERROR_ON_MISMATCHING_DATA_TYPES(input_weights, input_bias); in validate_arguments()
111 … const ICLTensor *input_bias, const ICLTensor *bn_beta, const ICLTensor *bn_gamma, in configure() argument
114 …_context(), input_weights, bn_mean, bn_var, fused_weights, fused_bias, input_bias, bn_beta, bn_gam… in configure()
119 … const ICLTensor *input_bias, const ICLTensor *bn_beta, const ICLTensor *bn_gamma, in configure() argument
124 …ding_info({ input_weights, bn_mean, bn_var, fused_weights, fused_bias, input_bias, bn_beta, bn_gam… in configure()
127 _input_bias = input_bias; in configure()
[all …]
CLFuseBatchNormalizationKernel.h
56 …d bias tensor. It can be a nullptr in case of in-place computation and input_bias != nullptr. Same…
57 …* @param[in] input_bias (Optional) Input bias tensor for convolution or depthwise convolution …
66 …const ICLTensor *input_bias = nullptr, const ICLTensor *bn_beta = nullptr, const ICLTensor *bn_gam…
75 …d bias tensor. It can be a nullptr in case of in-place computation and input_bias != nullptr. Same…
76 …* @param[in] input_bias (Optional) Input bias tensor for convolution or depthwise convolutio…
85 …const ICLTensor *input_bias = nullptr, const ICLTensor *bn_beta = nullptr, const ICLTensor *bn_gam…
93 …s tensor info. It can be a nullptr in case of in-place computation and input_bias != nullptr. Same…
94 …* @param[in] input_bias (Optional) Input bias tensor info for convolution or depthwise convolut…
106 …const ITensorInfo *input_bias = nullptr, const ITensorInfo *bn_beta = nullptr, const ITensorInfo *…
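Note: these fuse-batch-normalization kernels fold a BatchNormalization layer into the preceding convolution, and input_bias is the convolution bias being folded (optional, since the convolution may have none). A minimal NumPy sketch of the folding they compute, assuming an (out_channels, in_channels, H, W) weight layout and per-channel BN statistics; the function and parameter names here are illustrative, not the library's API:

    import numpy as np

    def fuse_batch_norm(weights, input_bias, bn_mean, bn_var, bn_beta, bn_gamma, eps=1e-3):
        # Per-output-channel scale factor derived from the BN statistics.
        scale = bn_gamma / np.sqrt(bn_var + eps)
        fused_weights = weights * scale[:, None, None, None]
        if input_bias is None:                      # mirrors the input_bias == nullptr case
            input_bias = np.zeros_like(bn_mean)
        fused_bias = (input_bias - bn_mean) * scale + bn_beta
        return fused_weights, fused_bias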
/aosp_15_r20/external/pytorch/aten/src/ATen/native/cuda/
RNN.cu
34 const TensorArg& input_bias, const TensorArg& hidden_bias, in checkSizes() argument
40 if (input_bias->defined()) { in checkSizes()
41 checkDim(c, input_bias, 1); in checkSizes()
42 checkNumel(c, input_bias, gates_size); in checkSizes()
43 checkSameSize(c, input_bias, hidden_bias); in checkSizes()
49 checkAllSameGPU(c, {input_gates, hidden_gates, input_bias, hidden_bias, prev_hidden}); in checkSizes()
371 const Tensor& input_bias, const Tensor& hidden_bias, in lstm_forward_impl() argument
383 auto input_biasI = tryGetTensorInfo<scalar_t, index_type>(input_bias); in lstm_forward_impl()
392 if (allContiguous({input_gates, hidden_gates, input_bias, hidden_bias, cx, hy, cy, workspace})) { in lstm_forward_impl()
444 const Tensor& input_bias, const Tensor& hidden_bias, in gru_forward_impl() argument
[all …]
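Note: checkSizes() above requires input_bias, when defined, to be 1-D with gates_size elements and the same shape as hidden_bias; lstm_forward_impl then adds both biases to the gate pre-activations. A rough PyTorch sketch of the cell math the fused kernel computes, assuming the usual (input, forget, cell, output) gate order; this is an illustrative reference, not the CUDA kernel itself:

    import torch

    def lstm_cell_reference(input_gates, hidden_gates, cx, input_bias=None, hidden_bias=None):
        # input_gates / hidden_gates: (batch, 4 * hidden_size) pre-activations.
        gates = input_gates + hidden_gates
        if input_bias is not None:
            gates = gates + input_bias + hidden_bias
        i, f, g, o = gates.chunk(4, dim=1)
        cy = f.sigmoid() * cx + i.sigmoid() * g.tanh()
        hy = o.sigmoid() * cy.tanh()
        return hy, cy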
/aosp_15_r20/external/ComputeLibrary/src/core/NEON/kernels/
NEFuseBatchNormalizationKernel.cpp
155 … const ITensorInfo *input_bias, const ITensorInfo *bn_beta, const ITensorInfo *bn_gamma, in validate_arguments() argument
164 ARM_COMPUTE_RETURN_ERROR_ON(input_bias == nullptr && fused_bias == nullptr); in validate_arguments()
177 if(input_bias != nullptr) in validate_arguments()
179 ARM_COMPUTE_RETURN_ERROR_ON_MISMATCHING_SHAPES(bn_mean, input_bias); in validate_arguments()
180 ARM_COMPUTE_RETURN_ERROR_ON_MISMATCHING_DATA_TYPES(input_weights, input_bias); in validate_arguments()
222 … const ITensor *input_bias, const ITensor *bn_beta, const ITensor *bn_gamma, in configure() argument
228 _input_bias = input_bias; in configure()
238 …_run_in_place_bias = (fused_bias == nullptr) || (input_bias != nullptr && fused_bias == input_b… in configure()
256 … (input_bias != nullptr) ? input_bias->info() : nullptr, in configure()
273 … const ITensorInfo *input_bias, const ITensorInfo *bn_beta, const ITensorInfo *bn_gamma, in validate() argument
[all …]
NEFuseBatchNormalizationKernel.h
60 …d bias tensor. It can be a nullptr in case of in-place computation and input_bias != nullptr. Same…
61 …* @param[in] input_bias (Optional) Input bias tensor for convolution or depthwise convolution …
70 …const ITensor *input_bias = nullptr, const ITensor *bn_beta = nullptr, const ITensor *bn_gamma = n…
78 …s tensor info. It can be a nullptr in case of in-place computation and input_bias != nullptr. Same…
79 …* @param[in] input_bias (Optional) Input bias tensor info for convolution or depthwise convolut…
91 …const ITensorInfo *input_bias = nullptr, const ITensorInfo *bn_beta = nullptr, const ITensorInfo *…
110 …using FuseBatchNormFunction = void(const ITensor *input_weights, const ITensor *input_bias, ITenso…
/aosp_15_r20/external/ComputeLibrary/arm_compute/runtime/CL/functions/
CLFuseBatchNormalization.h
72 …d bias tensor. It can be a nullptr in case of in-place computation and input_bias != nullptr. Same…
73 …* @param[in] input_bias (Optional) Input bias tensor for convolution or depthwise convolution …
82 …const ICLTensor *input_bias = nullptr, const ICLTensor *bn_beta = nullptr, const ICLTensor *bn_gam…
91 …d bias tensor. It can be a nullptr in case of in-place computation and input_bias != nullptr. Same…
92 …* @param[in] input_bias (Optional) Input bias tensor for convolution or depthwise convolutio…
101 …const ICLTensor *input_bias = nullptr, const ICLTensor *bn_beta = nullptr, const ICLTensor *bn_gam…
109 …s tensor info. It can be a nullptr in case of in-place computation and input_bias != nullptr. Same…
110 …* @param[in] input_bias (Optional) Input bias tensor info for convolution or depthwise convolut…
122 …const ITensorInfo *input_bias = nullptr, const ITensorInfo *bn_beta = nullptr, const ITensorInfo *…
/aosp_15_r20/external/ComputeLibrary/src/runtime/CL/functions/
CLFuseBatchNormalization.cpp
46 … const ICLTensor *input_bias, const ICLTensor *bn_beta, const ICLTensor *bn_gamma, in configure() argument
49 …_context(), input_weights, bn_mean, bn_var, fused_weights, fused_bias, input_bias, bn_beta, bn_gam… in configure()
54 … const ICLTensor *input_bias, const ICLTensor *bn_beta, const ICLTensor *bn_gamma, in configure() argument
57 …ARM_COMPUTE_LOG_PARAMS(input_weights, bn_mean, bn_var, fused_weights, fused_bias, input_bias, bn_b… in configure()
58 …le_context, input_weights, bn_mean, bn_var, fused_weights, fused_bias, input_bias, bn_beta, bn_gam… in configure()
63 … const ITensorInfo *input_bias, const ITensorInfo *bn_beta, const ITensorInfo *bn_gamma, in validate() argument
66 …l::validate(input_weights, bn_mean, bn_var, fused_weights, fused_bias, input_bias, bn_beta, bn_gam… in validate()
/aosp_15_r20/packages/modules/NeuralNetworks/runtime/test/fuzzing/operation_signatures/
Convolutions.cpp
140 INPUT_BIAS, \
160 INPUT_BIAS, \
185 INPUT_BIAS, \
206 INPUT_BIAS, \
224 INPUT_BIAS, \
247 INPUT_BIAS, \
381 INPUT_BIAS, \
403 INPUT_BIAS, \
430 INPUT_BIAS, \
453 INPUT_BIAS, \
[all …]
FullyConnected.cpp
54 .inputs = {INPUT_DEFAULT, INPUT_DEFAULT, INPUT_BIAS, in DEFINE_OPERATION_SIGNATURE()
65 .inputs = {INPUT_DEFAULT, INPUT_DEFAULT, INPUT_BIAS, in DEFINE_OPERATION_SIGNATURE()
75 .inputs = {INPUT_DEFAULT, INPUT_DEFAULT, INPUT_BIAS, in DEFINE_OPERATION_SIGNATURE()
OperationSignatureUtils.h
349 #define INPUT_BIAS \
/aosp_15_r20/external/tensorflow/tensorflow/lite/delegates/nnapi/
quant_lstm_sup_test.cc
259 std::vector<int32_t> input_bias; in TEST() local
263 DecomposeBiasTensor(biases.data(), 4, &input_bias, &cell_bias, &forget_bias, in TEST()
266 EXPECT_THAT(input_bias, ElementsAreArray({-7876, 13488, -726, 32839})); in TEST()
282 std::vector<int32_t> input_bias; in TEST() local
286 DecomposeBiasTensor(biases.data(), 4, &input_bias, &cell_bias, &forget_bias, in TEST()
305 std::vector<int32_t> input_bias; in TEST() local
309 DecomposeBiasTensor(biases.data(), 4, &input_bias, &cell_bias, &forget_bias, in TEST()
328 std::vector<int32_t> input_bias; in TEST() local
332 DecomposeBiasTensor(biases.data(), 4, &input_bias, &cell_bias, &forget_bias, in TEST()
quant_lstm_sup.cc
131 std::vector<int32_t>* input_bias, in DecomposeBiasTensor() argument
135 input_bias->resize(bias_size); in DecomposeBiasTensor()
136 std::copy(biases, biases + bias_size, input_bias->begin()); in DecomposeBiasTensor()
quant_lstm_sup.h
49 std::vector<int32_t>* input_bias,
nnapi_delegate.cc
4127 std::vector<int32_t> input_bias; in Map() local
4132 &input_bias, &cell_bias, in Map()
4137 ANEURALNETWORKS_TENSOR_INT32, kTfLiteInt32, {bias_size}, input_bias, in Map()
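Note: DecomposeBiasTensor copies consecutive bias_size-element chunks of the concatenated quantized-LSTM bias into separate per-gate vectors; the test above shows the first chunk becoming input_bias. A hedged Python equivalent of that slicing (the order of the remaining chunks follows the call site shown above and is otherwise an assumption):

    def decompose_bias_tensor(biases, bias_size):
        # Split the concatenated bias into four per-gate chunks of length bias_size.
        chunks = [biases[i * bias_size:(i + 1) * bias_size] for i in range(4)]
        input_bias, cell_bias, forget_bias, output_bias = chunks
        return input_bias, cell_bias, forget_bias, output_bias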
/aosp_15_r20/external/ComputeLibrary/arm_compute/runtime/NEON/functions/
NEFuseBatchNormalization.h
69 …d bias tensor. It can be a nullptr in case of in-place computation and input_bias != nullptr. Same…
70 …* @param[in] input_bias (Optional) Input bias tensor for convolution or depthwise convolution …
79 …const ITensor *input_bias = nullptr, const ITensor *bn_beta = nullptr, const ITensor *bn_gamma = n…
87 …s tensor info. It can be a nullptr in case of in-place computation and input_bias != nullptr. Same…
88 …* @param[in] input_bias (Optional) Input bias tensor info for convolution or depthwise convolut…
100 …const ITensorInfo *input_bias = nullptr, const ITensorInfo *bn_beta = nullptr, const ITensorInfo *…
/aosp_15_r20/external/ComputeLibrary/src/runtime/NEON/functions/
NEFuseBatchNormalization.cpp
45 … const ITensor *input_bias, const ITensor *bn_beta, const ITensor *bn_gamma, in configure() argument
48 ARM_COMPUTE_LOG_PARAMS(input_weights, bn_mean, bn_var, fused_weights, fused_bias, input_bias, in configure()
52 …->configure(input_weights, bn_mean, bn_var, fused_weights, fused_bias, input_bias, bn_beta, bn_gam… in configure()
57 … const ITensorInfo *input_bias, const ITensorInfo *bn_beta, const ITensorInfo *bn_gamma, in validate() argument
60 …l::validate(input_weights, bn_mean, bn_var, fused_weights, fused_bias, input_bias, bn_beta, bn_gam… in validate()
/aosp_15_r20/external/pytorch/aten/src/ATen/native/
RNN.cpp
1553 const Tensor& input_bias = c10::value_or_else(input_bias_opt, [] {return Tensor();}); in _thnn_differentiable_lstm_cell_backward() local
1560 if (input_bias.defined()) { in _thnn_differentiable_lstm_cell_backward()
1561 gates = gates + input_bias; in _thnn_differentiable_lstm_cell_backward()
1594 Tensor grad_bias = input_bias.defined() ? grad_gates.sum(0, /*keepdim=*/false) : at::Tensor{}; in _thnn_differentiable_lstm_cell_backward()
1605 const Tensor& input_bias = *input_bias_maybe_owned; in _thnn_differentiable_gru_cell_backward() local
1610 if (input_bias.defined()){ in _thnn_differentiable_gru_cell_backward()
1611 in_g = in_g+input_bias; in _thnn_differentiable_gru_cell_backward()
1634 …Tensor grad_input_bias = input_bias.defined() ? grad_input_gates.sum(0, /*keepdim=*/false) : at::T… in _thnn_differentiable_gru_cell_backward()
1635 …Tensor grad_hidden_bias = input_bias.defined() ? grad_hidden_gates.sum(0, /*keepdim=*/false) : at:… in _thnn_differentiable_gru_cell_backward()
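Note: in both differentiable backward paths above the bias is broadcast-added over the batch, so its gradient is simply the corresponding gate gradient summed over dim 0. A small illustrative sketch of that step (the names are not the ATen API):

    import torch

    def bias_grads(grad_input_gates, grad_hidden_gates, input_bias_defined):
        # d(loss)/d(bias) sums the per-sample gate gradients over the batch dimension.
        if not input_bias_defined:
            return None, None
        return grad_input_gates.sum(dim=0), grad_hidden_gates.sum(dim=0)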
/aosp_15_r20/external/pytorch/test/
test_matmul_cuda.py
503 input_bias = None
505 input_bias = torch.rand((16,), device=device).to(torch.half)
506 _ = torch._scaled_mm(x, y, scale_a, scale_b, bias=input_bias)
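Note: here input_bias is the optional half-precision bias passed to torch._scaled_mm, PyTorch's FP8 matmul. A hedged usage sketch matching the call shape shown above; it assumes a CUDA device with FP8 support, and the shapes, unit scales, and column-major second operand are illustrative choices:

    import torch

    device = "cuda"
    x = torch.randn(16, 16, device=device).to(torch.float8_e4m3fn)
    y = torch.randn(16, 16, device=device).to(torch.float8_e4m3fn).t()  # column-major operand
    scale_a = torch.tensor(1.0, device=device)
    scale_b = torch.tensor(1.0, device=device)
    input_bias = torch.rand((16,), device=device).to(torch.half)
    out = torch._scaled_mm(x, y, scale_a, scale_b, bias=input_bias)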
/aosp_15_r20/external/pytorch/torch/
_meta_registrations.py
5865 input_bias, argument
5876 if input_bias is not None:
5877 torch._check(input_bias.ndim == 1, lambda: f"{input_bias.ndim} != 1")
5879 input_bias.numel() == gates_size,
5880 lambda: f"{input_bias.numel()} != {gates_size}",
5883 input_bias.shape == hidden_bias.shape,
5884 lambda: f"{input_bias.shape} != {hidden_bias.shape}",
5895 for x in [hidden_gates, input_bias, hidden_bias, prev_hidden]
5906 input_bias=None, argument
5909 rnn_cell_checkSizes(input_gates, hidden_gates, input_bias, hidden_bias, 4, cx)
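Note: the meta registration repeats the checkSizes() rules from RNN.cu: a defined input_bias must be 1-D, hold gates_size elements, and match hidden_bias's shape. A condensed sketch of those checks, with plain asserts standing in for torch._check:

    def check_rnn_cell_biases(input_bias, hidden_bias, gates_size):
        if input_bias is None:
            return
        assert input_bias.ndim == 1, f"{input_bias.ndim} != 1"
        assert input_bias.numel() == gates_size, f"{input_bias.numel()} != {gates_size}"
        assert input_bias.shape == hidden_bias.shape, f"{input_bias.shape} != {hidden_bias.shape}"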
/aosp_15_r20/external/tensorflow/tensorflow/python/keras/layers/
recurrent.py
1844 input_bias, recurrent_bias = self.bias, None
1846 input_bias, recurrent_bias = array_ops.unstack(self.bias)
1863 x_z = backend.bias_add(x_z, input_bias[:self.units])
1864 x_r = backend.bias_add(x_r, input_bias[self.units: self.units * 2])
1865 x_h = backend.bias_add(x_h, input_bias[self.units * 2:])
1908 matrix_x = backend.bias_add(matrix_x, input_bias)
recurrent_v2.py
559 combined input_bias and recurrent_bias.
585 input_bias, recurrent_bias = array_ops.unstack(bias)
593 matrix_x = backend.bias_add(matrix_x, input_bias)
646 # to be done for kernel, recurrent_kernel, input_bias, recurrent_bias.
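Note: in the Keras GRU cell the stored bias is either a single vector or a stacked (input_bias, recurrent_bias) pair that gets unstacked, and each third of input_bias biases one of the z/r/h branches. A NumPy sketch of that slicing, assuming the stacked bias has shape (2, 3 * units) as the unstack call suggests:

    import numpy as np

    units = 4
    bias = np.zeros((2, 3 * units))                # stacked (input_bias, recurrent_bias)
    input_bias, recurrent_bias = bias[0], bias[1]  # what array_ops.unstack(self.bias) yields
    x_z_bias = input_bias[:units]                  # update-gate slice
    x_r_bias = input_bias[units:units * 2]         # reset-gate slice
    x_h_bias = input_bias[units * 2:]              # candidate-state slice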
/aosp_15_r20/external/pytorch/test/inductor/
test_fp8.py
105 input_bias = torch.rand(32, device="cuda", dtype=dtype)
114 bias=input_bias,
test_aot_inductor.py
725 bias=input_bias,
736 input_bias = torch.rand(32, device="cuda", dtype=dtype)
748 (x, weight, input_bias, a_inverse_scale, b_inverse_scale),
776 bias=input_bias,
787 input_bias = torch.rand(32, device=self.device, dtype=dtype)
803 (x, input_bias, a_inverse_scale, b_inverse_scale),
/aosp_15_r20/external/pytorch/tools/autograd/
derivatives.yaml
2861 …_lstm_cell(Tensor input_gates, Tensor hidden_gates, Tensor cx, Tensor? input_bias=None, Tensor? hi…
2863input_bias, hidden_bias: "GradMode::is_enabled() ? _thnn_differentiable_lstm_cell_backward(grads[0…
2865 - name: _thnn_fused_gru_cell(Tensor input_gates, Tensor hidden_gates, Tensor hx, Tensor? input_bias
2866input_bias, hidden_bias: "grad.defined() ? (GradMode::is_enabled() ? _thnn_differentiable_gru_cell…
/aosp_15_r20/external/pytorch/torch/csrc/inductor/aoti_torch/generated/
c_shim_cuda.h
45 … AtenTensorHandle hidden_gates, AtenTensorHandle cx, AtenTensorHandle* input_bias, AtenTensorHandl…
