/*
 * Copyright (c) 2017-2021 Arm Limited.
 *
 * SPDX-License-Identifier: MIT
 *
 * Permission is hereby granted, free of charge, to any person obtaining a copy
 * of this software and associated documentation files (the "Software"), to
 * deal in the Software without restriction, including without limitation the
 * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
 * sell copies of the Software, and to permit persons to whom the Software is
 * furnished to do so, subject to the following conditions:
 *
 * The above copyright notice and this permission notice shall be included in all
 * copies or substantial portions of the Software.
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
 * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
 * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
 * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
 * SOFTWARE.
 */
#include "src/core/common/Macros.h"
#include "src/cpu/ICpuOperator.h"

#include <memory>

namespace arm_compute
{
namespace cpu
{
/** Basic function to simulate a convolution layer. This function calls one of the following functions:
 * -# @ref CpuGemm           (executed only in case GEMM is required for the operation)
 * -# @ref CpuWinogradConv2d (executed only in case Winograd is required for the operation)
 * -# @ref CpuDirectConv2d   (executed only in case Direct Convolution is required for the operation)
 *
 * The function selects one of the algorithms mentioned above based on:
 * - The size of the kernel
 * - The number of input/output feature maps
 * - The amount of memory needed
 *
 * Generally, GEMM-based convolution is executed when neither Winograd nor FFT nor Direct convolution can be performed.
 *
 * FP32 Algorithm| Filter Size                                   | Input/Output feature maps                |
 * --------------|-----------------------------------------------|------------------------------------------|
 * Winograd      | 3x3 1x3 3x1 5x1 1x5 5x5 (fast maths) 7x1 1x7  | Input channels greater than 3            |
 * FFT           | Square kernels larger than 9x9                | Input feature maps > Output feature maps |
 * DirectConv    | 9x9                                           |                                          |
 * GEMM          | Any size                                      |                                          |
 *
 * Winograd 5x5 requires fast maths enabled.
 *
 * FP16 Algorithm| Filter Size   |
 * --------------|---------------|
 * Winograd      | Not supported |
 * FFT           | Not supported |
 * DirectConv    | 9x9           |
 * GEMM          | Any size      |
 *
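 * A rough usage sketch is shown below (illustrative only: the tensor shapes, padding and the
 * src_tensor/weights_tensor/biases_tensor/dst_tensor ITensor objects are assumptions made for
 * this example, not requirements of the interface):
 * @code
 * // Describe a 32x32 FP32 input with 3 channels and a 3x3 kernel producing 16 output maps
 * TensorInfo src(TensorShape(32U, 32U, 3U), 1, DataType::F32);
 * TensorInfo weights(TensorShape(3U, 3U, 3U, 16U), 1, DataType::F32);
 * TensorInfo biases(TensorShape(16U), 1, DataType::F32);
 * TensorInfo dst(TensorShape(32U, 32U, 16U), 1, DataType::F32);
 * PadStrideInfo conv_info(1, 1, 1, 1); // stride 1, 1-pixel padding keeps the spatial size
 *
 * CpuConv2d conv;
 * conv.configure(&src, &weights, &biases, &dst, conv_info);
 *
 * // At run time the backing ITensor objects are passed through an ITensorPack
 * ITensorPack pack = { { TensorType::ACL_SRC_0, &src_tensor },
 *                      { TensorType::ACL_SRC_1, &weights_tensor },
 *                      { TensorType::ACL_SRC_2, &biases_tensor },
 *                      { TensorType::ACL_DST, &dst_tensor } };
 * conv.prepare(pack);
 * conv.run(pack);
 * @endcode
 * Auxiliary tensors reported by workspace() are expected to be allocated by the caller and added
 * to the pack at the reported slots before running.
 *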
 */
class CpuConv2d : public ICpuOperator
{
public:
    /** Constructor */
    CpuConv2d();
    ARM_COMPUTE_DISALLOW_COPY_ALLOW_MOVE(CpuConv2d);
    /** Default destructor */
    ~CpuConv2d();
    /** Set the input and output tensors.
     *
     * Valid data layouts:
     * - NHWC
     * - NCHW
     *
     * Valid data type configurations:
     * |src0           |src1               |src2   |dst            |
     * |:--------------|:------------------|:------|:--------------|
     * |F16            |F16                |F16    |F16            |
     * |F32            |F32                |F32    |F32            |
     * |QASYMM8        |QASYMM8            |S32    |QASYMM8        |
     * |QASYMM8        |QSYMM8_PER_CHANNEL |S32    |QASYMM8        |
     * |QASYMM8_SIGNED |QASYMM8_SIGNED     |S32    |QASYMM8_SIGNED |
     * |QASYMM8_SIGNED |QSYMM8_PER_CHANNEL |S32    |QASYMM8_SIGNED |
     *
     * @param[in]  src              Source tensor info. The 3 lower dimensions represent a single input [width, height, IFM],
     *                              while every optional dimension from 4 and above represents a batch of inputs.
     *                              Data types supported: QASYMM8/QASYMM8_SIGNED/F16/F32.
     * @param[in]  weights          Weights tensor info. Weights are a 4D tensor with dimensions [kernel_x, kernel_y, IFM, OFM].
     *                              Data type supported: Same as @p src; can also be QSYMM8_PER_CHANNEL if the input is QASYMM8/QASYMM8_SIGNED.
     * @param[in]  biases           Biases tensor info. Shared biases are supported. Biases are a 1D tensor with dimensions [OFM].
     *                              Data type supported: Same as @p src, except for QASYMM8/QASYMM8_SIGNED inputs, where the biases should be of S32 type.
     * @param[out] dst              Destination tensor info. The 3 lower dimensions represent a single output [width, height, OFM], while the rest represent a batch of outputs.
     *                              Data types supported: Same as @p src.
     * @param[in]  conv_info        Contains padding and stride information described in @ref PadStrideInfo.
     * @param[in]  weights_info     Specifies if the weights tensor has been reshaped with NEWeightsReshapeKernel. If this is not part of the fully connected layer, the weights
     *                              tensor has also been transposed with cpu::kernels::CpuGemmTranspose1xWKernel. Data type supported: Same as @p src.
     * @param[in]  dilation         (Optional) Dilation, in elements, across x and y. Defaults to (1, 1).
     * @param[in]  act_info         (Optional) Activation layer information in case of a fused activation. Only RELU, BOUNDED_RELU and LU_BOUNDED_RELU are supported.
     * @param[in]  enable_fast_math (Optional) Enable fast math computation. When this flag is set, the function may dispatch the fastest implementation
     *                              available, which may also introduce a drop in accuracy. Defaults to false.
     * @param[in]  num_groups       (Optional) Number of groups when performing a grouped convolution. num_groups != 1 is not supported.
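     *
     * As a sketch of one of the quantized configurations listed above (the shapes, scales and offsets
     * below are made-up example values, not requirements):
     * @code
     * TensorInfo src(TensorShape(16U, 16U, 8U), 1, DataType::QASYMM8, QuantizationInfo(0.05f, 10));
     * TensorInfo weights(TensorShape(3U, 3U, 8U, 4U), 1, DataType::QSYMM8_PER_CHANNEL,
     *                    QuantizationInfo(std::vector<float>{ 0.01f, 0.02f, 0.03f, 0.04f })); // one scale per OFM
     * TensorInfo biases(TensorShape(4U), 1, DataType::S32); // biases must be S32 for quantized inputs
     * TensorInfo dst(TensorShape(16U, 16U, 4U), 1, DataType::QASYMM8, QuantizationInfo(0.1f, 5));
     *
     * CpuConv2d conv;
     * conv.configure(&src, &weights, &biases, &dst, PadStrideInfo(1, 1, 1, 1));
     * @endcode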
     */
    void configure(ITensorInfo *src, ITensorInfo *weights, const ITensorInfo *biases, ITensorInfo *dst, const PadStrideInfo &conv_info, const WeightsInfo &weights_info = WeightsInfo(),
                   const Size2D &dilation = Size2D(1U, 1U), const ActivationLayerInfo &act_info = ActivationLayerInfo(), bool enable_fast_math = false, unsigned int num_groups = 1);
    /** Static function to check if the given info will lead to a valid configuration of @ref CpuConv2d
     *
     * Similar to CpuConv2d::configure()
     *
     * @return a status
     */
    static Status validate(const ITensorInfo *src, const ITensorInfo *weights, const ITensorInfo *biases, const ITensorInfo *output, const PadStrideInfo &conv_info,
                           const WeightsInfo &weights_info = WeightsInfo(), const Size2D &dilation = Size2D(1U, 1U), const ActivationLayerInfo &act_info = ActivationLayerInfo(), bool enable_fast_math = false,
                           unsigned int num_groups = 1);
    /** Static function to query which convolution method will be used by @ref CpuConv2d for the given info
     *
     * @param[in] src              Source tensor info. The 3 lower dimensions represent a single input [width, height, IFM],
     *                             while every optional dimension from 4 and above represents a batch of inputs.
     *                             Data types supported: QASYMM8/QASYMM8_SIGNED/F16/F32.
     * @param[in] weights          Weights tensor info. Weights are a 4D tensor with dimensions [kernel_x, kernel_y, IFM, OFM].
     *                             Data type supported: Same as @p src; can also be QSYMM8_PER_CHANNEL if the input is QASYMM8/QASYMM8_SIGNED.
     * @param[in] dst              Destination tensor info. The 3 lower dimensions represent a single output [width, height, OFM], while the rest represent a batch of outputs.
     *                             Data types supported: Same as @p src.
     * @param[in] conv_info        Contains padding and stride information described in @ref PadStrideInfo.
     * @param[in] weights_info     Specifies if the weights tensor has been reshaped with NEWeightsReshapeKernel. If this is not part of the fully connected layer, the weights
     *                             tensor has also been transposed with cpu::kernels::CpuGemmTranspose1xWKernel. Data type supported: Same as @p src.
     * @param[in] dilation         (Optional) Dilation, in elements, across x and y. Defaults to (1, 1).
     * @param[in] act_info         (Optional) Activation layer information in case of a fused activation.
     * @param[in] enable_fast_math (Optional) Enable fast math computation. When this flag is set, the function may dispatch the fastest implementation
     *                             available, which may also introduce a drop in accuracy. Defaults to false.
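     *
     * A brief sketch of how the hint could be queried (the shapes below are example values; the exact
     * method returned also depends on the target CPU and build options):
     * @code
     * TensorInfo src(TensorShape(56U, 56U, 32U), 1, DataType::F32);
     * TensorInfo weights(TensorShape(3U, 3U, 32U, 64U), 1, DataType::F32);
     * TensorInfo dst(TensorShape(56U, 56U, 64U), 1, DataType::F32);
     *
     * const ConvolutionMethod method = CpuConv2d::get_convolution_method(&src, &weights, &dst, PadStrideInfo(1, 1, 1, 1),
     *                                                                    WeightsInfo(), Size2D(1U, 1U), ActivationLayerInfo(), true); // enable_fast_math
     * // Following the FP32 table above, a 3x3 kernel with more than 3 input channels is typically
     * // served by ConvolutionMethod::WINOGRAD; ConvolutionMethod::GEMM is the general fallback.
     * @endcode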
     *
     * @return the Convolution Method Hint
     */
    static ConvolutionMethod get_convolution_method(const ITensorInfo *src, const ITensorInfo *weights, const ITensorInfo *dst, const PadStrideInfo &conv_info,
                                                    const WeightsInfo &weights_info = WeightsInfo(), const Size2D &dilation = Size2D(1U, 1U), const ActivationLayerInfo &act_info = ActivationLayerInfo(), bool enable_fast_math = false);

    // Inherited methods overridden:
    void                             run(ITensorPack &tensors) override;
    void                             prepare(ITensorPack &constants) override;
    experimental::MemoryRequirements workspace() const override;

private:
    std::unique_ptr<ICpuOperator>    _function;
    experimental::MemoryRequirements _aux_mem{};
};
} // namespace cpu
} // namespace arm_compute