
Searched +refs:dw +refs:reduce (Results 1 – 25 of 238) sorted by relevance


/aosp_15_r20/external/pytorch/torch/ao/pruning/sparsifier/
weight_norm_sparsifier.py
3 from functools import reduce
67 zeros_per_block = reduce(operator.mul, sparse_block_shape)
119 dw = (block_w - w % block_w) % block_w
122 mask = torch.ones(h + dh, w + dw, device=data.device)
131 values_per_block = reduce(operator.mul, sparse_block_shape)
153 output_shape=(h + dh, w + dw),
173 dw = (block_w - w % block_w) % block_w
174 values_per_block = reduce(operator.mul, sparse_block_shape)
177 mask = torch.ones((h + dh, w + dw), device=data.device)
185 padded_data = torch.ones(h + dh, w + dw, dtype=data.dtype, device=data.device)
[all …]
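The dh/dw arithmetic in the matches above rounds each dimension up to the nearest multiple of the block shape before a mask is built. Below is a minimal standalone sketch of that padding step (a hypothetical helper written for illustration, not the sparsifier's actual API), assuming a 2-D tensor and a (block_h, block_w) sparse_block_shape:

    import operator
    from functools import reduce

    import torch


    def pad_to_block(data: torch.Tensor, sparse_block_shape=(1, 4)):
        """Pad a 2-D tensor so both dims are multiples of the block shape."""
        block_h, block_w = sparse_block_shape
        h, w = data.shape[-2:]
        dh = (block_h - h % block_h) % block_h  # extra rows needed
        dw = (block_w - w % block_w) % block_w  # extra cols needed
        values_per_block = reduce(operator.mul, sparse_block_shape)  # e.g. 1 * 4 = 4
        padded = torch.ones(h + dh, w + dw, dtype=data.dtype, device=data.device)
        padded[:h, :w] = data
        return padded, values_per_block

For example, a 5x10 tensor with sparse_block_shape=(1, 4) pads to 5x12, since dw = (4 - 10 % 4) % 4 = 2 and dh = 0.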
/aosp_15_r20/external/pytorch/torch/ao/pruning/_experimental/data_sparsifier/
data_norm_sparsifier.py
3 from functools import reduce
48 zeros_per_block = reduce(operator.mul, sparse_block_shape)
86 dw = (block_width - width % block_width) % block_width
90 height + dh, width + dw, dtype=data.dtype, device=data.device
124 dw = (block_width - width % block_width) % block_width
133 values_per_block = reduce(operator.mul, sparse_block_shape)
150 output_size=(height + dh, width + dw),
162 values_per_block = reduce(operator.mul, sparse_block_shape)
/aosp_15_r20/external/swiftshader/third_party/llvm-16.0/llvm/lib/Target/PowerPC/
README.txt
228 We could also strength reduce the rem and the div:
235 void func(unsigned int *ret, float dx, float dy, float dz, float dw) {
237 if(dx < -dw) code |= 1;
238 if(dx > dw) code |= 2;
239 if(dy < -dw) code |= 4;
240 if(dy > dw) code |= 8;
241 if(dz < -dw) code |= 16;
242 if(dz > dw) code |= 32;
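Line 228's note about strength-reducing the rem and the div refers to replacing a divide/remainder by a power-of-two constant with a shift and a mask. A small illustrative sketch (illustrative only; the README itself concerns PowerPC code generation, and the helper name below is not from the source):

    def div_rem_pow2(x: int, d: int):
        """Strength-reduced unsigned division/remainder for a power-of-two divisor d."""
        assert x >= 0 and d > 0 and (d & (d - 1)) == 0, "d must be a power of two"
        shift = d.bit_length() - 1
        return x >> shift, x & (d - 1)  # same as (x // d, x % d)


    assert div_rem_pow2(37, 8) == (37 // 8, 37 % 8)  # (4, 5)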
/aosp_15_r20/external/llvm/lib/Target/PowerPC/
README.txt
286 We could also strength reduce the rem and the div:
293 void func(unsigned int *ret, float dx, float dy, float dz, float dw) {
295 if(dx < -dw) code |= 1;
296 if(dx > dw) code |= 2;
297 if(dy < -dw) code |= 4;
298 if(dy > dw) code |= 8;
299 if(dz < -dw) code |= 16;
300 if(dz > dw) code |= 32;
/aosp_15_r20/external/swiftshader/third_party/llvm-10.0/llvm/lib/Target/PowerPC/
README.txt
289 We could also strength reduce the rem and the div:
296 void func(unsigned int *ret, float dx, float dy, float dz, float dw) {
298 if(dx < -dw) code |= 1;
299 if(dx > dw) code |= 2;
300 if(dy < -dw) code |= 4;
301 if(dy > dw) code |= 8;
302 if(dz < -dw) code |= 16;
303 if(dz > dw) code |= 32;
/aosp_15_r20/external/pytorch/torch/nested/_internal/
ops.py
344 product = functools.reduce(operator.mul, inp.shape[start_dim : end_dim + 1])
488 dw = torch.matmul(grad_output._values.transpose(-2, -1), inp._values)
490 return (ds, dw, db)
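The match at line 488 is the usual weight-gradient term of a linear layer, expressed on nested-tensor values. For reference, a dense (non-nested) sketch of the (ds, dw, db) triple, assuming out = inp @ weight.T + bias (a hypothetical helper, not the code in ops.py):

    import torch


    def linear_backward(grad_output: torch.Tensor, inp: torch.Tensor, weight: torch.Tensor):
        ds = torch.matmul(grad_output, weight)                 # gradient w.r.t. the input
        dw = torch.matmul(grad_output.transpose(-2, -1), inp)  # gradient w.r.t. the weight
        db = grad_output.sum(dim=tuple(range(grad_output.dim() - 1)))  # gradient w.r.t. the bias
        return ds, dw, db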
/aosp_15_r20/packages/modules/NeuralNetworks/tools/api/
types.spec
2639 * Reduces the input tensor along the given dimensions to reduce. Unless
2659 * to reduce. Must be in the range
2664 * would reduce across all dimensions. This behavior was never
3176 * reduce across. Negative index is used to specify axis from the
3205 * reduce across. Negative index is used to specify axis from the
3241 * [dx, dy, dw, dh], where dx and dy is the relative correction factor
3243 * and height, dw and dh is the log-scale relative correction factor
3935 * box deltas. The box deltas are encoded in the order of [dy, dx, dh, dw],
3938 * dh and dw is the log-scale relative correction factor for the width and
3952 * factor for dw in bounding box deltas.
[all …]
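The [dx, dy, dw, dh] encoding described above is the common log-scale box-delta parameterization. A hedged sketch of how such deltas are typically applied to an anchor box given as (center_x, center_y, width, height); this mirrors the description in the spec excerpt, not the NNAPI reference implementation:

    import math


    def apply_box_deltas(box, deltas):
        cx, cy, w, h = box        # anchor box: center and size
        dx, dy, dw, dh = deltas
        new_cx = cx + dx * w      # dx, dy: relative correction of the center
        new_cy = cy + dy * h
        new_w = w * math.exp(dw)  # dw, dh: log-scale correction of width/height
        new_h = h * math.exp(dh)
        return new_cx, new_cy, new_w, new_h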
/aosp_15_r20/hardware/interfaces/neuralnetworks/1.2/
types.hal
1930 * Reduces the input tensor along the given dimensions to reduce. Unless
1945 * to reduce. Must be in the range
1950 * would reduce across all dimensions. This behavior was never
2245 * reduce across. Negative index is used to specify axis from the
2270 * reduce across. Negative index is used to specify axis from the
2301 * [dx, dy, dw, dh], where dx and dy is the relative correction factor
2303 * and height, dw and dh is the log-scale relative correction factor
2827 * box deltas. The box deltas are encoded in the order of [dy, dx, dh, dw],
2830 * dh and dw is the log-scale relative correction factor for the width and
2844 * factor for dw in bounding box deltas.
[all …]
/aosp_15_r20/hardware/interfaces/neuralnetworks/1.3/
types.hal
1986 * Reduces the input tensor along the given dimensions to reduce. Unless
2002 * to reduce. Must be in the range
2007 * would reduce across all dimensions. This behavior was never
2320 * reduce across. Negative index is used to specify axis from the
2346 * reduce across. Negative index is used to specify axis from the
2377 * [dx, dy, dw, dh], where dx and dy is the relative correction factor
2379 * and height, dw and dh is the log-scale relative correction factor
3002 * box deltas. The box deltas are encoded in the order of [dy, dx, dh, dw],
3005 * dh and dw is the log-scale relative correction factor for the width and
3019 * factor for dw in bounding box deltas.
[all …]
/aosp_15_r20/external/swiftshader/third_party/llvm-10.0/configs/common/include/llvm/IR/
IntrinsicImpl.inc
147 "llvm.experimental.vector.reduce.add",
148 "llvm.experimental.vector.reduce.and",
149 "llvm.experimental.vector.reduce.fmax",
150 "llvm.experimental.vector.reduce.fmin",
151 "llvm.experimental.vector.reduce.mul",
152 "llvm.experimental.vector.reduce.or",
153 "llvm.experimental.vector.reduce.smax",
154 "llvm.experimental.vector.reduce.smin",
155 "llvm.experimental.vector.reduce.umax",
156 "llvm.experimental.vector.reduce.umin",
[all …]
/aosp_15_r20/external/swiftshader/third_party/llvm-subzero/build/Android/include/llvm/IR/
Intrinsics.gen
5022 x86_avx512_mask_pmov_dw_128, // llvm.x86.avx512.mask.pmov.dw.128
5023 x86_avx512_mask_pmov_dw_256, // llvm.x86.avx512.mask.pmov.dw.256
5024 x86_avx512_mask_pmov_dw_512, // llvm.x86.avx512.mask.pmov.dw.512
5025 x86_avx512_mask_pmov_dw_mem_128, // llvm.x86.avx512.mask.pmov.dw.mem.128
5026 x86_avx512_mask_pmov_dw_mem_256, // llvm.x86.avx512.mask.pmov.dw.mem.256
5027 x86_avx512_mask_pmov_dw_mem_512, // llvm.x86.avx512.mask.pmov.dw.mem.512
5058 x86_avx512_mask_pmovs_dw_128, // llvm.x86.avx512.mask.pmovs.dw.128
5059 x86_avx512_mask_pmovs_dw_256, // llvm.x86.avx512.mask.pmovs.dw.256
5060 x86_avx512_mask_pmovs_dw_512, // llvm.x86.avx512.mask.pmovs.dw.512
5061 x86_avx512_mask_pmovs_dw_mem_128, // llvm.x86.avx512.mask.pmovs.dw.mem.128
[all …]
/aosp_15_r20/external/swiftshader/third_party/llvm-subzero/build/Linux/include/llvm/IR/
Intrinsics.gen
5022 x86_avx512_mask_pmov_dw_128, // llvm.x86.avx512.mask.pmov.dw.128
5023 x86_avx512_mask_pmov_dw_256, // llvm.x86.avx512.mask.pmov.dw.256
5024 x86_avx512_mask_pmov_dw_512, // llvm.x86.avx512.mask.pmov.dw.512
5025 x86_avx512_mask_pmov_dw_mem_128, // llvm.x86.avx512.mask.pmov.dw.mem.128
5026 x86_avx512_mask_pmov_dw_mem_256, // llvm.x86.avx512.mask.pmov.dw.mem.256
5027 x86_avx512_mask_pmov_dw_mem_512, // llvm.x86.avx512.mask.pmov.dw.mem.512
5058 x86_avx512_mask_pmovs_dw_128, // llvm.x86.avx512.mask.pmovs.dw.128
5059 x86_avx512_mask_pmovs_dw_256, // llvm.x86.avx512.mask.pmovs.dw.256
5060 x86_avx512_mask_pmovs_dw_512, // llvm.x86.avx512.mask.pmovs.dw.512
5061 x86_avx512_mask_pmovs_dw_mem_128, // llvm.x86.avx512.mask.pmovs.dw.mem.128
[all …]
/aosp_15_r20/external/swiftshader/third_party/llvm-subzero/build/MacOS/include/llvm/IR/
Intrinsics.gen
5004 x86_avx512_mask_pmov_dw_128, // llvm.x86.avx512.mask.pmov.dw.128
5005 x86_avx512_mask_pmov_dw_256, // llvm.x86.avx512.mask.pmov.dw.256
5006 x86_avx512_mask_pmov_dw_512, // llvm.x86.avx512.mask.pmov.dw.512
5007 x86_avx512_mask_pmov_dw_mem_128, // llvm.x86.avx512.mask.pmov.dw.mem.128
5008 x86_avx512_mask_pmov_dw_mem_256, // llvm.x86.avx512.mask.pmov.dw.mem.256
5009 x86_avx512_mask_pmov_dw_mem_512, // llvm.x86.avx512.mask.pmov.dw.mem.512
5040 x86_avx512_mask_pmovs_dw_128, // llvm.x86.avx512.mask.pmovs.dw.128
5041 x86_avx512_mask_pmovs_dw_256, // llvm.x86.avx512.mask.pmovs.dw.256
5042 x86_avx512_mask_pmovs_dw_512, // llvm.x86.avx512.mask.pmovs.dw.512
5043 x86_avx512_mask_pmovs_dw_mem_128, // llvm.x86.avx512.mask.pmovs.dw.mem.128
[all …]
/aosp_15_r20/external/swiftshader/third_party/llvm-subzero/build/Windows/include/llvm/IR/
Intrinsics.gen
5022 x86_avx512_mask_pmov_dw_128, // llvm.x86.avx512.mask.pmov.dw.128
5023 x86_avx512_mask_pmov_dw_256, // llvm.x86.avx512.mask.pmov.dw.256
5024 x86_avx512_mask_pmov_dw_512, // llvm.x86.avx512.mask.pmov.dw.512
5025 x86_avx512_mask_pmov_dw_mem_128, // llvm.x86.avx512.mask.pmov.dw.mem.128
5026 x86_avx512_mask_pmov_dw_mem_256, // llvm.x86.avx512.mask.pmov.dw.mem.256
5027 x86_avx512_mask_pmov_dw_mem_512, // llvm.x86.avx512.mask.pmov.dw.mem.512
5058 x86_avx512_mask_pmovs_dw_128, // llvm.x86.avx512.mask.pmovs.dw.128
5059 x86_avx512_mask_pmovs_dw_256, // llvm.x86.avx512.mask.pmovs.dw.256
5060 x86_avx512_mask_pmovs_dw_512, // llvm.x86.avx512.mask.pmovs.dw.512
5061 x86_avx512_mask_pmovs_dw_mem_128, // llvm.x86.avx512.mask.pmovs.dw.mem.128
[all …]
/aosp_15_r20/external/swiftshader/third_party/llvm-subzero/build/Fuchsia/include/llvm/IR/
Intrinsics.gen
5022 x86_avx512_mask_pmov_dw_128, // llvm.x86.avx512.mask.pmov.dw.128
5023 x86_avx512_mask_pmov_dw_256, // llvm.x86.avx512.mask.pmov.dw.256
5024 x86_avx512_mask_pmov_dw_512, // llvm.x86.avx512.mask.pmov.dw.512
5025 x86_avx512_mask_pmov_dw_mem_128, // llvm.x86.avx512.mask.pmov.dw.mem.128
5026 x86_avx512_mask_pmov_dw_mem_256, // llvm.x86.avx512.mask.pmov.dw.mem.256
5027 x86_avx512_mask_pmov_dw_mem_512, // llvm.x86.avx512.mask.pmov.dw.mem.512
5058 x86_avx512_mask_pmovs_dw_128, // llvm.x86.avx512.mask.pmovs.dw.128
5059 x86_avx512_mask_pmovs_dw_256, // llvm.x86.avx512.mask.pmovs.dw.256
5060 x86_avx512_mask_pmovs_dw_512, // llvm.x86.avx512.mask.pmovs.dw.512
5061 x86_avx512_mask_pmovs_dw_mem_128, // llvm.x86.avx512.mask.pmovs.dw.mem.128
[all …]
/aosp_15_r20/out/soong/.intermediates/external/llvm/llvm-gen-intrinsics/gen/llvm/IR/
Intrinsics.gen
4939 x86_avx512_mask_pmov_dw_128, // llvm.x86.avx512.mask.pmov.dw.128
4940 x86_avx512_mask_pmov_dw_256, // llvm.x86.avx512.mask.pmov.dw.256
4941 x86_avx512_mask_pmov_dw_512, // llvm.x86.avx512.mask.pmov.dw.512
4942 x86_avx512_mask_pmov_dw_mem_128, // llvm.x86.avx512.mask.pmov.dw.mem.128
4943 x86_avx512_mask_pmov_dw_mem_256, // llvm.x86.avx512.mask.pmov.dw.mem.256
4944 x86_avx512_mask_pmov_dw_mem_512, // llvm.x86.avx512.mask.pmov.dw.mem.512
4975 x86_avx512_mask_pmovs_dw_128, // llvm.x86.avx512.mask.pmovs.dw.128
4976 x86_avx512_mask_pmovs_dw_256, // llvm.x86.avx512.mask.pmovs.dw.256
4977 x86_avx512_mask_pmovs_dw_512, // llvm.x86.avx512.mask.pmovs.dw.512
4978 x86_avx512_mask_pmovs_dw_mem_128, // llvm.x86.avx512.mask.pmovs.dw.mem.128
[all …]
/aosp_15_r20/external/swiftshader/third_party/llvm-16.0/configs/common/include/llvm/IR/
IntrinsicImpl.inc
359 "llvm.vector.reduce.add",
360 "llvm.vector.reduce.and",
361 "llvm.vector.reduce.fadd",
362 "llvm.vector.reduce.fmax",
363 "llvm.vector.reduce.fmin",
364 "llvm.vector.reduce.fmul",
365 "llvm.vector.reduce.mul",
366 "llvm.vector.reduce.or",
367 "llvm.vector.reduce.smax",
368 "llvm.vector.reduce.smin",
[all …]
/aosp_15_r20/prebuilts/clang/host/linux-x86/clang-r522817/include/llvm/IR/
IntrinsicImpl.inc
448 "llvm.vector.reduce.add",
449 "llvm.vector.reduce.and",
450 "llvm.vector.reduce.fadd",
451 "llvm.vector.reduce.fmax",
452 "llvm.vector.reduce.fmaximum",
453 "llvm.vector.reduce.fmin",
454 "llvm.vector.reduce.fminimum",
455 "llvm.vector.reduce.fmul",
456 "llvm.vector.reduce.mul",
457 "llvm.vector.reduce.or",
[all …]
/aosp_15_r20/prebuilts/clang/host/linux-x86/clang-r530567b/include/llvm/IR/
IntrinsicImpl.inc
448 "llvm.vector.reduce.add",
449 "llvm.vector.reduce.and",
450 "llvm.vector.reduce.fadd",
451 "llvm.vector.reduce.fmax",
452 "llvm.vector.reduce.fmaximum",
453 "llvm.vector.reduce.fmin",
454 "llvm.vector.reduce.fminimum",
455 "llvm.vector.reduce.fmul",
456 "llvm.vector.reduce.mul",
457 "llvm.vector.reduce.or",
[all …]
/aosp_15_r20/prebuilts/clang/host/linux-x86/clang-r530567/include/llvm/IR/
IntrinsicImpl.inc
448 "llvm.vector.reduce.add",
449 "llvm.vector.reduce.and",
450 "llvm.vector.reduce.fadd",
451 "llvm.vector.reduce.fmax",
452 "llvm.vector.reduce.fmaximum",
453 "llvm.vector.reduce.fmin",
454 "llvm.vector.reduce.fminimum",
455 "llvm.vector.reduce.fmul",
456 "llvm.vector.reduce.mul",
457 "llvm.vector.reduce.or",
[all …]
/aosp_15_r20/prebuilts/clang/host/linux-x86/clang-r536225/include/llvm/IR/
IntrinsicImpl.inc
456 "llvm.vector.reduce.add",
457 "llvm.vector.reduce.and",
458 "llvm.vector.reduce.fadd",
459 "llvm.vector.reduce.fmax",
460 "llvm.vector.reduce.fmaximum",
461 "llvm.vector.reduce.fmin",
462 "llvm.vector.reduce.fminimum",
463 "llvm.vector.reduce.fmul",
464 "llvm.vector.reduce.mul",
465 "llvm.vector.reduce.or",
[all …]
/aosp_15_r20/external/cldr/tools/cldr-code/src/main/resources/org/unicode/cldr/util/data/transforms/
internal_raw_IPA.txt
1654 addwest ædwˈɛst
2073 advena %29421 ˈɑdwɛnˌɑ
2099 adversa ɑdwˈɛrsɑ
2100 adversaria ˌɑdwɛrsˈɑrɪɑ
2163 adwell ədwˈɛl
9921 assiduous %27641 əsˈɪdwəs, əsˈɪdʒuəs
9922 assiduously %29567 əsˈɪdwəsli, əsˈɪdʒuəsli
10725 audwin ˈɔdwɪn
14656 bed-wetting bˈɛdwˌɛtɪŋ
14732 bedouin %29599 bˈɛdoən, bˈɛduɪn, bˈɛduˌɪn, bˈɛdwɪn, bˈɛdəwən
[all …]
internal_raw_IPA-old.txt
1980 addwest ædwˈɛst
2474 advena ˈɑdwɛnˌɑ
2574 adwell ədwˈɛl
11593 assiduous %19533 əsˈɪdwəs, əsˈɪʤuəs
11594 assiduously %22180 əsˈɪdwəsli, əsˈɪʤuəsli
12542 audwin ˈɔdwɪn
17240 bed-wetting bˈɛdwˌɛtɪŋ
17330 bedouin %20246 bˈɛdoən, bˈɛduɪn, bˈɛduˌɪn, bˈɛdwɪn, bˈɛdəwən
17383 beduin bˈɛduɪn, bˈɛdwɪn
17384 bedward bˈɛdwərd
[all …]
/aosp_15_r20/external/crosvm/docs/book/
mermaid.min.js
1 reduce(ad,0)/t.length}(n),e.y=function(t){return 1+t.reduce(od,0)}(n)):(e.x=a?o+=t(e,a):0,e.y=0,a=…
24 reduce:n(142),size:n(279),transform:n(285),union:n(286),values:n(147)}}catch(t){}r||(r=window._),t…
/aosp_15_r20/external/mesa3d/docs/relnotes/
19.3.0.rst
2093 - iris: Track per-stage bind history, reduce work accordingly
2691 - intel/fs: make scan/reduce work with SIMD32 when it fits 2 registers
2731 - radeonsi: align sdma byte count to dw
2805 - aco: CSE readlane/readfirstlane/permute/reduce with the same exec
