
Searched full:inference (Results 1 – 25 of 3016) sorted by relevance


/aosp_15_r20/external/apache-commons-math/src/main/java/org/apache/commons/math3/stat/inference/
TestUtils.java
17 package org.apache.commons.math3.stat.inference;
36 * A collection of static methods to create inference test instances or to
37 * perform inference tests.
68 * @see org.apache.commons.math3.stat.inference.TTest#homoscedasticT(double[], double[])
76 …* @see org.apache.commons.math3.stat.inference.TTest#homoscedasticT(org.apache.commons.math3.stat.…
85 …* @see org.apache.commons.math3.stat.inference.TTest#homoscedasticTTest(double[], double[], double)
95 * @see org.apache.commons.math3.stat.inference.TTest#homoscedasticTTest(double[], double[])
103 …* @see org.apache.commons.math3.stat.inference.TTest#homoscedasticTTest(org.apache.commons.math3.s…
112 * @see org.apache.commons.math3.stat.inference.TTest#pairedT(double[], double[])
121 * @see org.apache.commons.math3.stat.inference.TTest#pairedTTest(double[], double[], double)
[all …]
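The TestUtils entry above is the commons-math facade for statistical inference tests. A minimal sketch of how its static t-test helpers are typically called, using made-up sample arrays (any two double[] with at least two observations each would do):

    import org.apache.commons.math3.stat.inference.TestUtils;

    public class TTestSketch {
        public static void main(String[] args) {
            // Illustrative samples only; real data would come from measurements.
            double[] control = {91.0, 87.5, 89.2, 93.1, 90.4};
            double[] treated = {95.3, 92.8, 96.1, 94.0, 93.7};

            // Two-sample t statistic under the equal-variance (homoscedastic) assumption.
            double t = TestUtils.homoscedasticT(control, treated);

            // Two-sided p-value for the same test.
            double p = TestUtils.homoscedasticTTest(control, treated);

            // Boolean form: reject the null hypothesis at the 5% significance level?
            boolean reject = TestUtils.homoscedasticTTest(control, treated, 0.05);

            System.out.printf("t=%.4f  p=%.4f  reject=%b%n", t, p, reject);
        }
    }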
/aosp_15_r20/external/aws-sdk-java-v2/services/elasticinference/src/main/resources/codegen-resources/
service-2.json
5 "endpointPrefix":"api.elastic-inference",
8 "serviceAbbreviation":"Amazon Elastic Inference",
9 "serviceFullName":"Amazon Elastic Inference",
10 "serviceId":"Elastic Inference",
12 "signingName":"elastic-inference",
13 "uid":"elastic-inference-2017-07-25"
29 …ng April 15, 2023, AWS will not onboard new customers to Amazon Elastic Inference (EI), and will h…
42 …ng April 15, 2023, AWS will not onboard new customers to Amazon Elastic Inference (EI), and will h…
57 …ng April 15, 2023, AWS will not onboard new customers to Amazon Elastic Inference (EI), and will h…
72 …lastic Inference Accelerator. </p> <p> February 15, 2023: Starting April 15, 2023, AWS will not on…
[all …]
endpoint-tests.json
7 "url": "https://api.elastic-inference.ap-northeast-1.amazonaws.com"
20 "url": "https://api.elastic-inference.ap-northeast-2.amazonaws.com"
33 "url": "https://api.elastic-inference.eu-west-1.amazonaws.com"
46 "url": "https://api.elastic-inference.us-east-1.amazonaws.com"
59 "url": "https://api.elastic-inference.us-east-2.amazonaws.com"
72 "url": "https://api.elastic-inference.us-west-2.amazonaws.com"
85 "url": "https://api.elastic-inference-fips.us-east-1.api.aws"
98 "url": "https://api.elastic-inference-fips.us-east-1.amazonaws.com"
111 "url": "https://api.elastic-inference.us-east-1.api.aws"
124 … "url": "https://api.elastic-inference-fips.cn-north-1.api.amazonwebservices.com.cn"
[all …]
/aosp_15_r20/external/apache-commons-math/src/main/java/org/apache/commons/math/stat/inference/
TestUtils.java
17 package org.apache.commons.math.stat.inference;
24 * A collection of static methods to create inference test instances or to
25 * perform inference tests.
156 * @see org.apache.commons.math.stat.inference.TTest#homoscedasticT(double[], double[])
164 …* @see org.apache.commons.math.stat.inference.TTest#homoscedasticT(org.apache.commons.math.stat.de…
173 … * @see org.apache.commons.math.stat.inference.TTest#homoscedasticTTest(double[], double[], double)
182 * @see org.apache.commons.math.stat.inference.TTest#homoscedasticTTest(double[], double[])
190 …* @see org.apache.commons.math.stat.inference.TTest#homoscedasticTTest(org.apache.commons.math.sta…
199 * @see org.apache.commons.math.stat.inference.TTest#pairedT(double[], double[])
207 * @see org.apache.commons.math.stat.inference.TTest#pairedTTest(double[], double[], double)
[all …]
/aosp_15_r20/external/aws-sdk-java-v2/services/lookoutequipment/src/main/resources/codegen-resources/
service-2.json
51 …mentation":"<p> Creates a scheduled inference. Scheduling an inference is setting up a continuous …
107 …"documentation":"<p>Creates a machine learning model for data inference. </p> <p>A machine-learnin…
142 …ataset and associated artifacts. The operation will check to see if any inference scheduler or dat…
159 …"documentation":"<p>Deletes an inference scheduler that has been set up. Prior inference results w…
210 …zon Lookout for Equipment. This will prevent it from being used with an inference scheduler, even …
295 …"documentation":"<p> Specifies information about the inference scheduler being used, including nam…
484 …"documentation":"<p> Lists all inference events that have been found for the specified inference s…
501 …"documentation":"<p> Lists all inference executions that have been performed by the specified infe…
517 …"documentation":"<p>Retrieves a list of all inference schedulers currently available for your acco…
688 "documentation":"<p>Starts an inference scheduler. </p>"
[all …]
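The Lookout for Equipment entry centres on inference schedulers. Assuming the usual AWS SDK for Java v2 codegen conventions for this service model (the client and member names below are inferred from those conventions, not verified), starting an existing scheduler might look roughly like this:

    import software.amazon.awssdk.services.lookoutequipment.LookoutEquipmentClient;
    import software.amazon.awssdk.services.lookoutequipment.model.StartInferenceSchedulerRequest;

    public class StartSchedulerSketch {
        public static void main(String[] args) {
            try (LookoutEquipmentClient client = LookoutEquipmentClient.create()) {
                // "my-scheduler" is a placeholder for an inference scheduler that
                // was previously created with CreateInferenceScheduler.
                client.startInferenceScheduler(StartInferenceSchedulerRequest.builder()
                        .inferenceSchedulerName("my-scheduler")
                        .build());
            }
        }
    }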
/aosp_15_r20/external/aws-sdk-java-v2/services/sagemakerruntime/src/main/resources/codegen-resources/
service-2.json
47 Inference requests sent to this API are enqueued for asynchronous processing. The processing of th…
65 inference response as a stream. The inference stream provides the response payload incrementally a…
172 …"documentation":"<p>The desired MIME type of the inference response from the model container.</p>",
178 …"documentation":"<p>Provides additional information about a request for an inference submitted to …
184 …"documentation":"<p>The identifier for the inference request. Amazon SageMaker will generate an id…
186 "locationName":"X-Amzn-SageMaker-Inference-Id"
190 "documentation":"<p>The Amazon S3 URI where the inference request payload is stored.</p>",
213 …"documentation":"<p>Identifier for an inference request. This will be the same as the <code>Infere…
217 … "documentation":"<p>The Amazon S3 URI where the inference response payload is stored.</p>",
223 …"documentation":"<p>The Amazon S3 URI where the inference failure response payload is stored.</p>",
[all …]
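The SageMaker Runtime entry above mostly concerns the asynchronous and streaming variants; the plain synchronous call through the same AWS SDK for Java v2 client is sketched below. The endpoint name and JSON payload are placeholders, and the payload format depends entirely on the deployed model:

    import software.amazon.awssdk.core.SdkBytes;
    import software.amazon.awssdk.services.sagemakerruntime.SageMakerRuntimeClient;
    import software.amazon.awssdk.services.sagemakerruntime.model.InvokeEndpointRequest;
    import software.amazon.awssdk.services.sagemakerruntime.model.InvokeEndpointResponse;

    public class InvokeEndpointSketch {
        public static void main(String[] args) {
            try (SageMakerRuntimeClient runtime = SageMakerRuntimeClient.create()) {
                InvokeEndpointRequest request = InvokeEndpointRequest.builder()
                        .endpointName("my-endpoint")           // placeholder endpoint name
                        .contentType("application/json")       // MIME type of the request payload
                        .accept("application/json")            // desired MIME type of the inference response
                        .body(SdkBytes.fromUtf8String("{\"instances\": [[1.0, 2.0, 3.0]]}"))
                        .build();

                InvokeEndpointResponse response = runtime.invokeEndpoint(request);
                System.out.println(response.body().asUtf8String()); // inference result
            }
        }
    }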
/aosp_15_r20/external/tensorflow/tensorflow/lite/g3doc/guide/
inference.md
1 # TensorFlow Lite inference
3 The term *inference* refers to the process of executing a TensorFlow Lite model
5 inference with a TensorFlow Lite model, you must run it through an
11 an inference using C++, Java, and Python, plus links to other resources for each
18 TensorFlow Lite inference typically follows the following steps:
31 1. **Running inference**
39 When you receive results from the model inference, you must interpret the
48 TensorFlow inference APIs are provided for most common mobile/embedded platforms
53 use. TensorFlow Lite is designed for fast inference on small devices, so it
59 inputs, and retrieve inference outputs.
[all …]
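The guide excerpt above lists the load / transform / run / interpret steps; in the TensorFlow Lite Java API those steps map onto org.tensorflow.lite.Interpreter roughly as follows (the model path, input values and the 10-class output shape are illustrative assumptions):

    import java.io.File;
    import org.tensorflow.lite.Interpreter;

    public class TfLiteInferenceSketch {
        public static void main(String[] args) {
            // 1. Load the model (path is a placeholder).
            File modelFile = new File("model.tflite");
            Interpreter interpreter = new Interpreter(modelFile);

            // 2. Transform input data into the shape the model expects.
            float[][] input = {{0.1f, 0.2f, 0.3f, 0.4f}};
            float[][] output = new float[1][10]; // assumed 10-class classifier

            // 3. Run inference.
            interpreter.run(input, output);

            // 4. Interpret the output, e.g. pick the highest-scoring class.
            int best = 0;
            for (int i = 1; i < output[0].length; i++) {
                if (output[0][i] > output[0][best]) {
                    best = i;
                }
            }
            System.out.println("Top class index: " + best);

            // Release the interpreter's native resources when done.
            interpreter.close();
        }
    }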
/aosp_15_r20/external/armnn/samples/ObjectDetection/include/
ObjectDetectionPipeline.hpp
18 …eric object detection pipeline with 3 steps: data pre-processing, inference execution and inference
27 * @param executor - unique pointer to inference runner
28 * @param decoder - unique pointer to inference results decoder
44 * @brief Executes inference
46 * Calls inference runner provided during instance construction.
48 * @param[in] processed - input inference data. Data type should be aligned with input tensor.
49 * @param[out] result - raw floating point inference results.
51 virtual void Inference(const cv::Mat& processed, common::InferenceResults<float>& result);
54 * @brief Standard inference results post-processing implementation.
56 * Decodes inference results using decoder provided during construction.
[all …]
/aosp_15_r20/external/pytorch/docs/cpp/source/notes/
inference_mode.rst
1 Inference Mode
12 all newly allocated (non-view) tensors are marked as inference tensors. Inference tensors:
21 A non-view tensor is an inference tensor if and only if it was allocated inside ``InferenceMode``.
22 A view tensor is an inference tensor if and only if the tensor it is a view of is an inference tens…
27 This applies to both inference tensors and normal tensors.
28 - View operations on inference tensors do not do view tracking. View and non-view inference tensors…
30 - Inplace operations on inference tensors are guaranteed not to do a version bump.
37 In production use of PyTorch for inference workload, we have seen a proliferation
40 current colloquial of this guard for inference workload is unsafe: it's possible to
49 1. Users trying to run workload in inference only mode (like loading a pretrained JIT model and
[all …]
/aosp_15_r20/external/pytorch/test/cpp/api/
inference_mode.cpp
25 - Autograd=false, ADInplaceOrView=false (inference tensor)
26 Tensors created in InferenceMode are mostly inference tensors. The only
64 // New tensor created through constructors are inference tensors. in TEST()
69 // requires_grad doesn't change inference tensor behavior inside in TEST()
165 "Inplace update to inference tensor outside InferenceMode is not allowed"); in TEST()
308 "A view was created in inference mode and is being modified inplace") in TEST()
328 // add(Tensor, Tensor) is safe with inference tensor since it doesn't save in TEST()
335 // leaf inference tensor with requires_grad=true can still have gradient. in TEST()
344 c.mul(s), "Inference tensors cannot be saved for backward."); in TEST()
346 // Inference tensor in TensorList input in TEST()
[all …]
/aosp_15_r20/external/tensorflow/tensorflow/lite/delegates/xnnpack/
README.md
3 XNNPACK is a highly optimized library of neural network inference operators for
6 library as an inference engine for TensorFlow Lite.
12 for floating-point inference.
85 inference by default.**
129 // Run inference using XNNPACK
178 // and inference.
197 The weights cache has to be finalized before any inference, it will be an error
421 XNNPACK supports half-precision (using IEEE FP16 format) inference for a subset
423 inference when the following conditions are met:
431 * IEEE FP16 inference is supported for every floating-point operator in the
[all …]
/aosp_15_r20/external/javaparser/javaparser-symbol-solver-core/src/main/java/com/github/javaparser/symbolsolver/resolution/typeinference/
TypeInference.java
65 // throw new IllegalArgumentException("Type inference unnecessary as type arguments have… in instantiationInference()
71 …// - Where P1, ..., Pp (p ≥ 1) are the type parameters of m, let α1, ..., αp be inference variable… in instantiationInference()
134 // inference variables in B2 succeeds (§18.4). in instantiationInference()
184 …:=α1, ..., Pp:=αp] defined in §18.5.1 to replace the type parameters of m with inference variables. in invocationTypeInferenceBoundsSetB3()
186 … in §18.5.1. (While it was necessary in §18.5.1 to demonstrate that the inference variables in B2 … in invocationTypeInferenceBoundsSetB3()
197 …// for fresh inference variables β1, ..., βn, the constraint formula ‹G<β1, ..., βn> → T› is r… in invocationTypeInferenceBoundsSetB3()
200 // - Otherwise, if R θ is an inference variable α, and one of the following is true: in invocationTypeInferenceBoundsSetB3()
256 inference variable α can influence an inference variable β if α depends on the resolution of β (§1… in invocationTypeInference()
272 … //Finally, if B4 does not contain the bound false, the inference variables in B4 are resolved. in invocationTypeInference()
274 …//If resolution succeeds with instantiations T1, ..., Tp for inference variables α1, ..., αp, let … in invocationTypeInference()
[all …]
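The class above implements the JLS §18 machinery (instantiation inference, invocation type inference) that the symbol solver applies when resolving generic calls. The kind of source it has to handle is an ordinary generic invocation with no explicit type arguments, for example:

    import java.util.Arrays;
    import java.util.Collections;
    import java.util.List;

    public class InferenceTarget {
        // Generic method: T is never written at the call sites below.
        static <T> T firstOf(List<T> values) {
            return values.get(0);
        }

        public static void main(String[] args) {
            // Instantiation inference binds T := String from the argument type,
            // which is what instantiationInference() models for the resolver.
            String first = firstOf(Arrays.asList("alpha", "beta"));

            // Invocation type inference also uses the target type: the fresh
            // inference variable for T here is constrained by the assignment context.
            List<Integer> empty = Collections.emptyList();

            System.out.println(first + " / " + empty.size());
        }
    }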
/aosp_15_r20/frameworks/base/packages/NeuralNetworks/framework/platform/java/android/app/ondeviceintelligence/
InferenceInfo.java
29 * This class represents the information related to an inference event to track the resource usage
30 * as a function of inference time.
44 * Inference start time (milliseconds from the epoch time).
49 * Inference end time (milliseconds from the epoch time).
62 * @param startTimeMs Inference start time (milliseconds from the epoch time).
63 * @param endTimeMs Inference end time (milliseconds from the epoch time).
111 * Returns the inference start time in milliseconds from the epoch time.
113 * @return the inference start time in milliseconds from the epoch time.
121 * Returns the inference end time in milliseconds from the epoch time.
123 * @return the inference end time in milliseconds from the epoch time.
[all …]
/aosp_15_r20/frameworks/base/packages/NeuralNetworks/framework/module/java/android/app/ondeviceintelligence/
InferenceInfo.java
29 * This class represents the information related to an inference event to track the resource usage
30 * as a function of inference time.
44 * Inference start time (milliseconds from the epoch time).
49 * Inference end time (milliseconds from the epoch time).
62 * @param startTimeMs Inference start time (milliseconds from the epoch time).
63 * @param endTimeMs Inference end time (milliseconds from the epoch time).
111 * Returns the inference start time in milliseconds from the epoch time.
113 * @return the inference start time in milliseconds from the epoch time.
121 * Returns the inference end time in milliseconds from the epoch time.
123 * @return the inference end time in milliseconds from the epoch time.
[all …]
/aosp_15_r20/external/armnn/
InstallationViaAptRepository.md
153 libarmnn-cpuref-backend23 - Arm NN is an inference engine for CPUs, GPUs and NPUs
154 libarmnn-cpuref-backend24 - Arm NN is an inference engine for CPUs, GPUs and NPUs
155 libarmnn-dev - Arm NN is an inference engine for CPUs, GPUs and NPUs
156 …libarmnntfliteparser-dev - Arm NN is an inference engine for CPUs, GPUs and NPUs # Note: removal o…
157 libarmnn-tfliteparser23 - Arm NN is an inference engine for CPUs, GPUs and NPUs
158 …libarmnntfliteparser24 - Arm NN is an inference engine for CPUs, GPUs and NPUs # Note: removal of …
159 …libarmnntfliteparser24.5 - Arm NN is an inference engine for CPUs, GPUs and NPUs # Note: removal o…
160 libarmnn23 - Arm NN is an inference engine for CPUs, GPUs and NPUs
161 libarmnn24 - Arm NN is an inference engine for CPUs, GPUs and NPUs
162 libarmnn25 - Arm NN is an inference engine for CPUs, GPUs and NPUs
[all …]
/aosp_15_r20/external/aws-sdk-java-v2/services/bedrockruntime/src/main/resources/codegen-resources/
service-2.json
35 inference using the input provided in the request body. You use InvokeModel to run inference for t…
58 inference using the input provided. Return the response in a stream.</p> <p>For more information, …
105 …://docs.aws.amazon.com/bedrock/latest/userguide/model-parameters.html\">Inference parameters</a>.<…
115 …"documentation":"<p>The desired MIME type of the inference body in the response. The default value…
137 Inference response from the model in the format specified in the content-type header field. To see…
141 "documentation":"<p>The MIME type of the inference result.</p>",
157 Inference input in the format specified by the content-type. To see the format and content of this…
167 …"documentation":"<p>The desired MIME type of the inference body in the response. The default value…
189 Inference response from the model in the format specified by Content-Type. To see the format and c…
193 "documentation":"<p>The MIME type of the inference result.</p>",
[all …]
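Against the same AWS SDK for Java v2 codegen, a minimal synchronous InvokeModel call matching the request/response shape described above could look like the sketch below; the model id and body are placeholders, and the body format is dictated by the chosen model's inference parameters:

    import software.amazon.awssdk.core.SdkBytes;
    import software.amazon.awssdk.services.bedrockruntime.BedrockRuntimeClient;
    import software.amazon.awssdk.services.bedrockruntime.model.InvokeModelRequest;
    import software.amazon.awssdk.services.bedrockruntime.model.InvokeModelResponse;

    public class InvokeModelSketch {
        public static void main(String[] args) {
            try (BedrockRuntimeClient bedrock = BedrockRuntimeClient.create()) {
                InvokeModelRequest request = InvokeModelRequest.builder()
                        .modelId("amazon.titan-text-express-v1") // placeholder model id
                        .contentType("application/json")         // MIME type of the inference input
                        .accept("application/json")              // desired MIME type of the inference response
                        .body(SdkBytes.fromUtf8String("{\"inputText\":\"Hello\"}"))
                        .build();

                InvokeModelResponse response = bedrock.invokeModel(request);
                // The body is the inference result, in the format reported by contentType().
                System.out.println(response.body().asUtf8String());
            }
        }
    }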
/aosp_15_r20/external/armnn/samples/KeywordSpotting/include/
KeywordSpottingPipeline.hpp
16 …eric Keyword Spotting pipeline with 3 steps: data pre-processing, inference execution and inference
26 * @param executor - unique pointer to inference runner
27 * @param decoder - unique pointer to inference results decoder
36 * Preprocesses and prepares the data for inference by
45 * @brief Executes inference
47 * Calls inference runner provided during instance construction.
49 …* @param[in] preprocessedData - input inference data. Data type should be aligned with input tenso…
50 * @param[out] result - raw inference results.
52 …void Inference(const std::vector<int8_t>& preprocessedData, common::InferenceResults<int8_t>& resu…
55 * @brief Standard inference results post-processing implementation.
[all …]
/aosp_15_r20/external/armnn/samples/SpeechRecognition/
Readme.md
16 sample rate. Top level inference API is provided by Arm NN library.
104 3. Executing Inference
106 5. Decoding and Processing Inference Output
163 Using the `Optimize()` function we optimize the graph for inference and load the optimized network …
195 …on pipeline has 3 steps to perform, data pre-processing, run inference and decode inference results
202 …ositioned window of data, sized appropriately for the given model, to pre-process before inference.
212 After all the MFCCs needed for an inference have been extracted from the audio data, we convolve th…
215 #### Executing Inference
219 asrPipeline->Inference<int8_t>(preprocessedData, results);
221 Inference step will call `ArmnnNetworkExecutor::Run` method that will prepare input tensors and exe…
[all …]
/aosp_15_r20/external/armnn/samples/SpeechRecognition/include/
SpeechRecognitionPipeline.hpp
16 …ic Speech Recognition pipeline with 3 steps: data pre-processing, inference execution and inference
26 * @param executor - unique pointer to inference runner
27 * @param decoder - unique pointer to inference results decoder
35 * Preprocesses and prepares the data for inference by
51 * @brief Executes inference
53 * Calls inference runner provided during instance construction.
55 …* @param[in] preprocessedData - input inference data. Data type should be aligned with input tenso…
56 * @param[out] result - raw inference results.
59 … void Inference(const std::vector<T>& preprocessedData, common::InferenceResults<int8_t>& result) in Inference() function in asr::ASRPipeline
66 * @brief Standard inference results post-processing implementation.
[all …]
/aosp_15_r20/external/tensorflow/tensorflow/core/framework/
op_def_builder.h
17 // inference function for Op registration.
42 // A type inference function, called for each node during type inference
100 // Forward type inference function. This callable infers the return type of an
103 // Note that the type constructor and forward inference functions need not be
107 // forward inference function.
117 // These type inference functions are intermediate solutions as well: once the
119 // a solver-based type inference, it will replace these functions.
121 // TODO(mdan): Merge with shape inference.
122 // TODO(mdan): Replace with a union-based type inference algorithm.
125 // Reverse type inference function. This callable infers some input types
[all …]
full_type_inference_util.h
33 // inference functions.
40 // same can be said about the shape inference function.
42 // Note: Unlike type constructors, which describe op definitions, type inference
46 // Helper for a no-op type inference function that indicates type inference
48 // This is the same as not defining a type inference function at all, but
52 // Helper for a type inference function which has the same type as the i'th
59 // Helper for a type inference function which has the same type as a variadic
74 // Helper for the type inference counterpart of Unary, that is (U ->
116 // Auxiliary constructs to help creation of type inference functions.
117 // TODO(mdan): define these as type inference functions as well.
[all …]
/aosp_15_r20/external/armnn/samples/KeywordSpotting/
Readme.md
17 …data from file, and to re-sample to the expected sample rate. Top level inference API is provided …
123 3. Executing Inference
125 5. Decoding and Processing Inference Output
188 Using the `Optimize()` function we optimize the graph for inference and load the optimized network …
224 …ng pipeline has 3 steps to perform: data pre-processing, run inference and decode inference result…
231 …ositioned window of data, sized appropriately for the given model, to pre-process before inference.
241 After all the MFCCs needed for an inference have been extracted from the audio data they are concat…
243 #### Executing Inference
248 kwsPipeline->Inference(preprocessedData, results);
251 Inference step will call `ArmnnNetworkExecutor::Run` method that will prepare input tensors and exe…
[all …]
/aosp_15_r20/external/aws-sdk-java-v2/services/applicationautoscaling/src/main/resources/codegen-resources/
service-2.json
318 inference component - The resource type is <code>inference-component</code> and the unique identif…
322 inference units for an Amazon Comprehend document classification endpoint.</p> </li> <li> <p> <cod…
350 inference component - The resource type is <code>inference-component</code> and the unique identif…
354 inference units for an Amazon Comprehend document classification endpoint.</p> </li> <li> <p> <cod…
377 inference component - The resource type is <code>inference-component</code> and the unique identif…
381 inference units for an Amazon Comprehend document classification endpoint.</p> </li> <li> <p> <cod…
400 inference component - The resource type is <code>inference-component</code> and the unique identif…
404 inference units for an Amazon Comprehend document classification endpoint.</p> </li> <li> <p> <cod…
439 inference component - The resource type is <code>inference-component</code> and the unique identif…
443 inference units for an Amazon Comprehend document classification endpoint.</p> </li> <li> <p> <cod…
[all …]
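To illustrate the resource naming above, registering a SageMaker endpoint variant (the longer-established resource type; inference components follow the same pattern with their own resource type and dimension) with Application Auto Scaling through the Java SDK v2 might look roughly like this; the resource id, dimension string and capacity bounds are placeholders:

    import software.amazon.awssdk.services.applicationautoscaling.ApplicationAutoScalingClient;
    import software.amazon.awssdk.services.applicationautoscaling.model.RegisterScalableTargetRequest;

    public class RegisterTargetSketch {
        public static void main(String[] args) {
            try (ApplicationAutoScalingClient scaling = ApplicationAutoScalingClient.create()) {
                scaling.registerScalableTarget(RegisterScalableTargetRequest.builder()
                        .serviceNamespace("sagemaker")
                        // Placeholder endpoint/variant resource id.
                        .resourceId("endpoint/my-endpoint/variant/my-variant")
                        .scalableDimension("sagemaker:variant:DesiredInstanceCount")
                        .minCapacity(1)
                        .maxCapacity(4)
                        .build());
            }
        }
    }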
/aosp_15_r20/external/pytorch/benchmarks/dynamo/
runner.py
11 -> python benchmarks/runner.py --suites=torchbench --inference
13 below) for inference, run them and visualize the logs.
16 -> python benchmarks/runner.py --print-run-commands --suites=torchbench --inference
19 -> python benchmarks/runner.py --visualize-logs --suites=torchbench --inference
22 -> python benchmarks/runner.py --suites=torchbench --inference --dtypes=float16
80 "inference": {
81 "aot_eager": "--inference --backend=aot_eager ",
82 "eager": "--inference --backend=eager ",
83 "ts_nnc": "--inference --speedup-ts ",
84 "ts_nvfuser": "--inference -n100 --speedup-ts --nvfuser ",
[all …]
/aosp_15_r20/external/armnn/samples/ObjectDetection/
Readme.md
371 3. Executing Inference
373 5. Decoding and Processing Inference Output
395 In order to interpret the result of running inference on the loaded network, it is required to load…
454 Using the `Optimize()` function we optimize the graph for inference and load the optimized network …
544 …on pipeline has 3 steps, to perform data pre-processing, run inference and decode inference results
565 Pre-processing step returns `cv::Mat` object containing data ready for inference.
567 #### Executing Inference
571 objectDetectionPipeline->Inference(processed, results);
573 Inference step will call `ArmnnNetworkExecutor::Run` method that will prepare input tensors and exe…
580 ##### Executing Inference utilizing the Arm NN C++ API argument
[all …]
