
Searched full:learning (Results 1 – 25 of 1484) sorted by relevance


/aosp_15_r20/external/tensorflow/tensorflow/python/keras/optimizer_v2/
legacy_learning_rate_decay.py 15 """Various learning rate decay functions."""
35 """Applies exponential decay to the learning rate.
37 When training a model, it is often recommended to lower the learning rate as
39 to a provided initial learning rate. It requires a `global_step` value to
40 compute the decayed learning rate. You can just pass a TensorFlow variable
43 The function returns the decayed learning rate. It is computed as:
51 integer division and the decayed learning rate follows a staircase function.
71 The initial learning rate.
78 staircase: Boolean. If `True` decay the learning rate at discrete intervals
84 learning rate.
[all …]
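
The decay formula this excerpt refers to is elided by the search view. As a minimal sketch of what exponential decay with an optional staircase computes, per the documented behavior (plain Python, not the actual TensorFlow source):

    import math

    def exponential_decay(initial_lr, global_step, decay_steps, decay_rate,
                          staircase=False):
        # decayed_lr = initial_lr * decay_rate ** (global_step / decay_steps)
        exponent = global_step / decay_steps
        if staircase:
            # Integer division makes the rate drop at discrete intervals.
            exponent = math.floor(exponent)
        return initial_lr * decay_rate ** exponent
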
learning_rate_schedule.py 15 """Various learning rate decay functions."""
33 """The learning rate schedule base class.
35 You can use a learning rate schedule to modulate how the learning rate
38 Several built-in learning rate schedules are available, such as
77 raise NotImplementedError("Learning rate schedule must override __call__")
81 raise NotImplementedError("Learning rate schedule must override get_config")
100 When training a model, it is often useful to lower the learning rate as
102 to an optimizer step, given a provided initial learning rate.
104 The schedule is a 1-arg callable that produces a decayed learning
106 the learning rate value across different invocations of optimizer functions.
[all …]
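
The raised NotImplementedErrors above show the subclass contract: override __call__ and get_config. A minimal sketch of a custom schedule against the public tf.keras API (the StepDecay class and its halving policy are illustrative, not from the indexed source):

    import tensorflow as tf

    class StepDecay(tf.keras.optimizers.schedules.LearningRateSchedule):
        def __init__(self, initial_lr, decay_steps):
            self.initial_lr = initial_lr
            self.decay_steps = decay_steps

        def __call__(self, step):
            # Called with the current optimizer step; returns the rate to use.
            num_drops = tf.cast(step // self.decay_steps, tf.float32)
            return self.initial_lr * tf.pow(0.5, num_drops)

        def get_config(self):
            # Required so the schedule can be serialized with the model.
            return {"initial_lr": self.initial_lr,
                    "decay_steps": self.decay_steps}

    optimizer = tf.keras.optimizers.SGD(learning_rate=StepDecay(0.1, 1000))
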
adadelta.py 33 adaptive learning rate per dimension to address two drawbacks:
35 - The continual decay of learning rates throughout training.
36 - The need for a manually selected global learning rate.
38 Adadelta is a more robust extension of Adagrad that adapts learning rates
40 past gradients. This way, Adadelta continues learning even when many updates
42 don't have to set an initial learning rate. In this version, the initial
43 learning rate can be set, as in most other Keras optimizers.
46 learning_rate: Initial value for the learning rate:
50 Note that `Adadelta` tends to benefit from higher initial learning rate
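
A sketch of the update the excerpt describes, per Zeiler's Adadelta paper: a decaying window of squared gradients plus a matching accumulator of squared updates, which is what removes the need for a hand-tuned global rate (NumPy, illustrative only):

    import numpy as np

    def adadelta_step(param, grad, acc_grad, acc_update,
                      rho=0.95, epsilon=1e-7, lr=1.0):
        # Decaying average of squared gradients: a fixed-size window, so
        # learning continues even after many updates have been applied.
        acc_grad = rho * acc_grad + (1 - rho) * grad ** 2
        # Step size is the ratio of past-update RMS to gradient RMS.
        update = np.sqrt(acc_update + epsilon) / np.sqrt(acc_grad + epsilon) * grad
        acc_update = rho * acc_update + (1 - rho) * update ** 2
        # lr = 1.0 recovers the original paper; Keras exposes it as a knob.
        return param - lr * update, acc_grad, acc_update
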
/aosp_15_r20/external/pytorch/torch/optim/
lr_scheduler.py 2 r"""Learning Rate Scheduler."""
64 "to access the learning rate.",
90 r"""Adjusts the learning rate during optimization."""
102 # Initialize epoch and base learning rates
173 """Return last computed learning rate by current scheduler."""
177 """Compute learning rate using chainable form of the scheduler."""
187 """Display the current learning rate.
191 learning rate.
194 "`LRScheduler.print_lr()` is being deprecated. To fetch the learning rate, "
201 print(f"Adjusting learning rate of group {group} to {lr:.4e}.")
[all …]
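
A minimal usage sketch of the pattern this module documents: the scheduler steps after the optimizer, and get_last_lr() replaces the deprecated print_lr() for reading the current rate (the model and schedule below are illustrative):

    import torch
    from torch import nn, optim

    model = nn.Linear(10, 2)
    optimizer = optim.SGD(model.parameters(), lr=0.1)
    scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=30, gamma=0.1)

    for epoch in range(90):
        # ... forward pass, loss.backward() ...
        optimizer.step()
        scheduler.step()  # always after optimizer.step()
        current_lr = scheduler.get_last_lr()  # last computed rate per group
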
/aosp_15_r20/external/google-cloud-java/java-aiplatform/proto-google-cloud-aiplatform-v1beta1/src/main/java/com/google/cloud/aiplatform/v1beta1/
ActiveLearningConfig.java 25 * Parameters that configure the active learning pipeline. Active learning will
193 * Active learning data sampling config. For every active learning labeling
209 * Active learning data sampling config. For every active learning labeling
227 * Active learning data sampling config. For every active learning labeling
246 * CMLE training config. For every active learning labeling iteration, system
247 * will train a machine learning model on CMLE. The trained model will be used
263 * CMLE training config. For every active learning labeling iteration, system
264 * will train a machine learning model on CMLE. The trained model will be used
282 * CMLE training config. For every active learning labeling iteration, system
283 * will train a machine learning model on CMLE. The trained model will be used
[all …]
ActiveLearningConfigOrBuilder.java 80 * Active learning data sampling config. For every active learning labeling
93 * Active learning data sampling config. For every active learning labeling
106 * Active learning data sampling config. For every active learning labeling
118 * CMLE training config. For every active learning labeling iteration, system
119 * will train a machine learning model on CMLE. The trained model will be used
132 * CMLE training config. For every active learning labeling iteration, system
133 * will train a machine learning model on CMLE. The trained model will be used
146 * CMLE training config. For every active learning labeling iteration, system
147 * will train a machine learning model on CMLE. The trained model will be used
StreamingReadFeatureValuesRequest.java 85 * for a machine learning model predicting user clicks on a website, an
115 * for a machine learning model predicting user clicks on a website, an
147 * IDs is 100. For example, for a machine learning model predicting user
163 * IDs is 100. For example, for a machine learning model predicting user
179 * IDs is 100. For example, for a machine learning model predicting user
196 * IDs is 100. For example, for a machine learning model predicting user
707 * for a machine learning model predicting user clicks on a website, an
736 * for a machine learning model predicting user clicks on a website, an
765 * for a machine learning model predicting user clicks on a website, an
793 * for a machine learning model predicting user clicks on a website, an
[all …]
/aosp_15_r20/external/google-cloud-java/java-aiplatform/proto-google-cloud-aiplatform-v1/src/main/java/com/google/cloud/aiplatform/v1/
ActiveLearningConfig.java 25 * Parameters that configure the active learning pipeline. Active learning will
193 * Active learning data sampling config. For every active learning labeling
209 * Active learning data sampling config. For every active learning labeling
227 * Active learning data sampling config. For every active learning labeling
246 * CMLE training config. For every active learning labeling iteration, system
247 * will train a machine learning model on CMLE. The trained model will be used
263 * CMLE training config. For every active learning labeling iteration, system
264 * will train a machine learning model on CMLE. The trained model will be used
282 * CMLE training config. For every active learning labeling iteration, system
283 * will train a machine learning model on CMLE. The trained model will be used
[all …]
ActiveLearningConfigOrBuilder.java 80 * Active learning data sampling config. For every active learning labeling
93 * Active learning data sampling config. For every active learning labeling
106 * Active learning data sampling config. For every active learning labeling
118 * CMLE training config. For every active learning labeling iteration, system
119 * will train a machine learning model on CMLE. The trained model will be used
132 * CMLE training config. For every active learning labeling iteration, system
133 * will train a machine learning model on CMLE. The trained model will be used
146 * CMLE training config. For every active learning labeling iteration, system
147 * will train a machine learning model on CMLE. The trained model will be used
StreamingReadFeatureValuesRequest.java 85 * for a machine learning model predicting user clicks on a website, an
115 * for a machine learning model predicting user clicks on a website, an
147 * IDs is 100. For example, for a machine learning model predicting user
163 * IDs is 100. For example, for a machine learning model predicting user
179 * IDs is 100. For example, for a machine learning model predicting user
196 * IDs is 100. For example, for a machine learning model predicting user
702 * for a machine learning model predicting user clicks on a website, an
731 * for a machine learning model predicting user clicks on a website, an
760 * for a machine learning model predicting user clicks on a website, an
788 * for a machine learning model predicting user clicks on a website, an
[all …]
/aosp_15_r20/external/tensorflow/tensorflow/core/protobuf/tpu/
optimization_parameters.proto 37 // Dynamic learning rate specification in the TPUEmbeddingConfiguration. The
38 // actual learning rates are provided as a scalar input list to the
42 // For tables where learning rates are dynamically computed and communicated
43 // to the TPU embedding program, a tag must be specified for the learning
49 // learning rate, and specifies exactly one tag if it uses dynamic learning
57 // the same dynamic learning rate, for example, their dynamic learning rate
63 // communicate dynamic learning rates to the TPU embedding program.
65 // equal to the number of unique tags. The learning rate associated with a
71 // Source of learning rate to use.
121 // computing the effective learning rate. When update_accumulator_first is set
[all …]
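
A hypothetical illustration of the tag contract these comments describe: tags must be dense starting at zero, tables sharing a tag share one dynamic rate, and the scalar input list carries one learning rate per unique tag (not TensorFlow code, just the invariant):

    def num_learning_rate_inputs(dynamic_lr_tags):
        """dynamic_lr_tags: one tag per table that uses a dynamic rate."""
        unique = sorted(set(dynamic_lr_tags))
        # Tags must be 0, 1, ..., n-1 with no gaps.
        assert unique == list(range(len(unique))), "tags must be dense from 0"
        return len(unique)  # length of the scalar learning-rate input list
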
/aosp_15_r20/external/tensorflow/tensorflow/python/tpu/
tpu_embedding_v2_utils.py 215 a learning rate of 0.2 while the second feature will be looked up in a table
216 that has a learning rate of 0.1.
234 learning_rate: The learning rate. It should be a floating point value or a
235 callable taking no arguments for a dynamic learning rate.
244 `weight_decay_factor` is multiplied by the current learning rate.
316 a learning rate of 0.2 while the second feature will be looked up in a table
317 that has a learning rate of 0.1.
338 learning_rate: The learning rate. It should be a floating point value or a
339 callable taking no arguments for a dynamic learning rate.
348 `weight_decay_factor` is multiplied by the current learning rate.
[all …]
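
The docstring above allows learning_rate to be either a float or a zero-argument callable. A sketch of the callable form, which is re-evaluated to produce a dynamic rate (the decay policy and step variable are illustrative):

    import tensorflow as tf

    step = tf.Variable(0, dtype=tf.int64)

    def dynamic_lr():
        # Zero-argument callable: evaluated whenever the rate is needed.
        return 0.2 * tf.pow(0.95, tf.cast(step // 1000, tf.float32))

    optimizer = tf.tpu.experimental.embedding.SGD(learning_rate=dynamic_lr)
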
/aosp_15_r20/external/pytorch/docs/source/
optim.rst 17 you can specify optimizer-specific options such as the learning rate, weight decay, etc.
34 For example, this is very useful when one wants to specify per-layer learning rates::
41 This means that ``model.base``'s parameters will use a learning rate of ``1e-2``, whereas
42 ``model.classifier``'s parameters will stick to the default learning rate of ``1e-3``.
223 How to adjust learning rate
226 :class:`torch.optim.lr_scheduler.LRScheduler` provides several methods to adjust the learning
228 allows dynamic learning rate reducing based on some validation measurements.
230 Learning rate scheduling should be applied after optimizer's update; e.g., you
247 Most learning rate schedulers can be called back-to-back (also referred to as
249 other on the learning rate obtained by the one preceding it.
[all …]
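
The per-layer example this .rst refers to reconstructs as follows, consistent with the rates quoted above: a group without its own "lr" falls back to the top-level default (the two-layer model is illustrative):

    import torch.nn as nn
    import torch.optim as optim

    model = nn.Module()
    model.base = nn.Linear(8, 8)
    model.classifier = nn.Linear(8, 2)

    optimizer = optim.SGD(
        [
            {"params": model.base.parameters(), "lr": 1e-2},  # override
            {"params": model.classifier.parameters()},        # default applies
        ],
        lr=1e-3,  # default learning rate
    )
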
/aosp_15_r20/external/aws-sdk-java-v2/services/lookoutequipment/src/main/resources/codegen-resources/
service-2.json 107 …"documentation":"<p>Creates a machine learning model for data inference. </p> <p>A machine-learnin…
210 …"documentation":"<p>Deletes a machine learning model currently available for Amazon Lookout for Eq…
346 …ides a JSON containing the overall information about a specific machine learning model, including …
363 … "documentation":"<p>Retrieves information about a specific machine learning model version.</p>"
795 "documentation":"<p>Sets the active model version for a given machine learning model.</p>"
1024 …"documentation":"<p>The name of the previously trained machine learning model being used to create…
1185 "documentation":"<p>The name for the machine learning model to be created.</p>"
1189 … "documentation":"<p>The name of the dataset for the machine learning model being created. </p>"
1193 "documentation":"<p>The data schema for the machine learning model being created. </p>"
1197 …":"<p>The input configuration for the labels being used for the machine learning model that's bein…
[all …]
/aosp_15_r20/external/tensorflow/tensorflow/python/keras/
optimizer_v1.py 165 learning rate decay, and Nesterov momentum.
168 lr: float >= 0. Learning rate.
171 decay: float >= 0. Learning rate decay over each update.
236 (except the learning rate, which can be freely tuned).
239 lr: float >= 0. Learning rate.
243 decay: float >= 0. Learning rate decay over each update.
305 Adagrad is an optimizer with parameter-specific learning rates,
314 lr: float >= 0. Initial learning rate.
316 decay: float >= 0. Learning rate decay over each update.
319 - [Adaptive Subgradient Methods for Online Learning and Stochastic
[all …]
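
The decay argument these legacy Keras docstrings mention implements time-based decay. A sketch of the rule, consistent with the v1 Keras behavior but illustrative:

    def time_based_decay(initial_lr, decay, iterations):
        # Rate shrinks each update: lr_t = lr0 / (1 + decay * t).
        return initial_lr / (1.0 + decay * iterations)
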
/aosp_15_r20/external/googleapis/google/cloud/aiplatform/v1beta1/
data_labeling_job.proto 140 // Parameters that configure the active learning pipeline. Active learning
146 // Parameters that configure the active learning pipeline. Active learning will
160 // Active learning data sampling config. For every active learning labeling
164 // CMLE training config. For every active learning labeling iteration, system
165 // will train a machine learning model on CMLE. The trained model will be used
170 // Active learning data sampling config. For every active learning labeling
203 // CMLE training config. For every active learning labeling iteration, system
204 // will train a machine learning model on CMLE. The trained model will be used
/aosp_15_r20/external/google-cloud-java/java-aiplatform/proto-google-cloud-aiplatform-v1beta1/src/main/proto/google/cloud/aiplatform/v1beta1/
data_labeling_job.proto 140 // Parameters that configure the active learning pipeline. Active learning
146 // Parameters that configure the active learning pipeline. Active learning will
160 // Active learning data sampling config. For every active learning labeling
164 // CMLE training config. For every active learning labeling iteration, system
165 // will train a machine learning model on CMLE. The trained model will be used
170 // Active learning data sampling config. For every active learning labeling
203 // CMLE training config. For every active learning labeling iteration, system
204 // will train a machine learning model on CMLE. The trained model will be used
/aosp_15_r20/external/googleapis/google/cloud/aiplatform/v1/
data_labeling_job.proto 140 // Parameters that configure the active learning pipeline. Active learning
146 // Parameters that configure the active learning pipeline. Active learning will
160 // Active learning data sampling config. For every active learning labeling
164 // CMLE training config. For every active learning labeling iteration, system
165 // will train a machine learning model on CMLE. The trained model will be used
170 // Active learning data sampling config. For every active learning labeling
203 // CMLE training config. For every active learning labeling iteration, system
204 // will train a machine learning model on CMLE. The trained model will be used
/aosp_15_r20/external/google-cloud-java/java-aiplatform/proto-google-cloud-aiplatform-v1/src/main/proto/google/cloud/aiplatform/v1/
data_labeling_job.proto 140 // Parameters that configure the active learning pipeline. Active learning
146 // Parameters that configure the active learning pipeline. Active learning will
160 // Active learning data sampling config. For every active learning labeling
164 // CMLE training config. For every active learning labeling iteration, system
165 // will train a machine learning model on CMLE. The trained model will be used
170 // Active learning data sampling config. For every active learning labeling
203 // CMLE training config. For every active learning labeling iteration, system
204 // will train a machine learning model on CMLE. The trained model will be used
/aosp_15_r20/external/tensorflow/tensorflow/lite/g3doc/android/
index.md 3 TensorFlow Lite lets you run TensorFlow machine learning (ML) models in your
8 ## Learning roadmap {:.hide-from-toc}
53 ## Machine learning models
56 portable, more efficient machine learning model format. You can use pre-built
64 This page discusses using already-built machine learning models and does not
66 picking, modifying, building, and converting machine learning models for
132 learning models into your Android app:
143 for performing common machine learning tasks on handling visual, audio, and
201 called *accelerators*. Machine learning models can run faster on these
233 you have a machine learning model that uses ML operations that are not supported
[all …]
quickstart.md 4 to analyze a live camera feed and identify objects using a machine learning
10 ## Object detection with machine learning
13 The machine learning model in this tutorial performs object detection. An object
19 size of data being processed, and the size of the machine learning model.
105 TensorFlow Lite machine learning models, and access utility functions that
110 of the object detection machine learning model:
113 classes, execution of the machine learning model, and output results from
118 data object that can be processed by the machine learning model.
149 In your Android app, you must initialize the TensorFlow Lite machine learning
203 of machine learning models using specialized processing hardware on a mobile
[all …]
/aosp_15_r20/external/tensorflow/tensorflow/compiler/mlir/tfrt/
BUILD 31 "//learning/brain/experimental/mlir/tflite/tfmrt/...",
32 "//learning/brain/experimental/mlir/tfrt_compiler/...",
33 "//learning/brain/experimental/tfrt/...",
34 "//learning/brain/tfrt/...",
35 "//learning/infra/mira/...",
36 "//learning/serving/contrib/tfrt/mlir/...",
38 "//learning/brain/mlir/mlir_lsp_server/...",
426 "//learning/brain/tfrt/tpu/compiler/mlir:tf_to_tfrt_tpu",
536 "//learning/brain/tfrt/tpu/compiler/mlir:tf_to_tfrt_tpu",
549 # copybara:uncomment "//learning/brain/experimental/tfrt/visualization:__pkg__",
[all …]
/aosp_15_r20/external/apache-commons-math/src/main/java/org/apache/commons/math3/ml/neuralnet/sofm/
KohonenUpdateAction.java 44 * <li>&alpha; is the current <em>learning rate</em>, </li>
59 * <li>the <em>learning rate</em>, and</li>
72 /** Learning factor update function. */
81 * @param learningFactor Learning factor update function.
105 // smaller the learning rate will become. in update()
152 * @param learningRate Learning factor.
172 * @param learningRate Learning factor.
190 * @param learningRate Current learning factor.
214 * @param learningRate Learning factor.
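
A generic sketch of the Kohonen update this class applies: every unit moves toward the sample, scaled by the learning rate alpha and a neighborhood factor that shrinks with distance from the best-matching unit (NumPy, illustrative names):

    import numpy as np

    def kohonen_update(weights, sample, bmu_index, learning_rate, neighborhood):
        # weights: (num_units, dim); neighborhood(i, bmu) returns a value in [0, 1].
        for i in range(len(weights)):
            h = neighborhood(i, bmu_index)
            weights[i] += learning_rate * h * (sample - weights[i])
        return weights
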
/aosp_15_r20/frameworks/base/media/mca/filterpacks/java/android/filterpacks/videoproc/
BackDropperFilter.java 126 // Frame count for learning bg model
128 // Frame count for learning verification
166 // Default rate at which to learn bg model during learning period
247 // Select learning rate for pixel based on smoothed decision mask alpha
393 // value for a pixel, weighted by the learning rate and by whether the pixel is classified as
419 // recent variance for the pixel, weighted by the learning rate and by whether the pixel is
452 // most recent frame, weighted by the learning rate.
500 /** Learning listener object */
562 // We can't resize because that would require re-learning. in createMemoryFormat()
703 // Update learning rate after initial learning period in process()
[all …]
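
A sketch of the per-pixel running update the comments describe: mean and variance move toward the newest frame, weighted by the learning rate and by whether the pixel was classified as background (NumPy, illustrative):

    import numpy as np

    def update_background_model(mean, variance, frame, learning_rate, bg_mask):
        # Only adapt where the (smoothed) decision mask says "background".
        alpha = learning_rate * bg_mask
        diff = frame - mean
        mean = mean + alpha * diff
        variance = variance + alpha * (diff ** 2 - variance)
        return mean, variance
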
/aosp_15_r20/external/iproute2/ip/
iplink_vxlan.c 38 " [ [no]learning ]\n" in print_explain()
82 __u8 learning = 1; in vxlan_parse_opt() local
246 learning = 0; in vxlan_parse_opt()
247 } else if (!matches(*argv, "learning")) { in vxlan_parse_opt()
249 learning = 1; in vxlan_parse_opt()
316 learning = 0; in vxlan_parse_opt()
317 /* we will add LEARNING attribute outside of the loop */ in vxlan_parse_opt()
385 addattr8(n, 1024, IFLA_VXLAN_LEARNING, learning); in vxlan_parse_opt()
500 __u8 learning = rta_getattr_u8(tb[IFLA_VXLAN_LEARNING]); in vxlan_print_opt() local
502 print_bool(PRINT_JSON, "learning", NULL, learning); in vxlan_print_opt()
[all …]
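
For context, the flag this code parses toggles MAC-address learning on a VXLAN device, e.g. "ip link add vxlan0 type vxlan id 42 dstport 4789 nolearning" (an illustrative invocation; only the [no]learning keyword is taken from the usage string above).
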
