
Searched full:train (Results 1 – 25 of 1705) sorted by relevance


/aosp_15_r20/external/tensorflow/tensorflow/tools/compatibility/
renames_v2.py
1397 'tf.train.AdadeltaOptimizer':
1398 'tf.compat.v1.train.AdadeltaOptimizer',
1399 'tf.train.AdagradDAOptimizer':
1400 'tf.compat.v1.train.AdagradDAOptimizer',
1401 'tf.train.AdagradOptimizer':
1402 'tf.compat.v1.train.AdagradOptimizer',
1403 'tf.train.AdamOptimizer':
1404 'tf.compat.v1.train.AdamOptimizer',
1405 'tf.train.CheckpointSaverHook':
1407 'tf.train.CheckpointSaverListener':
[all …]
/aosp_15_r20/external/tensorflow/tensorflow/python/training/
training.py
18 See the [Training](https://tensorflow.org/api_guides/python/train) guide.
130 tf_export("train.BytesList")(BytesList)
131 tf_export("train.ClusterDef")(ClusterDef)
132 tf_export("train.Example")(Example)
133 tf_export("train.Feature")(Feature)
134 tf_export("train.Features")(Features)
135 tf_export("train.FeatureList")(FeatureList)
136 tf_export("train.FeatureLists")(FeatureLists)
137 tf_export("train.FloatList")(FloatList)
138 tf_export("train.Int64List")(Int64List)
[all …]
server_lib.py
29 """Creates a `tf.train.ServerDef` protocol buffer.
32 server_or_cluster_def: A `tf.train.ServerDef` or `tf.train.ClusterDef`
33 protocol buffer, or a `tf.train.ClusterSpec` object, describing the server
47 A `tf.train.ServerDef`.
69 "`tf.train.ServerDef` or `tf.train.ClusterSpec`.")
94 @tf_export("distribute.Server", v1=["distribute.Server", "train.Server"])
95 @deprecation.deprecated_endpoints("train.Server")
102 cluster (specified by a `tf.train.ClusterSpec`), and
120 server_or_cluster_def: A `tf.train.ServerDef` or `tf.train.ClusterDef`
121 protocol buffer, or a `tf.train.ClusterSpec` object, describing the
[all …]
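A minimal sketch of how the `tf.train.ClusterSpec` / `tf.distribute.Server` pair documented in server_lib.py is typically wired together (the single-worker address below is a placeholder):

    import tensorflow as tf

    # Describe a one-job, one-task cluster; the address is illustrative only.
    cluster = tf.train.ClusterSpec({"worker": ["localhost:2222"]})

    # Start an in-process server for task 0 of the "worker" job.
    server = tf.distribute.Server(cluster, job_name="worker", task_index=0)
    print(server.target)  # gRPC target that sessions or other tasks can connect to
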
saver.py
144 # of a V2 checkpoint: e.g. "/fs/train/ckpt-<step>/tmp/worker<i>-<step>".
262 # <train dir>/myckpt_temp/
267 # <train dir>/
275 # "<train dir>/myckpt" in this case. Save() and Restore() work with the
547 # - Extend the inference graph to a train graph.
638 @tf_export(v1=["train.Saver"])
644 `tf.compat.v1.train.Saver` is not supported for saving and restoring
645 checkpoints in TF2. Please switch to `tf.train.Checkpoint` or
654 You can load a name-based checkpoint written by `tf.compat.v1.train.Saver`
655 using `tf.train.Checkpoint.restore` or `tf.keras.Model.load_weights`. However,
[all …]
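As the saver.py docstring above says, `tf.compat.v1.train.Saver` is not supported in TF2; a minimal sketch of the object-based replacement it points to, `tf.train.Checkpoint` (paths are placeholders):

    import tensorflow as tf

    var = tf.Variable(1.0)
    ckpt = tf.train.Checkpoint(step=tf.Variable(0), value=var)

    save_path = ckpt.save("/tmp/ckpt_demo/ckpt")   # writes an object-based (V2) checkpoint

    var.assign(99.0)
    ckpt.restore(save_path)                        # var is restored to 1.0
    print(var.numpy())
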
moving_averages.py
32 @tf_export("__internal__.train.assign_moving_average", v1=[])
281 @tf_export("train.ExponentialMovingAverage")
339 ema = tf.train.ExponentialMovingAverage(decay=0.9999)
358 ...train the model by running train_step multiple times...
368 weights and restore them before continuing to train. You can see the
373 `tf.train.Checkpoint`. At evaluation time, create your shadow variables and
374 use `tf.train.Checkpoint` to restore the moving averages into the shadow
377 3. Checkpoint out your moving average variables in your `tf.train.Checkpoint`.
397 ema = tf.train.ExponentialMovingAverage(decay=0.9999)
403 checkpoint = tf.train.Checkpoint(model_weights=[var0, var1],
[all …]
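A short eager-mode sketch of the `tf.train.ExponentialMovingAverage` pattern described in moving_averages.py, using the decay value from the docstring above:

    import tensorflow as tf

    var0 = tf.Variable(1.0)
    var1 = tf.Variable(2.0)
    ema = tf.train.ExponentialMovingAverage(decay=0.9999)

    # After each training step, update the shadow (moving-average) variables.
    ema.apply([var0, var1])

    # At evaluation time, read the averaged values instead of the raw weights.
    print(ema.average(var0).numpy())
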
training_util.py
36 @tf_export(v1=['train.global_step'])
48 print('global_step: %s' % tf.compat.v1.train.global_step(sess,
67 @tf_export(v1=['train.get_global_step'])
111 ... global_step = tf.compat.v1.train.get_or_create_global_step()
113 ... optimizer = tf.compat.v1.train.GradientDescentOptimizer(0.1)
126 ... print(sess.run(tf.compat.v1.train.get_global_step()))
161 @tf_export(v1=['train.create_global_step'])
202 ... global_step = tf.compat.v1.train.create_global_step()
204 ... optimizer = tf.compat.v1.train.GradientDescentOptimizer(0.1)
255 @tf_export(v1=['train.get_or_create_global_step'])
[all …]
/aosp_15_r20/external/tensorflow/tensorflow/python/checkpoint/
checkpoint.py
203 log_fn("Detecting that an object or model or tf.train.Checkpoint is being"
207 "https://www.tensorflow.org/api_docs/python/tf/train/Checkpoint#restore"
230 `tf.train.latest_checkpoint`.
515 tf.train.latest_checkpoint(checkpoint_directory))
524 `tf.train.latest_checkpoint`.
727 "you. As a workaround, consider either using tf.train.Checkpoint to "
893 being restored by a later call to `tf.train.Checkpoint.restore()`.
975 `tf.train.Checkpoint.restore()`.
999 "Restoring a name-based tf.train.Saver checkpoint using the object-based "
1026 checkpoint saved with train.Saver(), and restored with train.Checkpoint():
[all …]
checkpoint_management.py
62 @tf_export(v1=["train.generate_checkpoint_state_proto"])
82 `tf.train.CheckpointManager` for an implementation).
129 instructions=("Use `tf.train.CheckpointManager` to manage checkpoints "
131 @tf_export(v1=["train.update_checkpoint_state"])
158 `tf.train.CheckpointManager` for an implementation).
173 @tf_export("__internal__.train.update_checkpoint_state", v1=[])
203 `tf.train.CheckpointManager` for an implementation).
248 @tf_export("train.get_checkpoint_state")
326 @tf_export("train.latest_checkpoint")
333 using `v1.train.Saver.save`
[all …]
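checkpoint_management.py repeatedly points v1 callers at `tf.train.CheckpointManager`; a minimal sketch of managed saving together with `tf.train.latest_checkpoint` (the directory is a placeholder):

    import tensorflow as tf

    ckpt = tf.train.Checkpoint(step=tf.Variable(0))
    manager = tf.train.CheckpointManager(ckpt, directory="/tmp/ckpt_mgr_demo", max_to_keep=3)

    for _ in range(5):
        ckpt.step.assign_add(1)
        manager.save()   # rotates checkpoints, keeping at most 3 on disk

    print(manager.latest_checkpoint)
    print(tf.train.latest_checkpoint("/tmp/ckpt_mgr_demo"))   # same path
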
/aosp_15_r20/external/pytorch/aten/src/ATen/native/
Dropout.cpp
62 Ctype<inplace> _dropout_impl(T& input, double p, bool train) { in _dropout_impl() argument
64 if (p == 0 || !train || input.sym_numel() == 0) { in _dropout_impl()
105 native_dropout_cpu(const Tensor& input, double p, std::optional<bool> train) { in native_dropout_cpu() argument
113 if (!train.has_value() || *train) { in native_dropout_cpu()
132 Tensor dropout(const Tensor& input, double p, bool train) { in dropout() argument
138 if (input.is_nested() || (train && is_fused_kernel_acceptable(input, p))) { in dropout()
139 return std::get<0>(at::native_dropout(input, p, train)); in dropout()
141 return _dropout<false>(input, p, train); in dropout()
147 Tensor& dropout_(Tensor& input, double p, bool train) { in dropout_() argument
148 return _dropout<true>(input, p, train); in dropout_()
[all …]
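The `train` flag threaded through Dropout.cpp surfaces in Python as the `training` argument of `torch.nn.functional.dropout`; a small sketch of the behavioral difference:

    import torch
    import torch.nn.functional as F

    x = torch.ones(4, 4)

    y_train = F.dropout(x, p=0.5, training=True)    # elements zeroed at random, survivors scaled by 1/(1-p)
    y_eval = F.dropout(x, p=0.5, training=False)    # identity when not training

    print(y_train)
    print(torch.equal(y_eval, x))   # True
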
Normalization.cpp
139 bool train, double eps, Tensor& output) { in batch_norm_cpu_transform_input_template() argument
152 save_mean, save_invstd, running_mean, running_var, train, eps); in batch_norm_cpu_transform_input_template()
168 auto mean = as_nd(train ? save_mean : running_mean); in batch_norm_cpu_transform_input_template()
170 if (train) { in batch_norm_cpu_transform_input_template()
306 bool train, double eps, std::array<bool,3> grad_input_mask) { in batch_norm_backward_cpu_template() argument
337 grad_out_, input, weight, running_mean, running_var, save_mean, save_invstd, train, eps); in batch_norm_backward_cpu_template()
379 .add_const_input(train ? input : grad_out_) in batch_norm_backward_cpu_template()
383 if (train) { in batch_norm_backward_cpu_template()
410 if (train) { in batch_norm_backward_cpu_template()
430 if (train) { in batch_norm_backward_cpu_template()
[all …]
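The same `train` flag decides whether batch norm uses batch statistics (and updates the running estimates) or the stored running statistics; a minimal Python-level sketch with `torch.nn.functional.batch_norm`:

    import torch
    import torch.nn.functional as F

    x = torch.randn(8, 3, 4, 4)
    running_mean, running_var = torch.zeros(3), torch.ones(3)
    weight, bias = torch.ones(3), torch.zeros(3)

    # training=True: normalize with batch statistics, update running_mean/var in place.
    y_train = F.batch_norm(x, running_mean, running_var, weight, bias,
                           training=True, momentum=0.1, eps=1e-5)

    # training=False: normalize with the (updated) running estimates.
    y_eval = F.batch_norm(x, running_mean, running_var, weight, bias,
                          training=False, eps=1e-5)
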
RNN.cpp
1071 return at::dropout(input, p, /*train=*/true); in dropout()
1075 return {at::dropout(input.data, p, /*train=*/true), input.batch_sizes}; in dropout()
1082 int64_t num_layers, double dropout_p, bool train) { in apply_layer_stack() argument
1095 if (dropout_p != 0 && train && l < num_layers - 1) { in apply_layer_stack()
1112 int64_t num_layers, double dropout_p, bool train, bool bidirectional) { in _rnn_impl() argument
1117 …_stack(BidirLayer{cell}, input, pair_vec(hiddens), pair_vec(params), num_layers, dropout_p, train); in _rnn_impl()
1120 …stack(LayerT<hidden_type,cell_params>{cell}, input, hiddens, params, num_layers, dropout_p, train); in _rnn_impl()
1129 int64_t num_layers, double dropout_p, bool train, bool bidirectional) { in _rnn_impl_with_concat() argument
1130 …ellType, LayerT, BidirLayerT>(input, params, hiddens, num_layers, dropout_p, train, bidirectional); in _rnn_impl_with_concat()
1138 int64_t num_layers, double dropout_p, bool train, bool bidirectional) { in _lstm_impl() argument
[all …]
/aosp_15_r20/external/tensorflow/tensorflow/core/kernels/
training_ops_test.cc
110 Graph* train; in BM_SGD() local
111 SGD(params, &init, &train); in BM_SGD()
112 test::Benchmark("cpu", train, GetOptions(), init, nullptr, "", in BM_SGD()
146 Graph* train; in BM_Adagrad() local
147 Adagrad(params, &init, &train); in BM_Adagrad()
148 test::Benchmark("cpu", train, GetOptions(), init, nullptr, "", in BM_Adagrad()
185 Graph* train; in BM_SparseAdagrad() local
186 SparseAdagrad(m, n, &init, &train); in BM_SparseAdagrad()
187 test::Benchmark("cpu", train, GetMultiThreadedOptions(), init, nullptr, "", in BM_SparseAdagrad()
229 Graph* train; in BM_Momentum() local
[all …]
/aosp_15_r20/packages/modules/StatsD/statsd/src/storage/
StorageManager.cpp
180 VLOG("Failed to wrtie train info magic"); in writeTrainInfo()
185 // Write the train version in writeTrainInfo()
189 VLOG("Failed to wrtie train version code"); in writeTrainInfo()
199 VLOG("Failed to write train name size"); in writeTrainInfo()
207 VLOG("Failed to write train name"); in writeTrainInfo()
293 VLOG("Failed to read train info magic"); in readTrainInfoLocked()
299 VLOG("Train info magic was 0x%08x, expected 0x%08x", magic, TRAIN_INFO_FILE_MAGIC); in readTrainInfoLocked()
304 // Read the train version code in readTrainInfoLocked()
308 VLOG("Failed to read train version code from train info file"); in readTrainInfoLocked()
317 VLOG("Failed to read train name size from train info file"); in readTrainInfoLocked()
[all …]
/aosp_15_r20/external/pytorch/aten/src/ATen/functorch/
PyTorchOperatorHacks.cpp
165 Ctype<inplace> _dropout_impl(T& input, double p, bool train) { in _dropout_impl() argument
167 if (p == 0 || !train || input.numel() == 0) { in _dropout_impl()
215 static Tensor dropout(const Tensor& input, double p, bool train) { in ALIAS_SPECIALIZATION()
218 if (train && is_fused_kernel_acceptable(input, p)) { in ALIAS_SPECIALIZATION()
219 return std::get<0>(at::native_dropout(input, p, train)); in ALIAS_SPECIALIZATION()
221 return _dropout<false>(input, p, train); in ALIAS_SPECIALIZATION()
227 Tensor& dropout_(Tensor& input, double p, bool train) { in dropout_() argument
228 return _dropout<true>(input, p, train); in dropout_()
231 Tensor feature_dropout(const Tensor& input, double p, bool train) { in feature_dropout() argument
232 return _feature_dropout<false>(input, p, train); in feature_dropout()
[all …]
/aosp_15_r20/external/pytorch/torch/csrc/lazy/
tutorial.md
165 Let's set up a loader that would feed the `MNIST` dataset in `train` to our model.
204 dataset1 = datasets.MNIST('./data', train=True, download=True,
211 train(log_interval, model, device, train_loader, optimizer, epoch)
215 The training loop in `train` also has one addition. Namely, `torch._lazy.mark_step()` which deserve…
226 def train(log_interval, model, device, train_loader, optimizer, epoch):
227 model.train()
238 print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
248 Train Epoch: 1 [0/60000 (0%)] Loss: 2.343924
249 Train Epoch: 1 [640/60000 (1%)] Loss: 1.760821
250 Train Epoch: 1 [1280/60000 (2%)] Loss: 0.802798
[all …]
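A condensed sketch of the tutorial's training loop, showing where `torch._lazy.mark_step()` sits relative to the optimizer step (model, loader, and lazy-backend setup follow the tutorial and are not repeated here):

    import torch
    import torch._lazy
    import torch.nn.functional as F

    def train(log_interval, model, device, train_loader, optimizer, epoch):
        model.train()
        for batch_idx, (data, target) in enumerate(train_loader):
            data, target = data.to(device), target.to(device)
            optimizer.zero_grad()
            loss = F.nll_loss(model(data), target)
            loss.backward()
            optimizer.step()
            # Cut the lazy trace here so the accumulated graph is compiled and executed.
            torch._lazy.mark_step()
            if batch_idx % log_interval == 0:
                print('Train Epoch: {} [{}/{}]\tLoss: {:.6f}'.format(
                    epoch, batch_idx * len(data), len(train_loader.dataset), loss.item()))
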
/aosp_15_r20/external/zstd/tests/
playTests.sh
1100 zstd --train -B2K tmpCorpusHighCompress -o tmpDictHighCompress
1101 zstd --train -B2K tmpCorpusLowCompress -o tmpDictLowCompress
1110 zstd --train "$TESTDIR"/*.c "$PRGDIR"/*.c -o tmpDict
1135 zstd --train "$TESTDIR"/*.c "$PRGDIR"/*.c "$PRGDIR"/*.h -o tmpDictC
1138 zstd --train "$TESTDIR"/*.c "$PRGDIR"/*.c --dictID=1 -o tmpDict1
1141 zstd --train "$TESTDIR"/*.c "$PRGDIR"/*.c --dictID -o 1 tmpDict1 && die "wrong order : --dictID mus…
1143 zstd --train "$TESTDIR"/*.c "$PRGDIR"/*.c -o tmpDict2 --maxdict=4K -v
1145 zstd --train "$TESTDIR"/*.c "$PRGDIR"/*.c -o tmpDict3 --maxdict=1K -v
1147 zstd --train "$TESTDIR"/*.c "$PRGDIR"/*.c -o tmpDict3 --maxdict -v 4K && die "wrong order : --maxdi…
1168 zstd --train-legacy -q tmp && die "Dictionary training should fail : not enough input source"
[all …]
/aosp_15_r20/external/zstd/programs/
zstd.1
92 \fB\-\-train FILES\fR
421 …eatly improves efficiency on small files and messages\. It\'s possible to train \fBzstd\fR with a …
424 \fB\-\-train FILEs\fR
425 …et dictionary size (for example, ~10 MB for a 100 KB dictionary)\. \fB\-\-train\fR can be combined…
431train\fR supports multithreading if \fBzstd\fR is compiled with threading support (default)\. Addi…
464 \fB\-\-train\-cover[=k#,d=#,steps=#,split=#,shrink[=#]]\fR
474 \fBzstd \-\-train\-cover FILEs\fR
477 \fBzstd \-\-train\-cover=k=50,d=8 FILEs\fR
480 \fBzstd \-\-train\-cover=d=8,steps=500 FILEs\fR
483 \fBzstd \-\-train\-cover=k=50 FILEs\fR
[all …]
zstd.1.md
97 * `--train FILES`:
514 It's possible to train `zstd` with a set of samples,
520 * `--train FILEs`:
525 `--train` can be combined with `-r` to indicate a directory rather than listing all the files,
533 `--train` supports multithreading if `zstd` is compiled with threading support (default).
534 Additional advanced parameters can be specified with `--train-fastcover`.
535 The legacy dictionary builder can be accessed with `--train-legacy`.
536 The slower cover dictionary builder can be accessed with `--train-cover`.
537 Default `--train` is equivalent to `--train-fastcover=d=8,steps=4`.
586 * `--train-cover[=k#,d=#,steps=#,split=#,shrink[=#]]`:
[all …]
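The man-page section above explains training a shared dictionary from many small samples; a rough Python equivalent of `zstd --train ... -o dict`, assuming the third-party python-zstandard bindings are installed (the CLI itself remains the canonical tool):

    import os
    import zstandard

    # Toy stand-ins for the many small input FILEs; real corpora give better dictionaries.
    samples = [b"record-%06d field=alpha beta gamma " % i + os.urandom(16) for i in range(1000)]

    # Roughly what `zstd --train ... --maxdict=16K -o dict` produces.
    dict_data = zstandard.train_dictionary(16 * 1024, samples)

    cctx = zstandard.ZstdCompressor(dict_data=dict_data)
    dctx = zstandard.ZstdDecompressor(dict_data=dict_data)

    blob = cctx.compress(samples[0])
    assert dctx.decompress(blob) == samples[0]
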
/aosp_15_r20/external/skia/infra/bots/
infra_tests.py
28 def python_unit_tests(train): argument
29 if train:
37 def recipe_test(train): argument
43 if train:
44 cmd.append('train')
50 def gen_tasks_test(train): argument
52 if not train:
63 train = False
64 if '--train' in sys.argv:
65 train = True
[all …]
/aosp_15_r20/external/google-cloud-java/java-automl/proto-google-cloud-automl-v1/src/main/java/com/google/cloud/automl/v1/
InputConfig.java
48 * * `TRAIN` - Rows in this file are used to train the model.
51 * Automatically divided into train and test data. 80% for training and
61 * TRAIN,gs://folder/image1.jpg,daisy
74 * * `TRAIN` - Rows in this file are used to train the model.
77 * Automatically divided into train and test data. 80% for training and
91 * TRAIN,gs://folder/image1.png,car,0.1,0.1,,,0.3,0.3,,
92 * TRAIN,gs://folder/image1.png,bike,.7,.6,,,.8,.9,,
118 * TRAIN,gs://folder/train_videos.csv
152 * TRAIN,gs://folder/train_videos.csv
175 * * `TRAIN` - Rows in this file are used to train the model.
[all …]
/aosp_15_r20/external/pytorch/torch/ao/quantization/pt2e/
export_utils.py
40 Switch dropout patterns in the model between train and eval modes.
42 Dropout has different behavior in train vs eval mode. For exported models,
43 however, calling `model.train()` or `model.eval()` does not automatically switch
94 Switch batchnorm patterns in the model between train and eval modes.
96 Batchnorm has different behavior in train vs eval mode. For exported models,
97 however, calling `model.train()` or `model.eval()` does not automatically switch
188 Move an exported GraphModule to train mode.
190 This is equivalent to model.train() but only for certain special ops like dropout, batchnorm.
200 Allow users to call `model.train()` and `model.eval()` on an exported model,
204 Note: This does not achieve the same effect as what `model.train()` and `model.eval()`
[all …]
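export_utils.py explains that `model.train()` / `model.eval()` have no effect on an exported GraphModule, so dedicated helpers rewrite the dropout/batchnorm patterns instead. A hedged sketch, assuming a recent PyTorch where the helper is exposed as `torch.ao.quantization.move_exported_model_to_eval` (the capture API and helper location have moved between releases):

    import torch
    from torch.ao.quantization import move_exported_model_to_eval

    class M(torch.nn.Module):
        def __init__(self):
            super().__init__()
            self.dropout = torch.nn.Dropout(p=0.5)

        def forward(self, x):
            return self.dropout(x)

    example_inputs = (torch.randn(4, 4),)
    exported = torch.export.export(M(), example_inputs).module()

    # exported.eval() would NOT switch the captured dropout op;
    # the helper rewrites the dropout pattern to its eval form instead.
    exported = move_exported_model_to_eval(exported)
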
/aosp_15_r20/external/pytorch/aten/src/ATen/native/cpu/
batch_norm_kernel.cpp
35 const Tensor& running_mean, const Tensor& running_var, bool train, double eps) { in batch_norm_cpu_collect_linear_and_constant_terms() argument
58 if (train) { in batch_norm_cpu_collect_linear_and_constant_terms()
77 const Tensor& running_mean, const Tensor& running_var, bool train, double eps) { in batch_norm_cpu_contiguous_impl() argument
91 save_mean, save_invstd, running_mean, running_var, train, eps); in batch_norm_cpu_contiguous_impl()
129 const Tensor& running_mean, const Tensor& running_var, bool train, double eps) { in batch_norm_cpu_channels_last_impl() argument
143 save_mean, save_invstd, running_mean, running_var, train, eps); in batch_norm_cpu_channels_last_impl()
408 bool train, double eps) { in batch_norm_cpu_backward_contiguous_impl() argument
441 if (train) { in batch_norm_cpu_backward_contiguous_impl()
473 if (train) { in batch_norm_cpu_backward_contiguous_impl()
531 bool train, double eps) { in batch_norm_cpu_backward_channels_last_impl() argument
[all …]
/aosp_15_r20/external/tensorflow/tensorflow/python/keras/optimizer_v2/
legacy_learning_rate_decay.py
28 @tf_export(v1=["train.exponential_decay"])
59 learning_rate = tf.compat.v1.train.exponential_decay(starter_learning_rate,
64 tf.compat.v1.train.GradientDescentOptimizer(learning_rate)
104 @tf_export(v1=["train.piecewise_constant_decay", "train.piecewise_constant"])
115 learning_rate = tf.compat.v1.train.piecewise_constant(global_step, boundaries,
182 @tf_export(v1=["train.polynomial_decay"])
229 learning_rate = tf.compat.v1.train.polynomial_decay(starter_learning_rate,
235 tf.compat.v1.train.GradientDescentOptimizer(learning_rate)
283 @tf_export(v1=["train.natural_exp_decay"])
320 learning_rate = tf.compat.v1.train.natural_exp_decay(learning_rate,
[all …]
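The v1 decay helpers above have a TF2 counterpart in `tf.keras.optimizers.schedules`; a minimal sketch of the exponential-decay schedule (hyperparameter values are illustrative):

    import tensorflow as tf

    schedule = tf.keras.optimizers.schedules.ExponentialDecay(
        initial_learning_rate=0.1,
        decay_steps=100000,
        decay_rate=0.96,
        staircase=True)

    optimizer = tf.keras.optimizers.SGD(learning_rate=schedule)

    # The schedule can also be evaluated directly at a given step.
    print(float(schedule(0)), float(schedule(100000)))
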
/aosp_15_r20/external/pytorch/torchgen/_autoheuristic/
H A Dtrain_decision.py23 from train import AHTrain
155 df_train = datasets["train"]
252 Splits the dataframe into train, val, and test sets.
253 Also adds other datasets, specified by the user, to the train set.
256 # Split into train+val and test
261 # Split train+val inputs into train and val
266 datasets = {"train": df_train, "val": df_val, "test": df_test}
297 return datasets["train"]
317 df_train = self.add_training_data(datasets["train"], datasets)
318 datasets["train"] = df_train
[all …]
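A generic sketch of the two-stage split described above (train+val vs. test, then train vs. val), using pandas and scikit-learn rather than the project's own helpers; the ratios are illustrative:

    import pandas as pd
    from sklearn.model_selection import train_test_split

    df = pd.DataFrame({"feature": range(100), "target": [i % 2 for i in range(100)]})

    # First split off the held-out test set, then carve a validation set out of the rest.
    df_train_val, df_test = train_test_split(df, test_size=0.2, random_state=0)
    df_train, df_val = train_test_split(df_train_val, test_size=0.25, random_state=0)

    datasets = {"train": df_train, "val": df_val, "test": df_test}
    print({k: len(v) for k, v in datasets.items()})   # {'train': 60, 'val': 20, 'test': 20}
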
/aosp_15_r20/external/pytorch/aten/src/ATen/native/mkldnn/
Normalization.cpp
26 bool train, in mkldnn_batch_norm() argument
35 bool train, in mkldnn_batch_norm_backward() argument
50 bool train, in _mkldnn_batch_norm_legit() argument
59 bool train, in _mkldnn_batch_norm_legit_no_stats() argument
135 bool train, in mkldnn_batch_norm() argument
159 if (train) { in mkldnn_batch_norm()
215 …mkldnn_batch_norm(input, weight_opt, bias_opt, running_mean, running_var, /*train*/true, momentum,… in _batch_norm_with_update_mkldnn()
223 bool train, in _mkldnn_batch_norm_legit() argument
226 …return mkldnn_batch_norm(input, weight_opt, bias_opt, running_mean, running_var, train, momentum, … in _mkldnn_batch_norm_legit()
232 bool train, in _mkldnn_batch_norm_legit_no_stats() argument
[all …]
