
Searched full:loss (Results 1 – 25 of 22171) sorted by relevance


/aosp_15_r20/external/tensorflow/tensorflow/python/kernel_tests/nn_ops/losses_test.py
52 loss = losses.absolute_difference(self._predictions, self._predictions)
54 self.assertAlmostEqual(0.0, self.evaluate(loss), 3)
57 loss = losses.absolute_difference(self._labels, self._predictions)
59 self.assertAlmostEqual(5.5, self.evaluate(loss), 3)
63 loss = losses.absolute_difference(self._labels, self._predictions, weights)
65 self.assertAlmostEqual(5.5 * weights, self.evaluate(loss), 3)
69 loss = losses.absolute_difference(self._labels, self._predictions,
72 self.assertAlmostEqual(5.5 * weights, self.evaluate(loss), 3)
76 loss = losses.absolute_difference(self._labels, self._predictions, weights)
78 self.assertAlmostEqual(5.6, self.evaluate(loss), 3)
[all …]
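
These tests exercise the mean-absolute-error behavior of absolute_difference: zero loss when predictions equal the labels, and linear scaling by a scalar weight. A minimal sketch against the public tf.compat.v1.losses API, which the internal module under test mirrors; the tensor values below are illustrative, not the test's fixtures:

    import tensorflow as tf

    labels = tf.constant([[4.0, 8.0, 12.0], [8.0, 1.0, 3.0]])
    predictions = tf.constant([[4.0, 9.0, 2.0], [-5.0, -2.0, 6.0]])

    # Mean absolute error over all elements.
    loss = tf.compat.v1.losses.absolute_difference(labels, predictions)
    # A scalar weight scales the loss linearly, as the assertions above check.
    weighted = tf.compat.v1.losses.absolute_difference(
        labels, predictions, weights=2.3)
    print(float(loss), float(weighted))  # weighted == 2.3 * loss
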
/aosp_15_r20/external/tensorflow/tensorflow/python/keras/losses.py
16 """Built-in loss functions."""
48 @keras_export('keras.losses.Loss')
49 class Loss:
50 """Loss base class.
53 * `call()`: Contains the logic for loss calculation using `y_true`, `y_pred`.
58 class MeanSquaredError(Loss):
82 loss = (tf.reduce_sum(loss_obj(labels, predictions)) *
88 """Initializes `Loss` class.
92 loss. Default value is `AUTO`. `AUTO` indicates that the reduction
121 """Invokes the `Loss` instance.
[all …]
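
The base-class contract quoted above is: subclasses implement call(y_true, y_pred) returning a per-sample loss, and the base class applies the configured reduction. A sketch against the public tf.keras.losses.Loss, which mirrors this internal copy; MeanAbsoluteScaled is an invented name for illustration:

    import tensorflow as tf

    class MeanAbsoluteScaled(tf.keras.losses.Loss):
        """Toy Loss subclass: implement call() with y_true / y_pred."""

        def __init__(self, scale=1.0, name="mean_absolute_scaled"):
            super().__init__(name=name)
            self.scale = scale

        def call(self, y_true, y_pred):
            # Return a per-sample loss; the base class applies the reduction.
            return self.scale * tf.reduce_mean(tf.abs(y_true - y_pred), axis=-1)

    loss_obj = MeanAbsoluteScaled(scale=2.0)
    print(float(loss_obj(tf.constant([[0.0, 1.0]]),
                         tf.constant([[1.0, 1.0]]))))  # 1.0
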
/aosp_15_r20/external/tensorflow/tensorflow/python/ops/losses/losses_impl.py
15 """Implementation of Loss operations for use in neural networks."""
36 """Types of loss reduction.
77 losses: `Tensor` whose elements contain individual loss measurements.
89 """Computes the number of elements in the loss function induced by `weights`.
141 """Computes the weighted loss.
148 scope: the scope for the operations performed in computing the loss.
149 loss_collection: the loss will be added to these collections.
150 reduction: Type of reduction to apply to loss.
153 Weighted loss `Tensor` of the same type as `losses`. If `reduction` is
162 When calculating the gradient of a weighted loss contributions from
[all …]
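
What "weighted loss" means here: elementwise losses are multiplied by the weights and then reduced, and with the v1 default reduction the sum is divided by the number of nonzero weights, so masked-out elements do not dilute the mean. A small sketch via the public tf.compat.v1.losses wrapper around this implementation:

    import tensorflow as tf

    losses_t = tf.constant([1.0, 2.0, 3.0, 4.0])
    weights = tf.constant([1.0, 1.0, 0.0, 0.0])  # mask out the last two elements

    # The v1 default reduction (SUM_BY_NONZERO_WEIGHTS) divides by the number
    # of nonzero weights, so masked elements do not dilute the mean.
    loss = tf.compat.v1.losses.compute_weighted_loss(losses_t, weights=weights)
    print(float(loss))  # (1.0 + 2.0) / 2 = 1.5
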
/aosp_15_r20/external/webrtc/modules/rtp_rtcp/test/testFec/test_packet_masks_metrics.cc
15 * The metrics measure the efficiency (recovery potential or residual loss) of
16 * the FEC code, under various statistical loss models for the packet/symbol
17 * loss events. Various constraints on the behavior of these metrics are
25 * In the case of XOR, the residual loss is determined via the set of packet
26 * masks (generator matrix). In the case of RS, the residual loss is determined
40 * The type of packet/symbol loss models considered in this test are:
41 * (1) Random loss: Bernoulli process, characterized by the average loss rate.
42 * (2) Bursty loss: Markov chain (Gilbert-Elliot model), characterized by two
43 * parameters: average loss rate and average burst length.
62 // Maximum gap size for characterizing the consecutiveness of the loss.
[all …]
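
A sketch of the two loss models named in these comments, under the standard parameterization (this is not the C++ test's code): Bernoulli loss drops packets i.i.d. at the average loss rate, while the Gilbert-Elliot chain derives its two transition probabilities from the average loss rate and the average burst length.

    import random

    def bernoulli_losses(n, loss_rate, seed=0):
        """Random loss: i.i.d. Bernoulli drops at the average loss rate."""
        rng = random.Random(seed)
        return [rng.random() < loss_rate for _ in range(n)]

    def gilbert_elliot_losses(n, loss_rate, avg_burst_len, seed=0):
        """Bursty loss: two-state Markov chain (Gilbert-Elliot model)."""
        rng = random.Random(seed)
        # Leaving the bad state ends a burst, so P(bad->good) = 1/avg_burst_len;
        # the stationary loss rate then fixes P(good->bad).
        p_bad_to_good = 1.0 / avg_burst_len
        p_good_to_bad = loss_rate * p_bad_to_good / (1.0 - loss_rate)
        losses, bad = [], False
        for _ in range(n):
            bad = rng.random() >= p_bad_to_good if bad else rng.random() < p_good_to_bad
            losses.append(bad)
        return losses
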
/aosp_15_r20/external/tensorflow/tensorflow/python/keras/mixed_precision/loss_scale_optimizer.py
15 """Contains the loss scaling optimizer class."""
97 """The state of a dynamic loss scale."""
103 """Creates the dynamic loss scale."""
115 # nonfinite gradient or change in loss scale. The name is 'good_steps' for
121 """Adds a weight to this loss scale.
141 # Set aggregation to NONE, as loss scaling variables should never be
200 """Returns the current loss scale as a float32 `tf.Variable`."""
209 """Returns the current loss scale as a scalar `float32` tensor."""
213 """Updates the value of the loss scale.
217 all-reduced gradient of the loss with respect to a weight.
[all …]
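
The update policy sketched from these docstrings (an assumption about the usual dynamic loss-scale rule, not this class's exact code): shrink the scale and reset the good-step counter when a non-finite gradient appears, and grow the scale after enough consecutive finite steps.

    def update_loss_scale(loss_scale, good_steps, grads_finite,
                          growth_interval=2000, multiplier=2.0):
        """One update step; returns (new_scale, new_good_steps)."""
        if not grads_finite:
            # Non-finite gradient: shrink the scale and restart the counter.
            return max(loss_scale / multiplier, 1.0), 0
        good_steps += 1
        if good_steps >= growth_interval:
            # Enough consecutive finite steps: grow the scale.
            return loss_scale * multiplier, 0
        return loss_scale, good_steps
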
/aosp_15_r20/external/pytorch/torch/nn/modules/loss.py
66 The unreduced (i.e. with :attr:`reduction` set to ``'none'``) loss can be described as:
93 the losses are averaged over each loss element in the batch. Note that for
99 on :attr:`size_average`. When :attr:`reduce` is ``False``, returns a loss per
116 >>> loss = nn.L1Loss()
119 >>> output = loss(input, target)
132 r"""The negative log likelihood loss. It is useful to train a classification
143 higher dimension inputs, such as computing NLL loss per-pixel for 2D images.
150 The `target` that this loss expects should be a class index in the range :math:`[0, C-1]`
151 where `C = number of classes`; if `ignore_index` is specified, this loss also accepts
154 The unreduced (i.e. with :attr:`reduction` set to ``'none'``) loss can be described as:
[all …]
/aosp_15_r20/external/tensorflow/tensorflow/python/training/experimental/loss_scale.py
42 """Base class for all TF1 loss scales.
49 Loss scaling is a process that multiplies the loss by a multiplier called the
50 loss scale, and divides each gradient by the same multiplier. The pseudocode
54 loss = ...
55 loss *= loss_scale
56 grads = gradients(loss, vars)
60 Mathematically, loss scaling has no effect, but can help avoid numerical
62 precision training. By multiplying the loss, each intermediate gradient will
65 Instances of this class represent a loss scale. Calling instances of this
66 class returns the loss scale as a scalar float32 tensor, while method
[all …]
/aosp_15_r20/external/tensorflow/tensorflow/python/training/experimental/loss_scale_optimizer.py
32 """An optimizer that applies loss scaling.
34 Loss scaling is a process that multiplies the loss by a multiplier called the
35 loss scale, and divides each gradient by the same multiplier. The pseudocode
39 loss = ...
40 loss *= loss_scale
41 grads = gradients(loss, vars)
45 Mathematically, loss scaling has no effect, but can help avoid numerical
47 precision training. By multiplying the loss, each intermediate gradient will
50 The loss scale can either be a fixed constant, chosen by the user, or be
51 dynamically determined. Dynamically determining the loss scale is convenient
[all …]
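
The pseudocode shared by both docstrings translates directly to eager TensorFlow. A sketch with a fixed loss scale (the classes indexed here can also choose the scale dynamically):

    import tensorflow as tf

    var = tf.Variable(2.0)
    loss_scale = 1024.0  # fixed scale for the sketch

    with tf.GradientTape() as tape:
        loss = var * var
        scaled_loss = loss * loss_scale

    # Gradients of the scaled loss are loss_scale times too large...
    scaled_grads = tape.gradient(scaled_loss, [var])
    # ...so they are divided by the same multiplier before being applied.
    grads = [g / loss_scale for g in scaled_grads]
    print(float(grads[0]))  # 4.0, identical to the unscaled gradient of var**2
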
/aosp_15_r20/external/tensorflow/tensorflow/python/keras/engine/training_eager_v1.py
34 loss = loss_fn(targets, outputs)
35 return loss
87 """Calculates the loss for a given model.
95 loss values.
100 Returns the model output, total loss, loss value calculated using the
101 specified loss function and masks for each output. The total loss includes
103 to the loss value.
106 # Used to keep track of the total loss value (stateless).
143 with backend.name_scope('loss'):
152 'because it has no loss to optimize.')
[all …]
/aosp_15_r20/external/tensorflow/tensorflow/python/keras/engine/compile_utils.py
79 Applies a Loss / Metric to all outputs.
85 # each Metric / Loss separate. When there is only one Model output,
116 self._loss_metric = metrics_mod.Mean(name='loss') # Total loss.
121 """Per-output loss metrics."""
131 """One-time setup of loss objects."""
152 """Creates per-output loss metrics, but only for multi-output Models."""
169 """Computes the overall loss.
175 per-sample loss weights. If one Tensor is passed, it is used for all
178 regularization_losses: Additional losses to be added to the total loss.
194 loss_metric_values = [] # Used for loss metric calculation.
[all …]
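
The overall-loss computation described above reduces to: weight each output's loss, sum across outputs, then add any regularization losses. A hypothetical standalone sketch (the function name and signature are illustrative, not this module's API):

    import tensorflow as tf

    def total_loss(per_output_losses, loss_weights=None, regularization_losses=()):
        """Weight each output's loss, sum them, then add regularization terms."""
        if loss_weights is None:
            loss_weights = [1.0] * len(per_output_losses)
        total = tf.add_n([w * l for w, l in zip(loss_weights, per_output_losses)])
        if regularization_losses:
            total += tf.add_n(list(regularization_losses))
        return total

    print(float(total_loss([tf.constant(1.0), tf.constant(2.0)],
                           loss_weights=[0.5, 1.0],
                           regularization_losses=[tf.constant(0.1)])))  # 2.6
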
/aosp_15_r20/external/tensorflow/tensorflow/python/keras/engine/training_utils_v1.py
118 """Aggregator that calculates loss and metrics info.
138 # Loss.
472 stateful_metric_names = stateful_metric_names[1:] # Exclude `loss`
781 """Does validation on the compatibility of targets and loss functions.
783 This helps prevent users from using loss functions incorrectly. This check
788 loss_fns: list of loss functions.
792 ValueError: if a loss function or target array
801 for y, loss, shape in zip(targets, loss_fns, output_shapes):
802 if y is None or loss is None or tensor_util.is_tf_type(y):
804 if losses.is_categorical_crossentropy(loss):
[all …]
/aosp_15_r20/external/tensorflow/tensorflow/python/keras/utils/losses_utils.py
16 """Utilities related to loss functions."""
31 """Types of loss reduction.
41 loss function. When non-scalar losses are returned to Keras functions like
42 `fit`/`evaluate`, the unreduced vector loss is passed to the optimizer
43 but the reported loss will be a scalar value.
46 The builtin loss functions wrapped by the loss classes reduce
47 one dimension (`axis=-1`, or `axis` if specified by loss function).
67 loss = tf.reduce_sum(loss_obj(labels, predictions)) *
247 losses: `Tensor` whose elements contain individual loss measurements.
266 """Reduces the individual weighted loss measurements."""
[all …]
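
The reduction semantics described above, shown with the public tf.keras API: the wrapped loss functions always reduce axis=-1, so Reduction.NONE still yields one value per sample (a vector loss), while the default reduction yields a scalar.

    import tensorflow as tf

    y_true = tf.constant([[0.0, 1.0], [1.0, 1.0]])
    y_pred = tf.constant([[1.0, 1.0], [1.0, 0.0]])

    # The wrapped function reduces axis=-1, so even the 'unreduced' loss is
    # one value per sample (a vector), not one value per element.
    mse_vector = tf.keras.losses.MeanSquaredError(
        reduction=tf.keras.losses.Reduction.NONE)
    print(mse_vector(y_true, y_pred).numpy())  # [0.5 0.5]

    mse_scalar = tf.keras.losses.MeanSquaredError()  # default reduction
    print(float(mse_scalar(y_true, y_pred)))  # 0.5
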
/aosp_15_r20/external/tensorflow/tensorflow/python/keras/optimizer_v2/optimizer_v2.py
123 # `loss` is a callable that takes no argument and returns the value
125 loss = lambda: 3 * var1 * var1 + 2 * var2 * var2
126 # In graph mode, returns op that minimizes the loss by updating the listed
128 opt_op = opt.minimize(loss, var_list=[var1, var2])
131 opt.minimize(loss, var_list=[var1, var2])
172 loss = <call_loss_function>
174 grads = tape.gradient(loss, vars)
188 you divide your loss by the global batch size, which is done
190 See the `reduction` argument of your loss which should be set to
244 # `loss` is a callable that takes no argument and returns the value
[all …]
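
The minimize pattern quoted in this docstring, made runnable against the public tf.keras optimizer API (which shares this internal copy's signature):

    import tensorflow as tf

    var1 = tf.Variable(10.0)
    var2 = tf.Variable(10.0)
    opt = tf.keras.optimizers.SGD(learning_rate=0.1)

    # `loss` is a callable that takes no argument, as described above.
    loss = lambda: 3.0 * var1 * var1 + 2.0 * var2 * var2

    for _ in range(100):
        opt.minimize(loss, var_list=[var1, var2])
    print(float(var1), float(var2))  # both driven toward 0.0
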
/aosp_15_r20/external/tensorflow/tensorflow/python/saved_model/model_utils/export_output_test.py
241 loss = {'my_loss': constant_op.constant([0])}
249 outputter = MockSupervisedOutput(loss, predictions, metrics)
250 self.assertEqual(outputter.loss['loss/my_loss'], loss['my_loss'])
260 loss['my_loss'], predictions['output1'], metrics['metrics'])
261 self.assertEqual(outputter.loss, {'loss': loss['my_loss']})
270 self.assertLen(outputter.loss, 1)
277 with self.assertRaisesRegex(ValueError, 'loss output value must'):
281 with self.assertRaisesRegex(ValueError, 'loss output key must'):
287 loss = {('my', 'loss'): constant_op.constant([0])}
296 outputter = MockSupervisedOutput(loss, predictions, metrics)
[all …]
/aosp_15_r20/external/pytorch/torch/testing/_internal/distributed/rpc/dist_autograd_test.py
170 loss = torch.sparse.sum(ret)
172 loss = ret.sum()
173 dist_autograd.backward(context_id, [loss])
184 loss = torch.sparse.sum(ret)
186 loss = ret.sum()
187 dist_autograd.backward(context_id, [loss])
611 loss = rpc.rpc_sync(
616 loss = torch.sparse.sum(loss)
618 loss = loss.sum()
619 dist_autograd.backward(context_id, [loss], retain_graph=True)
[all …]
/aosp_15_r20/external/webrtc/modules/video_coding/media_opt_util.h
26 // Number of time periods used for (max) window filter for packet loss
27 // TODO(marpan): set reasonable window size for filtered packet loss,
28 // adjustment should be based on logged/real data of loss stats/correlation.
34 // The type of filter used on the received packet loss reports.
36 kNoFilter, // No filtering on received loss.
100 // Returns the effective packet loss for ER, required by this protection
103 // Return value : Required effective packet loss
135 // Estimation of residual loss after the FEC
150 // Get the effective packet loss
159 // Get the effective packet loss for ER
[all …]
/aosp_15_r20/external/tensorflow/tensorflow/python/ops/nn_loss_scaling_utilities_test.py
15 """Tests for loss scaling utilities in tensorflow.ops.nn."""
40 loss = nn_impl.compute_average_loss(per_example_loss, global_batch_size=10)
41 self.assertEqual(self.evaluate(loss), 1.5)
76 loss = nn_impl.compute_average_loss(per_example_loss)
77 self.assertAllClose(self.evaluate(loss), (2.5 + 6.2 + 5.) / 3)
83 loss = distribution.reduce("SUM", per_replica_losses, axis=None)
84 self.assertAllClose(self.evaluate(loss), (2.5 + 6.2 + 5.) / 3)
99 loss = distribution.reduce("SUM", per_replica_losses, axis=None)
100 self.assertAllClose(self.evaluate(loss), (2. + 4. + 6.) * 2. / 3)
107 loss = distribution.reduce("SUM", per_replica_losses, axis=None)
[all …]
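
What compute_average_loss does, per the public tf.nn API: divide the summed per-example loss by the global batch size, so that under a distribution strategy a cross-replica SUM of the per-replica values yields the true mean.

    import tensorflow as tf

    per_example_loss = tf.constant([2.0, 4.0, 6.0])

    # sum(per-example losses) / global batch size; under a distribution
    # strategy a cross-replica SUM of these values is then the correct mean.
    loss = tf.nn.compute_average_loss(per_example_loss, global_batch_size=6)
    print(float(loss))  # (2 + 4 + 6) / 6 = 2.0
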
/aosp_15_r20/external/tensorflow/tensorflow/python/ops/nn_xent_test.py
57 loss = nn_impl.sigmoid_cross_entropy_with_logits(
59 self.assertEqual("mylogistic", loss.op.name)
66 loss = nn_impl.sigmoid_cross_entropy_with_logits(
69 tf_loss = self.evaluate(loss)
77 loss = nn_impl.sigmoid_cross_entropy_with_logits(
80 tf_loss = self.evaluate(loss)
88 loss = nn_impl.sigmoid_cross_entropy_with_logits(
90 err = gradient_checker.compute_gradient_error(logits, sizes, loss, sizes)
91 print("logistic loss gradient err = ", err)
99 loss = nn_impl.sigmoid_cross_entropy_with_logits(
[all …]
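
sigmoid_cross_entropy_with_logits computes the numerically stable closed form max(x, 0) - x*z + log(1 + exp(-|x|)) rather than composing sigmoid and log, which is what these tests check against. A quick verification of that identity via the public tf.nn API:

    import tensorflow as tf

    logits = tf.constant([-2.0, 0.0, 3.0])
    labels = tf.constant([0.0, 1.0, 1.0])

    loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=labels, logits=logits)

    # The numerically stable form the op documents:
    # max(x, 0) - x * z + log(1 + exp(-|x|))
    x, z = logits, labels
    manual = tf.maximum(x, 0.0) - x * z + tf.math.log1p(tf.exp(-tf.abs(x)))
    print(loss.numpy(), manual.numpy())  # elementwise equal
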
/aosp_15_r20/external/libopus/dnn/torch/lossgen/train_lossgen.py
17 self.loss = np.loadtxt(loss_file, dtype='float32')
19 self.nb_sequences = self.loss.shape[0]//self.sequence_length
20 self.loss = self.loss[:self.nb_sequences*self.sequence_length]
21 …erc = lfilter(np.array([.001], dtype='float32'), np.array([1., -.999], dtype='float32'), self.loss)
23 self.loss = np.reshape(self.loss, (self.nb_sequences, self.sequence_length, 1))
34 return [self.loss[index, :, :], perc]
73 for i, (loss, perc) in enumerate(tepoch):
75 loss = loss.to(device)
78 out, states = model(loss, perc, states=states)
81 target = loss[:,1:,:]
[all …]
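
The lfilter call in this training script is a one-pole IIR smoother, perc[n] = 0.001*loss[n] + 0.999*perc[n-1]: a slowly moving estimate of the loss percentage with unit DC gain. A sketch of the same computation on synthetic data:

    import numpy as np
    from scipy.signal import lfilter

    # Synthetic binary loss trace with a true loss rate of 10%.
    loss = (np.random.default_rng(0).random(10000) < 0.1).astype('float32')

    # y[n] = 0.001 * x[n] + 0.999 * y[n-1]: unit DC gain, time constant ~1000.
    perc = lfilter(np.array([.001], dtype='float32'),
                   np.array([1., -.999], dtype='float32'), loss)
    print(perc[-1])  # settles near 0.1
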
/aosp_15_r20/external/iproute2/tc/q_netem.c
38 " [ loss random PERCENT [CORRELATION]]\n" \ in explain()
39 " [ loss state P13 [P31 [P32 [P23 P14]]]\n" \ in explain()
40 " [ loss gemodel PERCENT [R [1-H [1-K]]]\n" \ in explain()
220 } else if (matches(*argv, "loss") == 0 || in netem_parse_opt()
222 if (opt.loss > 0 || loss_type != NETEM_LOSS_UNSPEC) { in netem_parse_opt()
223 explain1("duplicate loss argument\n"); in netem_parse_opt()
228 /* Old (deprecated) random loss model syntax */ in netem_parse_opt()
235 if (get_percent(&opt.loss, *argv)) { in netem_parse_opt()
236 explain1("loss percent"); in netem_parse_opt()
243 explain1("loss correllation"); in netem_parse_opt()
[all …]
/aosp_15_r20/external/pytorch/docs/source/notes/amp_examples.rst
47 loss = loss_fn(output, target)
49 # Scales loss. Calls backward() on scaled loss to create scaled gradients.
52 scaler.scale(loss).backward()
67 All gradients produced by ``scaler.scale(loss).backward()`` are scaled. If you wish to modify or i…
92 loss = loss_fn(output, target)
93 scaler.scale(loss).backward()
145 loss = loss_fn(output, target)
146 loss = loss / iters_to_accumulate
149 scaler.scale(loss).backward()
165 and adds the penalty value to the loss.
[all …]
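
These fragments come from a larger loop in the document; assembled into one minimal runnable AMP iteration (assumes a CUDA device; the model, data, and hyperparameters are placeholders):

    import torch

    model = torch.nn.Linear(4, 1).cuda()
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
    loss_fn = torch.nn.MSELoss()
    scaler = torch.cuda.amp.GradScaler()

    for _ in range(10):
        inp = torch.randn(8, 4, device="cuda")
        target = torch.randn(8, 1, device="cuda")
        optimizer.zero_grad()
        with torch.autocast(device_type="cuda"):
            output = model(inp)
            loss = loss_fn(output, target)
        scaler.scale(loss).backward()  # backward() on the scaled loss
        scaler.step(optimizer)         # unscales; skips the step on inf/NaN grads
        scaler.update()                # adapts the scale for the next iteration
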
/aosp_15_r20/external/pytorch/test/test_optim.py
77 and systematically validated by assuring that the loss goes the right direction
197 loss = (weight.mv(input) + bias).pow(2).sum()
198 loss.backward()
204 return loss
209 loss = optimizer.step(closure)
211 loss = closure()
216 scheduler.step(loss)
256 loss = (weight.mv(inpt).cuda(1) + bias).pow(2).sum()
257 loss.backward()
263 return loss
[all …]
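
The closure pattern this test exercises, in isolation: optimizers such as LBFGS call the closure multiple times per step, so it must zero the gradients, recompute the loss, and call backward() on every invocation.

    import torch

    weight = torch.randn(3, 2, requires_grad=True)
    bias = torch.randn(3, requires_grad=True)
    inp = torch.randn(2)
    optimizer = torch.optim.LBFGS([weight, bias])

    def closure():
        # LBFGS re-invokes the closure, so it must zero grads,
        # recompute the loss, and call backward() every time.
        optimizer.zero_grad()
        loss = (weight.mv(inp) + bias).pow(2).sum()
        loss.backward()
        return loss

    for _ in range(5):
        loss = optimizer.step(closure)
    print(float(loss))
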
/aosp_15_r20/external/webrtc/modules/congestion_controller/goog_cc/loss_based_bwe_v2_test.cc
502 // If the delay based estimate is infinity, then loss based estimate increases in TEST_P()
512 // If the delay based estimate is not infinity, then loss based estimate is in TEST_P()
519 // When loss based bwe receives a strong signal of overusing and an increase in
520 loss rate, it should use the acked bitrate for emergency backoff.
522 // Create two packet results, first packet has 50% loss rate, second packet in TEST_P()
523 // has 100% loss rate. in TEST_P()
541 // Update estimate when network is overusing, and 50% loss rate. in TEST_P()
548 // loss rate. in TEST_P()
560 // When receiving the same packet feedback, loss based bwe ignores the feedback
595 // duration, and network is in the normal state, loss based bwe returns the
[all …]
/aosp_15_r20/external/deqp-deps/glslang/Test/hlsl.promotions.frag
29 float3 Fn_R_F3D(out float3 p) { p = d3; return d3; } // valid, but loss of precision on downconve…
34 int3 Fn_R_I3D(out int3 p) { p = d3; return d3; } // valid, but loss of precision on downconvers…
39 uint3 Fn_R_U3D(out uint3 p) { p = d3; return d3; } // valid, but loss of precision on downconver…
57 float3 r03 = d3; // valid, but loss of precision on downconversion.
62 int3 r13 = d3; // valid, but loss of precision on downconversion.
67 uint3 r23 = d3; // valid, but loss of precision on downconversion.
83 r03 *= d3; // valid, but loss of precision on downconversion.
88 r13 *= d3; // valid, but loss of precision on downconversion.
93 r23 *= d3; // valid, but loss of precision on downconversion.
106 r03 *= ds; // valid, but loss of precision on downconversion.
[all …]
/aosp_15_r20/external/angle/third_party/glslang/src/Test/hlsl.promotions.frag
29 float3 Fn_R_F3D(out float3 p) { p = d3; return d3; } // valid, but loss of precision on downconve…
34 int3 Fn_R_I3D(out int3 p) { p = d3; return d3; } // valid, but loss of precision on downconvers…
39 uint3 Fn_R_U3D(out uint3 p) { p = d3; return d3; } // valid, but loss of precision on downconver…
57 float3 r03 = d3; // valid, but loss of precision on downconversion.
62 int3 r13 = d3; // valid, but loss of precision on downconversion.
67 uint3 r23 = d3; // valid, but loss of precision on downconversion.
83 r03 *= d3; // valid, but loss of precision on downconversion.
88 r13 *= d3; // valid, but loss of precision on downconversion.
93 r23 *= d3; // valid, but loss of precision on downconversion.
106 r03 *= ds; // valid, but loss of precision on downconversion.
[all …]
