Utility functions

class pyjama.utils.IterationLoss(alpha=0.5, exponential_alpha_scaling=True, reduction='sum_over_batch_size')[source]

Loss function for an iterative decoder that returns the output of each decoder iteration.

Calculates the loss for each iteration separately and returns their weighted sum. If exponential_alpha_scaling is true, the total loss is \(\sum_i \alpha^{n-i} |b - \hat{b}_i|\), where \(i\) is the iteration number and \(n\) is the total number of iterations; later iterations are thus weighted more heavily. Otherwise, each iteration loss is scaled by the constant factor \(\alpha\) instead.

Parameters
  • alpha (float) – Weight applied to the loss of each iteration.

  • exponential_alpha_scaling (bool) – If true, later iterations are weighted more heavily. Otherwise, all iterations are weighted equally.

  • reduction (Reduction) – A Reduction to apply to the loss.

Input
  • b ([batch_size, num_bits]) – Ground truth bits.

  • b_hat_iterations ([batch_size, num_bits * num_iterations], float) – Output of decoder iterations.

Output

float – Loss value.
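The weighting scheme can be sketched in plain NumPy (a standalone illustration of the formula above, not the Keras loss class itself; the function name is hypothetical):

```python
import numpy as np

def iteration_loss(b, b_hat_iterations, alpha=0.5, exponential=True):
    """Sketch of the iteration-weighted loss: sum of per-iteration
    absolute errors, each scaled by alpha**(n - i) (or a constant alpha)."""
    batch_size, num_bits = b.shape
    n = b_hat_iterations.shape[1] // num_bits  # number of iterations
    total = 0.0
    for i in range(1, n + 1):
        b_hat_i = b_hat_iterations[:, (i - 1) * num_bits : i * num_bits]
        # exponential scaling weights later iterations more heavily
        weight = alpha ** (n - i) if exponential else alpha
        total += weight * np.abs(b - b_hat_i).sum()
    return total / batch_size  # 'sum_over_batch_size'-style reduction

b = np.array([[1.0, 0.0]])
b_hat = np.array([[0.5, 0.5, 1.0, 0.0]])  # outputs of two iterations
# iteration 1: error 1.0, weight 0.5**1; iteration 2: error 0.0, weight 0.5**0
print(iteration_loss(b, b_hat, alpha=0.5))  # 0.5
```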

class pyjama.utils.MaxMeanSquareNorm(max_mean_squared_norm=1.0, axis=None)[source]

Scales the input tensor so that the mean power of its elements along the given axis is at most max_mean_squared_norm.

Ensures that \(\frac{1}{n} \sum_i |w_i|^2 \le \mathtt{max\_mean\_squared\_norm}\), where \(n\) is the number of elements of w along axis.

Parameters
  • max_mean_squared_norm (float) – Upper bound on the mean squared norm; elements are scaled so that this value is not exceeded.

  • axis (int or list of int) – Axis along which the mean squared norm is calculated.

Input

w (tf.Tensor) – Tensor to which the constraint is applied.

Output

Same shape as w, w.dtype – If the constraint is valid, w is returned unchanged. Otherwise, w is scaled so that the constraint is valid.
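The constraint can be sketched in NumPy as follows (an illustration of the behavior described above, not the class itself; the function name is hypothetical):

```python
import numpy as np

def max_mean_square_norm(w, max_mean_squared_norm=1.0, axis=None):
    """Sketch: scale w so that mean(|w|^2) along `axis` is at most the cap."""
    mean_power = np.mean(np.abs(w) ** 2, axis=axis, keepdims=True)
    # scale only where the constraint is violated; otherwise leave w unchanged
    scale = np.minimum(1.0, np.sqrt(max_mean_squared_norm / mean_power))
    return w * scale

w = np.array([2.0, 2.0])            # mean power 4 > 1, constraint violated
w_scaled = max_mean_square_norm(w)  # scaled by 1/2 -> mean power exactly 1
print(np.mean(np.abs(w_scaled) ** 2))  # 1.0
```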

class pyjama.utils.NonNegMaxMeanSquareNorm(max_mean_squared_norm=1.0, axis=None)[source]

Scales the input tensor so that the mean power of its elements along the given axis is at most max_mean_squared_norm. Also ensures that all elements are non-negative.

Ensures that \(\frac{1}{n} \sum_i |w_i|^2 \le \mathtt{max\_mean\_squared\_norm}\) along axis and that \(w_i \ge 0\) for all elements of w. \(n\) is the number of elements of w along axis.

Parameters
  • max_mean_squared_norm (float) – Upper bound on the mean squared norm; elements are scaled so that this value is not exceeded.

  • axis (int or list of int) – Axis along which the mean squared norm is calculated.

Input

w (tf.Tensor) – Tensor to which the constraint is applied.

Output

Same shape as w, w.dtype – If the constraint is valid, w is returned unchanged. Otherwise, w is scaled so that the constraint is valid. All values in w which were negative are set to zero.
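A NumPy sketch of the non-negative variant; the clip-before-scale order is an assumption for illustration (clipping first means the power cap is applied to the already non-negative values):

```python
import numpy as np

def nonneg_max_mean_square_norm(w, max_mean_squared_norm=1.0, axis=None):
    """Sketch: zero out negative entries, then apply the mean-power cap."""
    w = np.maximum(w, 0.0)  # enforce w_i >= 0
    mean_power = np.mean(w ** 2, axis=axis, keepdims=True)
    scale = np.minimum(1.0, np.sqrt(max_mean_squared_norm / mean_power))
    return w * scale

w = np.array([-3.0, 2.0, 2.0])
# after clipping: [0, 2, 2] with mean power 8/3 > 1, so the result is rescaled
print(nonneg_max_mean_square_norm(w))
```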

pyjama.utils.constellation_to_sampler(constellation, normalize=True, dtype=tf.complex64)[source]

Convert a constellation to a function which samples the constellation.

Input
  • constellation (Constellation) – An instance of Constellation to sample from.

  • normalize (bool) – If True, normalize the constellation so that the average power of each symbol is 1.

  • dtype (tf.Dtype) – Datatype of the tensors returned by the sampler.

Output

callable, (shape, dtype) -> tf.Tensor – Function which samples the constellation.
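A sketch of the idea with a plain array of constellation points in place of a Sionna Constellation object (an assumption for illustration; names are hypothetical):

```python
import numpy as np

def constellation_to_sampler(points, normalize=True):
    """Sketch: return a (shape, dtype) -> array function that draws
    symbols uniformly at random from the given constellation points."""
    points = np.asarray(points, dtype=np.complex64)
    if normalize:
        # scale so that the average symbol power is 1
        points = points / np.sqrt(np.mean(np.abs(points) ** 2))
    def sampler(shape, dtype=np.complex64):
        idx = np.random.randint(0, len(points), size=shape)
        return points[idx].astype(dtype)
    return sampler

qpsk = [1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]
sample = constellation_to_sampler(qpsk)
x = sample((4, 8))
print(x.shape)  # (4, 8), each symbol with unit power after normalization
```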

pyjama.utils.covariance_estimation_from_signals(y, num_ofdm_symbols)[source]

Estimate the covariance matrix of a signal y.

Input
  • y ([batch_size, num_rx, num_rx_ant, num_symbols, fft_size], tf.complex) – num_symbols is the number of symbols over which we estimate the covariance matrix (e.g. the number of symbols where only a jammer is transmitting).

  • num_ofdm_symbols (int) – Number of OFDM symbols in the complete resource grid.

Output

[batch_size, num_rx, num_ofdm_symbols, fft_size, num_rx_ant, num_rx_ant], y.dtype – Covariance matrix over rx antennas, for each batch, rx and subcarrier. Broadcasted over num_ofdm_symbols.
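The estimation can be sketched in NumPy as a sample covariance over receive antennas per subcarrier, broadcast over the OFDM-symbol dimension (a sketch under the shape conventions above; the function name is hypothetical):

```python
import numpy as np

def covariance_estimation(y, num_ofdm_symbols):
    """Sketch: R = (1/S) * sum_s y_s y_s^H per batch, rx and subcarrier."""
    # y: [batch, num_rx, num_rx_ant, num_symbols, fft_size]
    # move antennas last: [batch, num_rx, fft_size, num_symbols, num_rx_ant]
    y = np.transpose(y, (0, 1, 4, 3, 2))
    num_symbols = y.shape[3]
    # sample covariance over antennas: [batch, num_rx, fft_size, ant, ant]
    cov = np.einsum('brfsi,brfsj->brfij', y, np.conj(y)) / num_symbols
    # broadcast the same estimate over all OFDM symbols of the resource grid
    cov = np.broadcast_to(
        cov[:, :, None],
        (y.shape[0], y.shape[1], num_ofdm_symbols) + cov.shape[2:])
    return cov

y = (np.random.randn(2, 1, 4, 3, 5) + 1j * np.random.randn(2, 1, 4, 3, 5))
print(covariance_estimation(y, 14).shape)  # (2, 1, 14, 5, 4, 4)
```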

pyjama.utils.db_to_linear(db)[source]

Converts a number from dB to linear scale.

Input

db (float) – Number in dB.

Output

float – \(10^{db/10}\)

pyjama.utils.linear_to_db(linear)[source]

Converts a number from linear to dB scale.

Input

linear (float) – Number from linear scale.

Output

float – \(10\log_{10}(linear)\)
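These two helpers are one-liners; equivalent standalone definitions for reference:

```python
import math

def db_to_linear(db):
    """10^(db/10): dB value to linear power ratio."""
    return 10 ** (db / 10)

def linear_to_db(linear):
    """10*log10(linear): linear power ratio to dB."""
    return 10 * math.log10(linear)

print(db_to_linear(20))                  # 100.0
print(linear_to_db(db_to_linear(-7.5)))  # round-trips back to -7.5
```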

pyjama.utils.matrix_to_image(a)[source]

Converts a matrix to an image.

pyjama.utils.merge_plotbers(plotbers)[source]

Merges multiple sionna.utils.plotting.PlotBER instances into one. Properties that are unique per instance (such as the title) are taken from the first instance.

Input

plotbers (list of sionna.utils.plotting.PlotBER) – Instances to merge.

Output

sionna.utils.plotting.PlotBER – Merged instance.

pyjama.utils.normalize_power(a, is_amplitude=True)[source]

Scales input tensor so that the mean power per element is 1.

Input
  • a (tf.Tensor) – Tensor to be normalized.

  • is_amplitude (bool) – If True, a is assumed to be the amplitude, otherwise it is assumed to be the power.

Output

tf.Tensor – Tensor with mean power of 1. It can be interpreted as amplitude or power like the input.
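A NumPy sketch of the two branches (amplitudes are divided by the RMS value, powers by their mean; the function name is hypothetical):

```python
import numpy as np

def normalize_power(a, is_amplitude=True):
    """Sketch: scale a so that the mean power per element is 1."""
    if is_amplitude:
        # power is |a|^2, so divide the amplitudes by the RMS value
        return a / np.sqrt(np.mean(np.abs(a) ** 2))
    # a already holds powers, so divide by their mean
    return a / np.mean(a)

a = np.array([1.0, 3.0])               # mean power (1 + 9) / 2 = 5
print(np.mean(np.abs(normalize_power(a)) ** 2))  # 1.0
```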

pyjama.utils.plot_matrix(a, figsize=(6.4, 4.8))[source]

Plots a matrix as a heatmap. If a has more than 2 dimensions, only the first matrix (all leading dimensions indexed at 0) is plotted.

Input
  • a ([…, M, N]) – Matrix to plot.

  • figsize ((float, float)) – Width and height in inches.

pyjama.utils.plot_to_image(figure)[source]

Converts a matplotlib figure to a PNG image and returns it. The supplied figure is closed and inaccessible after this call.

Input

figure (Figure) – An instance of Figure to convert to an image tensor.

Output

[1, height, width, 4], tf.uint8 – Image of the figure.

pyjama.utils.reduce_matrix_rank(matrix, rank)[source]

Reduce the rank of a matrix by setting the smallest singular values to zero.

Input
  • matrix ([…, M, N])

  • rank (int) – Desired rank of the matrix.

Output

[…, M, N] – Matrix with rank at most rank.
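The standard way to do this is a truncated SVD; a sketch of the technique (not necessarily the library's exact implementation):

```python
import numpy as np

def reduce_matrix_rank(matrix, rank):
    """Sketch: zero the smallest singular values and rebuild the matrix."""
    u, s, vh = np.linalg.svd(matrix, full_matrices=False)
    s = s.copy()
    s[..., rank:] = 0.0  # keep only the `rank` largest singular values
    # scale the columns of u by s, then multiply by vh
    return (u * s[..., None, :]) @ vh

m = np.random.default_rng(0).normal(size=(4, 4))  # full rank almost surely
m1 = reduce_matrix_rank(m, 1)
print(np.linalg.matrix_rank(m1))  # 1
```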

pyjama.utils.reduce_mean_power(a, axis=None, keepdims=False)[source]

Calculates the mean power of a tensor along the given axis.

Input
  • a (tf.Tensor) – Tensor of which the mean power is calculated.

  • axis (int or list of int) – Axis along which the mean power is calculated. If None, the mean power is calculated over all axes.

  • keepdims (bool) – If True, the reduced dimensions are kept with size 1.

Output

tf.Tensor – Contains the mean power of a, calculated along axis.
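This is the mean of \(|a|^2\) along the given axis; a NumPy sketch:

```python
import numpy as np

def reduce_mean_power(a, axis=None, keepdims=False):
    """Sketch: mean of |a|^2 along `axis`."""
    return np.mean(np.abs(a) ** 2, axis=axis, keepdims=keepdims)

a = np.array([[1.0, 3.0], [2.0, 2.0]])
print(reduce_mean_power(a, axis=1))  # [5. 4.]
```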

pyjama.utils.sample_complex_gaussian(shape, dtype)[source]

Samples from a circularly-symmetric complex Gaussian distribution.

Input
  • shape (list of int) – Shape of tensor to return.

  • dtype (tf.complex) – Datatype of tensor to return.

Output

[shape], dtype – Each element is sampled from a circularly-symmetric complex Gaussian whose real and imaginary parts each have variance 1/2. This results in element-wise \(E[|x|^2] = 1\).
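A NumPy sketch of this sampling (real and imaginary parts each \(\mathcal{N}(0, 1/2)\), so the powers sum to 1 in expectation):

```python
import numpy as np

def sample_complex_gaussian(shape):
    """Sketch: circularly-symmetric complex Gaussian with E[|x|^2] = 1."""
    std = np.sqrt(0.5)  # variance 1/2 per real/imaginary component
    return (np.random.normal(0.0, std, shape)
            + 1j * np.random.normal(0.0, std, shape)).astype(np.complex64)

x = sample_complex_gaussian((100000,))
print(np.mean(np.abs(x) ** 2))  # close to 1
```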

pyjama.utils.sample_complex_uniform_disk(shape, dtype)[source]

Samples uniformly from a disk in the complex plane.

Input
  • shape (list of int) – Shape of tensor to return.

  • dtype (tf.complex) – Datatype of tensor to return.

Output

[shape], dtype – Each element is sampled uniformly from within a disk of radius \(\sqrt{2}\). This results in element-wise \(E[|x|^2] = 1\).
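One standard construction (a sketch, not necessarily the library's exact implementation) draws the radius as \(R\sqrt{u}\) with \(u\) uniform, which makes the density uniform over the disk's area; with \(R = \sqrt{2}\), \(E[|x|^2] = E[r^2] = R^2/2 = 1\):

```python
import numpy as np

def sample_complex_uniform_disk(shape, radius=np.sqrt(2)):
    """Sketch: uniform sampling inside a disk of the given radius."""
    u = np.random.uniform(0.0, 1.0, shape)
    theta = np.random.uniform(0.0, 2.0 * np.pi, shape)
    r = radius * np.sqrt(u)  # sqrt makes the density uniform over area
    return (r * np.exp(1j * theta)).astype(np.complex64)

x = sample_complex_uniform_disk((100000,))
print(np.mean(np.abs(x) ** 2))  # close to 1
```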

pyjama.utils.sample_function(sampler, dtype)[source]

Returns function which samples from a constellation or a distribution.

Input
  • sampler (str | Constellation | callable) – String in [“uniform”, “gaussian”], an instance of Constellation, or function with signature (shape, dtype) -> tf.Tensor, where elementwise \(E[|x|^2] = 1\).

  • dtype (tf.Dtype) – Defines the datatype the returned function should return.

Output

callable – Function with signature (shape, dtype) -> tf.Tensor which returns a tensor of shape shape with dtype dtype.
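The dispatch can be sketched as follows (a simplified NumPy illustration; the real version also accepts a Sionna Constellation, which is omitted here):

```python
import numpy as np

def sample_function(sampler):
    """Sketch: map a name or callable to a (shape) -> array sampling function."""
    if callable(sampler):
        return sampler  # already a sampling function, pass it through
    if sampler == "gaussian":
        std = np.sqrt(0.5)
        return lambda shape: (np.random.normal(0.0, std, shape)
                              + 1j * np.random.normal(0.0, std, shape))
    if sampler == "uniform":
        def uniform_disk(shape):
            r = np.sqrt(2.0 * np.random.uniform(0.0, 1.0, shape))
            theta = np.random.uniform(0.0, 2.0 * np.pi, shape)
            return r * np.exp(1j * theta)
        return uniform_disk
    raise ValueError(f"Unknown sampler: {sampler}")

f = sample_function("gaussian")
print(f((2, 3)).shape)  # (2, 3)
```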