Simulation
The main simulation code.
- class pyjama.simulation_model.Model(*args, **kwargs)[source]
Model to simulate OFDM MIMO transmissions, mainly over 3GPP 38.901 channel models. It supports simulations in the frequency and time domain, as well as simulation of communication with one or more jammers present. Several jammer mitigation algorithms are implemented as well.
In all these simulations, trainable elements such as jammers and receivers are supported (only trainable jammers are implemented so far). A usage sketch follows the parameter and output documentation below.
- Parameters
scenario (str, optional) – The 3GPP 38.901 scenario to simulate. The default is “umi”. Must be one of [“umi”, “uma”, “rma”, “rayleigh”, “multitap_rayleigh”].
carrier_frequency (float, optional) – The carrier frequency in Hz. The default is 3.5e9.
fft_size (int, optional) – The number of OFDM subcarriers. The default is 128.
subcarrier_spacing (float, optional) – The subcarrier spacing in Hz. The default is 30e3.
num_ofdm_symbols (int, optional) – The number of OFDM symbols per resource grid. The default is 14.
cyclic_prefix_length (int, optional) – The length of the cyclic prefix in samples. The default is 20.
maximum_delay_spread (float, optional) – The maximum delay spread in seconds. This parameter is only relevant for time domain simulations. The default is 3e-6.
num_bs_ant (int, optional) – The number of antennas on the basestation. The default is 16.
num_ut (int, optional) – The number of User Terminals (UTs) simulated. The default is 4.
num_ut_ant (int, optional) – The number of antennas on each UT. The default is 1.
num_bits_per_symbol (int, optional) – The number of bits per constellation symbol. The default is 2 (QPSK).
coderate (float, optional) – The coderate of the LDPC code. The default is None, which means no coding.
decoder_parameters (dict, optional) – Additional parameters for the LDPC decoder. By default, the encoder will be set, and hard_out will be set to False (i.e. soft output).
domain (str, optional) – The domain in which the simulation takes place. The default is “freq”. Must be one of [“freq”, “time”].
los (bool or None, optional) – If not None, forces the line-of-sight (LOS) condition between all UTs and the basestation (True forces LOS, False forces NLOS).
indoor_probability (float, optional) – Probability of a UT being indoors. The default is 0.8.
min_ut_velocity (None or tf.float) – Minimum UT velocity [m/s]
max_ut_velocity (None or tf.float) – Maximum UT velocity [m/s]
resample_topology (bool, optional) – If True, the topology is resampled for each batch. The default is True.
perfect_csi (bool, optional) – If True, the channel is assumed to be perfectly known at the receiver. The default is False.
perfect_jammer_csi (bool, optional) – If True, the jammer-BS channel is assumed to be perfectly known at the receiver (for mitigation). The default is False.
num_silent_pilot_symbols (int, optional) – The number of pilot symbols during which no UT transmits (can be used for jammer channel estimation). The default is 0.
channel_parameters (dict, optional) – Additional parameters for the channel model. The default is {}.
jammer_present (bool, optional) – If True, a jammer/jammers are present in the simulation. The default is False.
jammer_power (tf.Tensor broadcastable to [batch_size, num_jammer, num_jammer_ant, num_ofdm_symbols, fft_size], float; or float, optional) – Power of the jamming signal (of each jammer antenna). See OFDMJammer for details. The default is 1.0.
jammer_parameters (dict, optional) – Additional parameters for the jammer, which is an instance of OFDMJammer or TimeDomainOFDMJammer. The default is {“num_tx”: 1, “num_tx_ant”: 1, “normalize_channel”: True}.
min_jammer_velocity (None or tf.float) – Minimum jammer velocity [m/s]
max_jammer_velocity (None or tf.float) – Maximum jammer velocity [m/s]
jammer_mitigation (str or None, optional) – The jammer mitigation algorithm to use. The default is None, i.e. the jammer will not be mitigated. If it is a string, it must be one of [“pos”, “ian”]. See Jammer Mitigation for details.
jammer_mitigation_dimensionality (int or None, optional) – The dimensionality of the jammer mitigation. For each dimension, the largest remaining eigenvalue of the jammer covariance matrix is removed, at the cost of ~1 degree of freedom (depending on the mitigation strategy). If None, the dimensionality will be set to the rank of the jammer covariance matrix. I.e. the total number of jammer antennas transmitting (when perfect_jammer_csi is True) or the maximum of the number of jammer antennas transmitting in the silent pilot symbols and num_silent_pilot_symbols (when perfect_jammer_csi is False). The default is None.
return_jammer_signals (bool, optional) – If True, the received jammer signals during the silent pilot symbols are returned additionally. The default is False.
return_symbols (bool, optional) – If True, the estimated symbols are returned instead of the estimated bit-LLRs. The default is False.
return_decoder_iterations (bool, optional) – If True, instead of the final LLRs, the LLRs after each iteration of the LDPC decoder are returned. The default is False.
- Input
batch_size (int) – The number of resource grids to simulate.
ebno_db (float) – The SNR (Eb/N0) of the signal sent by the UTs in dB.
training (bool, optional) – If True, the model is in training mode. If False, the model is in inference mode. The default is None, which means inference mode. Currently this only matters for the LDPC decoder, which returns intermediate results during training and only the final result during inference.
- Output
(b, llr), (x, x_hat), (b, llr, jammer_signals), or (x, x_hat, jammer_signals) – Either the transmitted bits and the estimated bit-LLRs, or the transmitted symbols and the estimated symbols (depending on return_symbols). If return_jammer_signals is True, the jammer signals during the silent pilot symbols are returned as well.
b ([batch_size, num_tx * num_streams_per_tx * k], tf.float32) – The transmitted bits.
llr ([batch_size, num_tx * num_streams_per_tx * k] or [batch_size, num_tx * num_streams_per_tx * k * num_decoder_iterations], tf.float32) – The estimated bit-LLRs. If return_decoder_iterations is True, the LLRs after each iteration of the LDPC decoder are returned.
x ([batch_size, num_tx * num_streams_per_tx * num_data_symbols], tf.complex64) – The transmitted symbols.
x_hat ([batch_size, num_tx * num_streams_per_tx * num_data_symbols], tf.complex64) – The estimated symbols.
jammer_signals ([batch_size, num_rx, num_rx_ant, num_silent_pilot_symbols, fft_size], tf.complex64) – The jammer signals during the silent pilot symbols.
Note
This simulation model uses the assumption that the jammer is not included in the noise estimation (only N_0). This is a simplification, as the noise is normally estimated separately from the signal (and hence would include noise caused by the jammer). In consequence, the LLR estimation is over-confident, as the noise is underestimated. This is unproblematic for uncoded training (as only the sign of the LLR matters), but degrades the performance of the LDPC decoder. The stronger the jammer, the more the performance is degraded.
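A minimal usage sketch follows. The exact call convention and the jammer settings below are assumptions based on the parameter and input documentation above, not a prescribed configuration.

from pyjama.simulation_model import Model

# Frequency-domain UMi simulation with one single-antenna jammer that is
# mitigated with the "pos" strategy, using 4 silent pilot symbols for
# jammer channel estimation.
model = Model(
    scenario="umi",
    num_bs_ant=16,
    num_ut=4,
    jammer_present=True,
    jammer_power=10.0,
    jammer_mitigation="pos",
    num_silent_pilot_symbols=4,
)

# One forward pass: 32 resource grids at an Eb/N0 of 5 dB.
# With the defaults (return_symbols=False, return_jammer_signals=False),
# this returns the transmitted bits and the estimated bit-LLRs.
b, llr = model(batch_size=32, ebno_db=5.0)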
- build(input_shape)[source]
Builds the model based on input shapes received.
This is to be used for subclassed models, which do not know at instantiation time what their inputs look like.
This method only exists for users who want to call model.build() in a standalone way (as a substitute for calling the model on real data to build it). It will never be called by the framework (and thus it will never throw unexpected errors in an unrelated workflow).
- Parameters
input_shape – Single tuple, TensorShape instance, or list/dict of shapes, where shapes are tuples, integers, or TensorShape instances.
- Raises
ValueError – 1. In case of invalid user-provided data (not of type tuple, list, TensorShape, or dict). 2. If the model requires call arguments that are agnostic to the input shapes (positional or keyword arg in call signature). 3. If not all layers were properly built. 4. If float type inputs are not supported within the layers. In each of these cases, the user should build their model by calling it on real tensor data.
- class pyjama.simulation_model.ReturnIntermediateLDPC5GDecoder(*args, **kwargs)[source]
A wrapper around sionna.fec.ldpc.decoding.LDPC5GDecoder which returns the LLRs after each iteration. This can be useful if gradients through a normal LDPC5GDecoder are not good enough (a usage sketch follows the method documentation below).
- Parameters
encoder (LDPC5GEncoder) – An instance of LDPC5GEncoder containing the correct code parameters.
trainable (bool) – Defaults to False. If True, every outgoing variable node message is scaled with a trainable scalar.
cn_type (str) – Defaults to “boxplus-phi”. One of {“boxplus”, “boxplus-phi”, “minsum”}, where “boxplus” implements the single-parity-check APP decoding rule, “boxplus-phi” implements the numerically more stable version of boxplus [Ryan], and “minsum” implements the min-approximation of the CN update rule [Ryan].
hard_out (bool) – Defaults to True. If True, the decoder provides hard-decided codeword bits instead of soft-values.
track_exit (bool) – Defaults to False. If True, the decoder tracks EXIT characteristics. Note that this requires the all-zero CW as input.
return_infobits (bool) – Defaults to True. If True, only the k info bits (soft or hard-decided) are returned. Otherwise all n positions are returned.
prune_pcm (bool) – Defaults to True. If True, all punctured degree-1 VNs and connected check nodes are removed from the decoding graph (see [Cammerer] for details). Besides numerical differences, this should yield the same decoding result but improves the decoding throughput and reduces the memory footprint.
num_iter (int) – Defines the number of decoder iterations (no early stopping is used at the moment).
output_dtype (tf.DType) – Defaults to tf.float32. Defines the output datatype of the layer (internal precision remains tf.float32).
- Input
llrs_ch ([…,n], tf.float32) – 2+D tensor containing the channel logits/llr values.
- Output
[…, n, k], tf.float32 – 3+D tensor of the same shape as the input up to the last dimension, containing, for each decoder iteration, bit-wise soft-estimates (or hard-decided bit-values) of all codeword bits. The output of the final decoder iteration is located at [:, -1]. If return_infobits is True, only the k information bits are returned.
- call(llr_ch)[source]
Iterative BP decoding function.
This function performs num_iter belief propagation decoding iterations and returns the estimated codeword.
- Parameters
inputs (tf.float32) – Tensor of shape […, n] containing the channel logits/LLR values.
- Returns
Tensor of shape […, n] or […, k] (if return_infobits is True) containing bit-wise soft-estimates (or hard-decided bit-values) of all codeword bits (or info bits, respectively).
- Return type
tf.float32
- Raises
ValueError – If inputs is not of shape [batch_size, n].
ValueError – If num_iter is not an integer greater than or equal to 0.
InvalidArgumentError – If rank(inputs) < 2.
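A minimal sketch of wrapping a Sionna LDPC code with this decoder. The code dimensions, iteration count, and the random stand-in LLRs are illustrative assumptions.

import tensorflow as tf
from sionna.fec.ldpc.encoding import LDPC5GEncoder
from pyjama.simulation_model import ReturnIntermediateLDPC5GDecoder

k, n = 512, 1024                   # assumed code parameters
encoder = LDPC5GEncoder(k, n)
decoder = ReturnIntermediateLDPC5GDecoder(encoder, num_iter=20, hard_out=False)

llr_ch = tf.random.normal([8, n])  # stand-in channel LLRs for 8 codewords
llr_per_iter = decoder(llr_ch)     # soft estimates for every decoder iteration
llr_final = llr_per_iter[:, -1]    # per the docs, index -1 holds the final iteration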
- pyjama.simulation_model.bar_plot(values)[source]
Display a bar plot of the values.
- Input
values (list)
- pyjama.simulation_model.load_weights(model, weights_filename='weights.pickle')[source]
Loads the weights from a file into the model.
- Input
model (Model) – The model to load the weights into.
weights_filename (str, optional) – The name of the file containing the weights. The default is “weights.pickle”.
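A minimal sketch of restoring saved weights. Building the model with one forward pass before loading is an assumption about the usual Keras workflow, not a documented requirement.

from pyjama.simulation_model import Model, load_weights

model = Model(jammer_present=True, jammer_mitigation="pos",
              num_silent_pilot_symbols=4)
model(batch_size=1, ebno_db=0.0)                        # build the model once
load_weights(model, weights_filename="weights.pickle")  # restore trained weights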
- pyjama.simulation_model.negative_function(fn)[source]
Given a callable with numerical output, returns a callable with the same output, but with the sign flipped.
- Input
fn (callable) – A callable with numerical output.
- Output
callable – A callable with the same output as fn, but with the sign flipped.
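A minimal sketch: flipping the sign of a loss turns a minimization into a maximization, e.g. when a jammer should be trained to maximize the receiver's error. The choice of an L1 loss here is an illustrative assumption.

import tensorflow as tf
from pyjama.simulation_model import negative_function

l1 = tf.keras.losses.MeanAbsoluteError()
adversarial_loss = negative_function(l1)
# adversarial_loss(y_true, y_pred) now returns -l1(y_true, y_pred)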
- pyjama.simulation_model.relative_singular_values(jammer_signals)[source]
Returns the normalized singular values of the jammer signals.
- Input
jammer_signals ([batch_size, num_rx, num_rx_ant, num_ofdm_symbols, fft_size]) – received jammer signals during silent pilot symbols
- Output
[num_rx * num_rx_ant], float – Singular values, sorted in descending order and normalized so that the eigenvalues sum to 1.
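A minimal sketch of inspecting how many spatial dimensions a jammer occupies, by combining this function with bar_plot. The model configuration is an illustrative assumption; it must use return_jammer_signals=True and num_silent_pilot_symbols > 0.

import numpy as np
from pyjama.simulation_model import Model, bar_plot, relative_singular_values

model = Model(jammer_present=True, num_silent_pilot_symbols=4,
              return_jammer_signals=True)
b, llr, jammer_signals = model(batch_size=16, ebno_db=5.0)

sv = relative_singular_values(jammer_signals)
bar_plot(list(np.asarray(sv)))  # a few dominant values suggest a low-rank jammer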
- pyjama.simulation_model.simulate_model(model, legend, add_bler=False, verbose=True)[source]
Simulates the model on the SNRs given by ebno_dbs and plots the BER.
- Input
model (Model) – The model to simulate.
legend (str) – The legend to use in the plot.
add_bler (bool, optional) – If True, the BLER will be calculated and added to the plot. The default is False.
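A minimal sketch comparing an unmitigated and a mitigated receiver. The constructor arguments and legend strings are illustrative assumptions; the SNR range comes from the module-level ebno_dbs.

from pyjama.simulation_model import Model, simulate_model

baseline  = Model(jammer_present=True)
mitigated = Model(jammer_present=True, jammer_mitigation="pos",
                  num_silent_pilot_symbols=4)

simulate_model(baseline, "jammer, no mitigation", add_bler=True)
simulate_model(mitigated, "jammer, pos mitigation", add_bler=True)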
- pyjama.simulation_model.singular_values(jammer_signals)[source]
Returns the unnormalized (and not averaged) singular values of the jammer signals.
- Input
jammer_signals ([batch_size, num_rx, num_rx_ant, num_ofdm_symbols, fft_size]) – received jammer signals during silent pilot symbols
- Output
[batch_size, fft_size, num_rx * num_rx_ant], float – Singular values, sorted in descending order.
- pyjama.simulation_model.tensorboard_validate_model(model, log_dir)[source]
TODO Simple BER validation on tensorboard
- pyjama.simulation_model.train_model(model, learning_rate=0.001, loss_fn=None, ebno_db=0.0, loss_over_logits=None, num_iterations=5000, weights_filename='weights.pickle', log_tensorboard=False, validate_ber_tensorboard=False, log_weight_images=False, show_final_weights=False)[source]
Trains the model on a single SNR and saves all model weights in a file. Optionally, the loss as well as the weights can be logged to tensorboard.
- Input
model (Model) – The model to train.
loss_fn (callable, optional) – The loss function to use. It should subclass (or implement the interface of) tf.keras.losses.Loss. Depending on model._return_symbols and loss_over_logits, the loss function is applied to the symbols, bit-LLRs or the bit estimates. If None, an L1-loss is used. The default is None.
ebno_db (float, optional) – The SNR on which to train the model. The default is 0.0.
loss_over_logits (bool, optional) – If True, a sigmoid function is applied to the input of the loss function before calculating the loss. This is only done if model._return_symbols is False (otherwise, this parameter is ignored). The default is None, which means the input is used as-is.
num_iterations (int, optional) – The number of batches to train the model. The default is 5000.
weights_filename (str, optional) – The name of the file to save the weights in. The default is “weights.pickle”.
log_tensorboard (bool, optional) – If True, the loss is logged to tensorboard. The default is False.
validate_ber_tensorboard (bool, optional) – If True, a fast BER plot is logged to tensorboard as a scalar after training. The default is False.
log_weight_images (bool, optional) – If True, the learned weights are logged to tensorboard as images every 500 iterations. The default is False.
show_final_weights (bool, optional) – If True, the learned weights are plotted after training. The default is False.
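A minimal training sketch: a model with a trainable jammer is trained at a single SNR and its weights are saved. Passing "trainable": True through jammer_parameters and pairing train_model with negative_function (so the jammer maximizes the symbol error) are assumptions about intended use, not a documented recipe.

import tensorflow as tf
from pyjama.simulation_model import Model, negative_function, train_model

model = Model(jammer_present=True, return_symbols=True,
              jammer_parameters={"num_tx": 1, "num_tx_ant": 1,
                                 "normalize_channel": True,
                                 "trainable": True})   # "trainable" key is assumed
loss = negative_function(tf.keras.losses.MeanAbsoluteError())

train_model(model, learning_rate=1e-3, loss_fn=loss, ebno_db=0.0,
            num_iterations=2000, weights_filename="jammer_weights.pickle",
            show_final_weights=True)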