easy_rec.python.layers

easy_rec.python.layers.dnn

class easy_rec.python.layers.dnn.DNN(dnn_config, l2_reg, name='dnn', is_training=False)[source]

Bases: object

__init__(dnn_config, l2_reg, name='dnn', is_training=False)[source]

Initializes a DNN Layer.

Parameters
  • dnn_config – instance of easy_rec.python.protos.dnn_pb2.DNN

  • l2_reg – l2 regularizer

  • name – scope of the DNN, so that its parameters can be separated from those of other DNNs

  • is_training – whether in the training phase; affects batch normalization and dropout

property hidden_units
property dropout_ratio
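
The DNN layer stacks fully connected layers according to the hidden_units in its dnn_config. A minimal NumPy sketch of the forward pass such a stack represents (illustrative only, not EasyRec's TensorFlow implementation; the random weights stand in for learned parameters):

```python
import numpy as np

def dnn_forward(x, hidden_units, seed=0):
    """Minimal MLP forward pass: one Dense + ReLU per entry in
    hidden_units, mirroring what a DNN layer stacks from its config."""
    rng = np.random.default_rng(seed)
    out = x
    for units in hidden_units:
        w = rng.standard_normal((out.shape[-1], units)) * 0.1
        b = np.zeros(units)
        out = np.maximum(out @ w + b, 0.0)  # ReLU activation
    return out

x = np.ones((4, 16))                        # batch of 4, 16 input features
y = dnn_forward(x, hidden_units=[64, 32, 8])
print(y.shape)  # (4, 8)
```

The last entry of hidden_units determines the output dimension, which is why downstream towers read it from the hidden_units property.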

easy_rec.python.layers.embed_input_layer

class easy_rec.python.layers.embed_input_layer.EmbedInputLayer(feature_groups_config, dump_dir=None)[source]

Bases: object

__init__(feature_groups_config, dump_dir=None)[source]

easy_rec.python.layers.fm

class easy_rec.python.layers.fm.FM(name='fm')[source]

Bases: object

__init__(name='fm')[source]

Initializes an FM Layer.

Parameters

name – scope of the FM layer
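
The FM layer computes second-order feature interactions. Assuming the standard factorization-machine formulation, the pairwise term can be computed with the sum-square trick, sketched here in NumPy (illustrative, not EasyRec's implementation):

```python
import numpy as np

def fm_second_order(embeddings):
    """Second-order FM interaction via the sum-square trick.

    embeddings: array of shape (batch, num_fields, dim).
    Returns the (batch, dim) pairwise-interaction term
        0.5 * ((sum_i v_i)^2 - sum_i v_i^2).
    """
    sum_then_square = np.sum(embeddings, axis=1) ** 2   # (batch, dim)
    square_then_sum = np.sum(embeddings ** 2, axis=1)   # (batch, dim)
    return 0.5 * (sum_then_square - square_then_sum)

# With two fields, the interaction equals the elementwise product v1 * v2.
v = np.array([[[1.0, 2.0], [3.0, 4.0]]])
print(fm_second_order(v))  # [[3. 8.]]
```

The trick reduces the O(num_fields^2) pairwise sum to two O(num_fields) reductions.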

easy_rec.python.layers.input_layer

class easy_rec.python.layers.input_layer.InputLayer(feature_configs, feature_groups_config, variational_dropout_config=None, wide_output_dim=- 1, use_embedding_variable=False, embedding_regularizer=None, kernel_regularizer=None, is_training=False)[source]

Bases: object

Input layer for generating input features.

This class applies feature_columns to input tensors to generate wide features and deep features.

__init__(feature_configs, feature_groups_config, variational_dropout_config=None, wide_output_dim=- 1, use_embedding_variable=False, embedding_regularizer=None, kernel_regularizer=None, is_training=False)[source]
has_group(group_name)[source]
target_attention(dnn_config, deep_fea, name)[source]
call_seq_input_layer(features, all_seq_att_map_config, feature_name_to_output_tensors=None)[source]
single_call_input_layer(features, group_name, is_combine=True, feature_name_to_output_tensors=None)[source]

Get features by group_name.

Parameters
  • features – input tensor dict

  • group_name – feature_group name

  • is_combine – whether to combine sequence features over the time dimension.

  • feature_name_to_output_tensors – if set, sequence features will take key tensors from feature_name_to_output_tensors to reuse.

Returns
  • features – all features concatenated together

  • group_features – list of features

  • seq_features – list of sequence features; each element is a tuple of a 3-dimensional embedding tensor (batch_size, max_seq_len, embedding_dimension) and a 1-dimensional sequence-length tensor

get_wide_deep_dict()[source]

Get wide or deep indicator for feature columns.

Returns

dict of { feature_name: WideOrDeep }
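
When is_combine is True, sequence features are collapsed over the time dimension. A NumPy sketch of one plausible combine step, mean-pooling a padded (batch_size, max_seq_len, embedding_dimension) tensor with its sequence-length tensor (the exact pooling EasyRec applies may differ):

```python
import numpy as np

def combine_seq_feature(seq_embed, seq_len):
    """Mean-pool a padded sequence embedding over its valid timesteps.

    seq_embed: (batch, max_seq_len, dim) embedding tensor
    seq_len:   (batch,) valid sequence lengths
    Returns:   (batch, dim) combined feature.
    """
    max_len = seq_embed.shape[1]
    # mask[b, t] = 1 for valid timesteps t < seq_len[b], else 0
    mask = (np.arange(max_len)[None, :] < seq_len[:, None]).astype(seq_embed.dtype)
    summed = (seq_embed * mask[:, :, None]).sum(axis=1)
    return summed / np.maximum(seq_len, 1)[:, None]

emb = np.arange(12, dtype=float).reshape(2, 3, 2)  # batch 2, max_len 3, dim 2
lens = np.array([2, 3])
print(combine_seq_feature(emb, lens))  # [[1. 2.] [8. 9.]]
```

The mask ensures padded timesteps beyond each sequence's length contribute nothing to the pooled feature.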

easy_rec.python.layers.layer_norm

class easy_rec.python.layers.layer_norm.LayerNormalization(*args, **kwargs)[source]

Bases: tensorflow.python.keras.legacy_tf_layers.base.Layer

Layer normalization for BTC format; supports L2 (default) and L1 modes.

__init__(hidden_size, params={})[source]
build(_)[source]

Creates the variables of the layer (optional, for subclass implementers).

This is a method that implementers of subclasses of Layer or Model can override if they need a state-creation step in-between layer instantiation and layer call.

This is typically used to create the weights of Layer subclasses.

Parameters

input_shape – Instance of TensorShape, or list of instances of TensorShape if the layer expects a list of inputs (one instance per input).

call(x)[source]

This is where the layer’s logic lives.

Note that the call() method in tf.keras differs slightly from the Keras API: in the Keras API, you can pass masking support to layers as additional arguments, whereas tf.keras provides a compute_mask() method to support masking.

Parameters
  • inputs – Input tensor, or list/tuple of input tensors.

  • **kwargs – Additional keyword arguments. Currently unused.

Returns

A tensor or list/tuple of tensors.
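
A NumPy sketch of the two normalization modes the class documents: L2 normalizes by the standard deviation, while an L1 mode plausibly normalizes by the mean absolute deviation (the learned scale and bias variables are omitted for brevity; this is illustrative, not the layer's actual code):

```python
import numpy as np

def layer_norm(x, mode="L2", epsilon=1e-6):
    """Normalize over the last (channel) axis of a BTC tensor.

    'L2' divides by the standard deviation; 'L1' divides by the
    mean absolute deviation.
    """
    mean = x.mean(axis=-1, keepdims=True)
    if mode == "L2":
        var = ((x - mean) ** 2).mean(axis=-1, keepdims=True)
        return (x - mean) / np.sqrt(var + epsilon)
    mad = np.abs(x - mean).mean(axis=-1, keepdims=True)
    return (x - mean) / (mad + epsilon)

x = np.array([[1.0, 2.0, 3.0]])
print(layer_norm(x))             # ~ [[-1.2247  0.  1.2247]]
print(layer_norm(x, mode="L1"))  # ~ [[-1.5  0.  1.5]]
```

The L1 mode avoids the square root, which can be cheaper and more numerically stable for small hidden sizes.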

easy_rec.python.layers.seq_input_layer

class easy_rec.python.layers.seq_input_layer.SeqInputLayer(feature_configs, feature_groups_config, use_embedding_variable=False)[source]

Bases: object

__init__(feature_configs, feature_groups_config, use_embedding_variable=False)[source]
get_wide_deep_dict()[source]

easy_rec.python.layers.multihead_attention

class easy_rec.python.layers.multihead_attention.MultiHeadAttention(head_num, head_size, l2_reg, use_res=False, name='')[source]

Bases: object

__init__(head_num, head_size, l2_reg, use_res=False, name='')[source]

Initializes a MultiHeadAttention Layer.

Parameters
  • head_num – The number of heads

  • head_size – The dimension of a head

  • l2_reg – l2 regularizer

  • use_res – Whether to use residual connections before output.

  • name – scope of the MultiHeadAttention layer, so that its parameters can be separated from those of other MultiHeadAttention layers
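
A NumPy sketch of what head_num heads of size head_size compute under standard scaled dot-product self-attention (random projections stand in for the layer's learned parameters; the residual connection enabled by use_res is omitted):

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_attention(x, head_num, head_size, seed=0):
    """Scaled dot-product self-attention with head_num heads of head_size each."""
    rng = np.random.default_rng(seed)
    batch, seq_len, dim = x.shape
    model_dim = head_num * head_size
    wq, wk, wv = (rng.standard_normal((dim, model_dim)) * 0.1 for _ in range(3))

    def split(t):  # (batch, seq, model_dim) -> (batch, heads, seq, head_size)
        return t.reshape(batch, seq_len, head_num, head_size).transpose(0, 2, 1, 3)

    q, k, v = split(x @ wq), split(x @ wk), split(x @ wv)
    scores = softmax(q @ k.transpose(0, 1, 3, 2) / np.sqrt(head_size))
    out = scores @ v                               # (batch, heads, seq, head_size)
    return out.transpose(0, 2, 1, 3).reshape(batch, seq_len, model_dim)

x = np.ones((2, 5, 16))
print(multi_head_attention(x, head_num=4, head_size=8).shape)  # (2, 5, 32)
```

Note the output dimension is head_num * head_size, independent of the input embedding dimension.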

easy_rec.python.layers.mmoe

class easy_rec.python.layers.mmoe.MMOE(expert_dnn_config, l2_reg, num_task, num_expert=None, name='mmoe', is_training=False)[source]

Bases: object

__init__(expert_dnn_config, l2_reg, num_task, num_expert=None, name='mmoe', is_training=False)[source]

Initializes an MMOE Layer.

Parameters
  • expert_dnn_config – an instance or a list of easy_rec.python.protos.dnn_pb2.DNN. If it is a list of configs, the parameter num_expert is ignored; if it is a single config, the number of experts is specified by num_expert.

  • l2_reg – l2 regularizer.

  • num_task – number of tasks

  • num_expert – number of experts; defaults to the length of expert_dnn_config when a list is given

  • name – scope of the DNN, so that its parameters can be separated from those of other DNNs

  • is_training – whether in the training phase; affects batch normalization and dropout

property num_expert
gate(unit, deep_fea, name)[source]
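
MMOE routes shared experts to tasks through per-task softmax gates. A NumPy sketch of the combine step this implies (illustrative shapes only; not EasyRec's implementation):

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def mmoe_combine(expert_outputs, gate_logits):
    """Combine expert outputs with per-task softmax gates.

    expert_outputs: (batch, num_expert, expert_dim) expert DNN outputs
    gate_logits:    (batch, num_task, num_expert) per-task gate logits
    Returns:        (batch, num_task, expert_dim) task-specific mixtures.
    """
    gates = softmax(gate_logits)  # each task's expert weights sum to 1
    return np.einsum('bte,bed->btd', gates, expert_outputs)

experts = np.random.default_rng(0).standard_normal((4, 3, 8))  # 3 experts
logits = np.zeros((4, 2, 3))  # 2 tasks; zero logits give uniform gates
out = mmoe_combine(experts, logits)
print(out.shape)  # (4, 2, 8)
```

With uniform gates each task sees the plain average of the experts; training shapes the gate logits so different tasks emphasize different experts.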