easy_rec.python.layers

easy_rec.python.layers.dnn

class easy_rec.python.layers.dnn.DNN(dnn_config, l2_reg, name='dnn', is_training=False, last_layer_no_activation=False, last_layer_no_batch_norm=False)[source]

Bases: object

__init__(dnn_config, l2_reg, name='dnn', is_training=False, last_layer_no_activation=False, last_layer_no_batch_norm=False)[source]

Initializes a DNN Layer.

Parameters:
  • dnn_config – instance of easy_rec.python.protos.dnn_pb2.DNN

  • l2_reg – l2 regularizer

  • name – scope of the DNN, so that its parameters can be separated from those of other DNNs

  • is_training – whether in the training phase; affects batch norm and dropout

  • last_layer_no_activation – whether to skip the activation function in the last layer

  • last_layer_no_batch_norm – whether to skip batch norm in the last layer

property hidden_units
property dropout_ratio
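
For intuition, here is a minimal NumPy sketch of the forward pass such a stack computes, including the last_layer_no_activation behavior. The relu activation and the explicit weight lists are illustrative assumptions; the real layer builds its parameters, batch norm, and dropout from dnn_config, all omitted here:

    import numpy as np

    def relu(x):
        return np.maximum(x, 0.0)

    def dnn_forward(x, weights, biases, last_layer_no_activation=False):
        # weights/biases: one (w, b) pair per entry in hidden_units.
        n = len(weights)
        for i, (w, b) in enumerate(zip(weights, biases)):
            x = x @ w + b
            # Skip the activation on the final layer when the flag is set,
            # mirroring last_layer_no_activation.
            if not (last_layer_no_activation and i == n - 1):
                x = relu(x)
        return x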

easy_rec.python.layers.embed_input_layer

class easy_rec.python.layers.embed_input_layer.EmbedInputLayer(feature_groups_config, dump_dir=None)[source]

Bases: object

__init__(feature_groups_config, dump_dir=None)[source]

easy_rec.python.layers.fm

class easy_rec.python.layers.fm.FM(name='fm')[source]

Bases: object

__init__(name='fm')[source]

Initializes an FM Layer.

Parameters:

name – scope of the FM
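
Assuming the usual factorization-machine semantics (this page does not spell them out), here is a standalone NumPy sketch of the second-order interaction the layer is named for, using the sum-square identity; whether the embedding dimension is additionally reduced is an implementation detail not stated here:

    import numpy as np

    def fm_second_order(v):
        # v: (batch_size, num_fields, dim) stacked field embeddings.
        # Uses the identity sum_{i<j} v_i * v_j
        #   = 0.5 * ((sum_i v_i)^2 - sum_i v_i^2), computed element-wise.
        sum_then_square = np.square(v.sum(axis=1))
        square_then_sum = np.square(v).sum(axis=1)
        return 0.5 * (sum_then_square - square_then_sum)  # (batch_size, dim)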

easy_rec.python.layers.input_layer

class easy_rec.python.layers.input_layer.InputLayer(feature_configs, feature_groups_config, variational_dropout_config=None, wide_output_dim=-1, ev_params=None, embedding_regularizer=None, kernel_regularizer=None, is_training=False, is_predicting=False)[source]

Bases: object

Input layer for generating input features.

This class applies feature_columns to input tensors to generate wide features and deep features.

__init__(feature_configs, feature_groups_config, variational_dropout_config=None, wide_output_dim=-1, ev_params=None, embedding_regularizer=None, kernel_regularizer=None, is_training=False, is_predicting=False)[source]
has_group(group_name)[source]
get_combined_feature(features, group_name, is_dict=False)[source]

Get combined features by group_name.

Parameters:
  • features – input tensor dict

  • group_name – feature_group name

  • is_dict – whether to return group_features in dict

Returns:
  • features – all group features concatenated together

  • group_features – list of individual features

  • feature_name_to_output_tensors – dict mapping feature_name to feature_value; only present when is_dict is True

get_plain_feature(features, group_name)[source]

Get plain features by group_name. Excludes sequence features.

Parameters:
  • features – input tensor dict

  • group_name – feature_group name

Returns:
  • features – all plain features concatenated together

  • group_features – list of individual features

get_sequence_feature(features, group_name)[source]

Get sequence features by group_name. Excludes plain features.

Parameters:
  • features – input tensor dict

  • group_name – feature_group name

Returns:

seq_features – list of sequence features; each element is a tuple of a 3-D embedding tensor of shape (batch_size, max_seq_len, embedding_dimension) and a 1-D sequence length tensor.
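
As an illustration of consuming one such (embedding, length) tuple, a minimal NumPy sketch of masked mean pooling; the pooling itself is an assumption for illustration, not something this method performs:

    import numpy as np

    def masked_mean_pool(seq_emb, seq_len):
        # seq_emb: (batch_size, max_seq_len, dim) padded embeddings.
        # seq_len: (batch_size,) true lengths, used to mask out padding.
        batch, max_len, _ = seq_emb.shape
        mask = (np.arange(max_len)[None, :] < seq_len[:, None]).astype(seq_emb.dtype)
        summed = (seq_emb * mask[:, :, None]).sum(axis=1)
        return summed / np.maximum(seq_len, 1)[:, None].astype(seq_emb.dtype)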

single_call_input_layer(features, group_name, feature_name_to_output_tensors=None)[source]

Get features by group_name.

Parameters:
  • features – input tensor dict

  • group_name – feature_group name

  • feature_name_to_output_tensors – when sequence features are configured, this dict holds the key tensors so that they can be reused.

Returns:
  • features – all features concatenated together

  • group_features – list of individual features

get_wide_deep_dict()[source]

Get wide or deep indicator for feature columns.

Returns:

dict of {feature_name: WideOrDeep}
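
Putting the documented methods together, a hypothetical call sequence; the 'user' group name is a placeholder, and the two-value unpacking assumes the default is_dict=False:

    def build_group_features(input_layer, features, group_name='user'):
        # input_layer: an InputLayer built from the pipeline's feature
        # configs; features: the input tensor dict fed to the model.
        if not input_layer.has_group(group_name):
            return None
        # is_dict=False (the default): concatenated features plus the
        # per-feature list.
        concat_fea, group_feas = input_layer.get_combined_feature(
            features, group_name)
        wide_or_deep = input_layer.get_wide_deep_dict()  # {feature_name: WideOrDeep}
        return concat_fea, group_feas, wide_or_deep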

easy_rec.python.layers.layer_norm

class easy_rec.python.layers.layer_norm.LayerNormalization(*args, **kwargs)[source]

Bases: Layer

Layer normalization for BTC (batch, time, channels) format; supports L2 (default) and L1 modes.

__init__(hidden_size, params={})[source]
build(_)[source]

Creates the variables of the layer (optional, for subclass implementers).

This is a method that implementers of subclasses of Layer or Model can override if they need a state-creation step in-between layer instantiation and layer call.

This is typically used to create the weights of Layer subclasses.

Parameters:

input_shape – Instance of TensorShape, or list of instances of TensorShape if the layer expects a list of inputs (one instance per input).

call(x)[source]

This is where the layer’s logic lives.

Note that the call() method in tf.keras differs slightly from the Keras API: in the Keras API, masking support can be passed to layers as additional arguments, whereas tf.keras provides a compute_mask() method for masking.

Parameters:
  • inputs – Input tensor, or list/tuple of input tensors.

  • **kwargs – Additional keyword arguments. Currently unused.

Returns:

A tensor or list/tuple of tensors.
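
A standalone NumPy sketch of what the two modes plausibly compute over the channel axis of a BTC tensor; the exact form of the L1 mode here is an assumption for illustration:

    import numpy as np

    def layer_norm(x, gamma, beta, mode='L2', eps=1e-6):
        # x: (batch, time, hidden); gamma/beta: (hidden,) learned scale/shift.
        mean = x.mean(axis=-1, keepdims=True)
        if mode == 'L2':
            # Standard mean/variance normalization.
            var = np.square(x - mean).mean(axis=-1, keepdims=True)
            norm = (x - mean) / np.sqrt(var + eps)
        else:
            # Assumed L1 variant: normalize by mean absolute deviation.
            mad = np.abs(x - mean).mean(axis=-1, keepdims=True)
            norm = (x - mean) / (mad + eps)
        return norm * gamma + beta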

easy_rec.python.layers.seq_input_layer

class easy_rec.python.layers.seq_input_layer.SeqInputLayer(feature_configs, feature_groups_config, embedding_regularizer=None, ev_params=None)[source]

Bases: object

__init__(feature_configs, feature_groups_config, embedding_regularizer=None, ev_params=None)[source]
get_wide_deep_dict()[source]

easy_rec.python.layers.multihead_attention

class easy_rec.python.layers.multihead_attention.MultiHeadAttention(head_num, head_size, l2_reg, use_res=False, name='')[source]

Bases: object

__init__(head_num, head_size, l2_reg, use_res=False, name='')[source]

Initializes a MultiHeadAttention Layer.

Parameters:
  • head_num – The number of heads

  • head_size – The dimension of a head

  • l2_reg – l2 regularizer

  • use_res – Whether to use residual connections before output.

  • name – scope of the MultiHeadAttention, so that its parameters can be separated from those of other MultiHeadAttention layers
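
A standalone NumPy sketch of the split/attend/concat core that head_num, head_size, and use_res describe; the input projections are omitted, so this is an illustration rather than EasyRec's exact implementation:

    import numpy as np

    def softmax(x, axis=-1):
        e = np.exp(x - x.max(axis=axis, keepdims=True))
        return e / e.sum(axis=axis, keepdims=True)

    def multi_head_attention(q, k, v, head_num, head_size, use_res=False):
        # q, k, v: (batch, seq_len, head_num * head_size).
        def split(x):  # (b, t, h*d) -> (b, h, t, d)
            b, t, _ = x.shape
            return x.reshape(b, t, head_num, head_size).transpose(0, 2, 1, 3)

        qh, kh, vh = split(q), split(k), split(v)
        # Scaled dot-product attention per head.
        scores = qh @ kh.transpose(0, 1, 3, 2) / np.sqrt(head_size)
        out = softmax(scores) @ vh                      # (b, h, t, d)
        b, h, t, d = out.shape
        out = out.transpose(0, 2, 1, 3).reshape(b, t, h * d)
        # Residual connection before output, as the use_res flag describes.
        return out + q if use_res else out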

easy_rec.python.layers.mmoe

class easy_rec.python.layers.mmoe.MMOE(expert_dnn_config, l2_reg, num_task, num_expert=None, name='mmoe', is_training=False)[source]

Bases: object

__init__(expert_dnn_config, l2_reg, num_task, num_expert=None, name='mmoe', is_training=False)[source]

Initializes an MMOE Layer.

Parameters:
  • expert_dnn_config – an instance or a list of easy_rec.python.protos.dnn_pb2.DNN; if it is a list of configs, the param num_expert is ignored, and if it is a single config, the number of experts is given by num_expert.

  • l2_reg – l2 regularizer.

  • num_task – number of tasks

  • num_expert – number of experts; defaults to the length of expert_dnn_config when a list is given

  • name – scope of the MMOE, so that its parameters can be separated from those of other MMOE layers

  • is_training – whether in the training phase; affects batch norm and dropout

property num_expert
gate(unit, deep_fea, name)[source]
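
A standalone NumPy sketch of the MMoE combination step: each task's gate softmax-weights the expert outputs. Shapes and names are illustrative; in the real layer the gate logits come from a projection like gate(unit, deep_fea, name):

    import numpy as np

    def softmax(x, axis=-1):
        e = np.exp(x - x.max(axis=axis, keepdims=True))
        return e / e.sum(axis=axis, keepdims=True)

    def mmoe_combine(expert_outputs, gate_logits):
        # expert_outputs: (batch, num_expert, dim), outputs of the expert DNNs.
        # gate_logits: (batch, num_task, num_expert), one gate per task.
        gates = softmax(gate_logits, axis=-1)           # (b, task, expert)
        # Per-task softmax-weighted mixture of experts: (b, task, dim).
        return np.einsum('bte,bed->btd', gates, expert_outputs)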