Email Insights from Data Science – Part 4

Supervised Model Development

In Part 3 of the series, I implemented several different methods for classifying email content based upon target lexicons and word similarities.  We used a number of different technologies, including CountVectorization, Latent Dirichlet Allocation and Non-Negative Matrix Factorization.  The output from Part 3 was a dataset of email features (including the email content) and two sets of classification labels (six total): three different labels for sentiment and three different labels for professional alignment.

For this post the focus will be on turning the new supervised dataset into machine learning models that can label future emails via inference without the need for an extensive pipeline of operations.  Leveraging a deep learning model, email content can be quickly assessed and labeled on the fly or in periodic batches so corporate morale and alignment can be assessed on a recurring basis.

Since the classification step performed in Part 3 produced varying labels depending upon the technique implemented, I will generate models using all three methods and combine the outputs into an ensemble to help boost performance.  To further increase accuracy I will implement several different time-series model types, including recurrent LSTM / GRU networks, the Transformer architecture and the pretrained Roberta base transformer model from HuggingFace.

Each model will be trained with 3-fold cross validation; the fold outputs will be aggregated into a final score for that model, which will then be aggregated again with all other models to produce a final classification.

Training Pipeline

The first step in the modeling process for supervised learning is the setup and network training.  For this exercise I will be using the PyTorch library to implement custom training modules.  The same approach could just as easily have been implemented with TensorFlow, TensorFlow/Keras or a number of commercial options with the same level of effectiveness.

Since NLP exercises are sequential (time-series) in nature we’ll focus on technologies that specialize in this scenario, specifically recurrent and transformer networks.  As anyone working in NLP is aware, transformers have been the “go to” standard for a while when dealing with sequential text problems, but LSTM and GRU are still viable candidates for certain scenarios, so this exercise will include both recurrent and transformer models to compare the differences in performance.

With the exception of the pretrained model, we will train a word embedding layer specific to the email dataset.  In my opinion this will better align our classifications, but one could also use pretrained embeddings from GloVe or Gensim.  The key benefit of training a custom word embedding is to capture the common vocabulary of the corporation.

Objectives

Our task for this project has been to create machine learning models that predict morale (i.e. sentiment) and alignment (i.e. work/life balance) from raw corporate email content.  Several of the sentiment solutions could be run independently of a neural network, but the level of accuracy would not be comparable, so I chose to implement both the sentiment and topic classification models in a similar manner.

By the end of this post I will have shown the outputs from nine different combinations of labels and model types.  We will focus on just the sentiment labels for time’s sake, given that the alignment labels are processed in the same way using the same methods.

For each sentiment label (from sentiment lexicon 1, sentiment lexicon 2 and Vader), an LSTM recurrent model, a GRU recurrent model and a custom transformer encoder model will be created.  To help identify which labeling method is being used in this article, future references will be to Sentiment Method 1, Sentiment Method 2 and Sentiment Method 3 respectively.

Note.  For reference, “training” means the process of teaching the neural network how to interpret the data in a way advantageous to our goals, while “inference” is the act of using the model to predict an outcome on previously unseen data.

Machine Learning Background

For those that may be newer to the subject, here is a brief overview of how basic machine learning works; a high-level explanation of these technologies without, hopefully, getting too far into the weeds.

A machine learning network is a collection of internally managed values that, when combined with user-supplied input values, produces an output that solves a problem.  Think of a situation where you might want to know what kind of snack to offer elementary students based upon a number of attributes like age, allergies, diet, past dislikes, etc.  These attributes are user-supplied values (also called features) and the network calculates a snack choice based upon these values and its secret (or hidden) values.

To develop the hidden values of the network, training is required.  Current deep-learning methodologies train in an iterative manner, slowly adjusting the internal hidden values of the network in the way necessary to produce an accurate output.  This is not a closed-form calculation done only once; the final training “answer” is reached over many retries until the optimal outcome is achieved.

I’m sure this all sounds pretty vague, so I will attempt an analogy to help clarify.  Pretend you are a manager at a restaurant that only serves one dish.  It is your job to accurately predict how many meals should be ready every 15 minutes to match customer demand, improve server tips and reduce food waste.  Using past knowledge you have your servers prepare 10 meals before opening, but 20 customers show up to order, so you satisfy only 50% of customers and the remainder either wait or leave.  Your servers are a bit upset at the lost wages, so you adjust the ratio and have 12 meals prepared for the next wave of customers.  This time only 11 customers order, so all customers are happy, servers are happy and only 1 meal is wasted.  This fictitious scenario is very rudimentary, but try to think of the “number of meals” as the internal network values (or hidden values); the “number of customers”, “day of week” and “time of day” as the user input values (or features); and the “lost tips”, “wasted food” and “unhappy customers” as the signals driving the network to adjust its hidden values so future predictions are more accurate (i.e. the loss function).

Put in technical terms, modern neural networks require a “forward” pass to calculate (i.e. predict) an output class or numerical value based upon the user input data.  For training the network, the “difference” between the predicted values and the actual values is the error, or “loss”, for that batch of user inputs and is calculated via a “loss function”.  Once the loss is known, the first step of a “backward” pass is performed to calculate the rates of change (i.e. derivatives / gradients) back through the computational graph (i.e. the neural network) for each layer/computation in the model.  Once gradients are known, the final step of the backward pass is the “optimization” step where the gradients are applied to the weights and biases (i.e. the hidden values) in preparation for the next forward training pass.

The image below shows the basic flow of a single-neuron network with one input feature, where the network is expected to predict the same value as the output.  This is an overly simplified example to show the operation of the network and, without going into the math and all of the extraneous details (there are many blogs on the topic), the input value comes into the network on the first forward pass and is multiplied with the neuron weight to produce an output that differs from the expected value (in this case 1) by 0.98.  The backward pass propagates the gradient of the loss back through the graph (in this case the one neuron) and the optimization step adjusts the weight (usually by a small value, but in this case a fictitious 0.10 for clarity) and gets ready for the next forward pass.  The process starts again for the second pass, but now the loss comes out to 0.88, closer to our target (0.0 loss is a perfect score).  This iterative loop is called “gradient descent” and continues until the loss is minimized as much as possible (i.e. is optimal).
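
To make the loop concrete, below is a minimal sketch of the single-neuron example in plain Python.  The starting weight, the fixed 0.10 adjustment and the stopping threshold simply mirror the fictitious numbers above; real training computes the adjustment from the gradients and a learning rate rather than a hard-coded step.

# Illustrative single-neuron "training" loop mirroring the example above.
x, target = 1.0, 1.0    # one input feature and the value we expect back
weight = 0.02           # the neuron's hidden value (illustrative starting point)

for step in range(1, 11):
    prediction = x * weight           # forward pass
    loss = abs(target - prediction)   # "difference" between predicted and actual
    if loss < 0.01:                   # close enough to the target, stop early
        break
    # backward/optimization step, caricatured here as a fixed 0.10 adjustment in
    # the direction that reduces the loss (gradient descent does this properly)
    weight += 0.10 if prediction < target else -0.10
    print(f'step {step}: prediction={prediction:.2f} loss={loss:.2f} new weight={weight:.2f}')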

Dataset Preparation

To kick off the modeling process we’ll need to put our data into the proper format and structure.  Even though most of the data preparation was completed in previous steps, there are still a few modifications needed before we can begin processing.

Label encoding.  For starters, our labels are currently in string format (i.e. ‘neg’, ‘pos’, etc.) and machine learning systems only work with numerical data, so we will need to convert our text labels into integer classes.  The process involves “encoding” each distinct (or unique) value with a number.  For example, if our labels include the four values “neg”, “pos”, “neu” and “unknown” then our encoded classes will be 0=”neg”, 1=”pos”, 2=”neu” and 3=”unknown”.  In this case I consider unknown classes to be neutral, so we’ll combine class types 2 and 3 together for a total of three possible classes.

There are a number of ways to encode classes.  The Scikit-Learn library has a LabelEncoder, and similar utilities exist in other libraries, but since this exercise is very simple I didn’t want to incur the overhead, so I created a quick mapping dictionary and encoded each label myself using the Pandas “apply” method.

				
self.class_encoders = {
    # can also use sklearn.preprocessing.LabelEncoder or similar encoders from other libraries...
    # regardless of method, for non-labels the same encoding scheme must be used during model inference
    'Outside_Hours':lambda x: 0 if x is False else 1,
    'Forwarded':lambda x: 0 if x is False else 1,
    'Source':lambda x: 0 if x == 'deleted' else 1 if x == 'responded' else 2 if x == 'sent' else 3,
    'Class_Alignment_1':lambda x: 0 if x == 'work' else 1 if x == 'fun' else 2,
    'Class_Alignment_2':lambda x: 0 if x == 'work' else 1 if x == 'fun' else 2,
    'Class_Alignment_3':lambda x: 0 if x == 'work' else 1 if x == 'fun' else 2,
    'Class_Sentiment_1':lambda x: 0 if x == 'neg' else 1 if x == 'pos' else 2,
    'Class_Sentiment_2':lambda x: 0 if x == 'neg' else 1 if x == 'pos' else 2,
    'Class_Sentiment_Vader':lambda x: 0 if x == 'neg' else 1 if x == 'pos' else 2,
}

Column names.  Another problem that needs to be addressed involves the names used for the features and labels.  Incoming data columns will often arrive with odd names attached, so either the modeling code needs to be designed to handle arbitrary naming conventions or, to simplify the logic, columns can be renamed to something consistent.  In this case I will rename the “Body” column to “content” and whichever label is selected for processing will be renamed to “label”.  I am not implementing a multi-label graph, so only one label will be active at a time.  This way all of the downstream code after the initial data load can focus on the “content” and “label” columns.
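
A minimal sketch of that renaming step is below.  The standardize_columns helper name is just for illustration; the “Body” column and the Class_Sentiment_1 label come from the dataset described earlier.

import pandas as pd

def standardize_columns(df: pd.DataFrame, label_column: str) -> pd.DataFrame:
    ''' Rename the email body and the currently active label so all downstream
        code only ever sees "content" and "label". '''
    return df.rename(columns={'Body': 'content', label_column: 'label'})

# e.g. model the first sentiment label for this run
# df = standardize_columns(df, 'Class_Sentiment_1')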

Data split.  We ended up with approximately 83K individually classified emails, and that dataset needs to be split into separate datasets: one for training/testing and one for evaluation.  The training dataset will be split again during cross validation.  The evaluation dataset will be reserved for final inference testing.  Once again there are multiple ways to accomplish this, using either Scikit-Learn preprocessing functions or PyTorch tensor methods.  I ended up using Pandas to “sample” the dataset into subsets (labels included) in order to perform additional processing across all columns.
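
A rough sketch of that split using the Pandas “sample” method follows; the split fraction, seed and helper name are assumptions for illustration.

def split_dataset(df, eval_fraction=0.2, seed=42):
    ''' Split the labeled emails into a train/test set and a held-out evaluation
        set using Pandas sampling.  The fraction and seed are illustrative. '''
    eval_df = df.sample(frac=eval_fraction, random_state=seed)
    train_df = df.drop(eval_df.index).reset_index(drop=True)
    return train_df, eval_df.reset_index(drop=True)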

Vocabulary.  Working with text data means more encoding is necessary.  Similar to how each string label was encoded, the email content (i.e. “Body”) also needs to be encoded into unique numeric values.  This is called tokenization, and before that can occur a vocabulary of all the known unique words must be created, with a unique numeric value assigned to each.  This vocabulary is used during training as well as during inference, so it must be preserved for the life of the model, and ideally updated and the model fine-tuned periodically.  For this implementation we will skip words that are not in the initial list.
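
A simplified sketch of building that vocabulary is shown below; the reserved padding id and the minimum frequency cutoff are assumptions, and the real pipeline applies its own token cleanup first.

from collections import Counter

def build_vocab(token_lists, min_freq=2):
    ''' Map each known word to a unique integer id.  Id 0 is reserved for padding;
        rare words below min_freq are simply left out of the vocabulary. '''
    counts = Counter(token for tokens in token_lists for token in tokens)
    vocab = {'<pad>': 0}
    for token, count in counts.items():
        if count >= min_freq:
            vocab[token] = len(vocab)
    return vocab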

Tokenization.  And finally we tokenize the email content into fixed-length vectors of encoded integers (token ids) representing each word in the content.  The token vector will be the initial input for the machine learning model and is consumed by the word embedding layer (more on that later).  For models that have already been pretrained on previous documents, the same tokenization method used to pretrain the model must be used during fine-tuning.
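
Continuing the sketch above, encoding one email against that vocabulary might look like the following; the whitespace split is a stand-in for the real token cleanup, out-of-vocabulary words are skipped as described, and 192 matches the max token length used later in this post.

def tokenize(text: str, vocab: dict, max_tokens: int = 192) -> list:
    ''' Convert an email body into a fixed-length vector of integer token ids.
        Words not in the vocabulary are skipped; short sequences are padded. '''
    ids = [vocab[w] for w in text.lower().split() if w in vocab]
    ids = ids[:max_tokens]                             # truncate long emails
    ids += [vocab['<pad>']] * (max_tokens - len(ids))  # pad short emails
    return ids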

Common Framework

In general, each of the models being implemented will train and be evaluated using the same framework of processing steps.  Pytorch is similar to Tensorflow with regard to low-level handling of the training/test loop and the flexibility to implement custom network layers.  In my practice I prefer this level of control and flexibility.  The higher level libraries work well for sequential tasks, but in NLP I find it necessary to pull parameters out at various levels of the network and that requirement dictates the need for more intricate access.

Regardless of the technologies used to generate a data model, it becomes apparent very quickly that a lot of the plumbing can be abstracted and reused across many implementations, so choosing a solution that supports reuse is imperative to controlling overhead and maintaining quality.

For this exercise I created a simple pipeline process rather than adding additional overhead.  In a production scenario the example code could be used, but a number of modifications would be needed to ensure reliability.  Other options, like MLflow or AWS SageMaker, would be good examples of pipeline frameworks to leverage in lieu of custom software.

Graphing

Being able to visualize what the network is doing at any point in time can be a very effective way of determining whether a model is performing well or underperforming due to sub-optimal hyperparameters.

For each individual parameter set within a network (i.e. weights, biases, gradients and outputs), I prefer the basic histogram to represent how the network is behaving based upon initialization values, processing progress and interactions between each network element.

By taking snapshots of the network at various time steps we can see a picture of how the network evolves and determine if the data supports our expectations.  I use these graphs to determine if learning rate is set correctly, if the weight distribution matches best practices for each network component, to inspect gradient sizes and to evaluate layer outputs.

Vanishing or exploding gradients can sink a network’s performance quickly so make sure to capture and evaluate the first dozen or so training iterations.
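
The routine I use ships with the example code; the sketch below is not that routine, just the general idea of snapshotting every parameter (and its gradient, when one exists) into a histogram so the distributions can be compared across folds and epochs.

import matplotlib.pyplot as plt

def snapshot_histograms(model, tag, bins=100):
    ''' Plot a histogram for every parameter tensor and its gradient so
        weight/bias/gradient distributions can be compared over time. '''
    named = list(model.named_parameters())
    fig, axes = plt.subplots(len(named), 2, figsize=(10, 2 * len(named)), squeeze=False)
    for row, (name, param) in enumerate(named):
        axes[row][0].hist(param.detach().cpu().numpy().ravel(), bins=bins)
        axes[row][0].set_title(f'{name} values')
        if param.grad is not None:
            axes[row][1].hist(param.grad.detach().cpu().numpy().ravel(), bins=bins)
        axes[row][1].set_title(f'{name} gradients')
    fig.tight_layout()
    fig.savefig(f'histograms_{tag}.png')
    plt.close(fig)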

Other options.  I like to use my own graphing methods for network evaluation, but an alternative is to log frequent model snapshots for use with TensorBoard.  This graphical tool is very useful for inspecting network performance and provides many options to organize model data into different perspectives.

Weights and Biases

Depending upon the type of layer used within a neural network there could be one or more weight vectors along with a bias vector for each layer.  For example, a fully connected (or linear, or dense) layer has a weight vector and a bias vector.  

The weight vector holds the internal state (or hidden values, or coefficients) for the layer and is what the input feature vector is calculated against.  The weights are the values that are adjusted iteratively (via gradient descent) until the most optimal value for the training dataset is obtained.

The bias vector works similarly to the weights vector, but is not affected by the input values directly; it can be thought of as a set of values used during the layer output calculation to shift the results in a way that better fits the expected outcome and helps improve the effectiveness of subsequent layers.  The bias vector values are updated during gradient descent using a calculation specific to biases, but adjustments are still based upon the partial derivatives calculated during the backward pass.

The images below represent initial weights and biases values versus the results at the end of training.  These images are for the GRU-based model just because there are fewer parameters to graph.  The images are most likely difficult to read in place so open each image in a separate tab to see the finer details.

Even without the details it is easy to see that the biases all start at zero in the initial graph and by the end of training have been adjusted into a distribution over a range (for this training, roughly between -0.02 and 0.02).

The change in weight values is more difficult to see.  Since I am using ReLU-family activation functions I initialized all weights following the Kaiming normal algorithm (a PyTorch built-in), which produces a Gaussian-like distribution that does adjust over time as training occurs but generally retains its shape.  If you look closely, though, there are differences.
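
The initialization helper itself isn’t shown in the code excerpts below, but a stripped-down version might look like the following; the real routine in the example code covers more layer types, so treat this as a sketch of the idea only.

import torch.nn as nn

def init_weights(modules):
    ''' Simplified initialization: Kaiming normal for weights (appropriate for
        ReLU-family activations), zeros for biases. '''
    for module in modules.modules():
        if isinstance(module, (nn.Linear, nn.Embedding)):
            nn.init.kaiming_normal_(module.weight, nonlinearity='leaky_relu')
        if isinstance(module, nn.Linear) and module.bias is not None:
            nn.init.zeros_(module.bias)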

Model GRU Weights and Biases - Fold 0
Model GRU Weights and Biases - Fold 2

Gradients


Watching gradient distributions for each layer is a quick way to see how backpropagation is performing and whether or not gradients are becoming too small (i.e. vanishing) or becoming too large (i.e. exploding).

I use a similar histogram to display network gradients.  The example code included with this article has the routine I created to display the graphs.

The distribution of gradients is much narrower than the weight distributions and depends upon where the layer is located in the network.  Since the backward pass (or backpropagation) starts at the end of the graph and works backwards, the gradients for the layers at the beginning can become very small.  Generating a graph every fold or epoch can help determine if this scenario is occurring.

Below is an example graph for the GRU network at the end of fold 0 and at the end of training.

Visually, a comparison of the two images shows definite changes between the two time periods.  Since the number of bins I’m using for the histogram is only 100 the resolution isn’t the greatest, but what is important is the overall distribution range for each subplot.  For example, the gradient range for the embedding layer was between -0.05 and 0.05 at the end of fold 0 and between -0.02 and 0.04 by the end of training, which is acceptable.  If there were a vanishing gradient problem the range would be in the neighborhood of -1e-6 to 1e-6 or worse.

Model GRU Gradients - Fold 0
Model GRU Gradients - Fold 2

Outputs


The same can be said for graphing a few epochs’ worth of outputs to check how each layer is performing in your network.  A lot of zero values would be a bad sign, as would strange patterns or values clumping at extreme ranges.

Recurrent Networks

To model the supervised dataset I’ll start with the tried and true LSTM and GRU networks.  Both of these network types work well with time-series data and both are relatively easy to set up and train.

To keep the example as simple as possible and still train to an acceptable level of accuracy I kept the number of layers to a minimum with an embedding layer, a recurrent layer and a fully connected layer to calculate the target class.

				
##############################################################################################################################
# Supervised Recurrent Model
##############################################################################################################################

class SupervisedRNN(nn.Module):
    ''' Recurrent network for time-series model of email content '''

    def __init__(self, mode:str, config:dict):
        super().__init__()
        self.batch_size = config['batch_size'] # batch length
        self.max_tokens = config['max_tokens'] # sequence length
        self.embedding_len = config['embedding_len'] # feature length
        self.number_classes = config['number_classes'] # number target classes
        self.dropout = config['dropout']
        self.epochs = config['epochs']
        self.vocab_size = config['vocab_size']
        self.bidirectional = config['bidirectional']
        self.rnn_layers = config['rnn_layers']
        self.config = config

        # define network

        # Embedding layer - training custom token relationships rather than pretrained
        # Note: should train this separately 
        self.embedding = nn.ModuleList([
            nn.Embedding(self.vocab_size, self.embedding_len, scale_grad_by_freq=True),
        ])

        # Recurrent network (Input = BatchSize x MaxTokens x Embedding Length)
        ln_input_len = (2 if self.bidirectional else 1) * self.rnn_layers * self.embedding_len
        self.rnn = nn.ModuleList([
            nn.LSTM(self.embedding_len, self.embedding_len, batch_first=True, num_layers=self.rnn_layers, bidirectional=self.bidirectional) if mode=='lstm' 
            else nn.GRU(self.embedding_len, self.embedding_len, batch_first=True, num_layers=self.rnn_layers, bidirectional=self.bidirectional) if mode=='gru'
            else nn.RNN(self.embedding_len, self.embedding_len, batch_first=True, num_layers=self.rnn_layers, bidirectional=self.bidirectional),
            nn.LayerNorm(ln_input_len), # input size is doubled if bidirectional LSTM
            nn.LeakyReLU(),
            nn.Dropout(self.dropout),
        ])

        # Fully connected network to reduce recurrent weights to log odds
        fc_input_len = (2 if self.bidirectional else 1) * self.rnn_layers * self.embedding_len
        self.fc = nn.ModuleList([
            nn.Linear(fc_input_len, fc_input_len * 2),
            nn.LeakyReLU(),
            nn.Dropout(0.3),
            nn.Linear(fc_input_len * 2, fc_input_len),
            nn.LeakyReLU(),
            nn.Dropout(0.3),
            nn.Linear(fc_input_len, self.number_classes),
            nn.LogSoftmax(dim=1), # log probabilities over the class dimension
        ])

        self.reset_weights(mode='init')

        self.to(DEVICE) # move to GPU if available
        return

    def forward(self, text, **kwargs):
        ''' Forward pass for recurrent network '''
        output_checks = {}
        output_pos = 0

        x = text

        # process the embedding layer
        for m in self.embedding:
            x = m(x)
            if self.config['check_outputs']:
                output_checks[f'{m.__class__.__name__}-Layer{output_pos}'] = x; output_pos += 1

        # spin through the modules defined within the recurrent network portion of the model
        for m in self.rnn:
            if isinstance(m, (nn.LSTM)):
                _, (x,_) = m(x) # use last hidden state since we are labeling the entire sequence
                x = torch.transpose(x, 0, 1) # back to batch first
                x = x.reshape(-1, np.prod(x.shape[-2:])) if x.ndim == 3 else x # flatten the layer/direction dimension into the feature dimension
            elif isinstance(m, (nn.GRU)):
                _, x = m(x) # use last hidden state since we are labeling the entire sequence
                x = torch.transpose(x, 0, 1) # back to batch first
                x = x.reshape(-1, np.prod(x.shape[-2:])) if x.ndim == 3 else x # flatten the layer/direction dimension into the feature dimension
            else:
                x = m(x)

            if self.config['check_outputs']:
                output_checks[f'{m.__class__.__name__}-Layer{output_pos}'] = x; output_pos += 1

        # finish up the pass regressing the recurrent summarized sequence weights into class log odds
        for m in self.fc:
            x = m(x)
            if self.config['check_outputs']:
                output_checks[f'{m.__class__.__name__}-Layer{output_pos}'] = x; output_pos += 1

        return x, output_checks

    def reset_weights(self, mode='fold'):
        ''' Method for initial weights and resetting weights between cross-validation folds '''
        minit = ModelSupport(self.config)
        _ = minit.init_weights(self.embedding) if mode == 'init' else None
        minit.init_weights(self.rnn)
        minit.init_weights(self.fc)
        return

The Embedding Layer

The embedding layer is used to convert the email content that we tokenized into matrices of word associations.  As the network trains, this layer becomes more aware of words that are related to other words in the content of the input emails, similar in a way to how Latent Dirichlet Allocation creates associations based upon the frequency of word occurrences.

FYI, each input word of an email content text sequence gets its own embedding vector.  So if your content contains 10 integer tokens, the input for the embedding layer will be a 2-dimensional tensor of shape [batch size, 10] to match the number of samples (i.e. batch size) and the number of words in each sample.  The size of the embedding layer dictates the output dimension.  If you select an embedding size of 64 the layer will output a 3-dimensional tensor of shape [batch size, 10, 64].

To further explain how the embedding layer functions, the input sequence “Sam I am” tokenized as a five-integer padded input vector of [1,2,3,0,0] would generate an embedding vector of length N for each token in the padded sequence.  The embedding length is configurable, and the length selected is based upon the size of the input corpus vocabulary and how varied the token associations are expected to be.  Ultimately the embedding layer outputs a 3-dimensional tensor of shape [batch size, sequence length, embedding length].
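
To make the shapes concrete, here is a quick nn.Embedding example using the “Sam I am” tokens; the vocabulary size and embedding length are arbitrary.

import torch
import torch.nn as nn

vocab_size, embedding_len = 50, 8                # illustrative sizes
embedding = nn.Embedding(vocab_size, embedding_len)

batch = torch.tensor([[1, 2, 3, 0, 0]])          # "Sam I am" padded to 5 tokens
out = embedding(batch)
print(batch.shape, '->', out.shape)              # torch.Size([1, 5]) -> torch.Size([1, 5, 8])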

The Recurrent Layer

The main processing layer of the network is the recurrent layer.  This layer takes the word association matrix from the embedding layer and builds associations between each of the tokens in sequence.  Where the embedding layer captures word relations (with no regard to order) the recurrent network keeps track of where and when it sees each token in the sequence.  

For this exercise the max email length was set at 192 tokens since stop words, words less than three characters and out-of-vocabulary words were removed.  Even with the reduction of input tokens some emails were truncated during processing.

Since the embedding layer and recurrent layer weights were all initialized with a distribution of float values between -1.0 and 1.0, the type of activation function used will have a significant impact on training.  In this scenario I tend to use the TanH function with plenty of dropout to allow values from the entire range through each layer, but in this exercise I went with the LeakyReLU function and less dropout.  Unlike TanH, which squashes every output into the range -1.0 to 1.0, LeakyReLU passes positive values through unchanged and lets only a small fraction of each negative value filter through.

Note. Activation functions are used to teach the network how to fit the best non-linear path through the training data.  Without an activation function the network would tend to learn a straight line (i.e. linear) path through the data rather than a wavy line that adjusts to the best probabilities for Y given X.

Note. Dropout is used to help the network forget about certain input relationships in a stochastic (or random) manner.  This helps keep the network from remembering too much about the input data and “overfitting”.  Overfitting causes the network to become less effective at predicting accurate values on future data.

For the recurrent layer I also included a normalization layer to help keep gradients in check during backpropagation.  By normalizing the outputs from the recurrent layer the values will all be within a tolerance that is less impactful when calculating the derivatives for network updates.

The Linear Layer

The last layer implemented is the fully connected or linear layer.  This layer helps reduce the outputs of the recurrent network into logits that are then turned into probabilities.  The number of outputs from the linear layer matches the number of possible answers.  For example, in this exercise the possible answers can be “negative”, “positive” or “neutral” so the number of output classes will be three.  

The Softmax function is used to calculate probabilities from the linear outputs.  In this case I’m using the LogSoftmax function paired with the NLLLoss function (negative log likelihood loss): LogSoftmax produces the logarithmic class probabilities, and NLLLoss turns them into the “difference”, or loss, between the predicted class and the actual class.

Note. The LogSoftmax and NLLLoss functions could be combined into one step using CrossEntropyLoss, but I like to keep them separate for debugging purposes.
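
That equivalence is easy to verify with a few made-up logits:

import torch
import torch.nn as nn

logits = torch.randn(4, 3)            # fake linear-layer outputs: batch of 4, 3 classes
targets = torch.tensor([0, 2, 1, 2])  # fake class labels

separate = nn.NLLLoss()(nn.LogSoftmax(dim=1)(logits), targets)
combined = nn.CrossEntropyLoss()(logits, targets)
print(torch.isclose(separate, combined))  # tensor(True) - same loss either way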

Processing

The forward pass is a standard implementation that simply passes the inputs through the network in the proper sequence, with only one exception.  Since we are interested in the entire sequence and not each individual token, I don’t use the per-token outputs from the recurrent network, but rather its last hidden state, which corresponds to the last token in the sequence.  Doing this also requires a little juggling of the data to reshape it into an acceptable format.

Note.  In a production scenario I would enhance this routine to determine the length of the sequence and extract the weights at the actual end of the sequence rather than the “max tokens” end of the sequence.
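
A rough sketch of what that enhancement might look like is below, assuming the unpadded sequence lengths are available from the tokenization step; the helper name and shapes are for illustration only.

import torch

def last_valid_output(rnn_output: torch.Tensor, lengths: torch.Tensor) -> torch.Tensor:
    ''' Select the output at each sample's true final token (lengths are the
        unpadded sequence lengths) rather than the state at the padded end. '''
    idx = (lengths - 1).clamp(min=0)                           # last real position per sample
    idx = idx.view(-1, 1, 1).expand(-1, 1, rnn_output.size(-1))
    return rnn_output.gather(dim=1, index=idx).squeeze(1)      # shape: [batch, hidden]

# usage sketch: output, _ = lstm(x); summary = last_valid_output(output, lengths)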

And finally, the reset_weights function is used to support cross validation and is called after each fold is completed to ensure the network is restored to its original state.

Transformer Networks

Transformer networks grew out of encoder-decoder designs that originally used interconnected recurrent networks to encode text sequences for subsequent decoding (i.e. language translation).  Over time the architecture evolved to replace the recurrent portion with an “attention” mechanism that captures the sequence (i.e. time-series) relationships an RNN would otherwise track.  By combining multiple attention layers together the network is able to more effectively capture the relationships between tokens in sentences, and sentences in paragraphs, and so on.

Researchers quickly discovered that the architecture was also effective at tasks other than translation.  By using only the encoder layer it was possible to capture details about input text that could later be used for sequence-to-sequence comparisons (i.e. sentence comparison), part-of-speech tagging, question and answer, multiple choice and text generation.

For this exercise I used the encoder pattern to model the input email content.  As with the recurrent networks, the transformer network has an embedding layer to learn word associations as well as a linear layer to calculate the output probabilities.

There are a few differences to call out.

  • There is an extra layer included that is not part of the recurrent model: a positional encoder function that adds token position information to the word embedding layer outputs.  The attention heads themselves have no built-in sense of token order, so adding this positional state information helps the network learn that much faster.
  • The LSTM/GRU layer is replaced with a TransformerEncoder network object that is made up of multiple TransformerEncoderLayer objects (or attention heads).  Note. The network object names used here are Pytorch specific.
  • The output from the encoder layer is much larger than the recurrent network so the linear layer reduces the size more aggressively.
  • During the forward pass, rather than extracting only the last hidden state, the full layer output is used and flattened to two dimensions for input into the linear layer. 
				
##############################################################################################################################
# Supervised Transformer Model
##############################################################################################################################

class SupervisedTransformer(nn.Module):
    ''' Transformer network for time-series model of email content '''
    def __init__(self, config:dict):
        super().__init__()
        self.batch_size = config['batch_size'] # batch length
        self.max_tokens = config['max_tokens'] # sequence length
        self.embedding_len = config['embedding_len'] # feature length
        self.number_classes = config['number_classes'] # number target classes
        self.dropout = config['dropout']
        self.epochs = config['epochs']
        self.vocab_size = config['vocab_size']
        self.attention_heads = config['attention_heads']
        self.encoder_layers = config['encoder_layers']
        self.config = config

        # define network

        # Embedding layer - training custom token relationships rather than pretrained
        self.embedding = nn.ModuleList([
            nn.Embedding(self.vocab_size, self.embedding_len, scale_grad_by_freq=True),
        ])

        # Positional encoding layer
        self.pos_encoder = PositionalEncoding(self.embedding_len, dropout=self.dropout, max_len=self.max_tokens, batch_first=True)

        # Encoder network (Input = BatchSize x MaxTokens x Embedding Length)
        attention_layer = nn.TransformerEncoderLayer(self.embedding_len, self.attention_heads, dim_feedforward=self.max_tokens*4, dropout=self.dropout, batch_first=True)
        self.encoder = nn.ModuleList([
            nn.TransformerEncoder(attention_layer, self.encoder_layers),
            nn.LayerNorm(self.embedding_len),
            nn.LeakyReLU(),
            nn.Dropout(0.3),
        ])

        # Fully connected network to reduce encoder weights to log odds
        fc_input_len = self.max_tokens * self.embedding_len # 2d size after flatten of 3d transformer output
        self.fc = nn.ModuleList([
            nn.Linear(fc_input_len, fc_input_len // 4),
            nn.LeakyReLU(),
            nn.Dropout(0.3),
            nn.Linear(fc_input_len // 4, fc_input_len // 2),
            nn.LeakyReLU(),
            nn.Dropout(0.3),
            nn.Linear(fc_input_len // 2, self.number_classes),
            nn.LogSoftmax(dim=1), # log probabilities over the class dimension
        ])

        self.reset_weights(mode='init')

        self.to(DEVICE) # move to GPU if available
        return

    def forward(self, text, **kwargs):
        ''' Forward pass for transformer network '''
        output_checks = {}
        output_pos = 0

        x = text

        # process the embedding layer
        for m in self.embedding:
            x = m(x)
            if self.config['check_outputs']:
                output_checks[f'{m.__class__.__name__}-Layer{output_pos}'] = x; output_pos += 1

        # encode with position information
        x = self.pos_encoder(x)

        # spin through the modules defined within the recurrent network portion of the model
        for m in self.encoder:
            x = m(x)
            if self.config['check_outputs']:
                output_checks[f'{m.__class__.__name__}-Layer{output_pos}'] = x; output_pos += 1

        # flatten the encoder output
        x = torch.flatten(x, start_dim=1) # squash down to 2d

        # finish up the pass regressing the sequence weights into class log odds
        for m in self.fc:
            x = m(x)
            if self.config['check_outputs']:
                output_checks[f'{m.__class__.__name__}-Layer{output_pos}'] = x; output_pos += 1

        return x, output_checks

    def reset_weights(self, mode='fold'):
        ''' Method for initial weights and resetting weights between cross-validation folds '''
        minit = ModelSupport(self.config)
        _ = minit.init_weights(self.embedding) if mode == 'init' else None
        minit.init_weights(self.encoder)
        minit.init_weights(self.fc)
        return

class PositionalEncoding(nn.Module):
    '''
        Positional embedding encoder

        Modified from Pytorch tutorial -> https://pytorch.org/tutorials/beginner/transformer_tutorial.html
    '''
    def __init__(self, d_model: int, dropout: float = 0.1, max_len: int = 5000, batch_first=True):
        super().__init__()
        self.dropout = nn.Dropout(p=dropout)
        self.batch_first = batch_first

        position = torch.arange(max_len).unsqueeze(1)
        div_term = torch.exp(torch.arange(0, d_model, 2) * (-math.log(10000.0) / d_model))
        if batch_first:
            pe = torch.zeros(1, max_len, d_model)
            pe[0, :, 0::2] = torch.sin(position * div_term)
            pe[0, :, 1::2] = torch.cos(position * div_term)
        else:
            pe = torch.zeros(max_len, 1, d_model)
            pe[:, 0, 0::2] = torch.sin(position * div_term)
            pe[:, 0, 1::2] = torch.cos(position * div_term)
        self.register_buffer('pe', pe)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        """
            x -> shape [seq_len, batch_size, embedding_dim] if not batch_first else [batch_size, seq_len, embedding_dim]
        """
        dim = 1 if self.batch_first else 0
        x = x + self.pe[:x.size(dim)]
        return self.dropout(x)

Pretrained Networks

I have also included support within this article for the pretrained Roberta transformer model from Huggingface.  I did not include training results for this approach due to the high processing overhead involved, but I wanted to include the option since this method has produced good results in past implementations.

Leveraging the knowledge accumulated from a previously trained network is called “transfer learning” and is a great way of reusing models without having to incur the processing overhead again and again.  The transformer architecture is especially well suited to transfer learning for natural language processing projects.

Note.  One concern to be cognizant of when selecting a pretrained model is the source of information used during training.  Make sure the source of information is aligned with your goals.  For example, if you are working on a network with fictional text and the pretrained model was built from business documents the information may not match up well with your target.

To take advantage of an existing model generally requires a small amount of additional training, or “fine tuning”, to teach the model about the targets you’re interested in predicting.  For example, the Roberta model from Huggingface was trained in a self-supervised manner on general internet text by masking words in each sentence and having the model predict them.  That doesn’t help with predicting sentiment, so we need to teach the network to take a sentence and learn that the tokens in the sentence should be classified as either negative, positive or neutral.  We do this by including the pretrained model as part of the training loop so its weights are adjusted accordingly.

My implementation is very simple and includes only the pretrained model, a normalization layer and the final connected layer to generate the class predictions.  The one big difference is that, within the forward pass, I summarize and concatenate the last four hidden layers of the pretrained network to use as the inputs for the fully connected layer.  This method has proven in practice to be an effective way to extract sequence-level information from transformer networks.

				
##############################################################################################################################
# Prebuilt Supervised Transformer Model
##############################################################################################################################

class SupervisedPrebuilt(nn.Module):
    ''' Prebuilt fine-tuning of transformer network for time-series model of email content 

        Using Roberta prebuilt model from Huggingface, api at https://huggingface.co/transformers/model_doc/roberta.html
    '''
    def __init__(self, config):
        super().__init__()
        self.number_classes = config['number_classes'] # number target classes
        self.dropout = config['dropout']
        self.encoder_layers = config['encoder_layers']
        self.config = config

        self.mconfig = RobertaConfig.from_pretrained(f'{self.config["pretrained_dir"]}{self.config["pretrained_model"]}')
        self.mconfig.update({"is_decoder":False, "num_layers":self.encoder_layers, "output_hidden_states":True, "hidden_dropout_prob": self.dropout, "layer_norm_eps": 1e-7})
        # self.model is created in reset_weights - if kfold CV is used then model will need to be reset multiple times
        
        # define network
        
        # Normalization layer
        ln_input_len = self.mconfig.to_dict()['hidden_size'] * 4
        self.norm = nn.ModuleList([
            nn.LayerNorm(ln_input_len),
            nn.LeakyReLU(),
            nn.Dropout(0.2),
        ])
        
        # Fully connected network to reduce encoder weights to log odds
        fc_input_len = self.mconfig.to_dict()['hidden_size'] * 4
        self.fc = nn.ModuleList([
            nn.Linear(fc_input_len, fc_input_len),
            nn.LeakyReLU(),
            nn.Dropout(0.3),
            nn.Linear(fc_input_len, fc_input_len // 2),
            nn.LeakyReLU(),
            nn.Dropout(0.3),
            nn.Linear(fc_input_len // 2, self.number_classes),
            nn.LogSoftmax(dim=1), # log probabilities over the class dimension
        ])

        self.reset_weights() # in this case the prebuilt Roberta model is created in reset_weights

        self.to(DEVICE) # move to GPU if available
        return

    def forward(self, input_ids, attention_mask, **kwargs):
        ''' Forward pass for prebuilt network. '''
        output_checks = {}
        output_pos = 0

        layers = [-4, -3, -2, -1]

        outputs = self.model(input_ids, attention_mask)
        x = outputs.hidden_states
        amask = attention_mask.unsqueeze(-1).expand(x[layers[0]].size())

        x = torch.cat(tuple(torch.sum(x[l]*amask, dim=1) for l in layers), dim=1) # sum each of the last four layers by sequence then concatenate
        if self.config['check_outputs']:
            output_checks[f'{self.model.__class__.__name__}-Layer{output_pos}'] = x; output_pos += 1
            
        # process normalization layer
        for m in self.norm:
            x = m(x)
            if self.config['check_outputs']:
                output_checks[f'{m.__class__.__name__}-Layer{output_pos}'] = x; output_pos += 1        
            
        # finish up the pass regressing the sequence weights into class log odds
        for m in self.fc:
            x = m(x)
            if self.config['check_outputs']:
                output_checks[f'{m.__class__.__name__}-Layer{output_pos}'] = x; output_pos += 1

        return x, output_checks

    def reset_weights(self, mode=None):
        ''' Method for initial weights and resetting weights between cross-validation folds '''
        # Prebuilt network is instantiated here so if CV is used a fresh model is reintroduced for each new fold.
        self.model = RobertaModel.from_pretrained(f'{self.config["pretrained_dir"]}{self.config["pretrained_model"]}', config=self.mconfig).to(DEVICE)
        self.model.train()
        minit = ModelSupport(self.config)
        minit.init_weights(self.norm)
        minit.init_weights(self.fc)
        return

Training Outcomes

To train both recurrent networks (LSTM and GRU) and the custom transformer network I used KFold cross validation to essentially build three different models of each network that are combined at the end of training to form an ensemble of the different training checkpoints.

Note. Because this is an example exercise the number of folds and epochs were set at a level just high enough to achieve good results, but not take an excessive amount of time.  In this case, folds were set at 3 per model and epochs set at 15 for the recurrent networks and 20 for the transformer network.

The idea behind cross validation is to shuffle the dataset in a manner that allows the network to see different associations during each “fold” or unique training session.  At the beginning of each fold the network is reset back to its original state and retrained.
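
A bare-bones outline of that loop using Scikit-Learn’s KFold is shown below; train_one_fold is a hypothetical stand-in for the full training/test routine in the example code, and the fold count and seed are illustrative.

from sklearn.model_selection import KFold

def cross_validate(model, train_df, folds=3):
    ''' Reset and retrain the network for each fold, keeping one checkpoint per
        fold for the final ensemble.  train_one_fold stands in for the real
        training/evaluation routine. '''
    checkpoints = []
    kfold = KFold(n_splits=folds, shuffle=True, random_state=42)
    for fold, (train_idx, test_idx) in enumerate(kfold.split(train_df)):
        model.reset_weights(mode='fold')    # back to a fresh network for this fold
        state = train_one_fold(model, train_df.iloc[train_idx], train_df.iloc[test_idx])
        checkpoints.append(state)
    return checkpoints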

The following graph shows the loss and accuracy readings for each fold of training for one of the LSTM models.  The network reset point can be clearly seen for each unique iteration through the dataset.

Note. Generally I would employ early stopping logic here as well.  It could be argued that for this example training could have been stopped after 6-7 epochs rather than allowing the full run.

LSTM Classification of Sentiment Method 1 Training Performance

Recurrent Networks

Both of the recurrent networks produced similar results during training, with the LSTM networks averaging roughly 89% accuracy and the GRU networks in the ballpark at 88.5%.  Not too bad given the simplicity of the networks.

The outputs below show the final details of each network.  Notice how each fold produced a similar, but different outcome due to the shuffled dataset used for training each iteration.

Final prediction accuracy represents the “ensemble” of the results from all three folds.  To create the final accuracy figure I used a simple summation approach by aggregating the probability score for each class from each model to predict the target class.  This method gives equal weight to each fold.  Another method is to use a “majority vote” by counting the number of classes predicted rather than adding up the probabilities.

The LSTM final score jumped 2% using this method to produce a decent 91.3% accuracy and the GRU ensemble increased roughly 2.5% for a similar end score of 91% prediction accuracy.

Note. I should mention that the evaluation scores were made against a previously unseen dataset, so the scores can be considered accurate.  They also closely match the test scores during training, so I feel confident there isn’t an overfitting situation occurring for these models.

LSTM Training Performance Details - Sentiment Method 2

Note. The runtime difference between the two networks was roughly 2.5 minutes longer for LSTM while gaining only 0.3% in accuracy.  Based upon scale this could add up to a significant difference given parallel GPU processing for recurrent networks is more difficult to implement and may not be a viable option.

GRU Training Performance Details - Sentiment Method 2

Transformer Network

As expected the transformer network produced superior results, but at a processing cost of nearly 5x that of the recurrent networks.  It will be interesting to see how inference evaluation plays out.

Unlike the recurrent networks, the transformer model used all of the available 20 epochs per fold and could have used more based upon how test scores tracked against training scores in the graphs below.

TRX Classification of Sentiment Method 1 Training Performance

From the detailed results for Sentiment Method 2 we can see the transformer network scored a healthy average of approximately 92.5% for each training fold.  This is 3.5% – 4.0% higher than the recurrent networks.

One notable difference though is the final ensemble score.  While higher at 93.9%, the gain from ensemble averaging was not as large as it was for the recurrent networks.

Note. As I mentioned, with additional training I believe the transformer network would continue to inch upwards for additional accuracy gains.

On the surface I would say the transformer network is the preferred method based solely on accuracy.  When processing time is considered, each application will dictate the necessary direction.  If significant scale is involved then additional time must be spent implementing a parallel GPU processing approach to keep compute time in check.  At nearly a 5X difference in processing time, accuracy has to be a premium requirement to justify the transformer.

Transformer Encoder Performance Details - Sentiment Method 2

Inference Pipeline

Now that training is complete we can move on to the final step, evaluating our models against a completely separate sample dataset.

Ensemble.  The inference pipeline is essentially the same routine as the test step during training, but with an additional piece of logic to take the outcomes from all nine model checkpoints and aggregate into a final score.

At a high level, the inference process loads each model checkpoint in turn and predicts outcomes for each of the inference data samples.  Once all of the samples have been predicted the results are combined using one of the two voting methods and the final prediction saved.

I tried two different approaches for this step to see which worked best (a sketch of both follows the list below).

  • Adding up the probabilities from all nine models into one final prediction.
  • Using a majority vote across all nine models.
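
Here is a sketch of both voting schemes; model_probs is assumed to be a list of per-model class probability arrays of shape [samples, classes], and the function name is for illustration only.

import numpy as np

def ensemble_predictions(model_probs):
    ''' Combine per-model class probabilities two ways: summing the probabilities
        across models, and majority voting on each model's top class. '''
    stacked = np.stack(model_probs)                  # [models, samples, classes]

    summed = stacked.sum(axis=0).argmax(axis=1)      # probability-sum prediction

    votes = stacked.argmax(axis=2)                   # each model's predicted class
    majority = np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)
    return summed, majority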

To test whether or not the solutions will perform adequately against fictitious data I created a few random emails to represent positive, negative and neutral sentiment samples.

				
    # some fictitious emails
    samples = [
        'Hey! We\'re planning on visiting the lake house this weekend, do you want to go along?  -Robert',
        'JoAnn, the TPS report you promised to have yesterday is not on my desk. Please have that to me by the end of the day.',
        'Attached is the NYOs listing notice for training in Topeka next week. If you would like to attend, make arrangements ASAP as space is limited.',
        'Good Morning John, hope you\'re doing well.  It has been very rainy here.  Anyway, I wanted to ask you a question on the odds of me being picked for early retirement? Let me know. Steve',
        'Hi, thanks for the email and keeping me in the loop.  My daughter\'s classes at school have been hectic and I need a break this weekend.  The open house is this Saturday, hopefully I will be able to see you there. Sue',
        'Thanks for the heads up!',
        'There\'s a business conference next week and I think there are several good bars in the area.  We should plan to meet up there for drinks afterwards.',
        'Please plan to attend the quarterly disaster preparation meeting this Tuesday.  The location is TBD, but the time will from 10a - 2p.  Lunch will be included.',
        'Dave, good time playing poker last week.  I\'m heading out for a round of golf this afternoon and could use a partner.  How about 2pm?',
        'The weather today is expected to be rainy with a chance of thunderstorms and then clearing off for tomorrow...',
        'The systems here are awful!  The building is rundown and in shambles. I wish they would fix this mess instead of sucking up to investors.  I hate this!',
        'I expect many arrests soon given the catastrophic consequences.',
        'I\'m bored and I hate my job.  My life is terrible, unfair and full of regrets.  My coworkers are being jerks and are the worst kind of people.',
    ]

Outcomes

Finding the best combination of sentiment methods and models to include in the final ensemble was a trial and error approach since the number of combinations was relatively small.  For larger implementations I recommend automating this step.

Note. For reference concerning the output predictions, 0 = negative, 1 = positive and 2 = neutral.

All models with all sentiment methods included.   

The first run of the final ensemble included all three sentiment label methods along with all three network types for a total of 27 individual networks to evaluate (3 labels X 3 models X 3 folds each).

…The results were not stellar, especially with regard to classifying negative sentiment.  Outcomes were okay for positive and neutral, but definitely not optimal.  I expected the last three samples to be negative.  It is arguable that sample 11 could be considered neutral as the results show, but in my opinion the term “arrests” in the context of “catastrophic consequences” should be considered negative.  I also believe that there are several positive samples that would be better classified as neutral (for example number 9).

Inference Results for All Models and All Classification Methods

Removing the GRU models. 

The next step for me was to remove a network from the ensemble and evaluate the results.  For this test I removed the GRU network and reran the process.  The results were exactly the same.

Removing a sentiment labeling method.  

Since removing the GRU network didn’t change any results the next logical step would be to remove one of the sentiment classification methods.  If you recall, the weakest method during the unsupervised classification process was the AFINN lexicon so I’ll start by removing that method and excluding the GRU models as well.

…This time the results were different and closer to expectations.  Several of the previously positive samples (including sample number 9) are now classified as neutral and one of the negative samples is now properly labeled.  The sample “Thanks for the heads up!” seems positive, but doesn’t really have any meaningful context, so in my opinion this is okay as neutral.

Note. Only the scores from the probability averaging method improved.  The majority voting results remained the same as before.  It would seem the prediction probabilities are very close together and the majority voting method is too coarse to catch the differences.

Inference Results Sans GRU and Sentiment Method 2

Trying only one labeling method.  

Let’s try dropping another sentiment labeling method from the evaluation.  Since the Vader method was rule-based I’m going to remove it from the equation; along with the AFINN labeling method and the GRU models.

…Some improvements were achieved from this modification as well.  Another of the previously positive samples (number 7) is now neutral and the majority voting method improved slightly.  Otherwise the results were the same, indicating the Vader method performed similarly to Sentiment Method 1’s lexicon approach.

Inference Results with only Sentiment Method 1

Test with just the transformer model and Sentiment Method 1 label.  

Now it’s time to remove another network and test results once again.  The previous evaluation had included the LSTM and Transformer models along with the first sentiment classification label.  This time the evaluation will be done with only the Transformer models and the Sentiment Method 1 label.

…Results for the probability weighting ensemble method are now pretty close.  Sample number 7 went back to positive, but another one of the negative samples (number 10) is now properly classified.  The number 7 sample is debatable and could be construed as positive or neutral.

Note. It is also good to see that the majority voting scores are now much closer to the weighted predictions.

Inference Results with Transformer Network and Sentiment Method 1

Conclusion

Overall, at least for the toy dataset I used for the evaluation, the best results were achieved with only one classification method and one learning model.  

Note.  It is not surprising that the best performing model was the transformer encoder architecture.  It is also not surprising that the most extensive sentiment lexicon produced the most accurate results.

I will stress that the evaluation dataset is tiny and not a good representation of overall performance.  Changing words in any of the examples will change the classification (as it potentially should), but the results are not always as expected.  More work is needed to ensure token relationships and context are captured properly for the majority of scenarios.

It is also important to point out that a significant portion of the classification results from the unsupervised modeling step produced positive and neutral-leaning results.  Spot checking the classification outputs more closely revealed many situations where mixed contexts existed within a single email and the classification label didn’t appear to be accurate on more subtle contents.

Improvements

There are a number of areas to focus on to improve these results.   Most of the enhancements are needed for both the sentiment and behavior classification functions.  And additional logic will be needed in both the unsupervised classification and supervised modeling pipelines.

Unsupervised Classification Enhancements

  1. Use a different sentiment classification model; such as sentiment specific word embedding (SSWE) or weighted text feature modeling (WTFM). 
  2. Use a more robust sentiment lexicon like Emolex or build a custom lexicon from email contents.
  3. Include more token types to classify content than just nouns and adjectives.
  4. Center email context around part-of-speech tags, root terms and token phrases; rather than individual words.
  5. Weight phrases individually within each email sample and enhance the classification range to include more options than just negative, positive and neutral. 
  6. Use multi-token word sequences/n-grams, characters, stems and syllables to help lock in context (see the short sketch after this list).
  7. Create a more robust word embedding layer that is used for both unsupervised and supervised functions.
  8. Leave corporate specific terms in the vocabulary and as part of the word embedding layer to create more unique sequences for classification.
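
As a quick illustration of item 6, the sketch below joins consecutive tokens into n-gram strings that could be fed to the classifier alongside the single tokens.  The helper name and sample tokens are purely illustrative and not part of the repository code.

def ngram_tokens(tokens, n=2):
    ''' Hypothetical helper: join consecutive tokens into n-gram strings. '''
    return ['_'.join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

tokens = ['hate', 'this', 'job']
features = tokens + ngram_tokens(tokens, 2) + ngram_tokens(tokens, 3)
# ['hate', 'this', 'job', 'hate_this', 'this_job', 'hate_this_job']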
 
Supervised Modeling Enhancements
 
  1. Develop a better word embedding layer based upon the email contents.
  2. Enhance the positional encoding algorithm to include other factors than just sequence placement.
  3. Add weights to the final ensemble calculation to help the lesser performing models increase accuracy for certain classification scenarios.
  4. Add other features to the model like “time of day” or “day of week”; which may help lock in sentiment or behavior classes.
  5. Increase the complexity and depth of the network to capture details currently not being utilized.
  6. Ensure the model vocabulary is robust enough to include words not found within the training contents.
  7. Use a length-based masking technique to extract hidden network values rather than using the entire vector (see the sketch after this list).
  8. For transformer networks, extract hidden values at key layers/positions rather than using the raw outputs.
  9. Increase the size of the “max tokens” setting and allow longer email sequences to be trained.
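
As a sketch of item 7, the function below performs length-based masked mean pooling over a padded sequence of hidden states, summarizing only the real token positions instead of flattening the entire vector.  Tensor names are illustrative; this is not the repository's implementation.

import torch

def masked_mean_pool(hidden: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
    ''' hidden: (batch, seq_len, embed); attention_mask: (batch, seq_len) with 1 for real tokens, 0 for padding. '''
    mask = attention_mask.unsqueeze(-1).type_as(hidden)   # (batch, seq_len, 1)
    summed = (hidden * mask).sum(dim=1)                   # sum only the real token positions
    counts = mask.sum(dim=1).clamp(min=1.0)               # avoid division by zero for all-pad rows
    return summed / counts                                # (batch, embed) summary that ignores padding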

Wrapping Up

Wrapping up this series on extracting insights from corporate email content, I provided a detailed process for extracting raw email data, code for analyzing data imperfections, a process for classifying the data using unsupervised techniques, and finally a modeling solution that leverages the generated supervised dataset to produce an inference model for future classifications.

Given this exercise was not intended to be a production-ready solution, there are a number of enhancements needed to ensure reliable service, but I feel the results are promising and the framework presented could serve as a starting point for building your own email analysis platform.

If there are questions about the code or processing logic for this blog series, or your company is in need of assistance implementing an AI or machine learning solution, please feel free to contact me at mike@avemacconsulting.com and I will help answer your questions or set up a quick call to discuss your project.

Source Code

For those interested in implementing this machine learning model, the source code I developed for this exercise is included.  Even though this code is specific to email classification, it can easily be changed to process any type of supervised dataset for NLP or other classification tasks.
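
As a rough usage sketch (the script's main section drives the actual label and model selection), training and evaluating just the transformer model against the first sentiment label might look like the following; the id string here is arbitrary.

pipeline = TrainingPipeline(id='sent1', label_column='Class_Sentiment_1', label_classes=3, run_models=['trn'])
pipeline.run_pipeline()  # trains if no matching checkpoint exists, then runs the per-fold and ensemble evaluation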

All of the code for this series can be accessed from my Github repository at:

github.com/Mike-Schmidt-Avemac/ai-email-insights.

You are also welcome to run this code as a notebook on Kaggle.  Public link at:

www.kaggle.com/mikeschmidtavemac/email-insights-models

'''
MIT License

Copyright (c) 2021 Avemac Systems LLC

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
'''

########################################################################################
# This source contains model preparation, training, evaluation and inference routines 
# to model supervised datasets for email classification tasks.
#
# -- This is the supporting code for my email insights and analysis blog series part 4 
# available on my website at https://www.avemacconsulting.com.  Part 4 of the series at
# https://www.avemacconsulting.com/2021/10/12/email-insights-from-data-science-part-4/
#
# The code has been tested for text tokenization to single label classification.  
# Multi-label and multi-feature support is built-in, but specific models will need to be
# developed to take advantage of the framework.
#
# Currently implements classification using recurrent networks (LSTM/GRU/RNN), 
# Transformer (encoder layer only), and prebuilt Roberta fine-tuning.
#
# K-Fold cross validation is implemented...wouldn't take much to abstract the routines 
# to accept other CV methods.  CV can be turned off for a "blended" overfitting technique.
#
# Ensemble evaluation logic has been implemented to support each model trained...plus 
# a final inference prediction output from an aggregated ensemble of all models.
# 
# -- The dataset used for this exercise was specifically created from the raw Enron
# email repository located at https://www.cs.cmu.edu/~enron/ with 3 labels generated
# for sentiment (positive/negative/neutral/unknown) and alignment (business/personal).
# 
# The code for formatting the raw email content, performing basic analysis and creating
# the supervised dataset can be found in this Github repo with details referenced on my website.
# 
# Part 1. https://www.avemacconsulting.com/2021/08/24/email-insights-from-data-science-techniques-part-1/
# Part 2. https://www.avemacconsulting.com/2021/08/27/email-insights-from-data-science-part-2/
# Part 3. https://www.avemacconsulting.com/2021/09/23/email-insights-from-data-science-part-3/
#
# ---- Classes ----
#  class ContentDataset(Dataset) - Custom Pytorch "Dataset" implementation for tokenized email content.
#  class Vocabulary() - Class for saving / retrieving / generating custom vocabulary from email content.
#  class RawDataLoader() - Class methods for retrieving and formatting raw dataset into Pandas dataframe.
#  class ModelSupport() - Weight/Bias initialization and graphing routines.
#  class SupervisedRNN(nn.Module) - Pytorch recurrent model implementation (LSTM/GRU/RNN)
#  class SupervisedTransformer(nn.Module) - Pytorch TransformerEncoder model.
#  class PositionalEncoding(nn.Module) - SupervisedTransformer supporting function for positional embeddings.
#  class SupervisedPrebuilt(nn.Module) - HuggingFace Roberta-Base prebuilt transformer model implementation.
#  class ModelManagement() - Common training/eval and state management routines for model creation.
#  class PipelineConfig() - Common configuration class for Training and Inference pipeline logic.
#  class TrainingPipeline() - Training/Eval pipeline logic.
#  class InferencePipeline() - Inference pipeline logic.
#
# ---- Main ----
#  Train/Eval Processing - Label and model selection point and main train/eval processing loop.
#  Inference Testing - Label and model selection point and main inference processing loop.
#
########################################################################################

#!/usr/bin/python3 -W ignore::DeprecationWarning
import os
import sys

if not sys.warnoptions:
    import warnings
    warnings.simplefilter("ignore")
import re
import math
import glob
import pickle
import gc
from time import time
import pandas as pd
import numpy as np
from tqdm import tqdm
import matplotlib.pyplot as plt

import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import DataLoader, SubsetRandomSampler, Dataset

from sklearn.model_selection import KFold

from transformers import RobertaConfig, RobertaTokenizerFast, RobertaModel
from nltk.corpus import stopwords

pd.set_option('display.max_rows', 100)
pd.set_option('display.min_rows', 20)
pd.set_option('display.max_colwidth', 100)

DEVICE = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
PAD_KEY = '<pad>'
NBR_KEY = '<nbr>'

KAGGLE_BASEDIR = '/kaggle'
LOCAL_BASEDIR = '/proto/learning/avemac/email_analysis_blog'
LOCAL_PRETRAIN_BASEDIR = '/proto/models/hf'
IS_LOCAL = not os.path.exists(KAGGLE_BASEDIR) # running in Kaggle Environment or not


##############################################################################################################################
# Custom Content Dataset
##############################################################################################################################

class ContentDataset(Dataset):
    ''' 
        Custom Pytorch "Dataset" implementation for tokenized email content.
        Implement the PyTorch dataset functions - example at https://pytorch.org/docs/stable/data.html#data-loading-order-and-sampler

        - Requires label column be renamed to "label" if not already.
        - Requires text column be renamed to "content" if not already.
    '''
    def __init__(self, df:pd.DataFrame, config:dict, vocab:dict) -> None:
        super(ContentDataset).__init__()
        self.config = config

        print(f'\n--- Building ContentDataset for embedding_type "{config["embedding_type"]}"')
        
        # tokenization is different if using trained versus pretrained models
        if config['embedding_type'] == 'train':
            df['text'] = self._custom_text_encoder(df['content'].values, vocab, max_tokens=config['max_tokens'])
            columns = list(df.columns); columns.remove('content')
        elif config['embedding_type'] == 'hgf_pretrained':
            tokenizer = RobertaTokenizerFast.from_pretrained(f'{config["pretrained_dir"]}{config["pretrained_model"]}')
            t = tokenizer(df['content'].to_list(), add_special_tokens=True, return_attention_mask=True, padding='max_length', truncation=True, max_length=config['max_tokens'], return_tensors='np')
            df['input_ids'] = t['input_ids'].tolist()
            df['attention_mask'] = t['attention_mask'].tolist()
            columns = ['input_ids','attention_mask']
            if 'label' in df.columns: columns.append('label')
        else:
            raise AssertionError('Invalid tokenization mode')

        self.tds = {x : torch.tensor(df[x].to_list()).view(len(df),-1) for x in columns}
        self.length = len(df)
        return

    def __iter__(self) -> iter:
        self.pos = 0
        return self

    def __next__(self) -> dict:
        if self.pos < self.__len__(): 
            slice = {key:tensor[self.pos] for key, tensor in self.tds.items()}
            self.pos += 1
            return slice
        else:
            raise StopIteration
    
    def __len__(self) -> int:
        return self.length

    def __getitem__(self, index) -> dict:
        return {key:tensor[index] for key, tensor in self.tds.items()}

    def _custom_text_encoder(self, corpus, vocab, max_tokens=500):
        ''' Encode raw text content into dense token vectors for future embedding layer input. '''
        encoded = []
        
        for text in tqdm(corpus):
            encode = []
            tokens = re.findall(r"[a-zA-Z][a-z'][a-z']+", text)
            for t in tokens:
                t = t.lower()
                if t in vocab: # only work with words in vocab
                    encode.append(vocab[t]) # encode token
                if len(encode) >= max_tokens: break # stop if beyond max tokens
            if len(encode) < max_tokens: # pad manually instead of using nn.utils.rnn.pad_packed_sequence in model
                encode.extend([vocab[PAD_KEY] for _ in range(len(encode), max_tokens)])
            encoded.append(encode) # add sample row
        
        return encoded


##############################################################################################################################
# Vocabulary Tokenizer
##############################################################################################################################

class Vocabulary():
    ''' Class for saving / retrieving / generating custom vocabulary from email content. '''
    def __init__(self, config) -> None:
        self.config = config
        self.vocab = None
        return

    def get_vocabulary(self, corpus=None, force_build=False) -> dict:
        ''' Retrieve the vocabulary if exists or generate new if forced or not found. '''
        if self.vocab is None:
            vocab_fn = f'{self.config["checkpoint_dir"]}{self.config["vocabulary_fn"]}'
            if os.path.exists(vocab_fn) and not force_build:
                with open(vocab_fn, 'rb') as f:
                    self.vocab = pickle.load(f)
            elif corpus is not None:
                self.vocab = self._create_vocabulary(corpus)
                with open(vocab_fn, 'wb') as f:
                    pickle.dump(self.vocab, f) 
        return self.vocab

    def _create_vocabulary(self, corpus) -> dict:
        ''' Iterate through the data samples and create the token vocabulary '''
        stop_words = [w.lower() for w in stopwords.words()]
        vocab = {PAD_KEY:0, NBR_KEY:1}; vocab_idx = 2
        
        for text in tqdm(corpus):
            tokens = re.findall(r"[a-zA-Z][a-z'][a-z']+", text)
            for t in tokens:
                t = t.lower()
                if len(t) > 20: continue # skip long "words", most likely not a real word
                if t in stop_words: continue # skip stopwords

                if t not in vocab: # update vocab if token missing
                    vocab[t] = vocab_idx
                    vocab_idx += 1
        return vocab


##############################################################################################################################
# Training and Eval Dataset Loader
##############################################################################################################################

class RawDataLoader():
    ''' Class methods for retrieving and formatting raw dataset into Pandas dataframe. '''
    def __init__(self, config) -> None:
        self.data_dir = config['data_dir']
        self.columns = config['input_columns'] # dictionary {<actual>:<renamed>}
        self.config = config

        self.class_encoders = { 
            # can also use sklearn.preprocessing.LabelEncoder or PyTorch LabelEncoder or others...
            # regardless of method, for non-labels the same encoding scheme must be used during model inference
            'Outside_Hours':lambda x: 0 if x is False else 1,
            'Forwarded':lambda x: 0 if x is False else 1,
            'Source':lambda x: 0 if x == 'deleted' else 1 if x == 'responded' else 2 if x == 'sent' else 3,
            'Class_Alignment_1':lambda x: 0 if x == 'work' else 1 if x == 'fun' else 2,
            'Class_Alignment_2':lambda x: 0 if x == 'work' else 1 if x == 'fun' else 2,
            'Class_Alignment_3':lambda x: 0 if x == 'work' else 1 if x == 'fun' else 2,
            'Class_Sentiment_1':lambda x: 0 if x == 'neg' else 1 if x == 'pos' else 2,
            'Class_Sentiment_2':lambda x: 0 if x == 'neg' else 1 if x == 'pos' else 2,
            'Class_Sentiment_Vader':lambda x: 0 if x == 'neg' else 1 if x == 'pos' else 2,
        }

        self.df = self._fetch_raw_data(self.data_dir + config['supervised_dataset_fn'], self.columns)
        self.vocab = Vocabulary(self.config).get_vocabulary(corpus=self.df['content'].values, force_build=True)
        return

    def _fetch_raw_data(self, fn:str, columns:dict) -> pd.DataFrame:
        ''' Routine to fetch the labeled training data, normalize some column names and encode classes'''
        limit = self.config['limit']
        keys = columns.keys()

        # retrieve input dataframe from unsupervised pipeline process - note: sample will auto shuffle so set state for test consistency
        raw_data = pd.DataFrame(pd.read_csv(fn)[keys]).sample(frac=limit, random_state=42)

        # encode class if included in the columns list
        for column in keys:
            if column in self.class_encoders.keys():
                raw_data[column] = raw_data[column].apply(self.class_encoders[column])

        # rename raw feature names to standard names
        raw_data.rename(columns=columns, inplace=True)
        return raw_data

    def split_dataset(self, df=None, split=0.1) -> tuple:
        ''' Break the dataset up based upon the split requested. '''
        df = self.df if df is None else df
        s2 = df.sample(frac=split, random_state=42)
        s1 = df.drop(s2.index)
        return (s1, s2) # return train/test


##############################################################################################################################
# Common Model Support Routines
##############################################################################################################################

class ModelSupport():
    ''' 
        Common model weight initialization and graphing support routines

        Note: uses recursion.  
        Known issues: init_weights will not properly process nested container modules of the same type (i.e. a ModuleList directly inside a ModuleList)
    '''
    def __init__(self, config):
        self.config = config
        return

    def init_weights(self, module, mode='relu'):
        ''' Recurse through container modules and initialize weights/biases by module type. '''
        if isinstance(module, (nn.ModuleList, nn.ModuleDict, nn.Sequential, nn.Transformer, nn.TransformerEncoder, nn.TransformerDecoder, nn.TransformerEncoderLayer, nn.TransformerDecoderLayer)):
            for m in module.modules():
                _ = self.init_weights(m) if m != module else None
        elif isinstance(module, (nn.LayerNorm, nn.BatchNorm1d, nn.BatchNorm2d)):
            nn.init.ones_(module.weight)
            _ = nn.init.zeros_(module.bias) if module.bias is not None else None
        elif isinstance(module, (nn.Linear, nn.Embedding, nn.LSTM, nn.GRU, nn.MultiheadAttention, nn.Conv1d, nn.Conv2d, nn.Conv3d)):
            for name, param in module.named_parameters(): 
                if 'bias' in name:
                    nn.init.zeros_(param)
                elif 'weight' in name:
                    _ = nn.init.xavier_normal_(param) if mode=='tanh' else nn.init.kaiming_normal_(param)
        return

    def graph_outputs(self, fold, epoch, step, outputs:dict, mode=['hist'], layers=[]):
        ''' Debugging routine to visualize model layer outputs and detect anomalies. '''
        layers = layers if len(layers) > 0 else [k for k in outputs.keys()]
        subplot_rows = round(math.sqrt(len(layers)))
        subplot_cols = math.ceil(math.sqrt(len(layers)))

        # histogram
        if 'hist' in mode:
            idx = 0
            fig, axes1 = plt.subplots(subplot_rows, subplot_cols, figsize=(subplot_cols*5,subplot_rows*2))
            fig.suptitle('Output Tensor Distributions', y=0.99)
            fig.supylabel('Frequency')
            fig.subplots_adjust(top=0.90, bottom=0.1, wspace=0.35, hspace=0.80)
            axes1 = axes1.flatten()
            for name, output in outputs.items():
                if name in layers:
                    d = output.detach().cpu().numpy().flatten()
                    ax = axes1[idx]
                    ax.set_title(f'{name} - {d.size}', {'fontsize':8})
                    ax.tick_params(axis='both', which='major')
                    ax.hist(d, bins=100)
                    idx += 1
            fig.savefig(f'{self.config["graph_dir"]}Outputs_{self.config["model_id"]}_F{fold}E{epoch}S{step}.png', facecolor=fig.get_facecolor())
            plt.close(fig)
        return

    def graph_parameters(self, model, fold, epoch, step, mode=['hist'], types=['weight','bias'], module_types=(), spot_check=True):
        ''' Routine to plot layer weights and verify acceptable neural processing.

            Note -> using this routine for realtime analysis via debugger, could also save 
                    checkpoints every iteration and feed into TensorBoard.

            Should also create an abstract class for this at some point...
        '''
        if 'grad' in types and len(types) > 1:
            raise AssertionError('Cannot mix gradient and weights/bias on same graph')

        module_types = module_types if len(module_types) > 0 else (nn.Linear, nn.Embedding, nn.Conv1d, nn.Conv2d, nn.Conv3d, nn.LayerNorm, nn.BatchNorm1d, nn.BatchNorm2d, nn.LSTM, nn.GRU, nn.TransformerEncoder, nn.TransformerDecoder)
        m_list = []
        m_count = 0

        # create list of modules to display, plus count the number of weight vectors for subplot matrix
        for m in model.modules():
            if isinstance(m, module_types):
                m_list.append(m)
                for name, param in m.named_parameters():
                    if 'weight' in name and 'weight' in types:
                        m_count += 4 if isinstance(m, nn.LSTM) else 3 if isinstance(m, nn.GRU) else 1
                    if 'bias' in name and 'bias' in types:
                        m_count += 1
                    if 'grad' in types and param.grad is not None:
                        m_count += 1

        # nothing to graph
        if m_count <= 0:
            return

        subplot_rows = round(math.sqrt(m_count))
        subplot_cols = math.ceil(math.sqrt(m_count))

        # histogram
        if 'hist' in mode:
            idx = 0
            fig, axes1 = plt.subplots(subplot_rows, subplot_cols, figsize=(subplot_cols*5,subplot_rows*2))
            fig.suptitle('Parameter Distributions' if 'grad' not in types else 'Gradient Distribution', y=0.99)
            fig.supylabel('Frequency')
            fig.subplots_adjust(top=0.90, bottom=0.1, wspace=0.35, hspace=0.80)
            axes1 = axes1.flatten()
            for m in m_list:
                for name, param in m.named_parameters():
                    if 'weight' in name and 'weight' in types:
                        if isinstance(m, nn.LSTM):
                            w_i, w_f, w_c, w_o = param.chunk(4, 0)
                            d = {
                                'w_i':w_i.detach().cpu().numpy().flatten(),
                                'w_f':w_f.detach().cpu().numpy().flatten(),
                                'w_c':w_c.detach().cpu().numpy().flatten(),
                                'w_o':w_o.detach().cpu().numpy().flatten(),
                            }
                        elif isinstance(m, nn.GRU):
                            w_r, w_i, w_n = param.chunk(3, 0)
                            d = {
                                'w_i':w_i.detach().cpu().numpy().flatten(),
                                'w_r':w_r.detach().cpu().numpy().flatten(),
                                'w_n':w_n.detach().cpu().numpy().flatten(),
                            }
                        else:
                            d = {
                                'w_h': param.data.detach().cpu().numpy().flatten()
                            }

                        for k,v in d.items():
                            ax = axes1[idx]
                            ax.set_title(f'{m._get_name()} - {name}/{k} - {v.size}', {'fontsize':8})
                            ax.tick_params(axis='both', which='major')
                            ax.hist(v, bins=100)
                            idx += 1

                    if 'bias' in name and 'bias' in types:
                        d = param.data.detach().cpu().numpy().flatten()
                        ax = axes1[idx]
                        ax.set_title(f'{m._get_name()} - {name} - {d.size}', {'fontsize':8})
                        ax.tick_params(axis='both', which='major')
                        ax.hist(d, bins=100)
                        idx += 1

                    if 'grad' in types and param.grad is not None:
                        d = param.grad.detach().cpu().numpy().flatten()
                        ax = axes1[idx]
                        ax.set_title(f'{m._get_name()} - {name}/grad - {d.size}', {'fontsize':8})
                        ax.tick_params(axis='both', which='major')
                        ax.hist(d, bins=100)
                        idx += 1

            fig.savefig(f'{self.config["graph_dir"]}{"Gradients" if "grad" in types else "Parameters"}_{self.config["model_id"]}_F{fold}E{epoch}S{step}_{"Check" if spot_check else "Debug"}.png', facecolor=fig.get_facecolor())
            plt.close(fig)
        return


##############################################################################################################################
# Supervised Recurrent Model
##############################################################################################################################

class SupervisedRNN(nn.Module):
    ''' Recurrent network for time-series model of email content '''

    def __init__(self, mode:str, config:dict):
        super().__init__()
        self.batch_size = config['batch_size'] # batch length
        self.max_tokens = config['max_tokens'] # sequence length
        self.embedding_len = config['embedding_len'] # feature length
        self.number_classes = config['number_classes'] # number target classes
        self.dropout = config['dropout']
        self.epochs = config['epochs']
        self.vocab_size = config['vocab_size']
        self.bidirectional = config['bidirectional']
        self.rnn_layers = config['rnn_layers']
        self.config = config

        # define network

        # Embedding layer - training custom token relationships rather than pretrained
        # Note: should train this separately 
        self.embedding = nn.ModuleList([
            nn.Embedding(self.vocab_size, self.embedding_len, scale_grad_by_freq=True),
        ])

        # Recurrent network (Input = BatchSize x MaxTokens x Embedding Length)
        ln_input_len = (2 if self.bidirectional else 1) * self.rnn_layers * self.embedding_len
        self.rnn = nn.ModuleList([
            nn.LSTM(self.embedding_len, self.embedding_len, batch_first=True, num_layers=self.rnn_layers, bidirectional=self.bidirectional) if mode=='lstm' 
            else nn.GRU(self.embedding_len, self.embedding_len, batch_first=True, num_layers=self.rnn_layers, bidirectional=self.bidirectional) if mode=='gru'
            else nn.RNN(self.embedding_len, self.embedding_len, batch_first=True, num_layers=self.rnn_layers, bidirectional=self.bidirectional),
            nn.LayerNorm(ln_input_len), # input size is doubled if bidirectional LSTM
            nn.LeakyReLU(),
            nn.Dropout(self.dropout),
        ])

        # Fully connected network to reduce recurrent weights to log odds
        fc_input_len = (2 if self.bidirectional else 1) * self.rnn_layers * self.embedding_len
        self.fc = nn.ModuleList([
            nn.Linear(fc_input_len, fc_input_len * 2),
            nn.LeakyReLU(),
            nn.Dropout(0.3),
            nn.Linear(fc_input_len * 2, fc_input_len),
            nn.LeakyReLU(),
            nn.Dropout(0.3),
            nn.Linear(fc_input_len, self.number_classes),
            nn.LogSoftmax(dim=1),
        ])

        self.reset_weights(mode='init')

        self.to(DEVICE) # move to GPU if available
        return

    def forward(self, text, **kwargs):
        ''' Forward pass for recurrent network '''
        output_checks = {}
        output_pos = 0

        x = text

        # process the embedding layer
        for m in self.embedding:
            x = m(x)
            if self.config['check_outputs']:
                output_checks[f'{m.__class__.__name__}-Layer{output_pos}'] = x; output_pos += 1

        # spin through the modules defined within the recurrent network portion of the model
        for m in self.rnn:
            if isinstance(m, (nn.LSTM)):
                _, (x,_) = m(x) # use last hidden state since we are labeling the entire sequence
                x = torch.transpose(x, 0, 1) # back to batch first
                x = x.reshape(-1, np.prod(x.shape[-2:])) if x.ndim == 3 else x # collapse the sequence dimension if bidirectional
            elif isinstance(m, (nn.GRU)):
                _, x = m(x) # use last hidden state since we are labeling the entire sequence
                x = torch.transpose(x, 0, 1) # back to batch first
                x = x.reshape(-1, np.prod(x.shape[-2:])) if x.ndim == 3 else x # collapse the sequence dimension if bidirectional
            else:
                x = m(x)

            if self.config['check_outputs']:
                output_checks[f'{m.__class__.__name__}-Layer{output_pos}'] = x; output_pos += 1

        # finish up the pass regressing the recurrent summarized sequence weights into class log odds
        for m in self.fc:
            x = m(x)
            if self.config['check_outputs']:
                output_checks[f'{m.__class__.__name__}-Layer{output_pos}'] = x; output_pos += 1

        return x, output_checks

    def reset_weights(self, mode='fold'):
        ''' Method for initial weights and resetting weights between cross-validation folds '''
        minit = ModelSupport(self.config)
        _ = minit.init_weights(self.embedding) if mode == 'init' else None
        minit.init_weights(self.rnn)
        minit.init_weights(self.fc)
        return


##############################################################################################################################
# Supervised Transformer Model
##############################################################################################################################

class SupervisedTransformer(nn.Module):
    ''' Transformer network for time-series model of email content '''
    def __init__(self, config:dict):
        super().__init__()
        self.batch_size = config['batch_size'] # batch length
        self.max_tokens = config['max_tokens'] # sequence length
        self.embedding_len = config['embedding_len'] # feature length
        self.number_classes = config['number_classes'] # number target classes
        self.dropout = config['dropout']
        self.epochs = config['epochs']
        self.vocab_size = config['vocab_size']
        self.attention_heads = config['attention_heads']
        self.encoder_layers = config['encoder_layers']
        self.config = config

        # define network

        # Embedding layer - training custom token relationships rather than pretrained
        self.embedding = nn.ModuleList([
            nn.Embedding(self.vocab_size, self.embedding_len, scale_grad_by_freq=True),
        ])

        # Positional encoding layer
        self.pos_encoder = PositionalEncoding(self.embedding_len, dropout=self.dropout, max_len=self.max_tokens, batch_first=True)

        # Encoder network (Input = BatchSize x MaxTokens x Embedding Length)
        attention_layer = nn.TransformerEncoderLayer(self.embedding_len, self.attention_heads, dim_feedforward=self.max_tokens*4, dropout=self.dropout, batch_first=True)
        self.encoder = nn.ModuleList([
            nn.TransformerEncoder(attention_layer, self.encoder_layers),
            nn.LayerNorm(self.embedding_len),
            nn.LeakyReLU(),
            nn.Dropout(0.3),
        ])

        # Fully connected network to reduce encoder weights to log odds
        fc_input_len = self.max_tokens * self.embedding_len # 2d size after flatten of 3d transformer output
        self.fc = nn.ModuleList([
            nn.Linear(fc_input_len, fc_input_len // 4),
            nn.LeakyReLU(),
            nn.Dropout(0.3),
            nn.Linear(fc_input_len // 4, fc_input_len // 2),
            nn.LeakyReLU(),
            nn.Dropout(0.3),
            nn.Linear(fc_input_len // 2, self.number_classes),
            nn.LogSoftmax(dim=1),
        ])

        self.reset_weights(mode='init')

        self.to(DEVICE) # move to GPU if available
        return

    def forward(self, text, **kwargs):
        ''' Forward pass for transformer network '''
        output_checks = {}
        output_pos = 0

        x = text

        # process the embedding layer
        for m in self.embedding:
            x = m(x)
            if self.config['check_outputs']:
                output_checks[f'{m.__class__.__name__}-Layer{output_pos}'] = x; output_pos += 1

        # encode with position information
        x = self.pos_encoder(x)

        # spin through the modules defined within the recurrent network portion of the model
        for m in self.encoder:
            x = m(x)
            if self.config['check_outputs']:
                output_checks[f'{m.__class__.__name__}-Layer{output_pos}'] = x; output_pos += 1

        # flatten the encoder output
        x = torch.flatten(x, start_dim=1) # squash down to 2d

        # finish up the pass regressing the sequence weights into class log odds
        for m in self.fc:
            x = m(x)
            if self.config['check_outputs']:
                output_checks[f'{m.__class__.__name__}-Layer{output_pos}'] = x; output_pos += 1

        return x, output_checks

    def reset_weights(self, mode='fold'):
        ''' Method for initial weights and resetting weights between cross-validation folds '''
        minit = ModelSupport(self.config)
        _ = minit.init_weights(self.embedding) if mode == 'init' else None
        minit.init_weights(self.encoder)
        minit.init_weights(self.fc)
        return

class PositionalEncoding(nn.Module):
    '''
        Positional embedding encoder

        Modified from Pytorch tutorial -> https://pytorch.org/tutorials/beginner/transformer_tutorial.html
    '''
    def __init__(self, d_model: int, dropout: float = 0.1, max_len: int = 5000, batch_first=True):
        super().__init__()
        self.dropout = nn.Dropout(p=dropout)
        self.batch_first = batch_first

        position = torch.arange(max_len).unsqueeze(1)
        div_term = torch.exp(torch.arange(0, d_model, 2) * (-math.log(10000.0) / d_model))
        if batch_first:
            pe = torch.zeros(1, max_len, d_model)
            pe[0, :, 0::2] = torch.sin(position * div_term)
            pe[0, :, 1::2] = torch.cos(position * div_term)
        else:
            pe = torch.zeros(max_len, 1, d_model)
            pe[:, 0, 0::2] = torch.sin(position * div_term)
            pe[:, 0, 1::2] = torch.cos(position * div_term)
        self.register_buffer('pe', pe)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        """
            x -> shape [seq_len, batch_size, embedding_dim] if not batch_first else [batch_size, seq_len, embedding_dim]
        """
        dim = 1 if self.batch_first else 0
        x = x + self.pe[:x.size(dim)]
        return self.dropout(x)


##############################################################################################################################
# Prebuilt Supervised Transformer Model
##############################################################################################################################

class SupervisedPrebuilt(nn.Module):
    ''' Prebuilt fine-tuning of transformer network for time-series model of email content 

        Using Roberta prebuilt model from Huggingface, api at https://huggingface.co/transformers/model_doc/roberta.html
    '''
    def __init__(self, config):
        super().__init__()
        self.number_classes = config['number_classes'] # number target classes
        self.dropout = config['dropout']
        self.encoder_layers = config['encoder_layers']
        self.config = config

        self.mconfig = RobertaConfig.from_pretrained(f'{self.config["pretrained_dir"]}{self.config["pretrained_model"]}')
        self.mconfig.update({"is_decoder":False, "num_layers":self.encoder_layers, "output_hidden_states":True, "hidden_dropout_prob": self.dropout, "layer_norm_eps": 1e-7})
        # self.model is created in reset_weights - if kfold CV is used then model will need to be reset multiple times
        
        # define network
        
        # Normalization layer
        ln_input_len = self.mconfig.to_dict()['hidden_size'] * 4
        self.norm = nn.ModuleList([
            nn.LayerNorm(ln_input_len),
            nn.LeakyReLU(),
            nn.Dropout(0.2),
        ])
        
        # Fully connected network to reduce encoder weights to log odds
        fc_input_len = self.mconfig.to_dict()['hidden_size'] * 4
        self.fc = nn.ModuleList([
            nn.Linear(fc_input_len, fc_input_len),
            nn.LeakyReLU(),
            nn.Dropout(0.3),
            nn.Linear(fc_input_len, fc_input_len // 2),
            nn.LeakyReLU(),
            nn.Dropout(0.3),
            nn.Linear(fc_input_len // 2, self.number_classes),
            nn.LogSoftmax(dim=1),
        ])

        self.reset_weights() # in this case the prebuilt Roberta model is created in reset_weights

        self.to(DEVICE) # move to GPU if available
        return

    def forward(self, input_ids, attention_mask, **kwargs):
        ''' Forward pass for prebuilt network. '''
        output_checks = {}
        output_pos = 0

        layers = [-4, -3, -2, -1]

        outputs = self.model(input_ids, attention_mask)
        x = outputs.hidden_states
        amask = attention_mask.unsqueeze(-1).expand(x[layers[0]].size())

        x = torch.cat(tuple(torch.sum(x[l]*amask, dim=1) for l in layers), dim=1) # sum each of the last four layers by sequence then concatenate
        if self.config['check_outputs']:
            output_checks[f'{self.model.__class__.__name__}-Layer{output_pos}'] = x; output_pos += 1
            
        # process normalization layer
        for m in self.norm:
            x = m(x)
            if self.config['check_outputs']:
                output_checks[f'{m.__class__.__name__}-Layer{output_pos}'] = x; output_pos += 1        
            
        # finish up the pass regressing the sequence weights into class log odds
        for m in self.fc:
            x = m(x)
            if self.config['check_outputs']:
                output_checks[f'{m.__class__.__name__}-Layer{output_pos}'] = x; output_pos += 1

        return x, output_checks

    def reset_weights(self, mode=None):
        ''' Method for initial weights and resetting weights between cross-validation folds '''
        # Prebuilt network is instantiated here so if CV is used a fresh model is reintroduced for each new fold.
        self.model = RobertaModel.from_pretrained(f'{self.config["pretrained_dir"]}{self.config["pretrained_model"]}', config=self.mconfig).to(DEVICE)
        self.model.train()
        minit = ModelSupport(self.config)
        minit.init_weights(self.norm)
        minit.init_weights(self.fc)
        return


##############################################################################################################################
# Common model training and eval functions
##############################################################################################################################

class ModelManagement():
    ''' Common training/eval and state management routines for model creation. '''
    def __init__(self, model:nn.Module, config:dict, training_set:ContentDataset, eval_set:ContentDataset):
        self.config = config
        self.model = model
        self.training_set = training_set
        self.eval_set = eval_set

        # state variables
        self.prev_loss = sys.float_info.max
        self.prev_acc = 0.0

        # metrics
        self.train_loss = []
        self.test_loss = []
        self.train_acc = []
        self.test_acc = []

        # graphing functions
        self.graphing = ModelSupport(self.config)

    def training_plot(self):
        ''' Basic loss/accuracy performance graph - overwrites previous graph '''
        # setup plot framework
        fig, axes = plt.subplots(2, 1, figsize=(10,8))
        fig.subplots_adjust(top=0.90, bottom=0.1, wspace=0.35, hspace=0.80)
        axes = axes.flatten()
        # loss subplot
        ax = axes[0]
        ax.grid(True)
        ax.tick_params(axis='both', which='major')
        ax.set_xlim([-1, 50])
        ax.set_ylim([-0.01, 2.0])
        ax.set_title('Train Loss -vs- Test Loss', {'fontsize':12})
        ax.set_xlabel('Epochs')
        ax.set_ylabel('Loss')
        ax.plot(self.train_loss,'-bo')
        ax.plot(self.test_loss,'-go')
        ax.legend(['Train Loss','Test Loss'])
        # accuracy subplot
        ax = axes[1]
        ax.grid(True)
        ax.tick_params(axis='both', which='major')
        ax.set_xlim([-1, 50])
        ax.set_ylim([0.2, 1.0])
        ax.set_title('Train Accuracy -vs- Test Accuracy', {'fontsize':12})
        ax.set_xlabel('Epochs')
        ax.set_ylabel('Accuracy')
        ax.plot(self.train_acc,'-ro')
        ax.plot(self.test_acc,'-co')
        ax.legend(['Train Accuracy','Test Accuracy'])

        fig.savefig(f'{self.config["graph_dir"]}{"Metrics"}_{self.config["model_id"]}.png', facecolor=fig.get_facecolor())
        plt.close(fig)
        return

    def state_management(self, fold, epoch, loss_train, loss_test=None, acc_train=None, acc_test=None, plot=True):
        ''' Save checkpoints and output performance graph. '''

        # metrics for performance graph
        self.train_loss.append(loss_train)
        self.test_loss.append(loss_test)
        self.train_acc.append(acc_train)
        self.test_acc.append(acc_test)

        # calculate progress
        working_loss = loss_train if loss_test is None else loss_test # use test loss for comparison if provided
        self.prev_loss = working_loss if working_loss < self.prev_loss else self.prev_loss
        working_acc = acc_train if acc_test is None else acc_test
        self.prev_acc = working_acc if working_acc > self.prev_acc else self.prev_acc

        if self.prev_loss == working_loss and self.prev_acc == working_acc and self.config['cv_mode'] == 'blend': # best model output so far, save it if blending CV
            fn = self.config['checkpoint_fn'].format(dir=self.config['checkpoint_dir'], fold='0', id=self.config['model_id']) # '{dir}fold_{fold}_{id}_checkpoint.pt',
            torch.save(self.model.state_dict(), fn)
        if epoch+1 == self.config['epochs'] and self.config['cv_mode'] != 'blend': # save each completed fold if using k-fold CV
            fn = self.config['checkpoint_fn'].format(dir=self.config['checkpoint_dir'], fold=str(fold), id=self.config['model_id']) # '{dir}fold_{fold}_{id}_checkpoint.pt',
            torch.save(self.model.state_dict(), fn)

        if plot:
            self.training_plot()

        if epoch+1 == self.config['epochs'] and self.config['spot_check']: # spot check weights, biases and gradients
            self.graphing.graph_parameters(self.model, fold, epoch, 0, types=['weight','bias'])
            self.graphing.graph_parameters(self.model, fold, epoch, 0, types=['grad']) 

        return # could add patience and divergence checks for early stopping

    def training(self):
        ''' Standard training loop w/ optional cross validation '''
        bs = self.config['batch_size']
        lr = self.config['learning_rate']
        epochs = self.config['epochs']

        dataset = self.training_set
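        # step_check below sets the status print interval -- roughly one fifth of an epoch's training batches, rounded up to a multiple of five (assumed intent)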
        step_check = 5 * math.ceil(int((len(dataset) - len(dataset)*self.config['test_size']) // (bs*5)) / 5)

        kfolds = KFold(n_splits=self.config['kfolds'])
        
        loss_function = nn.NLLLoss()
        optimizer = torch.optim.AdamW(self.model.parameters(), lr=lr)

        # spot check initial weights, biases if option set
        if self.config['spot_check']:
            self.graphing.graph_parameters(self.model, 0, 0, 0, types=['weight','bias'])

        print(f'\n--- Training model {self.config["model_id"]} in mode "{self.config["cv_mode"]}" with {1 if self.config["cv_mode"]=="blend" else self.config["kfolds"]} folds\n')

        # setup k-fold cross-validation 
        for fold, (train_idx, test_idx) in enumerate(kfolds.split(dataset)):
            train_loader = DataLoader(dataset, batch_size=bs, sampler=SubsetRandomSampler(train_idx), drop_last=True)
            test_loader = DataLoader(dataset, batch_size=bs, sampler=SubsetRandomSampler(test_idx), drop_last=True)

            # if using k-fold "properly", reset model weights between folds
            if self.config['cv_mode'] != 'blend' and fold > 0:
                self.model.reset_weights(mode='init')

            # epochs per fold
            for e in range(0, epochs):

                ########
                # train
                ########
                losses_t = []
                acc_t = []
                self.model.train()
                for step, batch in enumerate(train_loader):
                    optimizer.zero_grad()
                    batch = {k:v.to(DEVICE) for k,v in batch.items()}
                    predictions, outputs = self.model(**batch)

                    loss = loss_function(predictions, batch['label'].squeeze())
                    loss.backward()
                    if self.config['clip_gradients']:
                        nn.utils.clip_grad_norm_(self.model.parameters(), self.config['clip_max_norm']) # ensure exploding gradients are managed if not debugging

                    losses_t.append(loss.detach().cpu().item())
                    acc_t.append(torch.sum(torch.argmax(predictions, dim=1, keepdim=True) == batch['label']).detach().cpu().item())
                    if step % step_check == 0:
                        print('- Fold %2d / Epoch %2d / Step %3d --- train nll %.10f (acc %.10f)' % (fold, e, step, np.mean(losses_t), np.sum(acc_t)/((step+1)*bs)), flush=True)

                        if self.config['check_outputs']: # visualize output distribution
                            self.graphing.graph_outputs(fold, e, step, outputs)
                        if self.config['check_weights']: # visualize weight distribution
                            self.graphing.graph_parameters(self.model, fold, e, step, types=['weight','bias'], spot_check=False)
                        if self.config['check_gradients']: # visualize gradients or clip
                            self.graphing.graph_parameters(self.model, fold, e, step, types=['grad'], spot_check=False) 

                    optimizer.step()
                    del batch

                ########
                # test
                ########
                losses_e = []
                acc_e = []
                self.model.eval()
                for batch in test_loader:
                    batch = {k:v.to(DEVICE) for k,v in batch.items()}
                    with torch.no_grad():
                        predictions, _ = self.model(**batch)
                        loss = loss_function(predictions, batch['label'].squeeze())
                        losses_e.append(loss.detach().cpu().item())
                        acc_e.append(torch.sum(torch.argmax(predictions, dim=1, keepdim=True) == batch['label']).detach().cpu().item())
                    del batch

                # calculate performance
                train_nll = np.mean(losses_t)
                test_nll = np.mean(losses_e)

                train_acc = np.sum(acc_t) / (bs * len(train_loader))
                test_acc = np.sum(acc_e) / (bs * len(test_loader))

                self.state_management(fold, e, loss_train=train_nll, loss_test=test_nll, acc_train=train_acc, acc_test=test_acc, plot=True)
                print('\n--- Fold %2d / Epoch %2d --- train nll %.10f (acc %.10f), --- test nll %.10f (acc %.10f)\n' % (fold, e, train_nll, train_acc, test_nll, test_acc), flush=True)
        return

    def evaluation(self):
        ''' 
            Method for evaluating a single model's effectiveness post training.
            Includes logic to aggregate an ensemble outcome if multiple fold checkpoints are available.
        '''
        bs = self.config['batch_size']

        dataset = self.eval_set

        eval_loader = DataLoader(dataset, batch_size=bs, drop_last=False)
        loss_function = nn.NLLLoss()

        # determine model iterations (i.e. using k-fold checkpoints or blended checkpoint)
        folds = 1 if self.config['cv_mode'] == 'blend' else self.config['kfolds']
        print(f'\n--- Evaluating model {self.config["model_id"]} in mode "{self.config["cv_mode"]}" with {1 if self.config["cv_mode"]=="blend" else self.config["kfolds"]} folds')

        ensemble_preds = []
        ensemble_labels = []
        for f in range(folds):

            model_fn = self.config['checkpoint_fn'].format(dir=self.config['checkpoint_dir'], fold=str(f), id=self.config['model_id'])
            if not os.path.exists(model_fn):
                raise AssertionError('model checkpoint does not exist...something is wrong')
            self.model.load_state_dict(torch.load(model_fn, map_location=DEVICE))

            ########
            # eval
            ########
            preds_e = []
            labels_e = []
            losses_e = []
            acc_e = []
            self.model.eval()
            for batch in eval_loader:
                batch = {k:v.to(DEVICE) for k,v in batch.items()}
                with torch.no_grad():
                    predictions, _ = self.model(**batch)
                    loss = loss_function(predictions, batch['label'].squeeze())

                    # collect metrics
                    losses_e.append(loss.detach().cpu().item())
                    acc_e.append(torch.sum(torch.argmax(predictions, dim=1, keepdim=True) == batch['label']).detach().cpu().item())
                    preds_e.extend(predictions.detach().cpu().tolist())
                    labels_e.extend(batch['label'].detach().cpu().tolist())
                del batch

            # save fold predictions for ensemble calculations
            ensemble_preds.append(preds_e)
            ensemble_labels = np.array(labels_e).squeeze(axis=1) if len(ensemble_labels) == 0 else ensemble_labels

            # calculate metrics
            eval_nll = np.mean(losses_e)
            eval_acc = np.sum(acc_e) / (len(dataset))

            print('\n--- Model "%s" fold "%d" evaluation nll loss of %.10f with %.10f accuracy' % (self.config['model_id'], f, eval_nll, eval_acc))

        # ensemble calculations
        ensemble_preds = np.transpose(ensemble_preds, (1, 0, 2)) # alter matrix to (samples X folds X prediction probabilities)
        ensemble_preds = np.sum(ensemble_preds, axis=1) # sum all the probabilities by class and fold
        ensemble_preds = np.argmax(ensemble_preds, axis=1) # select the class with the highest sum
        acc_s = np.sum(ensemble_preds == ensemble_labels) / len(dataset) # compare predictions with actuals

        print('\n--- Ensemble "%s" accuracy prediction is %.10f' % (self.config['model_id'], acc_s))

        return


##############################################################################################################################
# Configuration
##############################################################################################################################
class PipelineConfig():
    ''' Common configuration class for Training and Inference pipeline logic. '''
    config = { # common configuration variables
        'data_dir': '{base}/data/',
        'checkpoint_dir': '{base}/checkpoints/{cv}/',
        'graph_dir': '{base}/graphs/',
        'pretrained_dir':'{base}/',
        'pretrained_model':'roberta-base',
        'supervised_dataset_fn': 'supervised_email_train.csv',
        'vocabulary_fn': 'supervised_email.vocab',
        'checkpoint_fn':'{dir}fold_{fold}_{id}_checkpoint.pt',
        # common control variables
        'test_size': 0.1, # split of training data for eval
        'limit': 1.0, # restrict input data for debugging performance
        'check_outputs': False,  # flag to graph/analysis layer outputs
        'check_weights': False,  # flag to graph/analysis weights/biases
        'check_gradients': False,  # flag to graph/analysis gradients
        'spot_check': True, # flag to graph parameters/gradients after every epoch
        'cv_mode': 'fold', # use proper k-fold CV or blend into one checkpoint
        'force_train': False, # force rebuild
    }

    # model specific configuration
    model_config = {
        'lstm': {'kfolds': 3, 'epochs': 15, 'batch_size': 128, 'embedding_len': 128, 'embedding_type':'train',
                 'max_tokens': 192, 'dropout': 0.6, 'learning_rate':8e-05, 'bidirectional':True, 'rnn_layers':2, 
                 'clip_gradients':False, 'clip_max_norm':5.0},
        'gru': {'kfolds': 3, 'epochs': 15, 'batch_size': 128, 'embedding_len': 128, 'embedding_type':'train',
                'max_tokens': 192, 'dropout': 0.7, 'learning_rate':8e-05, 'bidirectional':True, 'rnn_layers':2,
                'clip_gradients':False, 'clip_max_norm': 5.0},
        'trn': {'kfolds': 3, 'epochs': 20, 'batch_size': 64, 'embedding_len': 128, 'embedding_type':'train',
                'max_tokens': 192, 'dropout': 0.2, 'learning_rate':8e-05, 'attention_heads':8, 'encoder_layers':4,
                'clip_gradients':False, 'clip_max_norm': 5.0}, 
        'pre': {'kfolds': 3, 'epochs': 5, 'batch_size': 64, 'embedding_type':'hgf_pretrained',
                'max_tokens': 192, 'dropout': 0.1, 'learning_rate':1e-04, 'encoder_layers':1,
                'clip_gradients':False, 'clip_max_norm': 5.0}, 
    }

    def __init__(self, config):
        # tack on custom config if present
        _ = self.config.update(config) if config is not None else None
        # set directory structure and create if needed
        if IS_LOCAL:
            self.config.update({'data_dir':str(self.config['data_dir']).format(base=f'{LOCAL_BASEDIR}')})
            self.config.update({'checkpoint_dir':str(self.config['checkpoint_dir']).format(base=f'{LOCAL_BASEDIR}', cv=self.config['cv_mode'])})
            self.config.update({'graph_dir':str(self.config['graph_dir']).format(base=f'{LOCAL_BASEDIR}')})
            self.config.update({'pretrained_dir':str(self.config['pretrained_dir']).format(base=f'{LOCAL_PRETRAIN_BASEDIR}')})
        else: # running in Kaggle Environment
            self.config.update({'data_dir':str(self.config['data_dir']).format(base=f'{KAGGLE_BASEDIR}/input/emailblog')})
            self.config.update({'checkpoint_dir':str(self.config['checkpoint_dir']).format(base=f'{KAGGLE_BASEDIR}/working', cv=self.config['cv_mode'])})
            self.config.update({'graph_dir':str(self.config['graph_dir']).format(base=f'{KAGGLE_BASEDIR}/working')})
            self.config.update({'pretrained_dir':str(self.config['pretrained_dir']).format(base=f'{KAGGLE_BASEDIR}/input/robertamodels')})
        if not os.path.exists(self.config['checkpoint_dir']):
            os.makedirs(self.config['checkpoint_dir'], mode=0o711, exist_ok=True)
        if not os.path.exists(self.config['graph_dir']):
            os.makedirs(self.config['graph_dir'], mode=0o711, exist_ok=True)
        return

    def get_config(self, model=None) -> dict:
        ''' Returns the current base configuration plus model configuration if requested. '''
        config = self.config
        _ = config.update(self.model_config[model]) if model is not None else None
        return config


##############################################################################################################################
# Training/Test Pipeline
##############################################################################################################################

class TrainingPipeline():
    ''' Basic routine to control data prep and modeling flow for each specific model and type requested.'''

    def __init__(self, id:str, label_column:str, label_classes:int, config:dict=None, run_models=['lstm','gru','trn','pre']):
        self.id = id
        self.run_models = run_models
        self.label_column = label_column
        self.label_classes = label_classes

        self.config = PipelineConfig(config).get_config().copy()
        self.config.update({'input_columns':{'Body':'content', self.label_column:'label'}, 'number_classes': self.label_classes})

        # load training data
        raw_data = RawDataLoader(self.config) # prep input data
        self.train_df, self.eval_df = raw_data.split_dataset(split=self.config['test_size'])
        self.vocab = raw_data.vocab
        self.config.update({'vocab_size':len(self.vocab)})
        return

    def run_pipeline(self):
        ''' Main train/eval processing loop. '''

        # process all of the models in run_models
        for m in self.run_models:
            start = time()
            config = PipelineConfig(None).get_config(model=m).copy()
            config.update({'model_id':f'{m}-{self.id}'})
            config.update({'input_columns':{'Body':'content', self.label_column:'label'}, 'number_classes': self.label_classes})
            config.update({'vocab_size':len(self.vocab)})

            pipeline = self._get_pipeline(m, config)
            
            # skip training if a checkpoint already exists, unless force_train is set
            if config['force_train'] or not self.is_training_complete(config['model_id']):
                pipeline.training()
            # evaluate model
            pipeline.evaluation()
            
            del pipeline
            gc.collect()
            torch.cuda.empty_cache()

            print(f'\n--- Pipeline runtime for model_id "{config["model_id"]}" complete in {time()-start:.1f} seconds\n')
        return

    def is_training_complete(self, model_id):
        ''' Determine if model checkpoint already exists. '''
        pattern = self.config['checkpoint_fn'].format(dir=self.config['checkpoint_dir'], fold=0, id=model_id)
        checkpoints = glob.glob(pattern)
        return len(checkpoints) > 0

    def _get_pipeline(self, id, config):
        ''' Common function for setting up model training/eval pipeline '''
        model = SupervisedRNN(id, config) if id in ['lstm','gru'] else SupervisedTransformer(config) if id=='trn' else SupervisedPrebuilt(config)
        training_set = ContentDataset(self.train_df, config, vocab=self.vocab)
        eval_set = ContentDataset(self.eval_df, config, vocab=self.vocab)
        pipeline = ModelManagement(model, config, training_set, eval_set)
        return pipeline


##############################################################################################################################
# Inference Pipeline
##############################################################################################################################

class InferencePipeline(): 
    '''Inference pipeline logic. '''

    # some fictitious emails
    samples = [
        'Hey! We\'re planning on visiting the lake house this weekend, do you want to go along?  -Robert',
        'JoAnn, the TPS report you promised to have yesterday is not on my desk. Please have that to me by the end of the day.',
        'Attached is the NYOs listing notice for training in Topeka next week. If you would like to attend, make arrangements ASAP as space is limited.',
        'Good Morning John, hope you\'re doing well.  It has been very rainy here.  Anyway, I wanted to ask you a question on the odds of me being picked for early retirement? Let me know. Steve',
        'Hi, thanks for the email and keeping me in the loop.  My daughter\'s classes at school have been hectic and I need a break this weekend.  The open house is this Saturday, hopefully I will be able to see you there. Sue',
        'Thanks for the heads up!',
        'There\'s a business conference next week and I think there are several good bars in the area.  We should plan to meet up there for drinks afterwards.',
        'Please plan to attend the quarterly disaster preparation meeting this Tuesday.  The location is TBD, but the time will be from 10a - 2p.  Lunch will be included.',
        'Dave, good time playing poker last week.  I\'m heading out for a round of golf this afternoon and could use a partner.  How about 2pm?',
        'The weather today is expected to be rainy with a chance of thunderstorms and then clearing off for tomorrow...',
        'The systems here are awful!  The building is rundown and in shambles. I wish they would fix this mess instead of sucking up to investors.  I hate this!',
        'I expect many arrests soon given the catastrophic consequences.',
        'I\'m bored and I hate my job.  My life is terrible, unfair and full of regrets.  My coworkers are being jerks and are the worst kind of people.',
    ]

    def __init__(self, run_labels:dict, label_classes:int, config:dict=None, run_models=['lstm','gru','trn','pre']):
        self.run_models = run_models
        self.run_labels = run_labels
        self.label_classes = label_classes

        self.config = PipelineConfig(config).get_config().copy()
        self.vocab = Vocabulary(self.config).get_vocabulary()

        return

    def run_pipeline(self, samples=None):
        ''' 
            Inference prediction processing, runs samples through each model in run_models and for each label in run_labels.
            Aggregates scores and prints final results.
        '''
        start = time()

        # build data loader for inference samples
        self.samples = samples if samples is not None else self.samples
        outputs = pd.DataFrame(self.samples, columns=['emails'])
        inputs = pd.DataFrame(self.samples, columns=['content'])

        # process each model within each label type
        ensemble_preds = []
        ensemble_votes = []
        model_count = 0
        for key in self.run_labels.keys(): # e.g. cs1, cs2, cs3
            for m in self.run_models: # e.g. lstm, gru, trn, pre
                config = PipelineConfig(None).get_config(model=m).copy()
                config.update({'number_classes': self.label_classes})
                config.update({'model_id':f'{m}-{key}'})
                config.update({'vocab_size':len(self.vocab)})

                model_fn_wildcard = config['checkpoint_fn'].format(dir=config['checkpoint_dir'], fold='*', id=config['model_id'])
                model_checkpoint_fns = glob.glob(model_fn_wildcard)
                if len(model_checkpoint_fns) <= 0:
                    print(f'\n!!! Error - no checkpoints for model {config["model_id"]}')
                    continue

                print(f'\n--- Inferring model {config["model_id"]} in "{config["cv_mode"]}" mode with {len(model_checkpoint_fns)} folds')
                model_count += 1

                inference_set = ContentDataset(inputs, config, vocab=self.vocab)
                inference_loader = DataLoader(inference_set, batch_size=config['batch_size'], drop_last=False)
                model = SupervisedRNN(m, config) if m in ['lstm','gru'] else SupervisedTransformer(config) if m=='trn' else SupervisedPrebuilt(config)

                for model_fn in model_checkpoint_fns:

                    # load the model state for the current fold checkpoint
                    model.load_state_dict(torch.load(model_fn, map_location=DEVICE))

                    ########
                    # infer
                    ########
                    preds_e = []
                    model.eval()
                    for batch in inference_loader:
                        batch = {k:v.to(DEVICE) for k,v in batch.items()}
                        with torch.no_grad():
                            predictions, _ = model(**batch)
                            preds_e.extend(predictions.detach().cpu().tolist())
                        del batch

                    # save fold predictions for ensemble calculations
                    ensemble_preds.append(preds_e)
                    ensemble_votes.append(np.argmax(preds_e, axis=1).tolist())

                del model
                gc.collect()
                torch.cuda.empty_cache()
            
        # ensemble predictions
        ensemble_preds = np.transpose(ensemble_preds, (1, 0, 2)) # alter matrix to (samples X folds X prediction probabilities)
        ensemble_preds = np.sum(ensemble_preds, axis=1) # sum all the probabilities by class and fold
        ensemble_preds = np.argmax(ensemble_preds, axis=1) # select the class with the highest sum
        outputs['pred_prob'] = ensemble_preds.tolist()

        ensemble_votes = np.transpose(ensemble_votes, (1, 0))
        ensemble_votes = np.median(ensemble_votes, axis=1).astype(int)
        outputs['pred_vote'] = ensemble_votes.tolist()
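        # Worked example (illustrative numbers): for one sample scored by two
        # model/fold checkpoints over three classes, per-checkpoint probabilities
        # [0.2, 0.5, 0.3] and [0.1, 0.6, 0.3] sum to [0.3, 1.1, 0.6], so the
        # soft-vote pick ('pred_prob') is class 1; the hard vote ('pred_vote')
        # is the median of the per-checkpoint argmax values ([1, 1] -> 1).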

        print(f'\n--- Inference pipeline runtime for {model_count} models complete in {time()-start:.1f} seconds\n')
        print(outputs.head(50))

        return outputs


##############################################################################################################################
# Main - Train/Eval Processing
##############################################################################################################################
print(f'\n--- Device is {DEVICE}')

#
# Input dataframe feature columns: 
#  From_Address, To_Address, CC_Address
#  DateTime, DateTime_HOUR, DateTime_MONTH, DateTime_TS, Day, Outside_Hours
#  Body, Subject, Source, Forwarded
#
# labels = ['Class_Sentiment_1','Class_Sentiment_2','Class_Sentiment_Vader','Class_Alignment_1','Class_Alignment_2','Class_Alignment_3']
# models = ['lstm','gru','trn','pre']
#
run_labels = {'cs1':'Class_Sentiment_1','cs2':'Class_Sentiment_2','csv':'Class_Sentiment_Vader'}
run_models = ['lstm','gru','trn'] # 'pre'

# Process each label type and network model defined in 'run_labels' and 'run_models'
# cv_mode 'blend' will not reset weights between folds; use any other value to train folds separately so their scores can be mean-aggregated
for key, run_label in run_labels.items():
    pipeline = TrainingPipeline(key, run_label, 3, config=None, run_models=run_models)
    pipeline.run_pipeline()


##############################################################################################################################
# Main - Inference Testing
##############################################################################################################################

# Aggregate models to predict most likely outcome
# TODO add weights
run_labels = {'cs1':''} 
run_models = ['trn']
predictions = InferencePipeline(run_labels, 3, run_models=run_models).run_pipeline()
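# The returned DataFrame holds one row per sample email: the original text in
# 'emails', the soft-vote ensemble class in 'pred_prob', and the hard-vote
# (median) class in 'pred_vote'.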

exit()