Deep Learning Workflow in PyTorch


Prepare data.
Prepare (untrained) model.
Train model.
Evaluate model.
Save model.

1. Prepare data.

Collect data such as images, video, audio, or text.
Split the data into a set for training (train data) and a set for testing (test data). *A common split is 80% train data and 20% test data.
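The split above can be sketched with PyTorch's `random_split`. The dataset here is hypothetical toy data, just to show the mechanics:

```python
import torch
from torch.utils.data import TensorDataset, random_split

# Toy dataset: 100 samples with 3 features each (hypothetical data).
X = torch.randn(100, 3)
y = torch.randn(100, 1)
dataset = TensorDataset(X, y)

# 80% for training, 20% for testing.
train_data, test_data = random_split(dataset, [80, 20])

print(len(train_data), len(test_data))  # 80 20
```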

2. Prepare (untrained) model.

Select suitable layers for the data.
Select an activation function for the data if necessary.
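As a minimal sketch, the two points above (layers plus an activation function) can look like this. The architecture and sizes are hypothetical, chosen only for illustration:

```python
import torch
from torch import nn

class SimpleModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.layer1 = nn.Linear(3, 8)  # 3 input features -> 8 hidden units
        self.relu = nn.ReLU()          # activation function
        self.layer2 = nn.Linear(8, 1)  # 8 hidden units -> 1 output

    def forward(self, x):
        return self.layer2(self.relu(self.layer1(x)))

model = SimpleModel()
print(model(torch.randn(5, 3)).shape)  # torch.Size([5, 1])
```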

3. Train model.

Select a suitable loss (cost) function and optimizer for the data.
*Memos:

A loss (cost) function is an algorithm that measures the gap between the model's predictions and the train data's labels.
An optimizer is an algorithm that updates the model's parameters to minimize the loss, typically using gradient descent. *Gradient Descent (GD) is an algorithm that finds a minimum of a function by repeatedly stepping in the direction opposite to its gradient (slope).
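For example, a loss function and optimizer might be set up as below. MSE loss and plain SGD are assumptions here (a reasonable pairing for a regression task), not the only choice:

```python
import torch
from torch import nn

model = nn.Linear(3, 1)

# Mean squared error loss, suitable for regression.
loss_fn = nn.MSELoss()

# Stochastic gradient descent over the model's parameters.
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
```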

Calculate predictions with train data.
Calculate the loss between the predictions and the train data's labels with the loss (cost) function.
Zero out the gradients of all parameters at every step for a correct calculation. *Gradients are accumulated in buffers each time backward() is called, not overwritten, so they must be cleared (e.g. with optimizer.zero_grad()) before the next backward pass.
Do backpropagation. *Backpropagation is the algorithm that computes the gradient of the loss with respect to every parameter, working from the output layer back to the input layer.
Update the model's parameters with the optimizer (optimizer.step()) to reduce the loss.

*Repeat these steps (forward pass, loss calculation, zeroing gradients, backpropagation, and parameter update) for multiple epochs to minimize the loss between predictions and train data.
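The training steps above can be sketched as a loop. The data, model, and hyperparameters below are hypothetical (a tiny linear regression on y = 2x + 1), kept small so the whole loop fits in one view:

```python
import torch
from torch import nn

torch.manual_seed(0)

# Toy regression data (hypothetical): y = 2x + 1 plus noise.
X = torch.randn(80, 1)
y = 2 * X + 1 + 0.1 * torch.randn(80, 1)

model = nn.Linear(1, 1)
loss_fn = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

for epoch in range(100):
    preds = model(X)           # 1. calculate predictions with train data
    loss = loss_fn(preds, y)   # 2. calculate the loss
    optimizer.zero_grad()      # 3. zero out accumulated gradients
    loss.backward()            # 4. backpropagation (compute gradients)
    optimizer.step()           # 5. optimizer updates the parameters

print(loss.item())  # the loss should be small after training
```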

4. Evaluate model.

Calculate predictions with test data.
Calculate the loss between the predictions and the test data's labels with the loss (cost) function.
Compare the train loss and test loss, as printed values or on a graph, to check whether the model is overfitting.
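Evaluation can be sketched as below, again on hypothetical toy data. `model.eval()` switches layers like dropout into evaluation mode, and `torch.inference_mode()` disables gradient tracking, since no training happens here:

```python
import torch
from torch import nn

torch.manual_seed(0)

# Hypothetical held-out test data.
X_test = torch.randn(20, 1)
y_test = 2 * X_test + 1

model = nn.Linear(1, 1)
loss_fn = nn.MSELoss()

model.eval()                   # evaluation mode
with torch.inference_mode():   # no gradient tracking needed
    test_preds = model(X_test)
    test_loss = loss_fn(test_preds, y_test)

print(test_loss.item())
```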

5. Save model.

Finally, save the model if it reaches the quality you want.
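Saving can be sketched as below. Saving the `state_dict` (the learned parameters) rather than the whole model object is the approach PyTorch recommends; the filename is arbitrary:

```python
import torch
from torch import nn

model = nn.Linear(3, 1)

# Save only the learned parameters.
torch.save(model.state_dict(), "model.pth")

# Load them back into a fresh instance of the same architecture.
loaded = nn.Linear(3, 1)
loaded.load_state_dict(torch.load("model.pth"))
loaded.eval()
```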