How to fine-tune a machine learning algorithm? Fine-tuning refers to a technique in machine learning whose goal is to find the optimal hyperparameters of a model, and it helps increase model performance and accuracy. Fine-tuning is performed on the training data and evaluated on validation or test data. Usually, before fine-tuning an algorithm, it is important to try several algorithms and select the best one, so fine-tuning comes at the end of the training phase.

Note that fine-tuning also refers to an approach in transfer learning: training a neural network starting from the parameters of another, already trained network. This is done by initializing the new network with the weights of an existing model, usually one trained on a related problem in the same domain (see the first sketch below).

Fine-tuning is not a strictly necessary phase, since it is possible to build a machine learning model without it. However, if the goal is to increase accuracy, fine-tuning is the best way to do it. In the hyperparameter sense, fine-tuning is also called hyperparameter optimization, and there are multiple techniques to perform it.

Manual search is a technique that relies on the data scientist's experience to pick promising hyperparameter values. For example, a data scientist may decide to reduce the batch size when training a neural network to get faster convergence. Manual search is not the most effective technique on its own, but it can be combined with the other techniques.

Random search is a technique that defines a grid or distribution of hyperparameters and tries random combinations sampled from it. Random search is usually combined with cross-validation, so that each sampled combination is evaluated across every fold of the dataset (see the second sketch below).

Grid search is a technique that sets up a grid of hyperparameter values and trains the model on every possible combination. The values used in the grid are often narrowed down from a prior random search (see the third sketch below).

Bayesian optimization is often considered the most efficient of these techniques, as it builds a probabilistic model of the objective to focus the search on the most promising regions of the hyperparameter space (see the last sketch below).

So, fine-tuning is a set of techniques that can help improve performance. When it refers to hyperparameter tuning, it is used at the end of the training phase and can make the difference between a good model and a very good model. When it refers to transfer learning, it can help improve the performance of deep neural network models.
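First, a minimal sketch of fine-tuning in the transfer-learning sense, assuming PyTorch and torchvision are available. A ResNet-18 pretrained on ImageNet initializes the new model, its pretrained weights are frozen, and only a replacement classification head is trained; the 10-class task is a hypothetical example.

```python
import torch
import torch.nn as nn
from torchvision import models

# Initialize the network from another trained model's parameters
# (here, ImageNet weights shipped with torchvision).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pretrained parameters so only the new head is updated.
for param in model.parameters():
    param.requires_grad = False

# Replace the classification head for the new (assumed 10-class) problem.
model.fc = nn.Linear(model.fc.in_features, 10)

# Optimize only the parameters of the new head.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
```

In practice one can also unfreeze some of the later layers and train them with a small learning rate, which is where the name "fine-tuning" comes from.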
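Next, a sketch of random search combined with cross-validation, using scikit-learn's RandomizedSearchCV. The random forest model, the synthetic dataset, and the search ranges are illustrative assumptions, not values from the original text.

```python
from scipy.stats import randint, uniform
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# Distributions to sample hyperparameter combinations from.
param_distributions = {
    "n_estimators": randint(50, 500),
    "max_depth": randint(2, 20),
    "max_features": uniform(0.1, 0.9),
}

# Each of the 20 sampled combinations is evaluated with 5-fold
# cross-validation, i.e. across every fold of the dataset.
search = RandomizedSearchCV(
    RandomForestClassifier(random_state=0),
    param_distributions=param_distributions,
    n_iter=20,
    cv=5,
    random_state=0,
)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```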
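The grid-search version is almost identical, except that every combination in the grid is trained exhaustively. Here the grid is an illustrative example of values narrowed down around a region found promising by a prior random search.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# Every combination in this grid (3 x 3 = 9 models) is trained and scored.
param_grid = {
    "n_estimators": [100, 200, 300],
    "max_depth": [5, 10, 15],
}

search = GridSearchCV(RandomForestClassifier(random_state=0), param_grid, cv=5)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```

Because the cost grows multiplicatively with each added hyperparameter, grid search is usually reserved for small, pre-filtered grids.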
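Finally, a sketch of Bayesian-style optimization, assuming the Optuna library is available (its default TPE sampler is a probabilistic method that uses earlier trials to steer later ones). The model, dataset, and ranges are again illustrative.

```python
import optuna
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

def objective(trial):
    # Suggested values are guided by the results of earlier trials,
    # concentrating the search on promising regions of the space.
    params = {
        "n_estimators": trial.suggest_int("n_estimators", 50, 500),
        "max_depth": trial.suggest_int("max_depth", 2, 20),
    }
    model = RandomForestClassifier(random_state=0, **params)
    return cross_val_score(model, X, y, cv=5).mean()

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=20)
print(study.best_params, study.best_value)
```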
