Machine learning is a scientific technique where computers learn how to solve a problem without being explicitly programmed. The field is currently led by deep learning, fuelled by improved algorithms, greater computing power, and big data, yet the classical algorithms of ML still have a firm place. In this article, we compare three supervised machine learning techniques: Logistic Regression, K-Nearest Neighbours, and Support Vector Machine. We will take a look at their fundamental reasoning, benefits, and drawbacks. Did you know? Netflix reportedly saved $1 billion in 2017 by using machine learning to make personalized recommendations.
In this blog
- Logistic Regression
- K-Nearest Neighbours
- Support Vector Machine
- Conclusion
Logistic Regression
Logistic regression, much like linear regression, is the right algorithm to start with when learning classification algorithms. Although the term 'Regression' appears in its name, it is a classification model, not a regression model. It utilizes a logistic function to frame a binary output model. The output of logistic regression is a probability p (0 ≤ p ≤ 1), which can be thresholded to predict the binary output 0 or 1 (if p < 0.5, output = 0; else output = 1).
Basic Theory:
Logistic regression behaves much like linear regression: the model first computes a linear output and then applies a squashing function to it. The most commonly used squashing function is the sigmoid (logistic) function, σ(z) = 1 / (1 + e^(-z)), which maps any real-valued score to a probability between 0 and 1.
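As a minimal sketch of this "linear output, then squashing" step (NumPy is an assumption here, and the weights are hypothetical):

```python
import numpy as np

def sigmoid(z):
    """Squash a linear score z into a probability in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical weights, bias, and a single input point.
w, b = np.array([1.5, -2.0]), 0.3
x = np.array([0.8, 0.1])

prob = sigmoid(w @ x + b)        # linear output, then squashing
label = 1 if prob >= 0.5 else 0  # threshold at 0.5 for the binary output
print(prob, label)
```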
The benefits:
- A convenient, quick and straightforward method of classification.
- Parameters explain the direction and strength of the influence of the independent variables on the dependent variable.
- Can also be used for multiclass classification.
- The loss function is convex.
The drawbacks:
- It cannot be extended to non-linear classification problems.
- Proper feature selection is required.
- A good ratio of signal to noise is required.
- The accuracy of the LR model suffers in the presence of collinearity and outliers.
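Putting this together, here is a minimal usage sketch with scikit-learn (an assumption on my part; the synthetic dataset is purely illustrative):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic binary classification data, for illustration only.
X, y = make_classification(n_samples=200, n_features=4, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LogisticRegression().fit(X_train, y_train)
print(clf.predict_proba(X_test[:3]))  # probabilities in [0, 1]
print(clf.predict(X_test[:3]))        # thresholded 0/1 labels
print(clf.score(X_test, y_test))      # test accuracy
# clf.coef_ shows the direction and strength of each feature's influence.
```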
K-Nearest Neighbours
K-Nearest Neighbours is a non-parametric approach used for both classification and regression. It is one of the simplest methods used in ML: a lazy learning model that relies on local approximation.
Basic Theory:
The fundamental logic behind KNN is to explore a test point's neighbourhood, assume the neighbours are comparable to the test data point, and derive the output from them. In KNN, we search for the k nearest neighbours and use them to make the forecast. For KNN classification, a plurality vote is taken over the k closest data points, while for KNN regression, the mean of the k closest data points is the output. As a rule of thumb, we select an odd number for k to avoid ties. KNN is a lazy learning model: there is no training phase, and all the computation happens at prediction time.
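A from-scratch sketch of this idea (NumPy assumed; this toy classifier uses Euclidean distance and a plurality vote, and the data is made up for illustration):

```python
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x, k=3):
    """Classify point x by a plurality vote over its k nearest neighbours."""
    dists = np.linalg.norm(X_train - x, axis=1)  # Euclidean distances
    nearest = np.argsort(dists)[:k]              # indices of the k closest points
    votes = Counter(y_train[nearest])            # plurality vote over their labels
    return votes.most_common(1)[0][0]

X_train = np.array([[1, 1], [1, 2], [5, 5], [6, 5]])
y_train = np.array([0, 0, 1, 1])
print(knn_predict(X_train, y_train, np.array([1.5, 1.5]), k=3))  # -> 0
```

For KNN regression, the only change would be returning the mean of `y_train[nearest]` instead of the vote.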
The benefits:
- A quick and straightforward model of machine learning.
- Only a few tuneable hyperparameters (essentially k and the distance metric).
The drawbacks:
- K should be chosen wisely.
- High runtime computing costs if the sample size is large.
- Features must be scaled properly so that all of them are treated equally (see the sketch after this list).
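Because of the scaling caveat above, KNN is usually wrapped in a pipeline that standardizes the features first. A sketch with scikit-learn (assumed, as before; the iris dataset is just a convenient example):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Scale features so each contributes equally to the distance metric.
knn = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5))
knn.fit(X_train, y_train)
print(knn.score(X_test, y_test))
```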
Support Vector Machine
The Support Vector Machine is a type of ML technique that can be used for both classification and regression. It has two main variants to handle linear and non-linear problems. Linear SVM has no kernel and seeks a linear solution to the problem with the maximum margin. When the classes are not linearly separable, SVMs with kernels are used.
Basic Theory:
The Support Vector Machine is a supervised learning tool commonly used in text classification, image classification, bioinformatics, etc.
In Linear SVM, the problem space must be linearly separable. The model produces a hyperplane that maximizes the classification margin. When there are N features present, the hyperplane is an (N-1)-dimensional subspace. The boundary points in the feature space are called support vectors. The maximum margin is derived from their relative positions, and the optimal hyperplane is drawn at the midpoint between them.
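A minimal linear SVM sketch with scikit-learn (assumed), which also exposes the support vectors described above:

```python
from sklearn.datasets import make_blobs
from sklearn.svm import SVC

# Two well-separated blobs, for illustration only.
X, y = make_blobs(n_samples=100, centers=2, random_state=0)

clf = SVC(kernel="linear").fit(X, y)
print(clf.coef_, clf.intercept_)  # the separating hyperplane w.x + b = 0
print(clf.support_vectors_)       # the boundary points that fix the margin
```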
The benefits:
- SVM uses the kernel trick to solve complex, non-linear problems.
- SVM solves a convex optimization problem, so the global minimum can be reached.
- Hinge loss provides improved accuracy.
- Outliers can be handled well using the soft-margin constant C.
The drawbacks:
- Hinge loss contributes to sparsity.
- For adequate accuracy, the hyperparameters and the kernel must be carefully tuned (see the sketch after this list).
- Training takes longer on larger datasets.
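Since the kernel and hyperparameter choices matter so much, a cross-validated grid search is the usual remedy. A sketch (scikit-learn assumed; the grid values are illustrative, not recommendations):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

# Illustrative grid over the soft-margin constant C and the kernel.
param_grid = {"C": [0.1, 1, 10], "kernel": ["linear", "rbf"]}
search = GridSearchCV(SVC(), param_grid, cv=5).fit(X, y)
print(search.best_params_, search.best_score_)
```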
Conclusion
Logistic regression, KNN, and SVM are excellent instruments for classification and regression problems. To save computing cost and time, it is good to know when to use each of them. Machine learning practitioners generally suggest first trying logistic regression to see how the model performs; if it fails, try SVM without a kernel (otherwise referred to as SVM with a linear kernel) or KNN. Logistic regression and SVM with a linear kernel have similar efficiency, but depending on your features, one might be more potent than the other.
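To make the "try logistic regression first" advice concrete, here is a rough comparison harness (scikit-learn assumed; the dataset and settings are illustrative, not a benchmark):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)

models = {
    "Logistic Regression": LogisticRegression(max_iter=5000),
    "SVM (linear kernel)": SVC(kernel="linear"),
    "KNN": KNeighborsClassifier(n_neighbors=5),
}
for name, model in models.items():
    # Scale first so every model sees comparable features.
    pipe = make_pipeline(StandardScaler(), model)
    scores = cross_val_score(pipe, X, y, cv=5)
    print(f"{name}: {scores.mean():.3f}")
```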