In machine learning, classification refers to a supervised learning approach in which the computer program learns from the data given to it and uses that learning to classify new observations.

The data set can be either binary (two-class) or multi-class. Some examples of classification problems include handwriting recognition, speech recognition, document classification, biometric identification, and more.

Related course: Complete Machine Learning Course with Python

Classification Algorithms in Machine Learning

The different types of classification algorithms in Machine Learning include the following:

  1. Logistic Regression
  2. Naive Bayes Classifier
  3. Support Vector Machines (SVM)
  4. Decision Trees
  5. Boosted Trees
  6. Random Forest
  7. Neural Networks
  8. Nearest Neighbor

Understanding the Advantages of Some Classification Algorithms

  • Logistic Regression:

    With logistic regression, you have many different ways to regularize your model, and you don't have to worry about whether or not your features are correlated, which can be an issue with Naive Bayes.

    Moreover, it has a good probabilistic interpretation and lets you easily update your model as new data arrives, which is not as straightforward with SVMs or decision trees.

    Logistic regression is ideal when you need a probabilistic framework, for instance to get confidence intervals, to adjust classification thresholds, or to flag when the model is unsure. It is also helpful when you expect to receive more training data in the future and want to incorporate it into your model quickly and easily.
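
    As a quick illustration, here is a minimal sketch (assuming scikit-learn, which the article does not prescribe) that fits a regularized logistic regression and uses predict_proba to apply a custom decision threshold:

```python
# Minimal logistic regression sketch; scikit-learn and the toy data
# from make_classification are assumptions for illustration only.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# C controls the strength of the L2 regularization (smaller C = stronger).
clf = LogisticRegression(C=1.0, penalty="l2")
clf.fit(X_train, y_train)

# predict_proba exposes the probabilistic framework: instead of the
# default 0.5 cutoff, you can choose your own classification threshold.
probs = clf.predict_proba(X_test)[:, 1]
predictions = (probs >= 0.7).astype(int)
```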

  • Naive Bayes:

    The Naive Bayes classifier is very easy to build and use, because it requires no complicated iterative parameter estimation. It is particularly useful with huge data sets and with several categories of variables. Despite its simplicity, Naive Bayes can outperform even highly sophisticated classification methods.
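
    That simplicity shows in code: in a typical implementation the classifier works out of the box with no required hyperparameters. A minimal sketch (again assuming scikit-learn):

```python
# Minimal Naive Bayes sketch; scikit-learn and the iris data set are
# assumptions for illustration. GaussianNB suits continuous features;
# MultinomialNB is the usual choice for count data such as word counts.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = GaussianNB()  # no required parameters
clf.fit(X_train, y_train)
print(clf.score(X_test, y_test))  # accuracy on the held-out split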

  • Support Vector Machines (SVMs):

    The main advantage of SVMs is their high accuracy and performance. They provide strong theoretical guarantees regarding overfitting, and a flexible choice of kernels lets them handle data that isn't linearly separable.

    They are particularly popular for text classification problems, where very high-dimensional feature spaces are common.
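
    The kernel choice mentioned above is typically a single constructor argument. A minimal sketch (assuming scikit-learn; make_moons is just a toy non-linearly-separable data set):

```python
# Minimal SVM sketch; scikit-learn and the make_moons toy data are
# assumptions for illustration. The 'rbf' kernel handles data that
# isn't linearly separable; 'linear' is common for text classification.
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_moons(n_samples=300, noise=0.2, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = SVC(kernel="rbf", C=1.0, gamma="scale")
clf.fit(X_train, y_train)
print(clf.score(X_test, y_test))
```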

  • Decision Trees:

    Decision trees are quite simple to interpret and explain. They are also non-parametric, so you don't need to assume any particular data distribution. Because of this, when you use decision trees, you don't have to worry about outliers or about whether the data set is linearly separable.

    As heuristic algorithms, decision trees do not suffer from multicollinearity, and they work well with categorical variables that have few categories.
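
    The interpretability is easy to demonstrate: a fitted tree can be printed as plain if/else rules. A minimal sketch (assuming scikit-learn):

```python
# Minimal decision tree sketch; scikit-learn and the iris data set are
# assumptions for illustration. export_text prints the learned rules,
# which is what makes trees easy to interpret and explain.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()
clf = DecisionTreeClassifier(max_depth=3, random_state=0)
clf.fit(iris.data, iris.target)

print(export_text(clf, feature_names=list(iris.feature_names)))
```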

  • Tree Ensembles:

    Tree ensembles are a family of algorithms covering two distinct methods, Boosted Trees and Random Forests. One of their primary advantages is that they do not expect linear features or features that interact linearly. Built as a combination of many decision trees, tree ensembles can easily handle categorical (binary) features.

    Furthermore, because of their general construction (using boosting and bagging), they handle high-dimensional spaces and large numbers of training examples very well.
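
    A minimal sketch showing both families side by side (assuming scikit-learn; the synthetic data and hyperparameters are illustrative):

```python
# Minimal tree-ensemble sketch; scikit-learn and the synthetic data are
# assumptions for illustration. RandomForestClassifier uses bagging,
# while GradientBoostingClassifier uses boosting.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=25, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for model in (RandomForestClassifier(n_estimators=100, random_state=0),
              GradientBoostingClassifier(n_estimators=100, random_state=0)):
    model.fit(X_train, y_train)
    print(type(model).__name__, model.score(X_test, y_test))
```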