
Naive Bayes

Helpful examples of using Naive Bayes (NB) machine learning algorithms in scikit-learn.

The Naive Bayes algorithm is a probabilistic classifier based on Bayes’ Theorem, used primarily for classification tasks.

It assumes that the features in a dataset are conditionally independent given the class label, an assumption known as “naive.”

Despite this simplification, Naive Bayes often performs well in practice, particularly for large datasets and high-dimensional data.

The algorithm calculates the posterior probability of each class given a set of features and assigns each instance to the class with the highest posterior probability. It comes in several variants, including Gaussian, Multinomial, and Bernoulli Naive Bayes, each suited to a different type of data distribution.
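
Below is a minimal sketch of this workflow using GaussianNB on the Iris dataset; the dataset, split, and default parameters are illustrative choices, not requirements.

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

# Load a small dataset with continuous numeric features
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Gaussian Naive Bayes models each feature as a per-class Gaussian distribution
model = GaussianNB()
model.fit(X_train, y_train)

# predict_proba returns the posterior probability of each class;
# predict simply picks the class with the highest posterior
print(model.predict_proba(X_test[:3]))
print(model.predict(X_test[:3]))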

Naive Bayes is fast, scalable, and particularly effective for text classification problems like spam detection and sentiment analysis. However, its performance can be limited if the independence assumption is strongly violated.
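
As a rough illustration of the text-classification use case, the sketch below pairs MultinomialNB with a bag-of-words vectorizer; the tiny corpus and labels are made up purely for demonstration.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Toy documents and labels (illustrative only)
texts = [
    "win a free prize now",
    "limited offer click here",
    "meeting rescheduled to friday",
    "please review the attached report",
]
labels = ["spam", "spam", "ham", "ham"]

# Word-count features feed naturally into Multinomial Naive Bayes
clf = make_pipeline(CountVectorizer(), MultinomialNB())
clf.fit(texts, labels)

print(clf.predict(["free prize offer", "see the report from the meeting"]))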

Examples
Configure GaussianNB "priors" Parameter
Configure GaussianNB "var_smoothing" Parameter
Scikit-Learn "CategoricalNB" versus "MultinomialNB"
Scikit-Learn BernoulliNB Model
Scikit-Learn CategoricalNB Model
Scikit-Learn ComplementNB Model
Scikit-Learn GaussianNB Model
Scikit-Learn GridSearchCV BernoulliNB
Scikit-Learn GridSearchCV CategoricalNB
Scikit-Learn GridSearchCV ComplementNB
Scikit-Learn GridSearchCV GaussianNB
Scikit-Learn GridSearchCV MultinomialNB
Scikit-Learn MultinomialNB Model
Scikit-Learn RandomizedSearchCV BernoulliNB
Scikit-Learn RandomizedSearchCV CategoricalNB
Scikit-Learn RandomizedSearchCV ComplementNB
Scikit-Learn RandomizedSearchCV GaussianNB
Scikit-Learn RandomizedSearchCV MultinomialNB