
Classification In Python

What Is Classification?

Classification is the process of finding patterns in structured or unstructured observed data when the output variable is categorical. Given one or more inputs, a classification model tries to predict a finite, discrete outcome.

Examples: predicting whether a house will sell above its asking price, predicting whether the monsoon will arrive, and identifying a person's gender from handwriting.

Logistic Regression

Logistic Regression predicts a binary outcome from a linear combination of one or more predictor (independent) variables. It is also known as the logit model. Logistic regression works when the dependent variable is binary and the independent variables are independent of each other.
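
To make the "linear combination" idea concrete, here is a minimal sketch of the logistic (sigmoid) function that maps such a combination to a probability; the coefficients b0 and b1 are made-up values for illustration only:

import numpy as np

def sigmoid(z):
    #squash any real number into the (0, 1) range
    return 1.0 / (1.0 + np.exp(-z))

#hypothetical coefficients: intercept b0 and predictor weight b1
b0, b1 = -1.5, 0.8
x = 2.0                   #a single predictor value
p = sigmoid(b0 + b1 * x)  #predicted probability of the positive class
print(p)                  #~0.52, so classified as 1 with a 0.5 threshold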

Types Of Logistic Regression:

  • Binary logistic regression – the dependent variable has only two possible categories. Example: win or loss.
  • Multinomial logistic regression – the dependent variable has three or more nominal (unordered) categories. Example: eye, hair, or skin colour (a short sketch follows the code below).
  • Ordinal logistic regression – the dependent variable has three or more ordinal categories, meaning the categories have an order. Example: user ratings (1-10).
#imports (assuming scikit-learn)
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report, confusion_matrix, accuracy_score

#create Logistic Regression model
logmodel = LogisticRegression()
logmodel.fit(X_train, y_train)

#predict on test data
predictions = logmodel.predict(X_test)

#classification report, confusion matrix, accuracy

print(classification_report(y_test, predictions))
print(confusion_matrix(y_test, predictions))
print(accuracy_score(y_test, predictions))
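
For the multinomial case from the list above, the same scikit-learn class can fit three or more classes; a minimal sketch, assuming X_train and y_train now hold a target with three or more categories (recent scikit-learn versions pick the multinomial formulation automatically for solvers that support it):

from sklearn.linear_model import LogisticRegression

#multinomial (softmax) logistic regression for a 3+ class target
multi_logmodel = LogisticRegression(solver="lbfgs", max_iter=1000)
multi_logmodel.fit(X_train, y_train)
print(multi_logmodel.predict(X_test))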

Full Code: Click Here

K-NN

K-NN, short for K-Nearest Neighbors, is a lazy, non-parametric algorithm. Non-parametric means it makes no fixed assumption about the form of the decision boundary; when a new data point arrives, it is classified by looking at the classes of its nearest neighbors. There is no (or only a minimal) training phase, which makes training very fast; instead, the training data is used directly during the testing (prediction) phase.
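
The "lazy" behaviour is easiest to see in a from-scratch sketch: nothing is learned up front, and all of the distance work happens at prediction time. The toy data here is invented for illustration:

import numpy as np

def knn_predict(X_train, y_train, x_new, k=3):
    #"training" is just storing the data; the work happens at query time
    distances = np.linalg.norm(X_train - x_new, axis=1)  #distance to every stored point
    nearest = np.argsort(distances)[:k]                  #indices of the k closest points
    labels, counts = np.unique(y_train[nearest], return_counts=True)
    return labels[np.argmax(counts)]                     #majority vote

X = np.array([[1, 1], [1, 2], [5, 5], [6, 5]])
y = np.array([0, 0, 1, 1])
print(knn_predict(X, y, np.array([5, 6])))  #-> 1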

A minimal KNN classifier with scikit-learn:

from sklearn.neighbors import KNeighborsClassifier

#make model with 7 neighbors
knn = KNeighborsClassifier(n_neighbors=7)
knn.fit(X_train, y_train)

#see accuracy score on the test data
print(knn.score(X_test, y_test))

Full Code: Click Here

SVM

A Support Vector Machine (SVM) classifier plots each data point in n-dimensional space, where n is the number of features and each coordinate is the value of one feature. Classification is then performed by finding the hyperplane that best separates the classes. SVM can handle both categorical and multiple continuous variables, but categorical variables must first be converted to numeric form by creating dummy variables, because the mathematical computations behind the tuning parameters (kernel, regularization, gamma, and margin) require numeric inputs.

Regularization: The regularization parameter tells the SVM optimization how much to avoid misclassifying each training observation. A large value chooses a smaller-margin hyperplane if that classifies more training points correctly; a small value favors a larger margin even at the cost of some misclassifications.


Margin: The margin is the gap between the separating line (hyperplane) and the closest data points of each class. The larger the margin width, the better the classification generally is.


Kernel SVM

Kernel: A kernel is a transformation applied to the input variables that maps data which is not separable in its original space into a space where it becomes separable. In non-linear separation problems, this helps build a more accurate classifier.

Commonly used kernels include: linear, polynomial, Gaussian / radial basis function (RBF), sigmoid, Laplace, hyperbolic tangent, and ANOVA.

Gamma: Gamma is the kernel coefficient for the non-linear kernels, such as RBF, polynomial, and sigmoid. It controls how far the influence of a single training example reaches. Higher values of gamma make the model fit the training data more closely, producing a more complex model that is prone to overfitting.
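
The three tuning parameters above map directly onto arguments of scikit-learn's SVC; a minimal sketch (the specific values are arbitrary illustrations, not recommendations):

from sklearn.svm import SVC

#C = regularization strength, kernel = transformation, gamma = kernel coefficient
clf_rbf = SVC(kernel="rbf", C=1.0, gamma=0.1)
clf_rbf.fit(X_train, y_train)
print(clf_rbf.score(X_test, y_test))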

from sklearn import svm

#create a classifier with a linear kernel
cls = svm.SVC(kernel="linear")

#train the model
cls.fit(X_train, y_train)

#predict the response
pred = cls.predict(X_test)

Full Code: Click Here

Naive Bayes

Naive Bayes is a classification algorithm based on Bayes' Theorem, with the assumption that all the independent variables (features) are independent of one another and unrelated to the other covariates (predictors). That strong independence assumption is why Naive Bayes is called "naive".

Bayes' Theorem finds the probability of an event occurring given the probability of another event that has already occurred. Mathematically it is given as P(A|B) = [P(B|A)P(A)]/P(B), where A and B are events. P(A|B), called the posterior probability, is the probability of event A (the response) given that B (the independent evidence) has already occurred. P(B|A) is the likelihood of the training data, i.e., the probability of event B given that A has already occurred. P(A) is the prior probability of the response variable, and P(B) is the probability of the training data, or evidence.
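
A small worked example of the formula, with probabilities invented purely for illustration: if P(A) = 0.3, P(B|A) = 0.8 and P(B) = 0.5, then the posterior is P(A|B) = (0.8 × 0.3) / 0.5 = 0.48. In code:

#hypothetical probabilities, chosen only to illustrate Bayes' Theorem
p_a = 0.3          #prior P(A)
p_b_given_a = 0.8  #likelihood P(B|A)
p_b = 0.5          #evidence P(B)

p_a_given_b = (p_b_given_a * p_a) / p_b  #posterior P(A|B)
print(p_a_given_b)  #0.48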

from sklearn.naive_bayes import GaussianNB, MultinomialNB

#Gaussian naive Bayes
ignb = GaussianNB()
pred_gnb = ignb.fit(Xtrain, ytrain).predict(Xtest)

#multinomial naive Bayes
imnb = MultinomialNB()
pred_mnb = imnb.fit(Xtrain, ytrain).predict(Xtest)

Full Code: Click Here

Decision Tree Classification

Decision Tree is a supervised machine learning algorithm. A decision tree has a tree-like structure made up of a root node, internal nodes, and leaf nodes. The tree starts at the root node, the first node of the decision tree; the dataset is split based on the root node, and further nodes are selected to split the already-split data. This splitting process continues until we reach the leaf nodes, which are nothing but the classification labels.

Decision trees can capture non-linear patterns and can be visualized and interpreted (see the sketch below) without any distributional assumptions. They are also useful in feature engineering. However, a tree becomes biased when it is trained on an imbalanced dataset.
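
The visualization point is easy to try with scikit-learn's built-in plotting; a minimal sketch, assuming a fitted classifier clf as in the code at the end of this section:

import matplotlib.pyplot as plt
from sklearn.tree import plot_tree

#draw the fitted tree: each box shows the split rule, impurity, and class counts
plot_tree(clf, filled=True)
plt.show()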

Information Gain: Root nodes and internal nodes are selected using a statistical measure called gain. Gain is the reduction in uncertainty achieved by a split: the information gain for a column is calculated by subtracting the entropy of the dataset after splitting on that variable from the entropy of the entire dataset.

Gain(S, A) = Entropy(S) − Σv (|Sv| / |S|) × Entropy(Sv), where the sum runs over the values v of attribute A and Sv is the subset of S for which A takes the value v.

Gini index: Gini is a metric used for deciding how to split a decision tree. It starts from the probability that two items selected from a population at random belong to the same class; for a pure population this probability is 1. The Gini measurement is the probability of a random sample being classified correctly if you randomly pick a label according to the distribution of labels in the branch, and the Gini impurity used for splitting is one minus this value, so it is 0 for a pure node.

Entropy: Entropy is a probabilistic measure of uncertainty or impurity; it quantifies the lack of information in the data being split. When a node is homogeneous, its entropy is 0, which is the desirable outcome for a data scientist.
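
Both impurity measures are short to compute by hand; a minimal sketch for the labels in one node (the toy labels are invented for illustration):

import numpy as np

def entropy(labels):
    #Shannon entropy: 0 for a pure node, higher for mixed nodes
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def gini(labels):
    #Gini impurity: 0 for a pure node, up to 0.5 for a 50/50 binary split
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

node = ["yes", "yes", "yes", "no"]
print(entropy(node))  #~0.811
print(gini(node))     #0.375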

from sklearn.tree import DecisionTreeClassifier

# Create Decision Tree classifier object
clf = DecisionTreeClassifier()

# Train Decision Tree classifier
clf = clf.fit(X_train, y_train)

# Predict the response for the test dataset
y_pred = clf.predict(X_test)

Full Code: Click Here

Random Forest Classification

Random Forest is a supervised machine learning algorithm. It builds multiple decision trees, each trained on a randomly selected sample of the data, gets a prediction from every tree, and combines the predictions by majority voting. A random forest classifier can also report which features contribute most (see the sketch after the code below) and can handle missing values. Because every tree must vote, generating predictions is slow and time-consuming, but the algorithm is highly accurate. The technique of training each tree on a random bootstrap sample is known as Bootstrap Aggregation, or bagging.

from sklearn.ensemble import RandomForestClassifier

#Create a Random Forest classifier with 100 trees
clf = RandomForestClassifier(n_estimators=100)

#Train the model using the training data
clf.fit(X_train, y_train)

#Predict on the test data
y_pred = clf.predict(X_test)
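
The feature-contribution point above comes from the fitted forest's feature_importances_ attribute; a minimal sketch, where feature_names is a hypothetical list of the column names of X_train:

#feature_names: hypothetical list of column names for X_train
for name, score in zip(feature_names, clf.feature_importances_):
    print(name, round(score, 3))  #each score is the feature's share of the splits (sums to 1)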

Full Code: Click Here

Conclusion

Classification algorithms work on discrete outcomes: they learn patterns in the data and use them to assign classes. Examples include face detection, speech recognition, document classification, and handwriting recognition.
