# Bayes' theorem: questions and answers

I am finding it hard to understand the process of Naive Bayes, and I was wondering if someone could explain it with a simple step-by-step process in English.



I understand that it compares counts of occurrences to form a probability, but I have no idea how the training data relates to the actual dataset. Please give me an explanation of what role the training set plays. It's quite easy if you understand Bayes' Theorem. NOTE: The accepted answer below is not a traditional example of Naïve Bayes.
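Since the discussion leans on Bayes' Theorem, here is the formula itself as a few lines of Python; the numbers in the example are invented purely for illustration:

```python
def bayes(p_b_given_a, p_a, p_b):
    """Bayes' Theorem: P(A|B) = P(B|A) * P(A) / P(B)."""
    return p_b_given_a * p_a / p_b

# Hypothetical example: a test detects a condition 90% of the time
# (P(B|A) = 0.9), the condition occurs in 10% of cases (P(A) = 0.1),
# and the test comes back positive 27% of the time overall (P(B) = 0.27).
posterior = bayes(0.9, 0.1, 0.27)  # P(A|B) = 0.09 / 0.27, i.e. one third
```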

It’s mostly a k-Nearest Neighbor implementation. How you calculate the probabilities is up to you. Naive Bayes computes them as the prior multiplied by the likelihood, which is what Yavar has shown in his answer. How you arrive at those probabilities is not important here. The answer is correct and I see no problems with it.
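The "prior multiplied by likelihood" scoring described above can be sketched in a few lines; the spam-filter numbers below are invented for illustration:

```python
def classify(priors, likelihoods):
    """Pick the class with the highest prior * likelihood score.

    priors:      {class: P(class)}
    likelihoods: {class: P(evidence | class)}
    """
    scores = {c: priors[c] * likelihoods[c] for c in priors}
    return max(scores, key=scores.get)

# Hypothetical spam filter: 30% of mail is spam, and the word "prize"
# appears in 10% of spam but only 1% of legitimate mail.
label = classify({"spam": 0.3, "ham": 0.7},
                 {"spam": 0.1, "ham": 0.01})  # spam: 0.03 vs ham: 0.007
```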

Your question, as I understand it, is divided into two parts. In general, all machine learning algorithms need to be trained for supervised learning tasks like classification and prediction. This is what most machine learning techniques, such as Neural Networks, SVMs, and Bayesian methods, do. Remember, your basic objective is that your system learns to classify new inputs it has never seen before in either the dev set or the test set. The test set typically has the same format as the training set. Now I come to your other question about Naive Bayes. Our task is to classify new cases as they arrive, i.e., to decide which class label each new object belongs to.
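The role of the training set can be made concrete with a minimal categorical Naive Bayes sketch: the model only ever "learns" counts from labeled training examples, and those counts are what it uses to score unseen inputs. The weather data below is invented for illustration:

```python
from collections import Counter, defaultdict

def train(examples):
    """examples: list of (features_tuple, label). Learns counts only."""
    class_counts = Counter(label for _, label in examples)
    feat_counts = defaultdict(Counter)  # (label, position) -> value counts
    for feats, label in examples:
        for i, value in enumerate(feats):
            feat_counts[(label, i)][value] += 1
    return class_counts, feat_counts

def predict(model, feats):
    """Score each class by prior * likelihood and return the best."""
    class_counts, feat_counts = model
    total = sum(class_counts.values())
    best, best_score = None, -1.0
    for label, count in class_counts.items():
        score = count / total  # prior, estimated from training counts
        for i, value in enumerate(feats):
            # Laplace smoothing; the +2 assumes two possible values per feature
            score *= (feat_counts[(label, i)][value] + 1) / (count + 2)
        if score > best_score:
            best, best_score = label, score
    return best

# Invented training data: two sunny "play" days, one rainy "stay" day.
model = train([(("sunny",), "play"), (("sunny",), "play"), (("rainy",), "stay")])
decision = predict(model, ("sunny",))  # an input the model scores from its counts
```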

In the Bayesian analysis, this belief is known as the prior probability. The more objects of a particular color there are near X, the more likely it is that the new case belongs to that color. Then we calculate the number of points in the circle belonging to each class label. In the Bayesian analysis, the final classification is produced by combining both sources of information, i.e., the prior and the likelihood. The answer was proceeding nicely until the likelihood came up. Yavar has used K-nearest neighbours for calculating the likelihood. Is that a valid approach? If it is, what are some other methods to calculate the likelihood?
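Combining the two sources of information can be shown numerically. The counts below are illustrative assumptions, not taken from the original answer: 60 objects in total (40 green, 20 red), with a circle drawn around X containing 1 green and 3 red neighbours.

```python
# Prior: overall class frequencies (40 green and 20 red out of 60 objects).
prior = {"green": 40 / 60, "red": 20 / 60}

# Likelihood: fraction of each class falling inside the circle around X
# (1 of 40 green objects, 3 of 20 red objects).
likelihood = {"green": 1 / 40, "red": 3 / 20}

# Posterior (up to a shared normalizing constant): prior * likelihood.
posterior = {c: prior[c] * likelihood[c] for c in prior}
# green scores ~0.0167, red scores 0.05, so X is classified as red even
# though green is the more common class overall.
```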

You used a circle as an example of likelihood. I read about Gaussian Naive Bayes, where the likelihood is Gaussian. How can that be explained? Actually, the answer with kNN is correct. If you don’t know the distribution, and thus the probability density of that distribution, you have to find it somehow. This can be done via kNN or kernels. I think there are some things missing.
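In Gaussian Naive Bayes, instead of counting neighbours inside a circle, the likelihood of a continuous feature is read off a normal density fitted per class. A minimal sketch, with class means and standard deviations invented for illustration:

```python
import math

def gaussian_pdf(x, mu, sigma):
    """Density of N(mu, sigma^2) at x, used as the likelihood P(x | class)."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

# Hypothetical: a feature has mean 170 and std 10 under class A,
# and mean 150 and std 10 under class B. For an observation of 165:
like_a = gaussian_pdf(165, 170, 10)
like_b = gaussian_pdf(165, 150, 10)
# like_a > like_b, so the observation is more consistent with class A.
```

In practice the per-class mean and standard deviation are estimated from the training data, which is exactly the "training" step for Gaussian Naive Bayes.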

I realize that this is an old question with an established answer. Conceptually, k-NN uses the idea of “nearness” to classify new entities. In k-NN, “nearness” is modeled with ideas such as Euclidean distance or cosine distance. Since the question is about Naive Bayes, here’s how I’d describe the ideas and steps to someone.
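The "nearness" idea can be sketched with Euclidean distance and a majority vote among the k closest training points; the toy data is invented for illustration:

```python
import math
from collections import Counter

def euclidean(a, b):
    """Straight-line distance between two points given as coordinate tuples."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def knn_classify(training, query, k=3):
    """training: list of (point, label). Vote among the k nearest points."""
    nearest = sorted(training, key=lambda item: euclidean(item[0], query))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

# Toy data: a cluster of "a" points near the origin, "b" points far away.
training = [((0, 0), "a"), ((0, 1), "a"), ((0, 0.5), "a"),
            ((5, 5), "b"), ((5, 6), "b")]
label = knn_classify(training, (0, 0.2), k=3)  # three nearest are all "a"
```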