Independent and identically distributed data, conditional independence, and naive Bayes
I'm reading about the naive Bayes classifier, and I note that we make a *conditional independence* assumption. But isn't independence the general assumption that is always made when dealing with machine learning algorithms?
Suppose we have a supervised binary classification problem, with a dataset $\{(x_n, t_n)\}_{n=1}^N$ where $x_n \in \mathbb{R}^D$ and $t_n \in \{0, 1\}$.
I've read everywhere that we always make the assumption that the data are i.i.d. (independent and with the same probability distribution; this would mean that the joint distribution factorizes as $p((x_1,t_1),\dots,(x_N,t_N)) = \prod_{n=1}^N p(x_n, t_n)$, right?). At this point it is reasonable to model the binary targets with a Bernoulli distribution. Let the likelihood function be $p(\mathcal{D} \mid \theta) = \prod_{n=1}^N p(x_n, t_n \mid \theta)$; then, for a new input $x$, we want to find $t^* = \arg\max_t \, p(t \mid x)$,
where $p(t \mid x) = \dfrac{p(x \mid t)\, p(t)}{p(x)}$.
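To make concrete what the i.i.d. assumption alone buys us, here is a minimal numeric sketch (with hypothetical toy targets) showing that, because the joint likelihood factorizes over samples, the log-likelihood of the Bernoulli targets is just a sum of per-sample terms:

```python
import numpy as np

# Hypothetical binary targets; under the i.i.d. assumption the joint
# likelihood factorizes over samples, so the log-likelihood is a sum.
t = np.array([1, 0, 1, 1, 0, 1])

mu = t.mean()  # Bernoulli maximum-likelihood estimate for p(t = 1)

# Joint log-likelihood = sum of per-sample Bernoulli log-probabilities
log_lik = np.sum(t * np.log(mu) + (1 - t) * np.log(1 - mu))
print(mu, log_lik)
```

Note this factorization is *across samples* (sample $n$ is independent of sample $m$), which is a different statement from the naive Bayes one below.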
Here we should use a conditional independence hypothesis, $p(x \mid t) = \prod_{d=1}^D p(x_d \mid t)$, in order to go on. So do we use the naive Bayes hypothesis in every situation? I'm having trouble distinguishing the two assumptions.
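For contrast, here is a sketch of a Bernoulli naive Bayes classifier on hypothetical toy data, where the conditional independence assumption factorizes the class-conditional likelihood *across features* of a single sample, $p(x \mid t) = \prod_d p(x_d \mid t)$:

```python
import numpy as np

# Toy binary features (hypothetical data, only to illustrate the
# factorization p(x|t) = prod_d p(x_d|t)); rows are samples.
X = np.array([[1, 0, 1],
              [1, 1, 0],
              [0, 1, 1],
              [0, 0, 1]])
t = np.array([1, 1, 0, 0])

classes = np.unique(t)
priors = np.array([np.mean(t == c) for c in classes])          # p(t)
# Per-feature Bernoulli parameters p(x_d = 1 | t), with Laplace smoothing
theta = np.array([(X[t == c].sum(axis=0) + 1) / ((t == c).sum() + 2)
                  for c in classes])

def predict(x):
    # Conditional independence: log p(x|t) = sum over d of log p(x_d|t)
    log_post = np.log(priors) + (x * np.log(theta)
                                 + (1 - x) * np.log(1 - theta)).sum(axis=1)
    return classes[np.argmax(log_post)]

print(predict(np.array([1, 0, 0])))
```

So the i.i.d. assumption factorizes the likelihood over the $N$ samples, while the naive Bayes assumption additionally factorizes $p(x \mid t)$ over the $D$ features within each sample.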