Sunday, June 17, 2007

Generative Model and Discriminative Model

==
Ref: MLWiki
A generative model is one which explicitly states how the observations are assumed to have been generated. Hence, it defines the joint probability of the data and latent variables of interest.



==
See also: Generative model from Wikipedia.

==
ref: A simple comparison of
generative models (model the likelihood and the prior) <-- e.g. Naive Bayes (NB)
and discriminative models (model the posterior directly) <-- e.g. SVM
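
To spell that comparison out (my own note, not from the reference): a generative model fits the joint distribution and turns it into a classifier with Bayes' rule, while a discriminative model fits the conditional, or, like an SVM, just the decision boundary, directly.

Generative:      p(x, y) = p(y)\,p(x \mid y), \quad \hat{y} = \arg\max_y \, p(y)\,p(x \mid y)
Discriminative:  p(y \mid x) modeled directly (an SVM learns only the decision boundary)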

==
ref: Classifying Semantic Relations in Bioscience Texts.

Generative models learn the prior probability of the class and the probability of the features given the class; they are the natural choice in cases with hidden variables (partially observed or missing data). Since labeled data is expensive to collect, these models may be useful when no labels are available. However, in this paper we test the generative models on fully observed data and show that, although not as accurate as the discriminative model, their performance is promising enough to encourage their use for the case of partially observed data.

Discriminative models learn the probability of the class given the features. When we have fully observed data and we just need to learn the mapping from features to classes (classification), a discriminative approach may be more appropriate.
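
To make the contrast concrete, here is a minimal sketch of my own (a toy example, not from the paper, and it assumes scikit-learn and numpy are installed) that trains a generative classifier (Naive Bayes, which estimates the class prior and the per-class feature likelihoods) and a discriminative classifier (logistic regression, which estimates the class posterior directly) on the same fully observed synthetic data:

# Toy comparison of a generative vs. a discriminative classifier.
# Synthetic data; results will vary with the random seed.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB            # generative: p(y) and p(x|y)
from sklearn.linear_model import LogisticRegression   # discriminative: p(y|x)

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

gen = GaussianNB().fit(X_train, y_train)
disc = LogisticRegression(max_iter=1000).fit(X_train, y_train)

print("Naive Bayes accuracy:        ", gen.score(X_test, y_test))
print("Logistic regression accuracy:", disc.score(X_test, y_test))

On fully observed data like this, the discriminative model typically edges out the generative one, which matches the paper's observation; the generative model, on the other hand, can still be trained when some labels are missing (e.g. with EM).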

It must be pointed out that the neural network (discriminative model) is much slower than the graphical models (HMM-like generative models), and requires a great deal of memory.

1 comment:

Zhihui Jin said...

Just some comments for discussion.

The key point of a generative model (GM) is that it assumes a causal mechanism in the generation of the sample data.
A causal mechanism means that things that happened previously (or decisions made before) influence the future steps,
so a GM is usually a history-based model (e.g. NB and HMM). This causal mechanism is usually expressed as a derivation process that generates the sample data.
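
To illustrate the "derivation process" view, here is a sketch of my own (with made-up probabilities) of how an HMM-like GM generates a sample by making one decision per step, each decision conditioned on the previous state, i.e. the history:

import random

# Made-up HMM parameters for illustration: two hidden states, three symbols.
transitions = {"START": {"A": 0.6, "B": 0.4},
               "A":     {"A": 0.7, "B": 0.3},
               "B":     {"A": 0.4, "B": 0.6}}
emissions   = {"A": {"x": 0.5, "y": 0.4, "z": 0.1},
               "B": {"x": 0.1, "y": 0.3, "z": 0.6}}

def sample(dist):
    # Draw one outcome from a {outcome: probability} dict.
    return random.choices(list(dist), weights=list(dist.values()))[0]

def generate(length=5):
    state, output = "START", []
    for _ in range(length):
        state = sample(transitions[state])       # decision 1: next hidden state given history
        output.append(sample(emissions[state]))  # decision 2: emitted symbol given that state
    return output

print(generate())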

In a GM, the training process distributes probability over the decisions in the derivation. To keep the statistical model tractable, an independence assumption over the derivation history is usually indispensable. This is one of the serious shortcomings of GMs, since the independence assumption may be incorrect. Another drawback of GMs is that it is very difficult to generalize a GM to integrate new features into the model, since the derivation process is usually constrained.
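
As a toy illustration of my own of what "distributing probability on the decisions" means, take a Naive Bayes text classifier: training is just relative-frequency counting of each decision (pick a class, then emit each word given the class), and the independence assumption shows up as the product used at scoring time. The tiny labelled corpus below is invented.

from collections import Counter, defaultdict

data = [("protein binds receptor".split(), "interaction"),
        ("gene has no effect".split(), "no-interaction"),
        ("protein activates gene".split(), "interaction")]

class_counts = Counter()
word_counts = defaultdict(Counter)
for words, label in data:
    class_counts[label] += 1           # decision: which class generated the example
    for w in words:
        word_counts[label][w] += 1     # decision: emit each word given the class

def score(words, label):
    # Joint probability under the independence assumption:
    # p(label) * product over words of p(word | label), with add-one smoothing.
    vocab = {w for c in word_counts.values() for w in c}
    p = class_counts[label] / sum(class_counts.values())
    total = sum(word_counts[label].values())
    for w in words:
        p *= (word_counts[label][w] + 1) / (total + len(vocab))
    return p

test = "protein binds gene".split()
print(max(class_counts, key=lambda c: score(test, c)))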

A discriminative model (DM), in contrast, never cares about the hidden generation mechanism; it pays attention to the surface features of the sample data, and the probability is calculated from feature weights. So any feature can be added to the model, and no independence assumption is necessary, which gives the model great generalization ability. But since the features can be quite arbitrary, the parameter space can be very big, and training will be slow compared to a GM.
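
A matching sketch for the DM side (again my own toy, using a simple perceptron rather than any particular model from the post): arbitrary, overlapping surface features are just mapped to weights, and no independence assumption is needed.

from collections import defaultdict

def features(words):
    # Arbitrary, overlapping features: unigrams, bigrams, and a length feature.
    feats = {"bias": 1.0, "length>3": float(len(words) > 3)}
    for w in words:
        feats["word=" + w] = 1.0
    for a, b in zip(words, words[1:]):
        feats["bigram=" + a + "_" + b] = 1.0
    return feats

# Tiny invented training set: label +1 vs. -1.
data = [("protein binds receptor".split(), 1),
        ("gene has no effect".split(), -1),
        ("protein activates gene".split(), 1),
        ("no significant change observed".split(), -1)]

weights = defaultdict(float)
for _ in range(10):                      # a few perceptron passes over the data
    for words, label in data:
        feats = features(words)
        score = sum(weights[f] * v for f, v in feats.items())
        if label * score <= 0:           # misclassified: adjust the feature weights
            for f, v in feats.items():
                weights[f] += label * v

test = "protein binds gene".split()
print(sum(weights[f] * v for f, v in features(test).items()) > 0)

Every feature introduces one more weight to learn, so the parameter space grows with the feature set, which is the training-cost point made above.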