NlpTools

Natural language processing in PHP

Maximum entropy model for sentiment detection
Jun 20th, 2013

In this second post of the sentiment detection series we will train a maximum entropy model to perform the exact same classification on the exact same data, and compare both the procedure and the results.

Getting the optimizer

NlpTools will later support more external optimizers, such as Megam or Mallet, but for now the only supported optimizer is the one developed specifically for NlpTools: a parallel batch gradient descent implementation written in Go.

You can download binaries for your architecture or build it from source.

Training a Maximum Entropy model

There are only three changes required (to our previous file) to train and evaluate a Maxent model instead of a Naive Bayes model.

Feature Factory

We need to create a different type of feature factory. In Maxent, features should target specific classes, so the class name must be prepended to each feature. In addition, we should model the presence of features rather than their frequency. To achieve this we will use the FunctionFeatures feature factory, as shown below.

    use NlpTools\FeatureFactories\FunctionFeatures;

    $ff = new FunctionFeatures(
        array(
            function ($c, $d) {
                return array_map(
                    function ($t) use ($c) {
                        return "$c ^ $t"; // target the feature to a specific class
                    },
                    $d->getDocumentData()
                );
            }
        )
    );
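To see what this factory produces, here is a small usage sketch. The tokens in the document below are made up for illustration; TokensDocument and getFeatureArray are the NlpTools document class and feature factory method used throughout the series.

    use NlpTools\Documents\TokensDocument;

    // a tiny hand-tokenized document, purely for illustration
    $d = new TokensDocument(array("not", "a", "good", "movie"));

    // the features generated when targeting the "neg" class
    print_r($ff->getFeatureArray("neg", $d));
    // expected to print something like:
    // Array ( [0] => neg ^ not [1] => neg ^ a [2] => neg ^ good [3] => neg ^ movie )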

Model instantiation

We should change the model instantiation code to create and train a Maxent model instead.

    use NlpTools\Models\Maxent;
    use NlpTools\Optimizers\ExternalMaxentOptimizer;

    // create an empty Maxent model
    $model = new Maxent(array());
    $model->train(
        $ff,   // the feature factory
        $tset, // the documents
        new ExternalMaxentOptimizer(
            'gradient-descent' // the path to the external optimizer
        )
    );
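As a reminder, $tset above is the training set built in the previous post. A minimal sketch of how such a set is assembled follows; the $reviews variable and the whitespace tokenizer are assumptions for illustration, not the exact code of the previous post.

    use NlpTools\Documents\TrainingSet;
    use NlpTools\Documents\TokensDocument;
    use NlpTools\Tokenizers\WhitespaceTokenizer;

    $tok = new WhitespaceTokenizer();
    $tset = new TrainingSet();

    // $reviews is assumed to be an array of array($class, $rawText) pairs
    foreach ($reviews as $r) {
        list($class, $text) = $r;
        $tset->addDocument(
            $class, // "pos" or "neg"
            new TokensDocument($tok->tokenize($text))
        );
    }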

Classifier

Finally, we need to change the classifier from MultinomialNaiveBayes to FeatureBasedLinearClassifier.

    use NlpTools\Classifiers\FeatureBasedLinearClassifier;
    $cls = new FeatureBasedLinearClassifier($ff, $model);

We then run the PHP script with the exact same parameters as in the previous post. Training will take noticeably longer this time.
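For completeness, the evaluation loop is unchanged from the previous post. A sketch of it follows; $testDocs is an assumed array of array($trueClass, $document) pairs holding the held-out reviews.

    $correct = 0;
    foreach ($testDocs as $pair) {
        list($trueClass, $d) = $pair;
        // classify() returns the best scoring of the provided classes for this document
        if ($cls->classify(array("pos", "neg"), $d) === $trueClass) {
            $correct++;
        }
    }
    printf("Accuracy: %.1f%%\n", 100 * $correct / count($testDocs));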

Results

Maximum entropy models usually perform a bit better than Naive Bayes (roughly on par with SVMs), although they are much harder to train. Our model confirms the rule by achieving 86% accuracy on Pang and Lee's dataset (no features other than the data).
