PyText is a deep-learning-based NLP modeling framework built on PyTorch. PyText addresses the often-conflicting requirements of enabling rapid experimentation and serving models at scale. It achieves this by providing simple and extensible interfaces and abstractions for model components, and by using PyTorch’s capability to export models for inference via the optimized Caffe2 execution engine. We use PyText at Facebook to iterate quickly on new modeling ideas and then seamlessly ship them at scale.
Core PyText features:
- Production-ready models for various NLP/NLU tasks:
  - Text classifiers
  - Sequence taggers
  - Joint intent-slot models
  - Contextual intent-slot models
- Distributed-training support built on the new C10d backend in PyTorch 1.0
- Extensible components that allow easy creation of new models and tasks
- Reference implementation and a pretrained model for the paper Gupta et al. (2018): Semantic Parsing for Task Oriented Dialog using Hierarchical Representations
- Ensemble training support
To get started on a Cloud VM, check out our guide.
We recommend using a virtualenv:
$ python3 -m virtualenv venv
$ source venv/bin/activate
(venv) $ pip install pytext-nlp
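To check that the install worked, you can ask the CLI for its usage summary; it should list the subcommands used below, such as train, export, and predict:
(venv) $ pytext --help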
Detailed instructions can be found in our Documentation.
For this first example, we'll train a CNN-based text classifier that labels text utterances, using the examples in tests/data/train_data_tiny.tsv.
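The training data is a tab-separated file of example utterances and their class labels (plus, for tagging tasks, slot annotations); the exact column layout is defined by the data handler section of the JSON config. A purely illustrative row for a classification task might look like this (not copied from the actual file):
alarm/set_alarm	set an alarm for 7:30 am tomorrow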
(venv) $ pytext train < demo/configs/docnn.json
By default, the trained model snapshot is saved to /tmp/model.pt.
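You can verify that the snapshot was written; if you want it saved elsewhere, the output path can usually be overridden in the JSON config (the field is typically named save_snapshot_path, though the exact key may differ by version):
(venv) $ ls -lh /tmp/model.pt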
Now you can export your model as a Caffe2 net:
(venv) $ pytext export < config.json
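Because no output path is given on the command line, where the Caffe2 model file ends up is determined by your config (or a built-in default); to see what options the export command itself accepts, check its help text:
(venv) $ pytext export --help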
You can use the exported Caffe2 model to predict the class of raw utterances like this:
(venv) $ pytext --config-file config.json predict <<< '{"raw_text": "create an alarm for 1:30 pm"}'
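The output should be the model's score for each document class; the highest-scoring class is the predicted label for the utterance.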
PyText is BSD-licensed, as found in the LICENSE file.