A basic CoreNLP pipeline served with Finch. It exposes the following endpoints:
GET /jobs
POST /jobs
DELETE /jobs/<id>
DELETE /jobs
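The job semantics behind these routes can be sketched as a simple in-memory store keyed by a generated job id. This is an illustrative sketch, not the actual service code; the `Job` and `JobStore` names are hypothetical:

```scala
import java.util.UUID
import scala.collection.concurrent.TrieMap

// Hypothetical in-memory store mirroring the endpoint semantics above.
final case class Job(id: String, text: String)

final class JobStore {
  private val jobs = TrieMap.empty[String, Job]

  def list: List[Job] = jobs.values.toList // GET /jobs

  def create(text: String): Job = { // POST /jobs
    val job = Job(UUID.randomUUID().toString, text)
    jobs.put(job.id, job)
    job
  }

  def delete(id: String): Boolean = // DELETE /jobs/<id>
    jobs.remove(id).isDefined

  def deleteAll(): Unit = jobs.clear() // DELETE /jobs
}
```

In the real service each created job would also carry the NLP results; here only the id and input text are kept to show the lifecycle.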
Upon receiving a job through POST /jobs, the service attempts to tokenize, part-of-speech tag, and parse the input text. A successful response includes the list of tokens and a parse tree. For example, this request using HTTPie:
http POST :8081/jobs text="It was all a dream."
would return the following:
{
  "id": "6d34b552-c075-489f-8c6e-c4f5f58b18a8",
  "parseTrees": [
    "(ROOT (S (NP (PRP It)) (VP (VBD was) (NP (PDT all) (DT a) (NN dream))) (. .)))"
  ],
  "text": "It was all a dream.",
  "tokens": [
    { "partOfSpeech": "PRP", "token": "It" },
    { "partOfSpeech": "VBD", "token": "was" },
    { "partOfSpeech": "PDT", "token": "all" },
    { "partOfSpeech": "DT", "token": "a" },
    { "partOfSpeech": "NN", "token": "dream" },
    { "partOfSpeech": ".", "token": "." }
  ]
}
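On the Scala side, this response shape can be modeled with case classes like the following. This is a sketch: the field names mirror the JSON keys above, but the type names are illustrative, not the service's actual classes:

```scala
// Illustrative case classes mirroring the JSON response shape.
final case class Token(token: String, partOfSpeech: String)

final case class JobResponse(
  id: String,
  text: String,
  tokens: List[Token],
  parseTrees: List[String]
)

// The example response above, as a value.
val example = JobResponse(
  id = "6d34b552-c075-489f-8c6e-c4f5f58b18a8",
  text = "It was all a dream.",
  tokens = List(
    Token("It", "PRP"),
    Token("was", "VBD"),
    Token("all", "PDT"),
    Token("a", "DT"),
    Token("dream", "NN"),
    Token(".", ".")
  ),
  parseTrees = List(
    "(ROOT (S (NP (PRP It)) (VP (VBD was) (NP (PDT all) (DT a) (NN dream))) (. .)))"
  )
)
```

With a JSON library such as circe, deriving codecs for these case classes would produce exactly the payload shown above.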
Thanks to the SBT Native Packager, we can build this code into a Docker image:
sbt docker:publishLocal
To run the service inside a container:
docker run --rm -p8081:8081 finch-nlp-server:0.1