NLP-Overview is an up-to-date
overview of deep learning techniques applied to NLP, including theory,
implementations, applications, and state-of-the-art results. A
great introduction to deep NLP for researchers.
NLP-Progress tracks the
progress in Natural Language Processing, including the datasets and the
current state-of-the-art for the most common NLP tasks.
The Berkeley NLP
Group - Notable contributions include a tool to reconstruct long-dead
languages, referenced here, which takes corpora from 637 languages
currently spoken in Asia and the Pacific and reconstructs their common
ancestor.
NLP research
group, Columbia University - Responsible for creating BOLT
(interactive error handling for speech translation systems) and an
unnamed project to characterize laughter in dialogue.
Deep NLP
Course by Yandex Data School, covering important ideas from text
embeddings to machine translation, including sequence modeling,
language models, and more.
fast.ai
Code-First Intro to Natural Language Processing - Covers a
blend of traditional NLP topics (including regex, SVD, naive Bayes, and
tokenization) and recent neural network approaches (including RNNs,
seq2seq, GRUs, and the Transformer), as well as urgent
ethical issues such as bias and disinformation. Find the Jupyter
Notebooks here
Applied
Natural Language Processing - Lecture series from IIT Madras covering
everything from the basics to autoencoders. The GitHub
notebooks for this course are also available here
TextAttack -
Adversarial attacks, adversarial training, and data augmentation in
NLP
TextBlob - Provides
a consistent API for diving into common natural language processing
(NLP) tasks. Stands on the giant shoulders of the Natural Language Toolkit (NLTK) and Pattern, and plays nicely
with both :+1:
spaCy - Industrial
strength NLP with Python and Cython :+1:
Speedster
- Automatically applies SOTA optimization techniques to achieve the
maximum inference speed-up on your hardware
gensim -
Python library to conduct unsupervised semantic modelling from plain
text :+1:
scattertext -
Python library to produce d3 visualizations of how language differs
between corpora
GluonNLP - A deep
learning toolkit for NLP, built on MXNet/Gluon, for research prototyping
and industrial deployment of state-of-the-art models on a wide range of
NLP tasks.
AllenNLP - An NLP
research library, built on PyTorch, for developing state-of-the-art deep
learning models on a wide variety of linguistic tasks.
PyTorch-NLP
- NLP research toolkit designed to support rapid prototyping with better
data loaders, word vector loaders, neural network layer representations,
common NLP metrics such as BLEU.
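BLEU, mentioned above, scores a candidate sentence by its clipped n-gram overlap with a reference, combined via a geometric mean and a brevity penalty. A minimal single-reference, unsmoothed sketch in plain Python (an illustration of the metric, not PyTorch-NLP's API):

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """Multiset of contiguous n-grams of a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(candidate, reference, max_n=4):
    """Simplified single-reference sentence BLEU (no smoothing)."""
    precisions = []
    for n in range(1, max_n + 1):
        cand, ref = ngrams(candidate, n), ngrams(reference, n)
        overlap = sum((cand & ref).values())   # clipped n-gram matches
        total = max(sum(cand.values()), 1)
        precisions.append(overlap / total)
    if min(precisions) == 0:
        return 0.0
    # geometric mean of the precisions, scaled by the brevity penalty
    log_avg = sum(math.log(p) for p in precisions) / max_n
    bp = min(1.0, math.exp(1 - len(reference) / len(candidate)))
    return bp * math.exp(log_avg)

cand = "the cat sat on the mat".split()
ref = "the cat sat on the mat".split()
print(round(bleu(cand, ref), 3))  # identical sentences score 1.0
```

Production implementations add smoothing and support multiple references; this sketch returns 0 whenever any n-gram order has no overlap.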
Rosetta
- Text processing tools and wrappers (e.g. Vowpal Wabbit)
PyNLPl - Python
Natural Language Processing Library. General-purpose NLP library for
Python; handles specific formats like ARPA language models, Moses
phrase tables, and GIZA++ alignments.
foliapy - Python
library for working with FoLiA, an XML format for
linguistic annotation.
PySS3 - Python
package that implements a novel white-box machine learning model for
text classification, called SS3. Since SS3 has the ability to visually
explain its rationale, this package also comes with easy-to-use
interactive visualization tools (online
demos).
jPTDP - A
toolkit for joint part-of-speech (POS) tagging and dependency parsing.
jPTDP provides pre-trained models for 40+ languages.
NLP
Architect - A library for exploring the state-of-the-art deep
learning topologies and techniques for NLP and NLU
Flair - A
very simple framework for state-of-the-art multilingual NLP built on
PyTorch. Includes BERT, ELMo and Flair embeddings.
Kashgari -
Simple, Keras-powered multilingual NLP framework, allows you to build
your models in 5 minutes for named entity recognition (NER),
part-of-speech tagging (PoS) and text classification tasks. Includes
BERT and word2vec embedding.
FARM - Fast &
easy transfer learning for NLP. Harvesting language models for the
industry. Focus on Question Answering.
Haystack -
End-to-end Python framework for building natural language search
interfaces to data. Leverages Transformers and the State-of-the-Art of
NLP. Supports DPR, Elasticsearch, HuggingFace’s Modelhub, and much
more!
Rita DSL - a DSL,
loosely based on RUTA on
Apache UIMA. Allows you to define language patterns (rule-based NLP),
which are then translated into spaCy patterns or,
if you prefer fewer features and a lighter footprint, regex patterns.
Transformers -
Natural Language Processing for TensorFlow 2.0 and PyTorch.
Tokenizers -
Tokenizers optimized for Research and Production.
fairseq - Facebook AI
Research implementations of SOTA seq2seq models in PyTorch.
corex_topic -
Hierarchical Topic Modeling with Minimal Domain Knowledge
CRF++ - Open source
implementation of Conditional Random Fields (CRFs) for
segmenting/labeling sequential data & other Natural Language
Processing tasks.
CRFsuite -
CRFsuite is an implementation of Conditional Random Fields (CRFs) for
labeling sequential data.
BLLIP Parser -
BLLIP Natural Language Parser (also known as the Charniak-Johnson
parser)
colibri-core -
C++ library, command line tools, and Python binding for extracting and
working with basic linguistic constructions such as n-grams and
skipgrams in a quick and memory-efficient way.
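The n-gram and skipgram constructions colibri-core extracts can be illustrated in a few lines of plain Python (a toy sketch of the constructions themselves, not colibri-core's far more memory-efficient implementation or API):

```python
from itertools import combinations

def ngrams(tokens, n):
    """All contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def skipgrams(tokens, n, k):
    """Order-preserving n-grams allowing up to k skipped tokens in total."""
    grams = set()
    for start in range(len(tokens)):
        window = tokens[start:start + n + k]
        for idxs in combinations(range(len(window)), n):
            if idxs[0] == 0:  # anchor at the window start so each gram is found once
                grams.add(tuple(window[i] for i in idxs))
    return sorted(grams)

print(ngrams("to be or not".split(), 2))
# [('to', 'be'), ('be', 'or'), ('or', 'not')]
print(skipgrams("to be or not".split(), 2, 1))
# [('be', 'not'), ('be', 'or'), ('or', 'not'), ('to', 'be'), ('to', 'or')]
```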
ucto -
Unicode-aware regular-expression based tokenizer for various languages.
Tool and C++ library. Supports FoLiA format.
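The core idea behind a Unicode-aware regex tokenizer can be sketched with Python's `re` module, whose `\w` matches Unicode letters by default in Python 3 (an illustration of the approach only; ucto itself is driven by per-language rule files and handles far more cases):

```python
import re

# One alternative per token type: words (optionally with internal
# hyphens or apostrophes) and any single non-space symbol.
TOKEN = re.compile(r"\w+(?:[-']\w+)*|[^\w\s]")

def tokenize(text):
    """Return all tokens in order of appearance."""
    return TOKEN.findall(text)

print(tokenize("L'état, c'est moi!"))  # ["L'état", ',', "c'est", 'moi', '!']
```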
OpenRegex - An
efficient and flexible token-based regular expression language and
engine.
CogcompNLP -
Core libraries developed in the University of Illinois' Cognitive
Computation Group.
MALLET - MAchine Learning
for LanguagE Toolkit - package for statistical natural language
processing, document classification, clustering, topic modeling,
information extraction, and other machine learning applications to
text.
RDRPOSTagger -
A robust POS tagging toolkit available (in both Java & Python)
together with pre-trained models for 40+ languages.
tm - Implementation of
topic modeling based on regularized multilingual PLSA.
word2vec-scala -
Scala interface to word2vec model; includes operations on vectors like
word-distance and word-analogy.
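The word-distance and word-analogy operations mentioned above reduce to cosine similarity and vector arithmetic; a toy sketch with made-up 2-d embeddings (real models use hundreds of dimensions, and the vectors here are purely illustrative):

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def analogy(vecs, a, b, c):
    """Return the word d maximizing cos(d, b - a + c), excluding the inputs."""
    target = [vb - va + vc for va, vb, vc in zip(vecs[a], vecs[b], vecs[c])]
    candidates = (w for w in vecs if w not in {a, b, c})
    return max(candidates, key=lambda w: cosine(vecs[w], target))

# Toy 2-d embeddings, hand-picked so the analogy works out
vecs = {
    "king":  [0.9, 0.8], "queen": [0.9, 0.2],
    "man":   [0.5, 0.8], "woman": [0.5, 0.2],
}
print(analogy(vecs, "man", "king", "woman"))  # queen
```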
Epic - Epic is a high
performance statistical parser written in Scala, along with a framework
for building complex structured prediction models.
Spark NLP -
Spark NLP is a natural language processing library built on top of
Apache Spark ML that provides simple, performant & accurate NLP
annotations for machine learning pipelines that scale easily in a
distributed environment.
Amazon Comprehend -
NLP and ML suite covering common tasks such as NER, tagging, and
sentiment analysis
Google Cloud
Natural Language API - Syntax analysis, NER, sentiment analysis, and
content tagging in at least 9 languages, including English and Chinese
(Simplified and Traditional).
ParallelDots
- High level Text Analysis API Service ranging from Sentiment Analysis
to Intent Analysis
Textalytic - Natural
Language Processing in the Browser with sentiment analysis, named entity
extraction, POS tagging, word frequencies, topic modeling, word clouds,
and more
NLP Cloud - spaCy NLP models
(custom and pre-trained ones) served through a RESTful API for named
entity recognition (NER), POS tagging, and more.
Cloudmersive -
Unified and free NLP APIs that perform actions such as speech tagging,
text rephrasing, language translation/detection, and sentence
parsing
Annotation Tools
GATE - General
Architecture for Text Engineering is 15+ years old, free and open
source
Anafora is a free
and open-source, web-based raw-text annotation tool
brat - brat rapid annotation
tool is an online environment for collaborative text annotation
doccano -
doccano is free, open-source, and provides annotation features for text
classification, sequence labeling and sequence to sequence
INCEpTION - A
semantic annotation platform offering intelligent assistance and
knowledge management
tagtog, team-first web tool to
find, create, maintain, and share datasets - costs $
prodigy is an annotation tool
powered by active learning, costs $
LightTag - Hosted and managed text
annotation tool for teams, costs $
rstWeb -
open source local or online tool for discourse tree annotations
GitDox -
open source server annotation tool with GitHub version control and
validation for XML data and collaborative spreadsheet grids
Label Studio - Hosted and
managed text annotation tool for teams, freemium based, costs $
Datasaur supports various NLP
tasks for individuals or teams, freemium based
Konfuzio - team-first hosted
and on-prem text, image and PDF annotation tool powered by active
learning, freemium based, costs $
UBIAI - Easy-to-use text
annotation tool for teams with comprehensive auto-annotation
features. Supports NER, relations, and document classification, as well
as OCR annotation for invoice labeling, costs $
Shoonya -
Shoonya is a free and open-source data annotation platform with a wide
variety of organization- and workspace-level management features.
Shoonya is data-agnostic and can be used by teams to annotate data at
scale with various levels of verification stages.
Annotation
Lab - Free End-to-End No-Code platform for text annotation and DL
model training/tuning. Out-of-the-box support for Named Entity
Recognition, Classification, Relation extraction and Assertion Status
Spark NLP models. Unlimited support for users, teams, projects,
documents. Not FOSS.
FLAT - FLAT is a
web-based linguistic annotation environment based around the FoLiA format, a rich XML-based
format for linguistic annotation. Free and open source.
UDPipe is a trainable
pipeline for tokenizing, tagging, lemmatizing and parsing Universal
Treebanks and other CoNLL-U files. Primarily written in C++, it offers
a fast and reliable solution for multilingual NLP processing.
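The CoNLL-U format UDPipe consumes is plain text: ten tab-separated columns per token, `#` comment lines, and blank lines between sentences. A minimal reader (an illustration of the format, not UDPipe's own API) could look like this:

```python
def parse_conllu(text):
    """Parse CoNLL-U text into a list of sentences; each token is a dict
    of the 10 standard fields. Comments, multiword ranges (e.g. 1-2) and
    empty nodes (e.g. 1.1) are skipped."""
    fields = ["id", "form", "lemma", "upos", "xpos",
              "feats", "head", "deprel", "deps", "misc"]
    sentences, current = [], []
    for line in text.splitlines():
        line = line.strip()
        if not line:
            if current:
                sentences.append(current)
                current = []
        elif not line.startswith("#"):
            cols = line.split("\t")
            if "-" not in cols[0] and "." not in cols[0]:
                current.append(dict(zip(fields, cols)))
    if current:
        sentences.append(current)
    return sentences

sample = "# text = Hi there\n" \
         "1\tHi\thi\tINTJ\t_\t_\t0\troot\t_\t_\n" \
         "2\tthere\tthere\tADV\t_\t_\t1\tadvmod\t_\t_\n"
sents = parse_conllu(sample)
print(sents[0][0]["form"], sents[0][1]["deprel"])  # prints: Hi advmod
```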
NLP-Cube : Natural
Language Processing Pipeline - Sentence Splitting, Tokenization,
Lemmatization, Part-of-speech Tagging and Dependency Parsing. New
platform, written in Python with Dynet 2.0. Offers standalone
(CLI/Python bindings) and server functionality (REST API).
UralicNLP is an
NLP library focused mostly on endangered Uralic languages such as the
Sami, Mordvin, Mari, and Komi languages. Some non-endangered languages
are also supported, such as Finnish, together with non-Uralic languages
such as Swedish and Arabic. UralicNLP can do morphological analysis,
generation, lemmatization, and disambiguation.
spanlp -
Python library to detect, censor and clean profanity, vulgarities,
hateful words, racism, xenophobia and bullying in texts written in
Spanish. It contains data from 21 Spanish-speaking countries.
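The detect-and-censor approach such a library takes can be sketched with a wordlist and accent-insensitive matching (the wordlist below is a harmless placeholder, not spanlp's data, and the functions are illustrative rather than spanlp's API):

```python
import re
import unicodedata

# Placeholder wordlist; a real profanity filter ships per-country data.
BANNED = {"tonto", "bobo"}

def normalize(word):
    """Lowercase and strip accents so 'Tónto' matches 'tonto'."""
    decomposed = unicodedata.normalize("NFD", word.lower())
    return "".join(ch for ch in decomposed if unicodedata.category(ch) != "Mn")

def censor(text, mask="*"):
    """Replace each banned word with a mask of the same length."""
    def repl(match):
        word = match.group(0)
        return mask * len(word) if normalize(word) in BANNED else word
    return re.sub(r"\w+", repl, text)

print(censor("No seas tónto"))  # No seas *****
```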
Corpora/Datasets
For datasets that require a login, access can be requested via email.
SAIL 2015 - Twitter and
Facebook labelled sentiment samples in Hindi, Bengali, Tamil, and
Telugu.
IIT
Bombay NLP Resources Sentiwordnet, Movie and Tourism parallel
labelled corpora, polarity labelled sense annotated corpus, Marathi
polarity labelled corpus.
iNLTK - A Natural
Language Toolkit for Indic languages (languages of the Indian
subcontinent) built on top of PyTorch/fastai, which aims to provide
out-of-the-box support for common NLP tasks.
ViText2SQL
- A dataset for Vietnamese Text-to-SQL semantic parsing (EMNLP-2020
Findings)
EVB Corpus -
20 million words from 15 bilingual books, 100 parallel
English-Vietnamese / Vietnamese-English texts, 250 parallel law and
ordinance texts, 5,000 news articles, and 2,000 film subtitles.
Parsivar: A Language
Processing Toolkit for Persian
Perke: Perke is a
Python keyphrase extraction package for the Persian language. It provides an
end-to-end keyphrase extraction pipeline in which each component can be
easily modified or extended to develop new models.
Perstem: Persian
stemmer, morphological analyzer, transliterator, and partial
part-of-speech tagger
Bijankhan
Corpus: The Bijankhan corpus is a tagged corpus suitable for
natural language processing research on the Persian (Farsi) language.
The collection is gathered from daily news and common texts. All
documents are categorized into different subjects such as political,
cultural, and so on; in total, there are 4,300 different subjects. The
collection contains about 2.6 million manually tagged words with a tag
set of 40 Persian POS tags.
Uppsala
Persian Corpus (UPC): Uppsala Persian Corpus (UPC) is a large,
freely available Persian corpus. The corpus is a modified version of the
Bijankhan corpus with additional sentence segmentation and consistent
tokenization containing 2,704,028 tokens and annotated with 31
part-of-speech tags. The part-of-speech tags are listed with
explanations in this
table.
Large-Scale Colloquial
Persian: The Large Scale Colloquial Persian Dataset (LSCP) is
hierarchically organized in a semantic taxonomy that focuses on
multi-task informal Persian language understanding as a comprehensive
problem. LSCP includes 120M sentences from 27M casual Persian tweets
with their dependency relations in syntactic annotation,
part-of-speech tags, sentiment polarity, and automatic translations of
the original Persian sentences into English (EN), German (DE), Czech
(CS), Italian (IT), and Hindi (HI). Learn more about this project at the LSCP webpage.
ArmanPersoNERCorpus:
The dataset includes 250,015 tokens and 7,682 Persian sentences in
total. It is available in 3 folds to be used in turn as training and
test sets. Each file contains one token, along with its manually
annotated named-entity tag, per line. Each sentence is separated with a
newline. The NER tags are in IOB format.
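The one-token-per-line IOB layout described above is straightforward to parse; a small sketch that reads it and groups the tags into entity spans (illustrative code, not an official loader for this corpus):

```python
def read_iob(text):
    """Read one-token-per-line 'TOKEN TAG' data; blank lines end sentences."""
    sentences, current = [], []
    for line in text.splitlines():
        if not line.strip():
            if current:
                sentences.append(current)
                current = []
        else:
            token, tag = line.rsplit(None, 1)
            current.append((token, tag))
    if current:
        sentences.append(current)
    return sentences

def iob_spans(sentence):
    """Group IOB tags into (entity_type, [tokens]) spans; O tags are skipped."""
    spans = []
    for token, tag in sentence:
        if tag.startswith("B-"):
            spans.append((tag[2:], [token]))
        elif tag.startswith("I-") and spans and spans[-1][0] == tag[2:]:
            spans[-1][1].append(token)
    return spans

data = "John B-PER\nSmith I-PER\nvisited O\nParis B-LOC\n"
print(iob_spans(read_iob(data)[0]))
# [('PER', ['John', 'Smith']), ('LOC', ['Paris'])]
```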
FarsiYar
PersianNER: The dataset includes about 25,000,000 tokens and about
1,000,000 Persian sentences in total based on Persian
Wikipedia Corpus. The NER tags are in IOB format. More than 1,000
volunteers contributed tag improvements to this dataset via a web panel
or Android app. Updated tags are released every two weeks.
PERLEX: The first
Persian dataset for relation extraction, an expert-translated
version of the "SemEval-2010 Task 8" dataset. Link to the relevant
publication.
Persian Syntactic
Dependency Treebank: This treebank is supplied free for
noncommercial use; for commercial use, contact the maintainers. It
contains 29,982 annotated sentences, including samples from
almost all verbs of the Persian valency lexicon.
Hamshahri: The Hamshahri
collection is a standard, reliable Persian text collection that was used
at the Cross Language Evaluation Forum (CLEF) during 2008 and 2009 for
the evaluation of Persian information retrieval systems.