amaiya/ktrain
ktrain is a Python library that makes deep learning and AI more accessible and easier to apply
| repo name | amaiya/ktrain |
| repo link | https://github.com/amaiya/ktrain |
| homepage | |
| language | Jupyter Notebook |
| size (curr.) | 43926 kB |
| stars (curr.) | 203 |
| created | 2019-02-06 |
| license | Apache License 2.0 |
Overview | Tutorials | Examples | Installation
ktrain
News and Announcements
- 2020-03-03:
- ktrain v0.10.x is released and now includes ready-to-use NER for English, Chinese, and Russian with no training required.
- Also in v0.10.x: Ability to train community-uploaded Hugging Face transformer models like SciBERT and BioBERT:
import ktrain
from ktrain import text
MODEL_NAME = 'monologg/scibert_scivocab_uncased'
t = text.Transformer(MODEL_NAME, maxlen=500, class_names=label_list)
trn = t.preprocess_train(x_train, y_train)
val = t.preprocess_test(x_test, y_test)
model = t.get_classifier()
learner = ktrain.get_learner(model, train_data=trn, val_data=val, batch_size=6)
learner.fit_onecycle(3e-5, 1)
- 2020-01-31:
- ktrain v0.9.x is released and now includes out-of-the-box support for text regression in addition to support for custom data formats. See this tutorial notebook for more information on both these topics.
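For text regression, the workflow mirrors the text-classification examples shown later in this README, except that the labels are numeric. A rough sketch follows; the texts_from_array loader exists in ktrain, but the text_regression_model builder name and its arguments are assumptions here, so consult the linked tutorial notebook for the exact API.
import ktrain
from ktrain import text as txt
# x_train/x_test: lists of document strings; y_train/y_test: numeric targets
# (e.g., prices) -- these variable names are illustrative placeholders
trn, val, preproc = txt.texts_from_array(x_train, y_train,
        x_test=x_test, y_test=y_test,
        maxlen=350, preprocess_mode='standard')
model = txt.text_regression_model('fasttext', train_data=trn, preproc=preproc)  # assumed builder name
learner = ktrain.get_learner(model, train_data=trn, val_data=val, batch_size=32)
learner.autofit(1e-2)  # triangular policy with early stopping when no epochs are given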
Overview
ktrain is a lightweight wrapper for the deep learning library TensorFlow Keras (and other libraries) to help build, train, and deploy neural networks. With only a few lines of code, ktrain allows you to easily and quickly:
- estimate an optimal learning rate for your model given your data using a Learning Rate Finder
- utilize learning rate schedules such as the triangular policy, the 1cycle policy, and SGDR to effectively minimize loss and improve generalization (see the short sketch following this list)
- employ fast and easy-to-use pre-canned models for text, vision, and graph data:
  - text data:
    - Text Classification: BERT, DistilBERT, NBSVM, fastText, and other models [example notebook]
    - Text Regression: BERT, DistilBERT, Embedding-based linear text regression, fastText, and other models [example notebook]
    - Sequence Labeling: Bidirectional LSTM-CRF with optional pretrained word embeddings [example notebook]
    - Unsupervised Topic Modeling with LDA [example notebook]
    - Document Similarity with One-Class Learning: given some documents of interest, find and score new documents that are semantically similar to them using One-Class Text Classification [example notebook]
    - Document Recommendation Engine: given text from a sample document, recommend documents that are semantically related to it from a larger corpus [example notebook]
  - vision data:
    - Image Classification (e.g., ResNet, Wide ResNet, Inception) [example notebook]
  - graph data:
    - Graph Node Classification with graph neural networks (e.g., GraphSAGE) [example notebook]
- perform multilingual text classification (e.g., Chinese Sentiment Analysis with BERT, Arabic Sentiment Analysis with NBSVM)
- Ready-to-Use NER for English, Chinese, and Russian (no training required)
- load and preprocess text and image data from a variety of formats
- inspect data points that were misclassified and provide explanations to help improve your model
- leverage a simple prediction API for saving and deploying both models and data-preprocessing steps to make predictions on new raw data
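The learning rate finder and schedules above correspond to a handful of Learner methods. A brief sketch, assuming model, trn, and val were produced by one of the data loaders shown in the Examples section (the cycle_len/cycle_mult arguments are how ktrain's tutorials expose SGDR-style restarts):
import ktrain
# model, trn, and val are placeholders for a compiled Keras model and
# preprocessed training/validation data from one of ktrain's loaders
learner = ktrain.get_learner(model, train_data=trn, val_data=val, batch_size=32)
learner.lr_find()    # briefly simulate training across a range of learning rates
learner.lr_plot()    # inspect the loss plot and pick a learning rate
learner.fit_onecycle(2e-5, 3)                    # 1cycle policy for 3 epochs
learner.autofit(2e-5, 3)                         # triangular policy for 3 epochs
learner.fit(2e-5, 3, cycle_len=1, cycle_mult=2)  # SGDR: cyclical annealing with warm restarts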
Tutorials
Please see the following tutorial notebooks for a guide on how to use ktrain on your projects:
- Tutorial 1: Introduction
- Tutorial 2: Tuning Learning Rates
- Tutorial 3: Image Classification
- Tutorial 4: Text Classification
- Tutorial 5: Learning from Unlabeled Text Data
- Tutorial 6: Text Sequence Tagging for Named Entity Recognition
- Tutorial 7: Graph Node Classification with Graph Neural Networks
- Tutorial A1: Additional tricks, which covers topics such as previewing data augmentation schemes, inspecting intermediate output of Keras models for debugging, setting global weight decay, and use of built-in and custom callbacks.
- Tutorial A2: Explaining Predictions and Misclassifications
- Tutorial A3: Text Classification with Hugging Face Transformers
- Tutorial A4: Using Custom Data Formats and Models: Text Regression with Extra Regressors
Some blog tutorials about ktrain are shown below:
ktrain: A Lightweight Wrapper for Keras to Help Train Neural Networks
Text Classification with Hugging Face Transformers in TensorFlow 2 (Without Tears)
Examples
Tasks such as text classification and image classification can be accomplished easily with only a few lines of code.
Example: Text Classification of IMDb Movie Reviews Using BERT
import ktrain
from ktrain import text as txt
# load data
(x_train, y_train), (x_test, y_test), preproc = txt.texts_from_folder('data/aclImdb', maxlen=500,
preprocess_mode='bert',
train_test_names=['train', 'test'],
classes=['pos', 'neg'])
# load model
model = txt.text_classifier('bert', (x_train, y_train), preproc=preproc)
# wrap model and data in ktrain.Learner object
learner = ktrain.get_learner(model,
train_data=(x_train, y_train),
val_data=(x_test, y_test),
batch_size=6)
# find good learning rate
learner.lr_find() # briefly simulate training to find good learning rate
learner.lr_plot() # visually identify best learning rate
# train using 1cycle learning rate schedule for 3 epochs
learner.fit_onecycle(2e-5, 3)
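To see where the model struggles (the "inspect misclassified data points" feature from the Overview), the Learner and Predictor objects can be queried directly. A sketch continuing from the snippet above; predictor.explain requires the forked eli5 library described in the Installation section:
# evaluate on the validation set and print a classification report
learner.validate(val_data=(x_test, y_test), class_names=preproc.get_classes())
# show the validation examples with the highest loss (likely hard or mislabeled cases)
learner.view_top_losses(n=3, preproc=preproc, val_data=(x_test, y_test))
# wrap the model and preprocessing to classify raw text and explain the prediction
predictor = ktrain.get_predictor(learner.model, preproc)
predictor.predict('This movie was a complete waste of two hours.')
predictor.explain('This movie was a complete waste of two hours.')  # highlights influential words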
Example: Classifying Images of Dogs and Cats Using a Pretrained ResNet50 model
import ktrain
from ktrain import vision as vis
# load data
(train_data, val_data, preproc) = vis.images_from_folder(
datadir='data/dogscats',
data_aug = vis.get_data_aug(horizontal_flip=True),
train_test_names=['train', 'valid'],
target_size=(224,224), color_mode='rgb')
# load model
model = vis.image_classifier('pretrained_resnet50', train_data, val_data, freeze_layers=80)
# wrap model and data in ktrain.Learner object
learner = ktrain.get_learner(model=model, train_data=train_data, val_data=val_data,
workers=8, use_multiprocessing=False, batch_size=64)
# find good learning rate
learner.lr_find() # briefly simulate training to find good learning rate
learner.lr_plot() # visually identify best learning rate
# train using triangular policy with ModelCheckpoint and implicit ReduceLROnPlateau and EarlyStopping
learner.autofit(1e-4, checkpoint_folder='/tmp/saved_weights')
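To classify a new image with the trained model, the model and image-preprocessing steps can be wrapped in a Predictor. A sketch continuing from the example above; the file path is a placeholder, and predict_filename is the method name assumed for image predictors:
# wrap model and preprocessing for predictions on raw image files
predictor = ktrain.get_predictor(learner.model, preproc)
# placeholder path -- point this at any image on disk
predictor.predict_filename('data/dogscats/valid/cats/cat.1001.jpg')  # returns the predicted label(s)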
Example: Sequence Labeling for Named Entity Recognition using a randomly initialized Bidirectional LSTM CRF model
import ktrain
from ktrain import text as txt
# load data
(trn, val, preproc) = txt.entities_from_txt('data/ner_dataset.csv',
sentence_column='Sentence #',
word_column='Word',
tag_column='Tag',
data_format='gmb')
# load model
model = txt.sequence_tagger('bilstm-crf', preproc)
# wrap model and data in ktrain.Learner object
learner = ktrain.get_learner(model, train_data=trn, val_data=val)
# conventional training for 1 epoch using a learning rate of 0.001 (the Keras default for the Adam optimizer)
learner.fit(1e-3, 1)
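Once trained, the tagger can be applied to raw sentences. A sketch continuing from the example above (the sample sentence is only illustrative):
# wrap the model and preprocessing so raw sentences can be tagged directly
predictor = ktrain.get_predictor(learner.model, preproc)
predictor.predict('As of 2019, Paris remains the capital of France.')
# returns (word, tag) pairs using the tag set from ner_dataset.csv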
Example: Node Classification on Cora Citation Graph using a GraphSAGE model
import ktrain
from ktrain import graph as gr
# load data with supervision ratio of 10%
(trn, val, preproc) = gr.graph_nodes_from_csv(
'cora.content', # node attributes/labels
'cora.cites', # edge list
sample_size=20,
holdout_pct=None,
holdout_for_inductive=False,
train_pct=0.1, sep='\t')
# load model
model=gr.graph_node_classifier('graphsage', trn)
# wrap model and data in ktrain.Learner object
learner = ktrain.get_learner(model, train_data=trn, val_data=val, batch_size=64)
# find good learning rate
learner.lr_find(max_epochs=100) # briefly simulate training to find good learning rate
learner.lr_plot() # visually identify best learning rate
# train using triangular policy with ModelCheckpoint and implicit ReduceLROnPlateau and EarlyStopping
learner.autofit(0.01, checkpoint_folder='/tmp/saved_weights')
Example: Text Classification with Hugging Face Transformers on 20 Newsgroups Dataset Using DistilBERT
# load text data
categories = ['alt.atheism', 'soc.religion.christian','comp.graphics', 'sci.med']
from sklearn.datasets import fetch_20newsgroups
train_b = fetch_20newsgroups(subset='train', categories=categories, shuffle=True)
test_b = fetch_20newsgroups(subset='test',categories=categories, shuffle=True)
(x_train, y_train) = (train_b.data, train_b.target)
(x_test, y_test) = (test_b.data, test_b.target)
# build, train, and validate model (Transformer is a wrapper around the Hugging Face transformers library)
import ktrain
from ktrain import text
MODEL_NAME = 'distilbert-base-uncased'
t = text.Transformer(MODEL_NAME, maxlen=500, class_names=train_b.target_names)
trn = t.preprocess_train(x_train, y_train)
val = t.preprocess_test(x_test, y_test)
model = t.get_classifier()
learner = ktrain.get_learner(model, train_data=trn, val_data=val, batch_size=6)
learner.fit_onecycle(5e-5, 4)
learner.validate(class_names=t.get_classes()) # class_names must be string values
# Output from learner.validate()
# precision recall f1-score support
#
# alt.atheism 0.92 0.93 0.93 319
# comp.graphics 0.97 0.97 0.97 389
# sci.med 0.97 0.95 0.96 396
#soc.religion.christian 0.96 0.96 0.96 398
#
# accuracy 0.96 1502
# macro avg 0.95 0.96 0.95 1502
# weighted avg 0.96 0.96 0.96 1502
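As with the other examples, the fine-tuned model and its preprocessing can be bundled into a Predictor for deployment, per the prediction API mentioned in the Overview. A sketch continuing from the snippet above (the save path is a placeholder):
# wrap model and preprocessing into a single Predictor object
predictor = ktrain.get_predictor(learner.model, preproc=t)
predictor.predict('Jesus Christ is the central figure of Christianity.')
# save to disk and reload later to make predictions in a deployed application
predictor.save('/tmp/my_20newsgroup_predictor')
reloaded_predictor = ktrain.load_predictor('/tmp/my_20newsgroup_predictor')
reloaded_predictor.predict('My monitor has extremely low contrast.')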
Using ktrain on Google Colab? See this simple demo of Multiclass Text Classification with BERT.
Additional examples can be found here.
Installation
Make sure pip is up-to-date with: pip3 install -U pip
- Ensure TensorFlow 2 is installed if it is not already
For GPU:
pip3 install "tensorflow_gpu>=2.0.0"
For CPU:
pip3 install "tensorflow>=2.0.0"
- Install ktrain:
pip3 install ktrain
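As a quick sanity check after installing, ktrain and its TensorFlow 2 backend can be imported and their versions printed (this assumes ktrain exposes a standard __version__ attribute):
# verify that ktrain and TensorFlow 2 import cleanly
import tensorflow as tf
import ktrain
print(tf.__version__)      # should report a 2.x version
print(ktrain.__version__)  # assumed standard __version__ attribute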
Some things to note:
- As of v0.8.x, ktrain requires TensorFlow 2. TensorFlow 1.x (1.14, 1.15) is no longer supported.
- Since some ktrain dependencies have not yet been migrated to tf.keras in TensorFlow 2 (or may have other issues), ktrain is temporarily using forked versions of some libraries. Specifically, ktrain uses forked versions of the eli5 and stellargraph libraries. If not installed, ktrain will complain when a method or function needing either of these libraries is invoked. To install these forked versions, you can do the following:
pip3 install git+https://github.com/amaiya/eli5@tfkeras_0_10_1
pip3 install git+https://github.com/amaiya/stellargraph@no_tf_dep_082
This code was tested on Ubuntu 18.04 LTS using TensorFlow 2.0 (Keras version 2.2.4-tf).
Creator: Arun S. Maiya
Email: arun [at] maiya [dot] net