December 24, 2019

CyberZHG/keras-radam

RAdam implemented in Keras & TensorFlow

repo name: CyberZHG/keras-radam
repo link: https://github.com/CyberZHG/keras-radam
homepage: https://pypi.org/project/keras-rectified-adam/
language: Python
size (curr.): 63 kB
stars (curr.): 292
created: 2019-08-16
license: MIT License

Keras RAdam

Unofficial implementation of RAdam in Keras and TensorFlow.
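For reference, the rectified update rule that RAdam is built around can be sketched in plain NumPy. This is a simplified single-parameter illustration of the algorithm from the RAdam paper, not the package's actual code; the function name `radam_step` is hypothetical:

```python
import numpy as np

def radam_step(x, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One RAdam update for a single parameter (simplified sketch)."""
    rho_inf = 2.0 / (1.0 - beta2) - 1.0
    m = beta1 * m + (1.0 - beta1) * grad
    v = beta2 * v + (1.0 - beta2) * grad ** 2
    m_hat = m / (1.0 - beta1 ** t)
    # Length of the approximated simple moving average at step t
    rho_t = rho_inf - 2.0 * t * beta2 ** t / (1.0 - beta2 ** t)
    if rho_t > 4.0:
        # Variance is tractable: apply the rectified adaptive step
        v_hat = np.sqrt(v / (1.0 - beta2 ** t))
        r_t = np.sqrt(((rho_t - 4.0) * (rho_t - 2.0) * rho_inf)
                      / ((rho_inf - 4.0) * (rho_inf - 2.0) * rho_t))
        x = x - lr * r_t * m_hat / (v_hat + eps)
    else:
        # Early steps: fall back to un-adapted momentum SGD
        x = x - lr * m_hat
    return x, m, v

# Minimize f(x) = x^2 starting from x = 5
x, m, v = 5.0, 0.0, 0.0
for t in range(1, 2001):
    x, m, v = radam_step(x, 2.0 * x, m, v, t, lr=0.01)
```

The rectification term r_t is what distinguishes RAdam from Adam: it damps the adaptive step while the second-moment estimate is still high-variance, removing the need for a hand-tuned warmup.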

Install

pip install keras-rectified-adam

Usage

import keras
import numpy as np
from keras_radam import RAdam

# Build toy model with RAdam optimizer
model = keras.models.Sequential()
model.add(keras.layers.Dense(input_shape=(17,), units=3))
model.compile(RAdam(), loss='mse')

# Generate toy data
x = np.random.standard_normal((4096 * 30, 17))
w = np.random.standard_normal((17, 3))
y = np.dot(x, w)

# Fit
model.fit(x, y, epochs=5)
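As a sanity check on the toy task (independent of the optimizer), the targets above are noiseless linear functions of the inputs, so the weights are exactly recoverable by least squares and the achievable MSE is zero:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((4096, 17))
w = rng.standard_normal((17, 3))
y = x @ w  # noiseless linear targets, as in the toy example

# lstsq recovers w exactly, so a well-tuned optimizer can drive MSE toward 0
w_hat, *_ = np.linalg.lstsq(x, y, rcond=None)
print(np.allclose(w_hat, w))
```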

TensorFlow without Keras

from keras_radam.training import RAdamOptimizer

RAdamOptimizer(learning_rate=1e-3)

Use Warmup

from keras_radam import RAdam

RAdam(total_steps=10000, warmup_proportion=0.1, min_lr=1e-5)
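A plausible reading of these parameters: the learning rate ramps up over the first `warmup_proportion * total_steps` steps, then decays toward `min_lr` for the remainder. The sketch below assumes linear warmup and linear decay; the function `lr_schedule` is hypothetical and the package's exact schedule may differ:

```python
def lr_schedule(step, base_lr=1e-3, total_steps=10000,
                warmup_proportion=0.1, min_lr=1e-5):
    """Assumed warmup schedule: linear ramp, then linear decay to min_lr."""
    warmup_steps = total_steps * warmup_proportion
    if step < warmup_steps:
        # Linear warmup from 0 to base_lr
        return base_lr * step / warmup_steps
    # Linear decay from base_lr at the end of warmup to min_lr at total_steps
    decayed = max((total_steps - step) / (total_steps - warmup_steps), 0.0)
    return min_lr + (base_lr - min_lr) * decayed
```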

Q & A

About Correctness

After 500 training steps, this implementation produces losses and weights close to those of the official implementation.

Use tf.keras or tf-2.0

Set the environment variable TF_KERAS=1 to use tensorflow.python.keras instead of standalone Keras.
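Since the flag must be visible when keras_radam is imported, it can also be set from Python before the import (assuming the package reads the environment at import time; check the package docs for the exact mechanics):

```python
import os

# Set before keras_radam is imported (assumed requirement)
os.environ['TF_KERAS'] = '1'

# from keras_radam import RAdam  # now backed by tensorflow.python.keras
```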

Use theano Backend

Set the environment variable KERAS_BACKEND=theano to enable the Theano backend.
