August 15, 2019

157 words 1 min read

kmkolasinski/deep-learning-notes

Experiments with Deep Learning

repo name: kmkolasinski/deep-learning-notes
repo link: https://github.com/kmkolasinski/deep-learning-notes
homepage:
language: Jupyter Notebook
size (curr.): 275294 kB
stars (curr.): 1178
created: 2017-11-19
license:

deep-learning-notes

Experiments with Deep Learning and other resources:

  • keras-capsule-pooling - an after-hours experiment in which I try to implement Capsule pooling for images.
  • max-normed-optimizer - an experimental implementation of an interesting gradient descent optimizer which normalizes gradients by their norms before applying them. It contains various experiments that show the potential power of this method; a minimal sketch of the idea appears after this list.
  • selu-regularization - a Keras regularizer (Dense and Conv2D versions are provided) which enforces SELU-like regularization on the model weights. SELU was introduced as an activation function paired with a special initialization method; these regularizers can be added to force the weights to preserve the self-normalizing property during training (see the second sketch after this list).
  • tf-oversampling - an example showing how to implement oversampling with the tf.data.Dataset API (a third sketch below illustrates the general pattern).
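
The notebooks themselves are not reproduced here, but the following sketch illustrates the idea behind max-normed-optimizer as I understand it: rescale every gradient tensor to unit L2 norm before a plain SGD update, so each step has a fixed length and only the direction comes from the gradient. This is a minimal TensorFlow 2 sketch, not the repository's code; the function name `normalized_sgd_step` and its parameters are assumptions.

```python
import tensorflow as tf


def normalized_sgd_step(model, loss_fn, x, y, lr=0.01, eps=1e-12):
    """One SGD step in which every gradient tensor is rescaled to unit norm.

    Hypothetical sketch, not the notebook's exact implementation."""
    with tf.GradientTape() as tape:
        loss = loss_fn(y, model(x, training=True))
    grads = tape.gradient(loss, model.trainable_variables)
    for var, grad in zip(model.trainable_variables, grads):
        if grad is None:
            continue
        norm = tf.norm(grad) + eps          # guard against division by zero
        var.assign_sub(lr * grad / norm)    # fixed step length, gradient direction only
    return loss
```

Similarly, a SELU-preserving weight regularizer could penalize weights whose per-unit statistics drift away from what self-normalization assumes (roughly zero mean and variance 1/fan_in, as with lecun_normal initialization). The class below is a hedged sketch of that idea, not the repository's Dense/Conv2D implementations; the name `SELUWeightRegularizer` and the exact penalty form are assumptions.

```python
import tensorflow as tf
from tensorflow import keras


class SELUWeightRegularizer(keras.regularizers.Regularizer):
    """Penalize per-unit weight mean away from 0 and variance away from 1/fan_in."""

    def __init__(self, strength=1e-3):
        self.strength = strength

    def __call__(self, w):
        # Treat the last axis as output units, everything before it as fan-in.
        fan_in = tf.cast(tf.reduce_prod(tf.shape(w)[:-1]), w.dtype)
        flat = tf.reshape(w, [-1, tf.shape(w)[-1]])   # shape: (fan_in, units)
        mean = tf.reduce_mean(flat, axis=0)
        var = tf.math.reduce_variance(flat, axis=0)
        return self.strength * (
            tf.reduce_sum(tf.square(mean))
            + tf.reduce_sum(tf.square(var - 1.0 / fan_in))
        )

    def get_config(self):
        return {"strength": self.strength}


# Example usage (assumed, not from the repo):
# keras.layers.Dense(128, activation="selu",
#                    kernel_regularizer=SELUWeightRegularizer())
```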

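For the tf-oversampling entry, a common way to oversample a rare class with the tf.data.Dataset API is to split the dataset per class and resample from the splits with explicit weights. The snippet below is a generic sketch of that pattern; the toy data and the 50/50 weights are assumptions for illustration, not taken from the notebook.

```python
import tensorflow as tf

# Toy imbalanced data: class 0 is common, class 1 is rare (assumed for illustration).
features = tf.random.normal([1000, 4])
labels = tf.concat([tf.zeros([950], tf.int32), tf.ones([50], tf.int32)], axis=0)
dataset = tf.data.Dataset.from_tensor_slices((features, labels))


def class_dataset(c):
    # Keep only examples of class `c` and repeat them indefinitely.
    return dataset.filter(lambda x, y: tf.equal(y, c)).repeat()


# Draw from the per-class streams with equal probability -> balanced batches.
balanced = tf.data.experimental.sample_from_datasets(
    [class_dataset(0), class_dataset(1)], weights=[0.5, 0.5])
balanced = balanced.shuffle(1024).batch(32)
```
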
Seminars on Deep Learning and Machine Learning

Seminars - contains a bunch of presentations I have given at our company.
