Breaking the Activation Function Bottleneck
through adaptive parameterization
Deep neural networks are very powerful models, in theory able to approximate any function. In practice, things are a little different. Oddly, a neural network tends to generalize better the larger it is, often to the point of having more parameters than there are data points in the training set...
[Read More]
A minimalist Deep Learning library
Automatic differentiation in NumPy
When you peek under the hood of a deep learning library, things will look pretty complex. That’s because they use a lot of abstractions to handle a wide variety of cases. But the core code of a deep learning library is actually not that complex at all. In fact, you’d...
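To give a flavour of the idea (a minimal sketch, not the library's actual code; the class and method names are made up), reverse-mode automatic differentiation on top of NumPy can fit in a few dozen lines:

```python
import numpy as np

class Tensor:
    """A value together with a recipe for backpropagating through it."""

    def __init__(self, data, parents=(), backward_fn=lambda grad: ()):
        self.data = np.asarray(data, dtype=float)
        self.grad = np.zeros_like(self.data)
        self.parents = parents          # Tensors this one was computed from
        self.backward_fn = backward_fn  # maps upstream grad -> parent grads

    def __add__(self, other):
        return Tensor(self.data + other.data, (self, other),
                      lambda grad: (grad, grad))

    def __mul__(self, other):
        return Tensor(self.data * other.data, (self, other),
                      lambda grad: (grad * other.data, grad * self.data))

    def backward(self):
        # Push each gradient contribution back along the graph; nodes that
        # are used more than once simply accumulate the contributions
        # arriving from every path.
        stack = [(self, np.ones_like(self.data))]
        while stack:
            node, grad = stack.pop()
            node.grad = node.grad + grad
            for parent, parent_grad in zip(node.parents, node.backward_fn(grad)):
                stack.append((parent, parent_grad))

# y = a * b + a  =>  dy/da = b + 1 = 4,  dy/db = a = 2
a, b = Tensor(2.0), Tensor(3.0)
y = a * b + a
y.backward()
print(a.grad, b.grad)  # 4.0 2.0
```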
[Read More]
Learning by Gradient Descent
Understanding gradient descent, momentum and learning
Gradient descent is perhaps the simplest optimization algorithm there is. In Deep Learning, it is the foundation upon which modern learning algorithms are built.
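As a bare-bones illustration (a NumPy sketch, not the post's code; the function and parameter names are made up), gradient descent with momentum on a toy quadratic looks like this:

```python
import numpy as np

def gradient_descent(grad_fn, w0, lr=0.1, momentum=0.9, steps=200):
    """Minimise a function given its gradient, using momentum.

    The velocity term remembers the running direction of travel:
    gradients that keep pointing the same way accelerate the update,
    while oscillating gradients partly cancel out.
    """
    w = np.asarray(w0, dtype=float)
    velocity = np.zeros_like(w)
    for _ in range(steps):
        velocity = momentum * velocity - lr * grad_fn(w)
        w = w + velocity
    return w

# Toy example: minimise f(w) = ||w - 3||^2, whose gradient is 2 * (w - 3).
w_star = gradient_descent(lambda w: 2 * (w - 3.0), w0=[0.0, 10.0])
print(w_star)  # approximately [3. 3.]
```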
[Read More]
Ensemble learning
Explaining how and what ensembles learn
You may have wondered why there are so many different algorithms for machine learning - after all, wouldn't a single algorithm be enough? A famous result, the no free lunch theorem, tells us that no one algorithm will be optimal for every problem.
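As a minimal illustration of what an ensemble is (a hedged sketch using scikit-learn, not the post's code), averaging the predictions of several independently trained models already goes a long way:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

# Fit each tree on a bootstrap resample so the ensemble members disagree.
rng = np.random.default_rng(0)
models = []
for _ in range(25):
    idx = rng.integers(0, len(X), size=len(X))
    models.append(DecisionTreeClassifier(max_depth=3).fit(X[idx], y[idx]))

# The ensemble prediction is the average of the members' probabilities:
# errors that are not shared across members tend to cancel out.
avg_proba = np.mean([m.predict_proba(X) for m in models], axis=0)
ensemble_pred = avg_proba.argmax(axis=1)
print("ensemble accuracy on the training data:", (ensemble_pred == y).mean())
```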
[Read More]
How to use LaTeX in Markdown
A quick how-to guide
A pain-free recipe
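A taste of the recipe (assuming a MathJax- or KaTeX-enabled renderer; the exact delimiters depend on your setup):

```markdown
Inline math such as $e^{i\pi} + 1 = 0$ sits in the running text,
while display math gets its own block:

$$
\nabla_w L(w) = \frac{1}{n} \sum_{i=1}^{n} \nabla_w \ell(w; x_i, y_i)
$$
```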
[Read More]
Wrapping sklearn classes
Bend sklearn to your will
What’s the fuss about?
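The post's own example isn't reproduced here, but as a rough sketch of the pattern (the class and parameter names are hypothetical), wrapping an estimator while keeping the scikit-learn interface can look like this:

```python
from sklearn.base import BaseEstimator, ClassifierMixin, clone
from sklearn.linear_model import LogisticRegression

class ThresholdedClassifier(BaseEstimator, ClassifierMixin):
    """Wrap a probabilistic binary classifier behind a custom threshold.

    Hypothetical example: by implementing fit/predict and leaving
    get_params/set_params to BaseEstimator, the wrapper can be dropped
    into pipelines, grid search and cross-validation like any other
    estimator.
    """

    def __init__(self, estimator=None, threshold=0.5):
        self.estimator = estimator
        self.threshold = threshold

    def fit(self, X, y):
        base = self.estimator if self.estimator is not None else LogisticRegression()
        self.estimator_ = clone(base).fit(X, y)
        return self

    def predict(self, X):
        # Use the wrapped model's probabilities, but apply our own cutoff.
        proba = self.estimator_.predict_proba(X)[:, 1]
        return (proba >= self.threshold).astype(int)
```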
[Read More]