Thirty years ago, Python made its first appearance, but it took two decades for it to gain real appreciation from developers. Fast-forward to 2019: it had become the second most loved language among developers.¹
Its growth has been enormous, especially over the past five years, as Python became the go-to language for machine learning and data science developers.
Python’s dominance in these fields will likely continue for the next few years. But it has some serious disadvantages compared with newer languages, and these could become a roadblock for developers of the 2020s.
Now is the right time to examine the problems of Python and consider replacing it with a better alternative. For AI development and data science, our next go-to language may be Go (Golang). …
In this tutorial, we’ll build a simple neural network (a single-layer perceptron) in Go, completely from scratch. We’ll also train it on sample data and use it to make predictions. Building your own neural network from scratch helps you understand what happens inside a neural network and how learning algorithms work.
Perceptrons, invented by Frank Rosenblatt in 1958, are the simplest neural networks: they consist of n inputs, a single neuron, and one output, where n is the number of features in our dataset.
Hence, our single-layer perceptron consists of the following components: the inputs, a weight for each input, a bias, and an activation function applied to the weighted sum.
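To make these components concrete, here is a minimal sketch of such a neuron in Go. The type and function names are my own illustrative choices (the tutorial’s actual code may differ), and sigmoid is assumed as the activation:

```go
package main

import (
	"fmt"
	"math"
)

// Perceptron holds the learnable parameters of a single neuron:
// one weight per input feature, plus a bias term.
type Perceptron struct {
	weights []float64
	bias    float64
}

// sigmoid squashes the weighted sum into the range (0, 1).
func sigmoid(x float64) float64 {
	return 1.0 / (1.0 + math.Exp(-x))
}

// Predict runs the forward pass: the dot product of inputs and
// weights, plus the bias, passed through the activation function.
func (p *Perceptron) Predict(inputs []float64) float64 {
	sum := p.bias
	for i, x := range inputs {
		sum += p.weights[i] * x
	}
	return sigmoid(sum)
}

func main() {
	p := &Perceptron{weights: []float64{0.5, -0.3}, bias: 0.1}
	fmt.Println(p.Predict([]float64{1.0, 2.0})) // a value in (0, 1)
}
```

Training then amounts to nudging the weights and bias to reduce prediction error, which is what the rest of the tutorial builds up.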
Genetic Algorithms are based on Charles Darwin’s theory of natural selection and are often used to solve problems in research and machine learning.
In this article, we’ll be looking at the fundamentals of Genetic Algorithms (GA) and how to solve optimization problems using them.
Genetic algorithms were developed by John Henry Holland and his students and collaborators at the University of Michigan in the 1970s and 1980s.
Genetic algorithms are a subset of evolutionary algorithms; they mimic the process of natural selection, in which the fittest individuals survive and are chosen for crossover to produce the offspring of the next generation.
The natural selection process also introduces a small amount of randomness into the offspring in the form of mutation, resulting in a new population of individuals with mixed fitness. …
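To make selection, crossover, and mutation concrete, here is a minimal, self-contained Go sketch of a GA run on the classic OneMax toy problem (maximize the number of ones in a bit string). All names and parameter values are illustrative assumptions, not taken from the article:

```go
package main

import (
	"fmt"
	"math/rand"
)

// Individual is a candidate solution encoded as a bit string.
type Individual []int

// fitness counts the ones in the bit string (the OneMax objective).
func fitness(ind Individual) int {
	sum := 0
	for _, bit := range ind {
		sum += bit
	}
	return sum
}

// tournament picks the fitter of two random individuals, mimicking
// survival of the fittest.
func tournament(pop []Individual) Individual {
	a, b := pop[rand.Intn(len(pop))], pop[rand.Intn(len(pop))]
	if fitness(a) > fitness(b) {
		return a
	}
	return b
}

// crossover combines two parents at a random cut point.
func crossover(p1, p2 Individual) Individual {
	cut := rand.Intn(len(p1))
	child := append(Individual{}, p1[:cut]...)
	return append(child, p2[cut:]...)
}

// mutate flips each bit with a small probability, adding the random
// variation described above.
func mutate(ind Individual, rate float64) {
	for i := range ind {
		if rand.Float64() < rate {
			ind[i] = 1 - ind[i]
		}
	}
}

func main() {
	const popSize, genomeLen, generations = 20, 16, 50

	// Start with a random population.
	pop := make([]Individual, popSize)
	for i := range pop {
		pop[i] = make(Individual, genomeLen)
		for j := range pop[i] {
			pop[i][j] = rand.Intn(2)
		}
	}

	// Evolve: select parents, cross them over, mutate the offspring.
	for g := 0; g < generations; g++ {
		next := make([]Individual, popSize)
		for i := range next {
			child := crossover(tournament(pop), tournament(pop))
			mutate(child, 0.01)
			next[i] = child
		}
		pop = next
	}

	best := pop[0]
	for _, ind := range pop {
		if fitness(ind) > fitness(best) {
			best = ind
		}
	}
	fmt.Println("best fitness:", fitness(best))
}
```

Tournament selection is used here only because it is simple and needs no fitness normalization; any fitness-proportionate scheme would slot into the same structure.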
Ever tried building a neural network model to solve simple math problems, like multiplying two numbers or squaring one? Then you have probably realized that neural networks are not designed for such problems: you would need a comparatively complex model just to approximate the square of a number, and it wouldn’t be perfect either.
Isn’t this a real problem? Today, numerical precision and accuracy are significant in any cutting-edge technology, and a small variation in results can cause an extreme failure in the system where the AI is deployed. This is not to say it is impossible to construct a near-perfect end-to-end neural network model that gives direct answers to math problems. …
Today, with open source machine learning libraries such as TensorFlow, Keras, or PyTorch, we can create a neural network, even one of high structural complexity, with just a few lines of code. That said, the mathematics behind neural networks remains a mystery to some of us, and understanding it helps us grasp what is happening inside a neural network. It is also helpful for architecture selection, fine-tuning deep learning models, hyperparameter tuning, and optimization.
For a long time I avoided the mathematics behind neural networks and deep learning, as I didn’t have a good grounding in algebra or differential calculus. A few days ago, I decided to start from scratch and derive the methodology and mathematics behind neural networks and deep learning, to understand how and why they work. I also decided to write this article, which should be useful to people like me who find these concepts difficult. …
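As a small taste of the derivations involved (standard notation, not specific to this article): for a single neuron with weights $\mathbf{w}$, bias $b$, input $\mathbf{x}$, activation $\sigma$, and loss $L$, the forward pass and the gradient-descent weight update obtained via the chain rule are

$$a = \sigma(\mathbf{w}^\top \mathbf{x} + b), \qquad \mathbf{w} \leftarrow \mathbf{w} - \eta\,\frac{\partial L}{\partial \mathbf{w}}, \quad \text{where} \quad \frac{\partial L}{\partial \mathbf{w}} = \frac{\partial L}{\partial a}\,\sigma'(\mathbf{w}^\top \mathbf{x} + b)\,\mathbf{x}$$

and $\eta$ is the learning rate. Deriving update rules like this, layer by layer, is exactly what the rest of the article works through.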