Machine Learning and Blockchain – Part 1/3 – Overview and comparison of important variants
This is the first part of our three-part series about Machine Learning (ML) and Blockchain. In this part, we’re going to cover the basics of what Machine Learning means and compare the five most common variants of Machine Learning. The second and third parts will explain use cases and business models where Machine Learning can be used on the blockchain today or in the future.
What is Machine Learning?
The term Machine Learning was coined in the 1950s by artificial intelligence (AI) pioneer Arthur Samuel, who described it as follows:
“Programming computers to learn from experience should eventually eliminate the need for much of this detailed programming effort”
Samuel, A. L. (1959), “Some Studies in Machine Learning Using the Game of Checkers” in IBM Journal of Research and Development (Volume:3, Issue: 3), p. 210
The goal of these AI systems is to perform complex tasks the way humans would solve them. Examples include recognising images, understanding text, or acting in the real world, as self-driving cars do. For humans, these tasks are trivial, but for a machine they can quickly become overwhelming. We cannot simply write a conventional program to solve them: programming is very precise and requires detailed instructions, which is not feasible for problems of this kind. Machine Learning instead enables the computer to program itself through experience. An ML system is given a dataset (numbers, text, photos, etc.), and a so-called model is trained on it; this portion is called the training data. To test how accurate the model is, part of the dataset is withheld from training and later used as test data (a short sketch of this split follows the list below). The resulting models can then be used to accomplish three different kinds of tasks:
- Descriptive – The machine uses data to explain what happened. (ex. Image Recognition)
- Predictive – The system uses data to predict what will happen. (ex. Stock price prediction)
- Prescriptive – The system uses data to provide suggestions about what action to take. (ex. Trading bots)
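To make the training/test split described above concrete, here is a minimal sketch in Python, assuming scikit-learn is installed; the Iris dataset and the decision tree classifier are illustrative choices, not a recommendation.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# A small labelled example dataset (illustrative choice).
X, y = load_iris(return_X_y=True)

# Withhold 25% of the data as test data; train on the remaining 75%.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42
)

# Train the model on the training data only.
model = DecisionTreeClassifier(random_state=42)
model.fit(X_train, y_train)

# The withheld test data shows how accurate the model is on unseen examples.
print("Test accuracy:", model.score(X_test, y_test))
```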
This should give you a basic understanding of machine learning. In the following section, we’re going to explain five major variants of Machine Learning and compare them.
Explanation and comparison of important ML Variants
Here we look at five major Machine Learning variants: supervised, unsupervised, reinforcement, deep, and transfer learning. We will explain what they are, how they work, what differentiates them, and list some methods they use.
Supervised Learning
In supervised learning, models are trained on labelled datasets, which lets the model learn and improve over time. As an example, think of numerous pictures of animals, each labelled with the animal’s name. The model can then be trained to detect pictures that contain a cat even when no label is attached. Supervised learning is typically used for classification and regression problems. Example methods are listed below, followed by a short regression sketch:
- Regression: Regression is a method of supervised learning that involves predicting a continuous output variable based on one or more input variables. The goal of regression is to find the best fitting line or curve that describes the relationship between the input and output variables. Common regression algorithms include linear regression, polynomial regression, and support vector regression.
- Classification: Classification is a method of supervised learning that involves predicting a categorical output variable based on one or more input variables. The goal of classification is to find a decision boundary that separates the different categories in the data. Common classification algorithms include logistic regression, decision trees, random forests, and support vector machines.
- Ensemble Learning: Ensemble learning is a method of supervised learning that involves combining multiple models to improve predictive accuracy. Ensemble learning can be used for both regression and classification tasks, and common ensemble techniques include bagging, boosting, and stacking.
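As a concrete illustration of the regression method above, here is a minimal sketch assuming scikit-learn and NumPy are installed; the synthetic data (roughly y = 3x + 2 plus noise) is invented for the example.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Synthetic labelled data: a continuous output that depends on one input.
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(100, 1))          # input variable
y = 3 * X[:, 0] + 2 + rng.normal(0, 1, 100)    # continuous output with noise

# Fit the best-fitting line through the data.
model = LinearRegression()
model.fit(X, y)

print("slope:", model.coef_[0], "intercept:", model.intercept_)
print("prediction for x = 4:", model.predict([[4.0]])[0])
```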
Unsupervised Learning
In unsupervised learning, the model receives a dataset without any labels or description. It then tries to find patterns or trends that are not directly visible to the human eye. Example methods are listed below, followed by a short clustering sketch:
- Clustering: Clustering is a method of unsupervised learning that involves grouping similar data points together into clusters. The goal of clustering is to identify patterns or structure in the data based on similarities or distances between data points. Common clustering algorithms include k-means, hierarchical clustering, and density-based clustering.
- Principal Component Analysis (PCA): PCA is a method of unsupervised learning that involves identifying the most important features or components of a dataset. The goal of PCA is to reduce the dimensionality of the data while retaining as much information as possible. PCA works by transforming the data into a new coordinate system that maximizes the variance of the data along each axis.
- Association Rule Mining: Association rule mining is a method of unsupervised learning that involves identifying patterns or relationships between variables in a dataset. The goal of association rule mining is to find co-occurring items or events that are likely to be related. Common algorithms for association rule mining include Apriori and FP-Growth.
- Anomaly Detection: Anomaly detection is a method of unsupervised learning that involves identifying data points that deviate significantly from the expected behaviour of the data. The goal of anomaly detection is to detect unusual behaviour or outliers that may indicate a problem or opportunity. Common algorithms for anomaly detection include clustering-based methods, density-based methods, and distance-based methods.
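As a concrete illustration of clustering, here is a minimal k-means sketch assuming scikit-learn is installed; the synthetic “blob” data and the choice of three clusters are illustrative.

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

# Unlabelled synthetic data: 300 points drawn around three hidden centres.
X, _ = make_blobs(n_samples=300, centers=3, random_state=0)

# Group similar points into three clusters without using any labels.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0)
labels = kmeans.fit_predict(X)

print("cluster sizes:", [int((labels == k).sum()) for k in range(3)])
print("cluster centres:\n", kmeans.cluster_centers_)
```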
Reinforcement Learning
In reinforcement learning, models are trained via trial and error: the system is placed in different scenarios and, depending on how it reacts, it is rewarded or punished. Think of a bot that plays a game or the control system of a self-driving car. The system’s goal is to maximize its rewards. Example methods are listed below, followed by a short Q-learning sketch:
- Q-Learning: Q-Learning is a model-free reinforcement learning algorithm that learns the optimal action-selection policy for an agent in an environment. Q-Learning uses a table of Q-values to estimate the expected reward for taking a particular action in a particular state. The agent learns through trial and error, by updating the Q-values based on the rewards received.
- Policy Gradient Methods: Policy gradient methods are a class of reinforcement learning algorithms that learn a parameterized policy function, which maps the current state to an action. The policy is learned by optimizing a reward function using gradient ascent. Policy gradient methods can be used to solve problems with continuous action spaces and can handle stochastic policies.
- Actor-Critic Methods: Actor-critic methods combine the advantages of value-based and policy-based methods. The actor-critic model consists of two parts: the actor, which is responsible for selecting actions, and the critic, which evaluates the actions taken by the actor. The actor-critic method can be used to solve problems with large state and action spaces.
- Monte Carlo Methods: Monte Carlo methods are a class of reinforcement learning algorithms that use random sampling to estimate the expected reward. Monte Carlo methods can be used for both model-based and model-free reinforcement learning.
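As a concrete illustration of Q-learning, here is a minimal tabular sketch using only NumPy; the toy “corridor” environment (five states in a row, move left or right, reward for reaching the rightmost state) is invented for the example.

```python
import numpy as np

n_states, n_actions = 5, 2                 # 5 states; actions: 0 = left, 1 = right
q_table = np.zeros((n_states, n_actions))  # table of Q-values
alpha, gamma, epsilon = 0.1, 0.9, 0.2      # learning rate, discount, exploration rate

def step(state, action):
    """Move left/right in the corridor; reward 1 when the goal state is reached."""
    next_state = max(0, state - 1) if action == 0 else min(n_states - 1, state + 1)
    done = next_state == n_states - 1
    return next_state, (1.0 if done else 0.0), done

for episode in range(500):
    state, done = 0, False
    while not done:
        # Epsilon-greedy action selection: mostly exploit, sometimes explore.
        if np.random.rand() < epsilon:
            action = np.random.randint(n_actions)
        else:
            action = int(np.argmax(q_table[state]))
        next_state, reward, done = step(state, action)
        # Q-learning update: move the estimate towards reward + discounted future value.
        q_table[state, action] += alpha * (
            reward + gamma * np.max(q_table[next_state]) - q_table[state, action]
        )
        state = next_state

print(q_table)  # "right" should end up with the higher value in every state
```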
Deep Learning
Deep learning trains models using neural networks. Neural networks can have multiple layers of individual nodes between input and output; these are called hidden layers. The outputs of the nodes in one layer are connected to the inputs of the nodes one layer deeper, and each node applies its own function. The goal of these neural networks is to mimic the functionality of the human brain. Deep learning uses neural networks with many such hidden layers to complete complex tasks like chatbots, medical diagnostics, or image creation. Example methods are listed below, followed by a short network sketch:
- Convolutional Neural Networks (CNNs): CNNs are a type of neural network commonly used for image and video recognition tasks. They use convolutional layers to extract features from input images or videos, and pooling layers to reduce the spatial dimensions of the feature maps. CNNs have been used for a wide range of applications, such as object detection, face recognition, and medical imaging.
- Recurrent Neural Networks (RNNs): RNNs are a type of neural network commonly used for sequential data processing tasks, such as speech recognition, language translation, and text generation. RNNs use recurrent connections to maintain information about previous inputs and produce outputs based on this information. Long Short-Term Memory (LSTM) networks and Gated Recurrent Units (GRUs) are popular types of RNNs.
- Generative Adversarial Networks (GANs): GANs are a type of neural network architecture that consists of two networks: a generator and a discriminator. The generator network learns to generate new data samples that are similar to the training data, while the discriminator network learns to distinguish between the generated samples and the real data. GANs have been used for tasks such as image and video synthesis, and style transfer.
- Autoencoders: Autoencoders are a type of neural network that learns to encode input data into a lower-dimensional representation, and then decode the representation back into the original data. Autoencoders can be used for tasks such as data compression, denoising, and anomaly detection.
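As a concrete illustration, here is a minimal feed-forward network with two hidden layers, assuming PyTorch is installed; the layer sizes and the random data are purely illustrative (a real CNN, RNN, GAN, or autoencoder would use more specialised layers).

```python
import torch
import torch.nn as nn

# A small network: input layer, two hidden layers, output layer.
model = nn.Sequential(
    nn.Linear(10, 32),  # input (10 features) -> first hidden layer
    nn.ReLU(),
    nn.Linear(32, 16),  # first hidden layer -> second hidden layer
    nn.ReLU(),
    nn.Linear(16, 2),   # second hidden layer -> output (e.g. two classes)
)

x = torch.randn(64, 10)           # a batch of random "examples" (illustrative)
y = torch.randint(0, 2, (64,))    # random labels, for illustration only

loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

for _ in range(100):              # a few training steps
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)   # forward pass and loss computation
    loss.backward()               # backpropagation through the hidden layers
    optimizer.step()              # update the weights in every layer

print("final loss:", loss.item())
```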
Transfer Learning
Transfer learning reuses a model that was trained on a similar problem (for example, detecting cars) as the starting point for training a model on a new task (for example, detecting trucks). Example methods are listed below, followed by a short fine-tuning sketch:
- Fine-tuning: Fine-tuning is a method of transfer learning in which a pre-trained model is adapted to a new task by fine-tuning some of its parameters on a new dataset. The pre-trained model is usually trained on a large dataset for a similar task, such as image classification. Fine-tuning can be done by unfreezing the last few layers of the pre-trained model and retraining them on the new dataset, while keeping the rest of the model frozen.
- Feature extraction: Feature extraction is a method of transfer learning in which the features learned by a pre-trained model are extracted and used as input for a new model that is trained on a new task. This is done by removing the last few layers of the pre-trained model and using the output of the remaining layers as the input to the new model. The new model is then trained on a new dataset for the new task.
- Multi-task learning: Multi-task learning is a method of transfer learning in which a single model is trained to perform multiple related tasks simultaneously. This is done by sharing the lower layers of the model between the different tasks, while keeping separate output layers for each task. Multi-task learning can improve the performance of the model on each task by allowing it to learn shared representations that are useful for multiple tasks.
- Domain adaptation: Domain adaptation is a method of transfer learning in which a model trained on one domain is adapted to a new domain with a different distribution of data. This is done by using techniques such as adversarial training, where the model is trained to generate features that are invariant to the domain shift.
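As a concrete illustration of fine-tuning, here is a minimal sketch assuming PyTorch and a recent torchvision (0.13 or newer); the ImageNet-pre-trained ResNet-18 and the two target classes (say, truck vs. no truck) are illustrative assumptions.

```python
import torch.nn as nn
from torchvision import models

# Load a model pre-trained on ImageNet as the starting point.
model = models.resnet18(weights="IMAGENET1K_V1")

# Freeze all pre-trained parameters so they are not updated during training.
for param in model.parameters():
    param.requires_grad = False

# Replace the final fully connected layer with a new, trainable head for the
# new task; only this layer will be trained on the new dataset.
model.fc = nn.Linear(model.fc.in_features, 2)

# From here, model.fc would be trained on the (smaller) labelled dataset for
# the new task with a normal training loop, as in the deep learning sketch above.
```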
Comparison of the Variants
| Criteria | Supervised ML | Unsupervised ML | Reinforcement ML | Deep ML | Transfer ML |
| --- | --- | --- | --- | --- | --- |
| Definition | Learns from labelled data | Trained on unlabelled data without any guidance | Learns by interacting with the environment | Tries to mimic the human brain | Reuses a pre-trained model as the starting point for a new task |
| Type of data | Labelled data | Unlabelled data | No predefined data | Large amounts of labelled data | Labelled data |
| Types of problems | Regression and classification | Association and clustering | Exploitation vs. exploration | Classification, clustering, exploration | Tasks with limited labelled data |
| Supervision | Requires supervision (labels) | No supervision | No supervision | Depends on the use case | Supervision needed for the pre-training phase |
| Methods | Linear regression, logistic regression, SVM, KNN, etc. | k-means, c-means, Apriori | Q-learning, policy gradient methods | Convolutional Neural Networks (CNNs), Generative Adversarial Networks (GANs) | Fine-tuning, feature extraction |
| Goal | Calculate outcomes | Discover underlying patterns | Learn a series of actions | Analyse and learn from data using artificial neural networks | Reduce the amount of labelled data needed to train models |
| Application | Risk evaluation, sales forecasting | Recommendation systems, anomaly detection | Self-driving cars, gaming, healthcare | Image and speech recognition, natural language processing, recommendation systems | Computer vision, natural language processing, speech recognition |
This concludes the first part of our blog series about machine learning and blockchain. After covering the basics and explaining some ML variants, we will take a closer look at how these variants can be used on the blockchain in the second part. In the third part, we will present some future use cases showing how ML and blockchain could be combined into business models.
Sources:
https://mitsloan.mit.edu/ideas-made-to-matter/machine-learning-explained
https://www.ibm.com/topics/supervised-learning
https://www.guru99.com/unsupervised-machine-learning.html
https://www.guru99.com/reinforcement-learning-tutorial.html
https://link.springer.com/article/10.1007/s42979-021-00815-1