Difference between AI, machine learning, and deep learning

Machine learning is a subset of artificial intelligence (AI), and deep learning is a subset of machine learning.
Emroj Hossain
4 min read
Sat Dec 28 2019

Though artificial intelligence (AI), machine learning, and deep learning are often referred to in the same context, they are not the same. Machine learning is a subset of AI, and deep learning is a subset of machine learning. Let me discuss each concept separately.

Relationships of deep learning, machine learning, and artificial intelligence

Artificial intelligence (AI)

The term artificial intelligence was first coined by John McCarthy in 1956 and is concerned with designing intelligence into artificial devices. Artificial devices are easy to understand, but
What is intelligence?

Is it behaving like a human?
Is it acting like a human?
Is it thinking like a human?
Or is it behaving or thinking in the best possible manner?

There are two schools of thought on defining intelligence:

1. Artificially intelligent machines should behave rationally, in the best possible manner.

2. In the other view, artificially intelligent machines should behave like humans and be able to mimic them.

When we talk about behaviour, what sort of behaviour are we talking about?
There are two main types of behaviour that people consider when defining artificial intelligence.

1. Thinking intelligently.

2. Acting intelligently.

One of the most common ways to confirm artificial intelligence in a system is the Turing test. In this test, an interrogator questions a system (human or machine) kept separated from them. If the interrogator cannot differentiate between the responses of the human and the machine, the machine is said to possess artificial intelligence.

One way to make a system artificially intelligent is to code knowledge into the system in some formal language; this is known as the knowledge-based approach to artificial intelligence. One of the projects that has used the knowledge-based approach is Cyc. In Cyc, statements are fed into the system's database by a human operator using a language called CycL. Since the statements are entered by humans, the process is time-consuming, and it is very difficult to describe the complexity of the real world with a few formal rules, although this works well for small, well-defined tasks such as playing chess. Let me give you an example to illustrate the difficulty of capturing that complexity with formal rules.
Cyc failed to understand a story about a man named Fred shaving in the morning. The program detected a discrepancy in the story: it knew that humans do not have electrical parts in their bodies, but since Fred was holding an electric razor, it asked whether Fred was still human while he was shaving. From a human perspective, the story contains no discrepancy at all, yet it is very difficult to formalize the complex rules of the real world for machines. This problem can be addressed by a subset of artificial intelligence known as machine learning.

Machine learning

The difficulty faced by such AI systems can be overcome if they have the ability to acquire knowledge by extracting patterns from raw data. This capability of an AI system to gain knowledge by finding patterns in raw data is known as machine learning, and it is a subset of artificial intelligence. Machine learning can be formally defined in the following manner:

“A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P if its performance at tasks in T, as measured by P, improves with experience E”

Tom Mitchell

Let me give an example of machine learning. Prediction of a patient's disease can be accomplished using a machine learning algorithm. In this case, doctors might provide several measurements about the patient as features, and the machine learns patterns from those features and then detects the patient's disease.
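To make this concrete, here is a minimal sketch in Python using scikit-learn, with made-up patient features; the feature choices, values, and library are illustrative assumptions, not part of the article. In Mitchell's terms, the labelled records are the experience E, disease prediction is the task T, and prediction accuracy would be the measure P:

```python
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: each row is one patient's features,
# e.g. [age, blood pressure, glucose level]; labels are 0 = healthy, 1 = sick.
X_train = [[45, 130, 90], [60, 160, 150], [30, 110, 85], [55, 150, 140]]
y_train = [0, 1, 0, 1]

model = LogisticRegression()
model.fit(X_train, y_train)        # learn patterns from experience E

new_patient = [[50, 145, 135]]
print(model.predict(new_patient))  # predicted label for an unseen patient
```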

The performance of a machine learning algorithm depends heavily on the representation of the data. For example, as shown in the image, the data points are difficult to separate if they are represented in the Cartesian coordinate system (x, y), but the same data points are easy for a machine learning algorithm to separate if they are represented in the polar coordinate system (r, θ).

Representation of data
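The coordinate-change trick can be shown in a few lines of Python (a hypothetical dataset of two concentric rings, assumed here for illustration): in (x, y) no single threshold separates the classes, while in polar coordinates the radius r alone does:

```python
import numpy as np

# Hypothetical data: two classes of points lying on concentric rings.
rng = np.random.default_rng(0)
angles = rng.uniform(0, 2 * np.pi, 200)
radii = np.concatenate([rng.normal(1.0, 0.1, 100),   # class 0: inner ring
                        rng.normal(3.0, 0.1, 100)])  # class 1: outer ring
x, y = radii * np.cos(angles), radii * np.sin(angles)
labels = np.concatenate([np.zeros(100), np.ones(100)])

# In Cartesian (x, y) no single threshold works; in polar coordinates
# the radius r separates the classes perfectly.
r = np.sqrt(x**2 + y**2)
predictions = (r > 2.0).astype(float)  # threshold between the two ring radii
print("accuracy:", (predictions == labels).mean())  # ~1.0
```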

From the above example, it is clear that the representation of the data plays an important role in the performance of a machine learning algorithm. Many artificial intelligence tasks can be solved efficiently if one provides the right set of features to the machine learning algorithm. But extracting the right set of features and finding the right representation of the data is not always possible for a human. This problem can be overcome by using a machine learning algorithm not only to find patterns in the data but also to learn a proper representation of the data. This approach is commonly known as representation learning. An autoencoder is an example of representation learning: a model consisting of two parts, an encoder and a decoder, where the encoder compresses the input into a small code and the decoder reconstructs the input from it, so the code learns to describe the patterns in the data efficiently.
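Here is a minimal autoencoder sketch in PyTorch, assuming 784-dimensional inputs (e.g. flattened 28x28 images) and an 8-dimensional code; the sizes are illustrative assumptions, not the article's. Minimizing the reconstruction error forces the small code to capture the structure of the data:

```python
import torch
import torch.nn as nn

# Minimal autoencoder sketch (hypothetical sizes): the encoder compresses
# 784-dim inputs to an 8-dim code; the decoder reconstructs the input.
class Autoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(784, 64), nn.ReLU(),
                                     nn.Linear(64, 8))          # compact code
        self.decoder = nn.Sequential(nn.Linear(8, 64), nn.ReLU(),
                                     nn.Linear(64, 784))
    def forward(self, x):
        return self.decoder(self.encoder(x))

model = Autoencoder()
x = torch.rand(16, 784)                     # dummy batch of flattened images
loss = nn.functional.mse_loss(model(x), x)  # reconstruction error to minimize
loss.backward()
```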

One of the main aims of representation learning is to separate the factors of variation that describe the data. But in real-world scenarios it is sometimes very difficult to separate these factors. Take the example of a machine learning algorithm detecting or classifying cars. The shape of a car depends on the viewing angle, and every pixel of a car image changes with the lighting conditions. So it becomes very difficult to extract high-level, complex, abstract features from the raw data. This difficulty can be overcome by using a class of machine learning algorithms known as deep learning.

Deep learning

Deep learning solves the main problem of representation learning by expressing a complex representation in terms of simpler representations. A deep learning algorithm consists of many layers; the depth of the model depends on the number of layers. Complex concepts are built up from relatively simpler concepts at each layer. Let's take an example that will help us understand the concept more clearly. Suppose a deep learning network detects a human face in an input image. The first layer of the model might detect edges and contours from the input pixel values of the image. The next layer uses the edges and contours from the previous layer to build relatively more complex concepts such as eyes, noses, etc. The layer after that uses the concepts of the previous layer and detects a human face. Neural networks are used to build deep learning models.
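As a sketch of this layer-by-layer idea, here is a tiny convolutional network in PyTorch; the layer sizes and the 64x64 input are illustrative assumptions, not the article's model. Early layers can respond to edges and contours, later layers to part-like patterns, and the final layer produces a single face score:

```python
import torch
import torch.nn as nn

# Hypothetical deep model: each stage builds more abstract features.
face_scorer = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),   # stage 1: edges, contours
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),  # stage 2: parts (eyes, nose)
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 1),                              # stage 3: whole-face score
)

image = torch.rand(1, 3, 64, 64)   # dummy 64x64 RGB image
print(face_scorer(image).shape)    # torch.Size([1, 1])
```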


Why is deep learning popular now though it existed long before?

Deep learning has a long and rich history dating back to the 1940s, but it gained popularity only recently, mostly in this decade.