08 December 2020 15:28:44 IST

Kashyap Kompella is the CEO of the global technology industry analyst firm RPA2AI Research. RPA2AI advises global corporations, venture capital/private equity firms, and government agencies in AI investments, enterprise AI, AI policies, and ethics.  Thinkers360 ranks Kashyap among the top ten global thought leaders on artificial intelligence, digital transformation, and emerging technologies. He is the co-author of the Amazon bestseller “Practical Artificial Intelligence.” Kashyap conducts masterclasses on AI for senior leaders and he is a visiting faculty for the Institute of Directors on transformative technologies. Kashyap attended BITS, Pilani, and ISB, Hyderabad. He also has a master’s in business laws from National Law School, Bangalore, and is a CFA charter holder.

Diving into Deep Learning

Powerful applications in a wide variety of areas, from image and speech recognition to video processing

AI is certainly a hot area today, but it is by no means a new field. In fact, the term Artificial Intelligence was coined nearly 65 years ago, in 1956. For nearly two decades after that, there was a wave of great hope around intelligent computers and the hard problems they’d be able to solve. But this first AI wave was followed by many years of disappointment as the limitations of AI became apparent, and funding for AI projects dried up. A second AI wave boomed in the early 1980s, and it too was followed by an “AI winter” that lasted till the early 1990s.

The field of AI has been making steady progress during all these years, but it is the public perception of AI that has kept alternating between hype and realism. In the present era, particularly in the last eight to ten years, we have seen the return of a bigger AI wave. To be sure, hype abounds, but the potential applications of AI are more widespread, and even if the interest in AI cools a bit, there are still a reasonable number of applications to be developed just with currently existing AI capabilities. So, what has fuelled the rise of AI now? It’s deep learning, which represents a step-jump from the previous generations of AI.

But what exactly is deep learning?

Deep Learning is the popular name for a branch of machine learning techniques called Artificial Neural Networks (neural nets). Neural nets are said to be inspired by how the human brain works. A neural network is a data processing mechanism, arranged as a series of layers of artificial “neurons” that are connected to other neurons in the setup. The connections between the neurons are called edges.

Each neuron can receive a signal (input), process it (transform the input by performing a calculation), and pass the output along to neurons in the next layer. Thus, a neural net consists of an input layer, several processing layers, and an output layer. The processing layers are called hidden layers and can perform different types of transformations on their inputs. The associations between different neurons are stored as numerical weights (a weight indicates how strongly an incoming neuron’s signal influences the receiving neuron).
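The processing done by a single neuron can be sketched in a few lines of code. This is a minimal illustration, assuming a sigmoid activation function (real networks use a variety of activations), with made-up input and weight values:

```python
# A minimal sketch of one artificial neuron: take the inputs,
# weight them, add a bias, and squash the result with an activation.
import math

def neuron(inputs, weights, bias):
    # Weighted sum of the incoming signals, plus a bias term
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    # Sigmoid activation squashes the result into the range (0, 1)
    return 1 / (1 + math.exp(-total))

output = neuron(inputs=[0.5, 0.8], weights=[0.4, -0.2], bias=0.1)
print(round(output, 3))  # prints 0.535
```

In a full network, this output becomes one of the inputs to the neurons in the next layer, and so on through the hidden layers to the output layer.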

Training the neural network consists of processing known inputs and known outputs so that, by an iterative, trial-and-error process, the weights of the nodes in the network layers are computed to correctly map the given inputs to the given outputs. Of course, this is a simplified description of what a neural network actually does, just to give you an illustration. There are some pretty complex and sophisticated computer science, mathematics, and probability techniques involved in making these calculations.
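The trial-and-error process above can be sketched with a toy example: a "network" with a single weight learning the mapping y = 2x from known input-output pairs. Real training uses backpropagation across millions of weights; this just shows the guess-compare-adjust loop:

```python
# Toy sketch of the training loop: start with a guessed weight,
# compare predictions against known outputs, nudge the weight
# to reduce the error, and repeat.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # known inputs and outputs (y = 2x)
w = 0.0      # initial guess for the weight
lr = 0.05    # learning rate: how big each correction step is

for step in range(200):
    for x, y in data:
        pred = w * x          # the model's current guess
        error = pred - y      # how far off the guess is
        w -= lr * error * x   # adjust the weight to shrink the error

print(round(w, 2))  # prints 2.0 -- the learned weight
```

After training, the learned weight is what gets used to make predictions on new, unseen inputs.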

But what I want you to take away is this: we are interested in determining the weights of the individual nodes in the neural network layers so that we can use them in our real-world applications.

The “deep” in deep learning refers to the number of layers in the neural net architecture, not any profound insights that it may lead us to! The weights that the neural network is computing or learning are sort of like Coke’s closely guarded formula.

What’s behind the rise of deep learning?

Artificial Neural Networks are not a new technique per se, but today’s deep/large neural nets are only possible because we have some seriously powerful computers compared to before (Moore’s Law has been in action for the last 45 years!) and because of the availability of large amounts of data. The performance of deep learning-based models has improved so much in recent years that for certain tasks (for example, image recognition), they achieve or surpass human performance.

Machine Learning vs Deep Learning

One difference between traditional machine learning and deep learning is that deep learning scales with data. In other words, the performance of traditional models plateaus after a certain amount of data, but the performance of deep learning models keeps improving with more data.

Another difference is that deep learning works well for unstructured data (such as images, videos, audio, and text), while traditional machine learning works better with structured data (data in tabular form).

What does all of this mean for you as a manager?

With some understanding of neural networks, keep these in mind:

1. When you read “AI” in the media headlines, it usually refers to applications of deep learning. And when you think of deep learning, think of large neural nets, whose secret sauce is the “weights” they are computing.

2. Given the neural net architecture described above, you can see that there is no intuitive link between the inputs and the outputs of a deep learning model. That is why you see AI being described as a “Blackbox”.

3. But real-life situations require an explanation: just think of granting or denying a loan to a customer. With deep learning, you don’t have direct answers; the weights between the different neurons are just a bunch of numbers, not a narrative. There is an emerging area called “Explainable AI” that tries to come up with the reasons for such decisions based on the model weights. We’ll explore Explainable AI in more detail in a future column.

4. Simpler machine learning models (for example, decision trees) potentially don’t perform as well as deep learning, but the reasons for any decision are easy to understand. So there is a trade-off between the explainability and performance of models, and depending on the context, you may decide to forego the enhanced performance for simplicity.

5. The Blackbox nature of deep learning applications potentially creates new security risks. When the inputs are changed (for example, by altering a few pixels in an image), human users may not perceive any difference, but the output of the model can be very different.

6. Deep learning has powerful applications in a wide variety of areas — image recognition, speech recognition, natural language processing, video processing, document analysis and so much more. The plethora of AI applications across industries and functions is really about leveraging deep learning for different usage scenarios. As a manager, your job would be to figure out which of these use cases make sense for your organisation.
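The explainability trade-off in point 4 can be made concrete with a hypothetical decision-tree-style loan rule (the thresholds here are invented for illustration). Every decision comes with a readable reason, which a neural net’s weights cannot provide directly:

```python
# A sketch of why simpler models are easier to explain: a decision
# tree's logic reads as plain if/else rules, each branch giving a
# human-readable reason. (Hypothetical loan-approval thresholds.)
def approve_loan(income, credit_score):
    if credit_score < 600:
        return False, "credit score below 600"
    if income < 30000:
        return False, "income below 30,000"
    return True, "meets credit and income thresholds"

decision, reason = approve_loan(income=45000, credit_score=720)
print(decision, "-", reason)  # prints: True - meets credit and income thresholds
```

A deep learning model might approve or deny the same applicant more accurately, but it could not hand the customer a reason like this off the shelf.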
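The security risk in point 5 can also be illustrated with a toy sketch. This is not a real attack, just a hypothetical linear classifier with made-up weights, showing how tiny, coordinated changes to the inputs can flip the output even though the inputs barely change:

```python
# Toy illustration of input sensitivity: a linear classifier with
# large weights flips its decision when each "pixel" is nudged by
# just 0.03 in the direction that hurts its score.
def score(pixels, weights):
    return sum(p * w for p, w in zip(pixels, weights))

weights = [8.0, -8.0, 8.0, -8.0]      # hypothetical trained weights
image   = [0.52, 0.50, 0.52, 0.50]    # classified positive (score > 0)

# Nudge each pixel slightly against the sign of its weight
tweaked = [p - 0.03 if w > 0 else p + 0.03
           for p, w in zip(image, weights)]

print(score(image, weights) > 0)    # prints True  -- original is positive
print(score(tweaked, weights) > 0)  # prints False -- tiny change flips the label
```

To a human, the original and tweaked inputs look essentially identical; to the model, they land on opposite sides of the decision boundary. Full-scale versions of this idea are known as adversarial examples.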

With the basics of Artificial Intelligence, Machine Learning and Deep Learning out of the way, we’ll start exploring the different use cases starting with the next column of Future Tense.