Artificial intelligence is today what the internet was 20 years ago. A lot of people are interested in the field; very few are actually experts in it. And the people already in the field are constantly evolving, learning new things day in and day out, being exposed to knowledge that was previously unheard of. In some cases, even unimagined.
Artificial intelligence is a complex amalgamation of several other domains, including but not limited to statistics, probability, linear algebra, computer science, psychology, linguistics, and philosophy. It is the ability of machines to apply knowledge to real-world problems. Stuart Russell, in his book Artificial Intelligence: A Modern Approach, stated that definitions of AI fall into four organized categories: Thinking Humanly, Thinking Rationally, Acting Humanly, and Acting Rationally. Humans are addictively fascinated with defining and categorizing everything. The benefit I see in that exercise is that it makes communication easy (please note I said communication, not understanding). There are many definitions for Artificial Intelligence (AI), Machine Learning (ML), and Deep Learning (DL). We can say that right now the AI field is still in its infancy and the nomenclature is fluid. There is tremendous opportunity in the field.
Machine learning, on the other hand, is a subset of artificial intelligence. According to SAS, “Machine learning is a method of data analysis that automates analytical model building. It is a branch of artificial intelligence based on the idea that systems can learn from data, identify patterns and make decisions with minimal human intervention.” An even more interesting subset of machine learning is deep learning. This field is considered the most popular among researchers, developers, and even business owners, because it widens the gaze of possibility to the previously unthinkable. Deep learning is based on learning data representations, which essentially means that to train a computer to do what you want it to do, you first need to accumulate data for the computer to learn from. A lot of data. This learning can be broadly classified into three categories: supervised, unsupervised, and semi-supervised.
Src — https://goo.gl/2biUeH
I’ll take some simple examples to make these concepts clear. When you started learning letters or numbers, what is the first thing you think your teacher might have told you? “Here are a bunch of random shapes, tell me what ‘two’ looks like”? I’m pretty sure that wasn’t the case. Your teacher would have first told you what ‘one’ looks like and asked you to practice it 15 or 20 times, possibly 50 if you didn’t get better. Then your teacher would tell you what ‘two’ looks like and ask you to practice it, and so on and so forth. Well, this is a classic example of supervised learning. You tell the computer what it is supposed to learn by explicitly telling it what to look out for, supplying what are called labels. The machine then iterates over tens of thousands, or possibly even hundreds of thousands, of the examples you provide and forms a model that it can use to identify any future, previously unseen instance of the character. That is supervised learning in short: supplying labeled data to the machine for it to learn from and form a model.
Src — https://goo.gl/rVw2K9
A good example of this is making computers identify objects. You supply the computer with millions of images, telling it: well, this is what a dog looks like. This is what a cat looks like. That... is a lamp. And several other things you encounter in day-to-day life. The machine then forms a model for future classification of unseen things.
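The idea above can be sketched in a few lines of code. This is a toy nearest-centroid classifier, not a real image model: the “images” are made-up two-number feature vectors, and the labels are what makes it supervised.

```python
# A minimal supervised-learning sketch: a nearest-centroid classifier.
# The feature vectors and labels below are invented for illustration.

def train(examples):
    """Build a model (one centroid per label) from labeled data."""
    sums, counts = {}, {}
    for features, label in examples:
        s = sums.setdefault(label, [0.0] * len(features))
        for i, v in enumerate(features):
            s[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in s] for label, s in sums.items()}

def predict(model, features):
    """Classify an unseen example by its closest centroid."""
    def dist(centroid):
        return sum((a - b) ** 2 for a, b in zip(features, centroid))
    return min(model, key=lambda label: dist(model[label]))

# Labeled data: each example comes with an explicit answer.
labeled = [([0.9, 0.1], "dog"), ([0.8, 0.2], "dog"),
           ([0.1, 0.9], "cat"), ([0.2, 0.8], "cat")]
model = train(labeled)
print(predict(model, [0.85, 0.15]))  # a new, unseen example → dog
```

A real system would use millions of images and a far richer model, but the loop is the same: labeled examples in, a model out, predictions on unseen data.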
Moving on, have you ever solved a spot-the-difference puzzle? If you haven’t, the concept is simple; the puzzle, however, isn’t. You’re given two almost identical images, with a few changes in one of them, and you need to find those differences. You’re not exactly sure what you’re looking for, but when you find a difference, you circle it. Unsupervised learning works somewhat along those lines. In contrast to supervised learning, you do not give labels to the machine; that is, you do not explicitly state what the differences are, but leave the machine to figure that out. A good real-world example is clustering emails to separate out spam.
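To make the contrast concrete, here is a toy unsupervised sketch: a one-dimensional k-means with two clusters. The numbers (imagine them as some score per email) are invented for illustration; the point is that no labels are supplied, and the grouping emerges on its own.

```python
# A minimal unsupervised-learning sketch: 1-D k-means with two clusters.
# No labels are given; points are grouped purely by similarity.

def kmeans_1d(points, iters=10):
    centers = [min(points), max(points)]  # crude initialization
    for _ in range(iters):
        clusters = [[], []]
        for p in points:
            # assign each point to its nearest center (index 0 or 1)
            clusters[abs(p - centers[0]) > abs(p - centers[1])].append(p)
        # move each center to the mean of its cluster
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers, clusters

# Two natural groups emerge with no labels supplied.
points = [1.0, 1.2, 0.8, 9.0, 9.5, 8.7]
centers, clusters = kmeans_1d(points)
print(sorted(clusters[0]), sorted(clusters[1]))
```

Notice that the algorithm never learns what the groups *mean*; like circling a difference in the puzzle, it only finds that they are different.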
Src — https://goo.gl/oNQvuS
Switching gears from our previous example of learning numbers through supervised learning, take this next example with a grain of salt. When you were learning numbers, did you learn each and every number known to humanity? Or were you just told: this is a hundred, this is a thousand, and this is a million? Fill in the numbers in between. Sometimes it doesn’t make sense to learn everything from labeled data. Some things can simply be inferred from the knowledge we’ve already developed.
As you may have guessed, semi-supervised learning algorithms are trained on a combination of labeled and unlabeled data. “This is useful for a few reasons. First, the process of labeling massive amounts of data for supervised learning is often prohibitively time-consuming and expensive. What’s more, too much labeling can impose human biases on the model. That means including lots of unlabeled data during the training process actually tends to improve the accuracy of the final model while reducing the time and cost spent building it.” — datascience.com
One real-world application of semi-supervised learning is webpage classification. Say you want to classify any given webpage into one of several categories (like “Educational”, “Shopping”, “Forum”, etc.). This is a case where it’s expensive to go through tens of thousands of web pages and have humans annotate them (imagine how boring and strenuous that would be). So you train on a small set of labeled examples, and then you train on the unlabeled data.
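One common way to combine the two kinds of data is self-training: fit a model on the few labeled examples, use it to guess (“pseudo-label”) the unlabeled ones, then refit on everything. The sketch below uses a single made-up feature per page and the “Forum”/“Shopping” categories from above, purely for illustration.

```python
# A minimal semi-supervised sketch (self-training) with 1-D centroids.
# The per-page feature values and categories are invented for illustration.

def centroids(pairs):
    """One centroid (mean feature value) per label."""
    sums, counts = {}, {}
    for x, label in pairs:
        sums[label] = sums.get(label, 0.0) + x
        counts[label] = counts.get(label, 0) + 1
    return {label: sums[label] / counts[label] for label in sums}

def nearest(model, x):
    """Label of the closest centroid."""
    return min(model, key=lambda label: abs(x - model[label]))

labeled = [(0.1, "Forum"), (0.9, "Shopping")]        # expensive human annotations
unlabeled = [0.15, 0.2, 0.8, 0.85, 0.95]             # cheap, plentiful pages

model = centroids(labeled)                            # 1: fit on labeled data
pseudo = [(x, nearest(model, x)) for x in unlabeled]  # 2: pseudo-label the rest
model = centroids(labeled + pseudo)                   # 3: refit on everything
print(nearest(model, 0.25))  # → Forum
```

The unlabeled pages sharpen the centroids even though no human ever annotated them, which is exactly the cost saving the quote above describes.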
So now that we understand what supervised, unsupervised, and semi-supervised learning are, you have a basic understanding of what goes on behind the scenes in some real-world applications.
Quite intriguing, isn't it?