
What is Machine Learning?

When machines learn without being explicitly programmed, it is called machine learning. The classic definition of machine learning, coined by Tom Mitchell, is: “A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P if its performance at tasks in T, as measured by P, improves with experience E.”

Dissecting the above definition, we can infer the following:

  1. Machine learning is a subfield of computer science wherein a computer program is the machine that learns. The computer program often utilizes principles from statistics, linear algebra and optimization theory to do its work.
  2. Machine learning is narrowed down to a particular task or class of tasks, which makes it a subfield of artificial intelligence. AI aims to bring a cognitive understanding to machines, similar to the learning and problem-solving cycle of a human mind. In contrast, machine learning focuses only on the task at hand.
  3. Machine learning is iterative; machines learn in cycles and improve with repetition and experience. A machine learning program is primarily based on an algorithm whose performance can be tuned by passing various sets of parameters.
  4. Machine learning thrives on data. The term “experience” in the above definition comes from feeding data to the computer program. The more data you feed the machine learning program and the more you train it, the better its performance (the sketch after this list illustrates this). Big data provides the necessary raw material for machine learning to thrive.
  5. Machine learning is not rote memorization but recognizing complex patterns and making intelligent decisions based on data.
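
To make the definition concrete, here is a minimal sketch, assuming Python with scikit-learn (the article names no library, so that choice is an assumption). The task T is classifying handwritten digits, the performance measure P is accuracy on a held-out test set, and the experience E is the number of training examples; accuracy generally rises as E grows.

    # Minimal sketch of Mitchell's definition: T = digit classification,
    # P = held-out accuracy, E = number of training examples.
    # scikit-learn is an assumed dependency, not prescribed by the article.
    from sklearn.datasets import load_digits
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    X, y = load_digits(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, random_state=42)

    # Grow the experience E and watch performance P improve on the fixed task T.
    for n in (50, 200, 800, len(X_train)):
        model = LogisticRegression(max_iter=5000)
        model.fit(X_train[:n], y_train[:n])
        print(f"E = {n:4d} examples -> P = {model.score(X_test, y_test):.3f}")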

Applications of machine learning include facial detection, speech recognition, fraud detection, cancer detection, predicting home sale prices, playing chess, self-driving cars, etc. Machine learning is becoming ubiquitous in computer applications; it is what makes them intelligent.

History of Machine Learning

The field of machine learning evolved alongside computers. In 1950, shortly after the first electronic computers were built, Alan Turing proposed a test to measure a computer’s ability to think and learn like a human. No program has definitively passed the Turing test yet, but many interesting systems and methodologies have evolved since. One of the earliest examples is IBM’s Arthur Samuel developing a program to play checkers in 1952. The program observed board positions and built a learning model; as it played more and more games, it learned from experience and got better at the game. Since then, significant advances in this field have yielded concepts like neural networks, expert systems, and statistical data analysis, with the latest being big data science, where the goal is to find, analyze, and validate patterns in big data quickly.

Big Data and Machine Learning

We are in the age of big data; there is no denying that. However, this data is only as valuable as the insights it yields. Traditional data analysis tools produce reports and graphs using simplistic approaches such as sums, aggregates, counts, and SQL queries to provide what is known as descriptive analytics. These tools also cannot deal with the huge volumes of fast-changing data from disparate sources and formats that are typical of big data and its attributes of volume, variety, veracity, and velocity. Here is where machine learning can aptly be applied: to derive the next level of analytics, called predictive and prescriptive analytics. Predictive analytics is the branch of advanced analytics where data is used to make predictions about future events. Predicting customer churn for a telecommunications organization is an example of predictive analytics. Prescriptive analytics goes a step further and prescribes an action the user can take to achieve a certain result. For example, what should the telecommunications organization do to avoid losing business from particular groups of customers?
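
To make the churn example concrete, below is a hedged sketch of predictive analytics in Python with NumPy and scikit-learn (assumed dependencies); the two features and the data are synthetic inventions for illustration, not a real telecom dataset.

    # A sketch of predictive analytics: predicting customer churn.
    # The features and data here are synthetic assumptions, not real telecom data.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)
    n = 1000
    monthly_charges = rng.uniform(20, 120, n)
    support_calls = rng.poisson(2, n)
    # Assume churn becomes likelier with high charges and many support calls.
    churn = (0.01 * monthly_charges + 0.3 * support_calls
             + rng.normal(0, 0.5, n)) > 1.5

    X = np.column_stack([monthly_charges, support_calls])
    model = RandomForestClassifier(random_state=0).fit(X, churn)

    # Estimate churn risk for a new customer: $95/month, 4 support calls.
    print(model.predict_proba([[95.0, 4]])[0][1])

A prescriptive layer could then rank possible interventions, such as a discount, by how much each is predicted to lower a customer’s churn probability.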

Machine learning applied to big data becomes a strategic asset for an organization, as proven by the likes of Uber, Netflix, Airbnb, etc. Uber uses machine learning to predict optimal positioning of vehicles to maximize profits, anticipate hazardous driving conditions, and help drivers avoid accidents. Netflix employs well-known personalization algorithms to provide personalized content that enhances customer retention. Airbnb has implemented a machine learning-based dynamic pricing feature for hosts that predicts the demand for a listing based on seasonality and the listing’s features. All three have been transformed from startups into dominant players in their respective industries.

Machine learning is not just for businesses; our daily lives are also interspersed with it. Any time you interact with a computerized system, chances are you are interacting with a machine learning program.

When you check your email, the spam detector is a machine learning program that has learned to distinguish spam emails from regular ones.
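
Such a filter can be sketched in a few lines; the snippet below is a toy illustration using scikit-learn’s Naive Bayes classifier, with an invented four-email corpus, not a production spam detector.

    # A toy sketch of a learned spam filter: bag-of-words + Naive Bayes.
    # The training emails are invented examples, not a real corpus.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    emails = ["win a free prize now", "claim your free money",
              "meeting agenda for tomorrow", "lunch at noon?"]
    labels = ["spam", "spam", "ham", "ham"]

    spam_filter = make_pipeline(CountVectorizer(), MultinomialNB())
    spam_filter.fit(emails, labels)

    print(spam_filter.predict(["free prize money"]))        # likely ['spam']
    print(spam_filter.predict(["agenda for the meeting"]))  # likely ['ham']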

When you swipe a credit card, a fraud-detecting machine learning program is watching out for fraudulent transactions and protecting you.

When you browse online for products and services, there is a machine learning recommendation engine figuring out what other products may be relevant to your interests.
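
One common technique behind such engines is item-based collaborative filtering; the sketch below (with a made-up user-item matrix and product names) scores products by the cosine similarity of their rating patterns.

    # A minimal item-based collaborative filtering sketch using cosine similarity.
    # The user-item matrix is a made-up example (rows = users, cols = products).
    import numpy as np

    products = ["laptop", "mouse", "keyboard", "blender"]
    ratings = np.array([[5, 4, 4, 0],    # user A
                        [4, 5, 5, 1],    # user B
                        [0, 1, 0, 5]])   # user C

    def cosine(u, v):
        return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

    # Find products whose rating pattern is most similar to the "mouse" column.
    target = ratings[:, products.index("mouse")]
    scores = [(p, cosine(ratings[:, i], target))
              for i, p in enumerate(products) if p != "mouse"]
    for p, s in sorted(scores, key=lambda t: -t[1]):
        print(f"{p}: similarity {s:.2f}")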

When you engage in social media, machine learning helps you find contacts and grow your network.

Big data and machine learning are fueling a massive transformation, even in healthcare, where researchers are making strides towards precision medicine. Precision medicine aims to provide personalized treatment, where the right treatment plan is prescribed based on each individual’s genetic characteristics, medical history, and body type.

Machine learning is omnipresent and important. To conclude, we quote Pedro Domingos, a machine learning researcher and professor at the University of Washington:

“People worry that computers will get too smart and take over the world, but the real problem is that they’re too stupid and they’ve already taken over the world.”
