Supervised vs. Unsupervised Learning and Structured vs. Unstructured Data

In machine learning, most tasks can be categorized into one of two broad classes: supervised learning problems or unsupervised learning problems. In supervised learning, the data has labels or classes attached to it, while in unsupervised learning the data is unlabeled. Let’s take a closer look at why this distinction is important and at some of the algorithms associated with each type of learning.

Supervised Vs. Unsupervised Learning
Most machine learning tasks are in the domain of supervised learning. In supervised learning algorithms, the individual instances/data points in the dataset have a class or label assigned to them. This means that the model can learn which features are correlated with a given class, and that the machine learning engineer can check the model’s performance by seeing how many instances were properly classified. Classification algorithms can discern many complex patterns, as long as the data is labeled with the proper classes. For instance, a machine learning algorithm can learn to distinguish different animals from each other based on characteristics like “whiskers”, “tail”, “claws”, etc.
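To make the role of labels concrete, here is a minimal sketch, assuming scikit-learn is available; the animal features and labels are invented purely for illustration, and the point is only that labels let you both train a classifier and count how many instances it got right.

```python
# A minimal sketch with invented toy data: labels make it possible to train a
# classifier and to measure how many instances were classified correctly.
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Each row describes an animal with binary features: [whiskers, tail, claws]
X = [[1, 1, 1], [1, 1, 0], [0, 1, 0], [0, 0, 0]]
y = ["cat", "cat", "dog", "fish"]            # the labels that supervision provides

model = DecisionTreeClassifier().fit(X, y)
predictions = model.predict(X)
print(accuracy_score(y, predictions))        # fraction of correctly classified instances
```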

In contrast to supervised learning, unsupervised learning involves creating a model that can extract patterns from unlabeled data. In other words, the computer analyzes the input features and determines for itself what the most important features and patterns are. Unsupervised learning tries to find the inherent similarities between different instances. Where a supervised learning algorithm aims to place data points into known classes, an unsupervised learning algorithm examines the features the instances have in common and places them into groups based on those features, essentially creating its own classes.

Examples of supervised learning algorithms are Linear Regression, Logistic Regression, K-nearest Neighbors, Decision Trees, and Support Vector Machines.

Meanwhile, some examples of unsupervised learning algorithms are Principal Component Analysis and K-Means Clustering.

Supervised Learning Algorithm Examples
Linear Regression is an algorithm that models the relationship between a numerical input feature and a numerical target, and it is used to predict numerical values from other numerical variables. In its simplest form it has the equation Y = a + bX, where b is the line’s slope and a is the intercept, the value of Y when X is 0 (where the line crosses the Y-axis).
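As a quick illustration, here is a minimal sketch of fitting that equation with scikit-learn; the numbers are made up purely to show where the slope b and intercept a come out of the fitted model.

```python
# A minimal sketch of fitting Y = a + bX with scikit-learn on made-up numbers.
import numpy as np
from sklearn.linear_model import LinearRegression

X = np.array([[1.0], [2.0], [3.0], [4.0]])   # single numerical feature
y = np.array([3.1, 4.9, 7.2, 8.8])           # numerical target to predict

model = LinearRegression().fit(X, y)
print("slope b:", model.coef_[0])            # b in Y = a + bX
print("intercept a:", model.intercept_)      # a, the value of Y when X is 0
print(model.predict([[5.0]]))                # predicted Y for a new X
```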

Logistic Regression is a binary classification algorithm. The algorithm examines the relationship between numerical features and estimates the probability that an instance belongs to one of two classes. The logistic (sigmoid) function “squeezes” the output towards either 0 or 1, so a probability near 1 strongly indicates one class while a probability near 0 strongly indicates the other.
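Here is a minimal sketch of that idea with scikit-learn; the one-feature dataset is invented, and the interesting part is that predict_proba returns class probabilities between 0 and 1.

```python
# A minimal sketch of binary classification with logistic regression;
# the numbers are invented purely for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

X = np.array([[1.0], [2.0], [3.0], [8.0], [9.0], [10.0]])
y = np.array([0, 0, 0, 1, 1, 1])             # two classes

model = LogisticRegression().fit(X, y)
# predict_proba returns probabilities squeezed between 0 and 1 by the sigmoid
print(model.predict_proba([[1.5]]))          # strongly favors class 0
print(model.predict_proba([[9.5]]))          # strongly favors class 1
```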

K-Nearest Neighbors assigns a class to a new data point based on the classes of a chosen number of its nearest neighbors in the training set. The number of neighbors considered by the algorithm matters: too few makes the model sensitive to noise, while too many blurs the boundaries between classes, and either can lead to misclassified points.
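A minimal sketch with scikit-learn, using made-up two-dimensional points; n_neighbors is the chosen number of neighbors discussed above.

```python
# A minimal sketch of K-Nearest Neighbors on invented 2-D points.
from sklearn.neighbors import KNeighborsClassifier

X = [[1, 1], [1, 2], [2, 1], [8, 8], [8, 9], [9, 8]]
y = [0, 0, 0, 1, 1, 1]

model = KNeighborsClassifier(n_neighbors=3).fit(X, y)
# Each new point is classified by a majority vote of its 3 nearest training points
print(model.predict([[2, 2], [9, 9]]))
```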

Decision Trees are a type of classification and regression algorithm. A decision tree operates by splitting a dataset into smaller and smaller subsets until they can’t be split any further, and what results is a tree made up of nodes and leaves. The nodes are where decisions about data points are made using different filtering criteria, while the leaves hold the instances that have been assigned a label (data points that have been classified). Decision tree algorithms can handle both numerical and categorical data, and each split in the tree is made on a specific variable/feature.
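The sketch below, assuming scikit-learn, fits a small tree on invented data that mixes a numerical feature with a 0/1-encoded categorical one, and prints the learned nodes (splits) and leaves (labels).

```python
# A minimal sketch of a decision tree mixing a numerical and an encoded
# categorical feature; the data and column names are invented for illustration.
from sklearn.tree import DecisionTreeClassifier, export_text

# Features: [age, is_student]  (the categorical "student" flag encoded as 0/1)
X = [[22, 1], [25, 1], [47, 0], [52, 0], [35, 1], [60, 0]]
y = ["buys", "buys", "declines", "declines", "buys", "declines"]

tree = DecisionTreeClassifier(max_depth=2).fit(X, y)
# export_text prints the splits (nodes) and the final class labels (leaves)
print(export_text(tree, feature_names=["age", "is_student"]))
```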

Support Vector Machines are a classification algorithm that operates by drawing hyperplanes, or lines of separation, between data points. Data points are separated into classes based on which side of the hyperplane they fall. Multiple hyperplanes can be drawn, dividing a dataset into multiple classes. The classifier tries to maximize the distance (the margin) between the dividing hyperplane and the points on either side of it; the greater that distance, the more confident the classifier is.
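Here is a minimal sketch of a linear SVM with scikit-learn on invented points; support_vectors_ holds the points closest to the separating hyperplane, the ones that define the margin.

```python
# A minimal sketch of a linear support vector machine on made-up 2-D points.
from sklearn.svm import SVC

X = [[1, 2], [2, 1], [2, 3], [7, 8], [8, 7], [8, 9]]
y = [0, 0, 0, 1, 1, 1]

model = SVC(kernel="linear").fit(X, y)
print(model.predict([[2, 2], [8, 8]]))   # which side of the hyperplane each point falls on
print(model.support_vectors_)            # the points closest to the margin
```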

Unsupervised Learning Algorithms
Principal Component Analysis is a technique used for dimensionality reduction, meaning that the complexity of the data is represented in a simpler fashion. The algorithm finds a new set of orthogonal dimensions (the principal components) for the data. While the dimensionality of the data is reduced, as much of the variance in the data as possible is preserved. In practical terms, it takes the features in the dataset and distills them down into fewer features that still represent most of the information in the data.
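The sketch below, assuming scikit-learn and NumPy, generates three made-up features (two of them highly correlated) and reduces them to two components; explained_variance_ratio_ shows how much of the variance each component preserves.

```python
# A minimal sketch of PCA reducing three correlated, invented features to two
# orthogonal components.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
base = rng.normal(size=(100, 1))
X = np.hstack([base,
               2 * base + 0.1 * rng.normal(size=(100, 1)),
               rng.normal(size=(100, 1))])   # three features, two highly correlated

pca = PCA(n_components=2)
X_reduced = pca.fit_transform(X)             # same rows, fewer columns
print(X_reduced.shape)                       # (100, 2)
print(pca.explained_variance_ratio_)         # variance preserved by each component
```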

K-Means Clustering is an algorithm that automatically groups data points into clusters based on similar features. The patterns within the dataset are analyzed and the data points are split into groups based on those patterns; essentially, K-Means creates its own classes out of unlabeled data. The algorithm operates by assigning centers, or centroids, to the clusters and moving the centroids until their optimal positions are found. The optimal position for a centroid is one where the distance between it and the data points assigned to its cluster is minimized. The “K” in K-Means refers to how many centroids, and therefore clusters, have been chosen.
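A minimal sketch with scikit-learn and made-up, unlabeled points; note that no labels are passed in, and the cluster assignments are the “classes” K-Means invents on its own.

```python
# A minimal sketch of K-Means with K=2 on invented, unlabeled points.
from sklearn.cluster import KMeans

X = [[1, 1], [1, 2], [2, 1], [8, 8], [8, 9], [9, 8]]   # no labels provided

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(kmeans.labels_)            # the cluster (the "class" K-Means invented) for each point
print(kmeans.cluster_centers_)   # the final centroid positions
```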

Summing Up
To close, let’s quickly go over the key differences between supervised and unsupervised learning.

As we previously discussed, in supervised learning tasks the input data is labeled and the number of classes is known, while in unsupervised learning the input data is unlabeled and the number of classes is not known. Unsupervised learning tends to be less computationally complex than supervised learning. And while supervised learning results tend to be highly accurate, unsupervised learning results tend to be only moderately accurate, in part because there are no labels against which to validate them.

Structured Vs. Unstructured Data
Unstructured data is data that isn’t organized in a pre-defined fashion or lacks a specific data model. Meanwhile, structured data is data that has clear, definable relationships between the data points, with a pre-defined model containing it. That’s the short answer on the difference between structured and unstructured data, but let’s take a closer look at the differences between the two types of data.

Structured Data
When it comes to computer science, data structures refer to specific ways of storing and organizing data. Different data structures possess different relationships between data points, but data can also be unstructured. What does it mean to say that data is structured? To make this definition clearer, let’s take a look at some of the various ways of structuring data.

Structured data is often held in tables such as Excel files or SQL databases. In these cases, the columns hold different variables or features and the rows hold individual records, so it is often possible to discern the relationship between data points by checking where rows and columns intersect. Structured data fits easily into a relational database, and examples of features in a structured dataset include items like names, addresses, dates, weather statistics, credit card numbers, etc. While structured data is most often text data, it is also possible to store things like images and audio as structured data.
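As a small illustration, here is a minimal sketch of a structured table in pandas; the column names and values are invented examples of the kinds of features mentioned above.

```python
# A minimal sketch of structured data as a table: columns are features,
# rows are records, and intersections hold individual values.
import pandas as pd

table = pd.DataFrame({
    "name": ["Alice", "Bob"],
    "address": ["12 Oak St", "99 Elm Ave"],
    "date": pd.to_datetime(["2021-03-01", "2021-03-02"]),
    "temperature_c": [14.5, 16.2],
})
print(table.loc[1, "temperature_c"])   # the value where row 1 and the temperature column intersect
```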

Common sources of structured data include things like data collected from sensors, weblogs, network data, and retail or e-commerce data. Structured data can also be generated by people filling in spreadsheets or databases with data collected from computers and other devices. For instance, data collected through online forms is often immediately fed into a data structure.

Structured data has a long history of being stored in relational databases and queried with SQL. These storage methods are popular because the formats are easy to read and write, and most platforms and languages can interpret them.
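To make that concrete, here is a minimal sketch using Python’s built-in sqlite3 module; the table and column names are hypothetical, chosen only to show structured rows being written and read back with SQL.

```python
# A minimal sketch of storing and querying structured rows with SQL via sqlite3.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, total REAL)")
conn.execute("INSERT INTO orders (customer, total) VALUES (?, ?)", ("Alice", 19.99))
conn.commit()

for row in conn.execute("SELECT customer, total FROM orders"):
    print(row)                       # ('Alice', 19.99)
conn.close()
```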

In a machine learning context, structured data is easier to train a machine learning system on, because the patterns within the data are more explicit. Certain features can be fed into a machine learning classifier and used to label other data instances based on those selected features. In contrast, training a machine learning system on unstructured data tends to be more difficult, for reasons that will become clear.

Unstructured Data
Unstructured data is data that isn’t organized according to a pre-defined data model or structure. Unstructured data is often called qualitative data because it can’t be analyzed or processed with the methods used for structured data.

Because unstructured data doesn’t have any defined relationships between data points, it can’t be organized in relational databases. Instead, unstructured data is typically stored in a NoSQL, or non-relational, database. If the structure of the database is of little concern, a data lake (a large pool of unstructured data) can be used to store the data instead of a NoSQL database.

Unstructured data is difficult to analyze, and making sense of unstructured data often involves examining individual pieces of data to discern potential features and then looking to see if those features occur in other pieces of data within the pool.
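As a small illustration of that feature-extraction process, here is a minimal sketch, assuming scikit-learn and using invented sentences, that turns a few pieces of unstructured text into a table of word-count features that can be compared across documents.

```python
# A minimal sketch of extracting features from unstructured text so that
# the same features can be checked across documents.
from sklearn.feature_extraction.text import CountVectorizer

documents = [
    "The delivery was late and the package was damaged",
    "Great delivery, the package arrived early",
]
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(documents)      # rows = documents, columns = discovered word features
print(vectorizer.get_feature_names_out())
print(X.toarray())
```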

The vast majority of data is in unstructured formats, with estimates that unstructured data comprises around 80% of all data. Data mining techniques can be used to help structure data.

In terms of machine learning, certain techniques can help order unstructured data and turn it into structured data. One popular tool for this is the autoencoder, a neural network that learns to compress unstructured inputs into compact, structured representations.
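Below is a minimal sketch of an autoencoder written with PyTorch; the layer sizes, training loop, and random stand-in data are all invented for illustration, and real use would substitute actual unstructured inputs such as flattened images or text vectors.

```python
# A minimal autoencoder sketch: compress 64 raw input values into an 8-number
# code, then learn to reconstruct the original input from that code.
import torch
from torch import nn

encoder = nn.Sequential(nn.Linear(64, 8), nn.ReLU())   # squeeze inputs into a compact code
decoder = nn.Linear(8, 64)                             # rebuild the inputs from the code

optimizer = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3
)
loss_fn = nn.MSELoss()

data = torch.rand(256, 64)                   # stand-in for unstructured input vectors
for _ in range(100):
    reconstruction = decoder(encoder(data))
    loss = loss_fn(reconstruction, data)     # learn to reproduce the input
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

codes = encoder(data)                        # compact, structured representation of each input
print(codes.shape)                           # torch.Size([256, 8])
```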