It can do it for you
Machine learning is a type of AI (Artificial Intelligence) that enables computers to learn tasks without being explicitly programmed by human developers. EXAMPLE: In 2011, IBM’s Watson software beat two Jeopardy! champions by “learning” from books and encyclopedias.
In other words, machine learning builds a “model” from example inputs to make data-driven predictions vs. following strictly static program instructions (logic defined by human developers).
IBM’s programmers created only the “algorithms” that enabled the computer to learn: “generic” (general-purpose) algorithms that build their own logic by detecting patterns in the data fed to them.
Patterns are recognized by neural network algorithms. A neural network has multiple layers. The first (top or left) layer trains on a specific set of “features” and sends that information to the next layer. That layer combines the information with other features and passes the result to the layer after it, and so on.
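The layer-by-layer flow above can be sketched in a few lines of plain Python. The weights and feature values here are invented for illustration; a real network would learn the weights during training.

```python
import math

def sigmoid(x):
    # Squash a raw activation into the range (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights):
    # One layer: each node combines all incoming features with its
    # own weights, then applies the activation function.
    return [sigmoid(sum(w * x for w, x in zip(ws, inputs))) for ws in weights]

# Made-up weights: 3 input features -> 2 hidden nodes -> 1 output node.
w_hidden = [[0.5, -0.2, 0.1],
            [0.3, 0.8, -0.5]]
w_output = [[1.0, -1.0]]

features = [0.2, 0.4, 0.6]
hidden = layer(features, w_hidden)   # first layer extracts combinations
output = layer(hidden, w_output)     # next layer combines them further
print(output)
```

Each call to `layer` is one hop down (or right across) the network: the hidden layer’s outputs become the next layer’s input features.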
Thanks to advances in distributed compute resources, organizations are generating a torrent of image, text, and voice data from which previously impossible insights can now be drawn.
YOUTUBE: A friendly introduction to Deep Learning and Neural Networks by Luis Serrano
Machine Learning is Fun! (article on Medium), self-proclaimed “the world’s easiest introduction to Machine Learning”.
Adam Geitgey’s introduction to Machine Learning https://medium.com/@ageitgey/machine-learning-is-fun-80ea3ec3c471/ “for anyone who is curious about machine learning but has no idea where to start”.
TOOL: Neural Network Playground.
Types of machine learning
Supervised learning aims to predict an output given an input. It uses a labeled training data set of examples for which we know the “answer,” so the algorithm gets feedback on what is correct and what is not. Training correlates features with outputs in order to predict outputs from new inputs.
- For text processing (Sentiment Analysis) and speech recognition: RNTN (Recursive Neural Tensor Network), which operates at the character level. RNTNs were conceived by Richard Socher of MetaMind.io as part of his PhD thesis at Stanford. In 2016 the technology became part of Salesforce Einstein Predictive Services at https://metamind.readme.io/v1/docs
Recurrent nets can process data that changes with time and have a feedback loop that acts as a forecasting engine.
RNTNs outperform feedforward and recurrent nets on data with a hierarchical structure (binary trees), such as the parse trees of a set of sentences.
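The feedback loop of a recurrent net can be sketched as a single update rule: the new hidden state depends on the current input and on the previous hidden state. The weights below are arbitrary illustration values, not trained ones.

```python
import math

def rnn_step(x, h, w_in=0.8, w_rec=0.5):
    # The new hidden state depends on the current input AND the
    # previous hidden state -- this is the feedback loop.
    return math.tanh(w_in * x + w_rec * h)

sequence = [0.1, 0.4, -0.2, 0.3]   # data that changes with time
h = 0.0                             # initial state
for x in sequence:
    h = rnn_step(x, h)              # state is fed back every time step
print(h)
```

Because the state `h` carries information forward in time, the final value summarizes the whole sequence, which is what makes recurrent nets usable as forecasting engines.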
TOOL: Clarifai uses a convolutional net to recognize things and concepts in a digital image. It then presents similar images.
For time series analysis, use a recurrent net.
For image recognition: DBN (Deep Belief Network) or Convolutional Net
For object recognition: Convolutional Net
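The supervised setup described above, where labeled examples provide the “answer,” can be shown minimally with a 1-nearest-neighbour classifier. The features and labels are invented for illustration.

```python
# Labeled training examples: feature vector -> known "answer".
train = [([1.0, 1.0], "cat"),
         ([1.2, 0.9], "cat"),
         ([5.0, 5.2], "dog"),
         ([4.8, 5.5], "dog")]

def distance(a, b):
    # Euclidean distance between two feature vectors.
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def predict(features):
    # Correlate a new input with the labeled examples and return
    # the "answer" of the closest one.
    return min(train, key=lambda ex: distance(features, ex[0]))[1]

print(predict([1.1, 1.0]))
```

The neural networks listed above do the same job at scale: they correlate features with labeled outputs so that new, unseen inputs can be classified.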
Unsupervised learning aims to discover a good internal representation of the input. It uses a data set without labels. Feature extraction: it identifies the structure of the data needed to solve a given task. Pattern recognition: it classifies patterns and clusters data.
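The clustering side of unsupervised learning can be sketched with k-means on unlabeled 1-D data. The points and starting centers are made up; the algorithm receives no labels, yet discovers the two groups on its own.

```python
# Unlabeled observations that happen to form two clusters.
points = [1.0, 1.2, 0.8, 8.0, 8.3, 7.9]
centers = [0.0, 10.0]              # arbitrary starting guesses

for _ in range(10):
    # Assign each point to its nearest center...
    groups = [[], []]
    for p in points:
        groups[0 if abs(p - centers[0]) <= abs(p - centers[1]) else 1].append(p)
    # ...then move each center to the mean of its group.
    centers = [sum(g) / len(g) if g else centers[i]
               for i, g in enumerate(groups)]
print(centers)
```

After a few iterations the centers settle near the two natural group means, a structure that was never spelled out in the data.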
Autoencoders encode their own input. They are feature extractors with the same number of input and output nodes. A hidden layer in an autoencoder forms a bottleneck of fewer nodes through which the input must be reconstructed, so autoencoders can be used for image compression.
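A tiny linear autoencoder makes the bottleneck idea concrete: 2-D points that really lie on a line are squeezed through a single hidden node and reconstructed. The data, learning rate, and training loop are all made-up illustration values.

```python
import random

random.seed(0)

# Toy 2-D data that really lives on a 1-D line (y = x), so a
# single bottleneck node is enough to represent it.
data = [(t, t) for t in (0.1, 0.3, 0.5, 0.7, 0.9)]

enc = [random.uniform(-0.5, 0.5) for _ in range(2)]  # 2 inputs -> 1 hidden
dec = [random.uniform(-0.5, 0.5) for _ in range(2)]  # 1 hidden -> 2 outputs
lr = 0.1

for _ in range(5000):
    for x in data:
        h = enc[0] * x[0] + enc[1] * x[1]        # bottleneck code
        out = (dec[0] * h, dec[1] * h)           # reconstruction
        err = (out[0] - x[0], out[1] - x[1])
        g = err[0] * dec[0] + err[1] * dec[1]    # gradient w.r.t. h
        for j in range(2):                       # squared-error gradient step
            dec[j] -= lr * err[j] * h
            enc[j] -= lr * g * x[j]

h = enc[0] * 0.5 + enc[1] * 0.5
reconstruction = (dec[0] * h, dec[1] * h)
print(reconstruction)   # close to the original input (0.5, 0.5)
```

The single number `h` is the compressed representation; because the input and output layers have the same width, a good reconstruction proves the bottleneck kept the essential information.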
RBM (Restricted Boltzmann Machine) is an autoencoder with one visible and one hidden layer. Each visible node connects to all nodes in the hidden layer.
The “restricted” part is that there are no connections among nodes within a layer.
RBM determines the relationship among input features.
In training loops, RBM reconstructs inputs in the backward pass.
RBM uses a measure called KL divergence that compares the actual data to its reconstruction.
RBM makes decisions about what features are important based on weights and overall bias.
RBM is like a two-way translator.
All this means that data in RBM does not need to be labeled.
As an example, to determine whether an animal is acerous or not acerous (has horns), it looks at features such as color, number of legs, and horns.
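The statements above fit into one small sketch: a visible and a hidden layer, weights only between the layers (the “restricted” part), a forward pass, a backward reconstruction (the “two-way translator”), and a KL divergence comparing data to reconstruction. The sizes and random weights are illustrative; a real RBM would also learn the weights and biases.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

n_visible, n_hidden = 4, 2
# One weight per visible-hidden pair; no connections inside a
# layer -- that is the "restricted" part.
W = [[random.uniform(-0.1, 0.1) for _ in range(n_hidden)]
     for _ in range(n_visible)]

def forward(v):
    # Translate visible features into hidden activations.
    return [sigmoid(sum(v[i] * W[i][j] for i in range(n_visible)))
            for j in range(n_hidden)]

def backward(h):
    # Translate hidden activations back into a reconstruction of
    # the visible layer -- the "two-way translator" at work.
    return [sigmoid(sum(h[j] * W[i][j] for j in range(n_hidden)))
            for i in range(n_visible)]

def kl_divergence(p, q):
    # Compare the actual data distribution to its reconstruction.
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def normalize(xs):
    s = sum(xs)
    return [x / s for x in xs]

v = [1.0, 0.0, 1.0, 0.0]            # an unlabeled input
h = forward(v)
v2 = backward(h)
kl = kl_divergence(normalize(v), normalize(v2))
print(h, v2, kl)
```

No label was ever attached to `v`: training would simply adjust `W` until the reconstruction (and hence the KL divergence) improves, which is why RBM data does not need to be labeled.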
Principal Component Analysis is a widely used linear method for finding a low-dimensional representation.
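PCA in two dimensions has a closed form, which keeps the sketch dependency-free: compute the covariance matrix of centred data, take its principal direction, and project each point onto it to get the low-dimensional (here 1-D) representation. The data points are made up for illustration.

```python
import math

# Toy 2-D data that varies mostly along one diagonal direction.
data = [(2.5, 2.4), (0.5, 0.7), (2.2, 2.9), (1.9, 2.2),
        (3.1, 3.0), (2.3, 2.7), (2.0, 1.6), (1.0, 1.1)]

n = len(data)
mean_x = sum(x for x, _ in data) / n
mean_y = sum(y for _, y in data) / n

# Covariance matrix entries of the centred data.
cxx = sum((x - mean_x) ** 2 for x, _ in data) / n
cyy = sum((y - mean_y) ** 2 for _, y in data) / n
cxy = sum((x - mean_x) * (y - mean_y) for x, y in data) / n

# For a 2x2 symmetric matrix the principal direction has a closed
# form: half the angle of atan2(2*cxy, cxx - cyy).
theta = 0.5 * math.atan2(2 * cxy, cxx - cyy)
direction = (math.cos(theta), math.sin(theta))

# Project each point onto the principal direction: this single
# coordinate is the low-dimensional representation of each point.
projected = [(x - mean_x) * direction[0] + (y - mean_y) * direction[1]
             for x, y in data]
print(direction, projected[:3])
```

Each 2-D point is now summarized by one number along the direction of greatest variance, which is exactly the low-dimensional representation PCA is used for.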
Reinforcement learning does not provide feedback until a goal is achieved. Its objective is to select actions that maximize the payoff.
In Dec 2013, Volodymyr Mnih and others at DeepMind (a small company in London) uploaded to arXiv a paper titled “Playing Atari with Deep Reinforcement Learning”. It describes how a “convolutional neural network” program learned to play seven games on the Atari 2600 console (Beam Rider, Breakout, Enduro, Pong, Q*bert, Seaquest, and Space Invaders) by observing raw 210 × 160 screen pixels, receiving a reward whenever the game score increased. This was hailed as a step toward general AI because the games and their goals were very different, yet the same program, without any change, learned all seven and performed better than some humans in three of them. Google bought DeepMind in early 2014 (reportedly for $400 million), and the team’s Feb. 2015 article in Nature gave the work widespread attention.
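The reward-only-at-the-goal idea can be sketched with tabular Q-learning (a much simpler relative of DeepMind’s deep Q-network) on a hypothetical 1-D corridor: the agent gets a payoff of 1 only on reaching the goal state, yet learns that “move right” is the best action everywhere. All states, rewards, and hyperparameters here are invented for illustration.

```python
import random

random.seed(1)

# Hypothetical 1-D corridor: states 0..4, reward 1 only at state 4.
# Actions: 0 = move left, 1 = move right.
n_states, goal = 5, 4
Q = [[0.0, 0.0] for _ in range(n_states)]   # estimated payoff per (state, action)
alpha, gamma, epsilon = 0.5, 0.9, 0.2

for _ in range(200):                         # training episodes
    s = 0
    while s != goal:
        # Mostly exploit the best known action, sometimes explore.
        a = random.randrange(2) if random.random() < epsilon else Q[s].index(max(Q[s]))
        s2 = max(0, s - 1) if a == 0 else min(goal, s + 1)
        reward = 1.0 if s2 == goal else 0.0  # feedback only at the goal
        # Propagate the (possibly delayed) payoff back through Q.
        Q[s][a] += alpha * (reward + gamma * max(Q[s2]) - Q[s][a])
        s = s2

# After training, "move right" should look better in every state.
print([q.index(max(q)) for q in Q[:goal]])
```

The discount factor `gamma` is what lets a single end-of-episode reward propagate backwards, so even state 0, which never sees a reward directly, learns the payoff-maximizing action.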
One-shot learning (aka Probabilistic Programming) is where a neural network learns from one (or a few) examples, as opposed to a large amount of data.
Deep Learning Frameworks
TensorFlow r0.10 from Google
Microsoft Cognitive Toolkit v2.0 Beta 1 (aka CNTK 2) from Microsoft
uses a declarative BrainScript neural network configuration language
No MacOS support yet
CNTK 2 models run on the GPU-equipped N-series family of Azure Virtual Machines.
Azure Machine Learning is part of the larger Microsoft Cortana Analytics Suite offering.
Caffe 1.0 RC3 from Berkeley Artificial Intelligence…
MXNet v0.7 from Distributed Machine Learning…
Amazon’s DNN framework of choice, it combines symbolic declaration of neural network geometries with imperative programming of tensor operations.
MXNet scales to multiple GPUs across multiple hosts with a near-linear scaling efficiency of 85 percent and boasts excellent development speed, programmability, and portability.
Its dynamic dependency scheduler allows mixing symbolic and imperative programming flavors: Python, R, Scala, Julia, and C++
Ahead of TensorFlow with embedded imperative tensor operations.
Machine Learning frameworks
Scikit-learn 0.18.1 from Scikit-learn
Mature documentation and libraries
Python-based, but does not run under the PyPy interpreter
Spark MLlib 2.0.1 from Apache Software Foundation
Written in Scala and uses the linear algebra package Breeze which uses netlib-java.
Get data easily from Spark big-data clusters
Supported in the Databricks cloud