What can possibly go wrong with robots smarter than humans?
Humans are limited and replaceable
While many organizations are still working toward a “mobile-first” approach to design (rather than “desktop-first” or “mainframe-first” design), some companies at the “bleeding edge” are moving to “AI first” design.
For example, Foxconn now assembles iPhones using mostly mechanical robots rather than human workers. Tesla, BMW, and other auto manufacturers also make heavy use of mechanical robots.
Similarly, Uber drivers are human only until they are replaced by driverless cars.
Robots are on par with Olympic athletes in table tennis.
Hit songs are written with emotional insights gained from scraping millions of conversations, newspaper headlines, and speeches. One such song, “Not Easy”, reached number four on the iTunes Hot Tracks chart, and number six on the alternative chart, within 48 hours of its release.
In offices, the trend is to replace people who read lines on screens. Instead of plotting lines on charts for people to analyze and act on, computers are making the decisions themselves.
“Machines will be capable, within 20 years, of doing any work a man can do.” –Herbert Simon (1916-2001), Nobel Laureate
This is because a human can only focus on a few things at a time.
Because humans cannot respond quickly enough, computers are beginning to take action automatically instead of sending alerts for people to act on.
Self-driving cars by Tesla, comma.ai, and others are a manifestation of this trend.
A lot can also happen within a computer in a few seconds, so vigilant actions such as scanning for malware need to be performed in real time.
A program can look for patterns in behavior and alert people to new threats detected.
In computer operations, configuration settings are increasingly updated by programs rather than by people editing files.
The point of these apocalyptic pronouncements is that AI and Machine Learning will probably not be embraced with open arms in organizations where executives view their human workforce with disdain.
To reduce the likelihood of robots being undermined by the human workforce, management needs to show that it’s not a “zero-sum game”, and that rising demand for services will result only in planned movement of people into different human roles.
Those different roles are available only when the organization is growing and will continue to grow.
Traditionally, programmers hand-code rules to detect and respond to known threats.
But this approach has not kept up with the pace of new threats.
AI (neural networks in particular) can now discover, in real-time, threats such as malware installation, phishing attacks, and brute-force intrusions which programmers did not anticipate.
They can do that because Big Data systems enable massive floods of data to be analyzed quickly. Many computers in the cloud, connected by fast network links, now process data faster than was possible in the past.
The increased scope of AI’s processing capabilities means it can now analyze many more variables quickly. For example, an AI program can quickly scan every email for phishing by looking for clues such as the originating IP address, word choice and phrasing, and many other factors.
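As a concrete sketch, a scorer that combines several weak signals might look like the following. The phrases, weights, and field names here are invented for illustration; a production system would learn them from labeled examples rather than hand-coding them.

```python
# Toy phishing scorer: combines several weak signals into one score.
# The signal list and weights are illustrative, not from a real model.

SUSPICIOUS_PHRASES = ("verify your account", "urgent action", "click here")

def phishing_score(sender_ip_flagged, body):
    """Return a score in [0, 1]; higher means more likely phishing."""
    score = 0.0
    if sender_ip_flagged:               # originating IP on a blocklist
        score += 0.5
    body_lower = body.lower()
    hits = sum(p in body_lower for p in SUSPICIOUS_PHRASES)
    score += 0.2 * hits                 # word choice and phrasing
    return min(score, 1.0)

print(phishing_score(True, "URGENT ACTION required: click here"))  # 0.9
```

The point of the sketch is that no single clue is decisive; the score aggregates many factors, which is what lets a learned model catch variations that fixed rules miss.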
Predictions from Swarm
AI can be designed to make predictions based on the data it analyzes.
Startup Unanimous A.I. (uni.ai) has, since 2015, been making accurate predictions about who will win contests such as the Super Bowl, March Madness, US presidential debates, the Kentucky Derby superfecta, and the Academy Awards. It has been more accurate than individual experts.
Its software platform, called UNU, draws conclusions not from algorithms but from the “collective wisdom” of a group of living people who influence each other with their votes in real time, like a Ouija board.
The role of humans
In the above scenarios, the role of human operators is to make sure the data sets fed into an AI engine are accurate and robust.
Data quality is more important than ever for weeding out false positives. The old adage “garbage in, garbage out” applies even more today: systems can only be as intelligent as the data they analyze.
More importantly, good AI adapts rules to deal with new conditions (threats).
AI may do that by analyzing the judgments human experts make.
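A minimal sketch of that human curation role, with hypothetical field names and validity rules, is to filter obviously bad records before they ever reach the model:

```python
# Sketch: weed out obviously bad records before they reach a model.
# Field names and validity rules are hypothetical.

records = [
    {"user": "alice", "age": 34, "logins": 12},
    {"user": "", "age": -5, "logins": 3},        # garbage: empty user, bad age
    {"user": "bob", "age": 51, "logins": None},  # garbage: missing value
]

def is_valid(rec):
    return (bool(rec["user"])
            and 0 <= rec["age"] <= 120
            and rec["logins"] is not None)

clean = [r for r in records if is_valid(r)]
print(len(clean))  # 1
```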
VIDEO: Deep Neural Networks are Easily Fooled by Evolving AI Lab
When Microsoft introduced its Tay chatbot in 2016, it shut the bot down within a day of launch because Tay began spewing racist and sexist texts; it lacked the filter that most human kids learn from their parents.
Another difference with AI is that testing cannot achieve one-to-one correspondence between input requirements and resulting outcomes.
This is because Machine Learning is fundamentally statistical: models make probabilistic estimates rather than following exact specifications. That is why people celebrated when 98% accuracy was achieved.
Machine Learning does not approach problems like a double-entry accounting system, where dollars and cents are supposed to balance out every time.
One does not bring an accounting system to a gunfight in a first-person shooter game.
Artificial Intelligence programs have beaten world champions in Jeopardy, chess, go, and poker because of algorithms which aim to learn new rules rather than following rules mechanically.
So a new mind-set is necessary to test AI.
Split data for evaluation
The now-standard approach to testing AI is to divide a large dataset into two groups: typically 70% of the data is used for training, and the remaining 30% is reserved for evaluating the trained model.
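A minimal sketch of such a 70/30 split, using Python’s standard library on a synthetic dataset:

```python
import random

# Sketch of a 70/30 train/evaluation split on a synthetic dataset.
random.seed(42)                      # fixed seed for reproducibility

dataset = list(range(1000))          # stand-in for 1,000 labeled examples
random.shuffle(dataset)              # shuffle so the split is unbiased

cut = int(len(dataset) * 0.7)
train, test = dataset[:cut], dataset[cut:]

print(len(train), len(test))  # 700 300
```

Shuffling before cutting matters: if the data is ordered (say, by date), an unshuffled split would train and evaluate on systematically different examples.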
But I argue here that this is not enough.
A case in point is an AI system that is about as serious as it gets: the system that recommends to judges how long a sentence to give convicts. That system was used for years before an investigative report analyzed its impact and discovered that African Americans and the poor were systematically given harsher sentences than white and well-to-do citizens.
“Garbage in, garbage out” still applies here.
Looking for bias
One aspect of judging the efficacy of AI results is whether they are biased along factors that were not part of the data processed by the AI system.
Again, one cannot evaluate the total impact of an AI system simply by looking at the data the AI system used.
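One simple audit, sketched below with fabricated predictions and group labels, is to compare outcome rates across a protected attribute the model never saw:

```python
# Sketch: audit model outcomes against an attribute outside the training data.
# The predictions and group labels are fabricated for illustration.

predictions = [1, 0, 1, 1, 0, 1, 1, 0]          # 1 = harsh outcome
groups      = ["a", "a", "a", "a", "b", "b", "b", "b"]

def harsh_rate(group):
    """Fraction of harsh outcomes for members of the given group."""
    outcomes = [p for p, g in zip(predictions, groups) if g == group]
    return sum(outcomes) / len(outcomes)

print(harsh_rate("a"), harsh_rate("b"))  # 0.75 0.5
```

A large gap between the two rates is not proof of bias by itself, but it flags exactly the kind of “outside the system” question a human reviewer should investigate.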
Working with data “outside the system” is the “creative” role of “higher thinking” which humans can do well.
But humans need to be emboldened by management to both recognize and name “elephants in the room”.
An organization’s “cultural history” can and often does limit whether its members speak up.
Singular Value Decomposition (SVD)
Canonical-correlation analysis (CCA)
Develop Machine Learning Models
Mean Absolute Error and Root Mean Squared Error
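Both metrics can be computed in a few lines; the toy actual/predicted values below are illustrative:

```python
import math

# Mean Absolute Error and Root Mean Squared Error on a toy regression.
actual    = [3.0, 5.0, 2.0, 7.0]
predicted = [2.5, 5.0, 4.0, 8.0]

errors = [a - p for a, p in zip(actual, predicted)]
mae  = sum(abs(e) for e in errors) / len(errors)          # 0.875
rmse = math.sqrt(sum(e * e for e in errors) / len(errors))

print(round(mae, 3), round(rmse, 3))
```

Because RMSE squares each error before averaging, it penalizes large misses more heavily than MAE does; comparing the two gives a rough sense of whether errors are uniform or dominated by a few outliers.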
Model training produces a checkpoint file containing a model whose parameters were output from training. Using checkpoint files means we can skip training and go straight to applying the model.
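A generic sketch of that workflow, using Python’s pickle in place of a framework-specific checkpoint format (real frameworks such as TensorFlow have their own checkpoint file formats):

```python
import os
import pickle
import tempfile

# Generic checkpointing sketch: persist trained parameters, then reload
# them later to apply the model without retraining. The parameter values
# below stand in for real training output.

params = {"weights": [0.12, -0.5, 0.9], "bias": 0.3}

path = os.path.join(tempfile.gettempdir(), "model.ckpt")
with open(path, "wb") as f:
    pickle.dump(params, f)             # write the checkpoint

with open(path, "rb") as f:
    restored = pickle.load(f)          # later: jump straight to inference

print(restored == params)  # True
```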
- https://www.youtube.com/watch?v=KkwX7FkLfug Neural Net in C++ Tutorial on Vimeo by vinh nguyen
- https://www.youtube.com/watch?v=AyzOUbkUf3M The Next Generation of Neural Networks by GoogleTechTalks
- https://www.youtube.com/watch?v=oYbVFhK_olY Deep Learning with Neural Networks and TensorFlow Introduction by sentdex
- https://www.youtube.com/watch?v=ujBiM9stPHU Neural Network Calculation (Part 1): Feedforward Structure by Jeff Heaton
- https://www.amazon.com/Deep-Learning-Adaptive-Computation-Machine/dp/0262035618 Deep Learning (Adaptive Computation and Machine Learning series) by Ian Goodfellow, Yoshua Bengio, and Aaron Courville (of OpenAI) “is the only comprehensive book on the subject.”
- https://www.youtube.com/watch?v=zwm2C3V35Fw Artificial Intelligence - The Apex Technology of the Information Age: Goldman Sachs’ Heath Terry (2:41 general talk)
This is one of a series on AI, Machine Learning, Deep Learning, Robotics, and Analytics:
- Tableau Data Visualization
- AI Ecosystem
- Machine Learning
- Microsoft’s AI
- Microsoft’s Azure Machine Learning Algorithms
- Microsoft’s Azure Machine Learning tutorial
- Python installation
- Image Processing
- Code Generation