AWS DeepLens comes with a little computer and a big AWS AI cloud behind it
Overview
On Jan 31, 2023, AWS announced the retirement of DeepLens. See https://docs.aws.amazon.com/deeplens/latest/dg/deeplens-end-of-life.html
This article provides a step-by-step tutorial, with commentary, on configuring the AWS DeepLens Wi-Fi camera, its local computer, and the big AWS AI cloud behind it.
Until DeepLens, AI engineers have not needed such a device because they have been uploading media to servers for processing.
What DeepLens provides is real-time processing of scenes (visual imagery, plus sound from the built-in 2D microphone array), both on the little computer and in Amazon’s cloud.
One can run TensorFlow on iOS and Android smartphones because smartphones today have hardware similar to DeepLens, which has an Intel Atom chip with 8GB RAM. DeepLens is a way for Amazon to monetize the AI hype.
DeepLens is a step up from the Raspberry Pi Zero board and cardboard case of the $90 Google AIY Vision Kit, which recognizes objects. (For that kit, see MyTGTtech at 877-698-4883, 7am-11pm CST.)
Within the DeepLens white box, AWS Greengrass Core runs AWS Lambda functions that invoke neural network models trained by Machine Learning software.
Outputs from DeepLens:
- The “Device stream” is passed through the device without processing.
- The “Project stream” output is the result of processing video frames through on-board AWS Lambda functions, referencing a “CNN deep learning inference model”. Projects can be speech-enabled (using the Amazon Polly API service).
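As an illustration of speech-enabling, here is a minimal sketch of calling Polly via boto3. It assumes boto3 is installed with credentials allowing polly:SynthesizeSpeech; the text, voice, and output file name are arbitrary examples, not taken from any Amazon project template:

```python
# Minimal sketch: synthesize speech with Amazon Polly via boto3.
# Assumes credentials that allow polly:SynthesizeSpeech; the text,
# voice, and file name here are arbitrary examples.
import boto3

polly = boto3.client("polly", region_name="us-east-1")
response = polly.synthesize_speech(
    Text="Person detected at the front door.",
    OutputFormat="mp3",
    VoiceId="Joanna",
)
# AudioStream is a streaming body; save it to a file for playback.
with open("speech.mp3", "wb") as f:
    f.write(response["AudioStream"].read())
```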
Hardware Power Up
- You might need a power extension cord: the cord to the power plug is only 3 feet long.
- Attach the American plug to the base and slide it in to lock. The adapter supplies 5V at 4 amps (20 watts). QUESTION: Could a portable battery power it?
TODO: Measure amps used and heat under load.
- Ground yourself before handling the micro SD card. The included card holds 32 GB.
- Insert the micro SD card into the device until it’s flush with the surface and you hear a mechanical click. DeepLens can use a card up to 64 GB.
- Get a 4-port USB hub and plug in a USB mouse and keyboard, leaving a port free for a USB drive.
- Get a 32GB or 64GB USB flash drive to use in resetting DeepLens (described below).
- Get a micro HDMI adapter or cable to connect to a monitor:
  $5.99 from https://www.amazon.com/AmazonBasics-High-Speed-Micro-HDMI-HDMI-Cable/dp/B014I8U33I/
  $7.99 from https://www.amazon.com/UGREEN-Adapter-Support-Ethernet-Zenbook/dp/B00B2HORKE/
- The bottom of the device has a screw-hole for a tripod mount, like GoPro cameras. A bracket holding the camera steady above a work table would make recognition faster, since the image moves less. Contact me and I can make one for you.
- Remove the clear lens cover. The 4MP camera outputs H.264-encoded video at 1080p resolution.
- Plug a portable speaker into the audio-out jack marked with a headphone icon.
Prepare USB boot drive
The usual Ubuntu recovery steps (pressing Shift during reboot to reach the Recovery menu) do not work on DeepLens, because DeepLens, unlike other Ubuntu machines, does not come with a recovery partition.
Save yourself the several days it took me to figure out how to build a boot USB for DeepLens: buy one from me for $99.
An AWS Evangelist wrote this pdf, but it assumes that you have both a working Ubuntu machine and a Windows PC available.
- On a separate Windows laptop, create two partitions on a USB flash drive:
  - FAT, 2GB, for the recovery partition
  - NTFS, >16GB, containing the factory restore package unzipped from https://s3.amazonaws.com/deeplens-public/factory-restore/DeepLensFactoryRestore.zip (a .bin folder and a flash.sh script file).
BLAH: I wasn’t able to use Etcher on either Mac or Windows because I couldn’t get it to work with multiple partitions.
- Download https://s3.amazonaws.com/deeplens-public/factory-restore/Ubuntu-Live-16.04.3-Recovery.iso and use it to make the USB flash drive bootable.
To restore a DeepLens device to factory settings, wiping out all data:
- Press the on/off button at the front of the DeepLens device and enter the BIOS by repeatedly pressing ESC.
- Select “Boot From File”, USB VOLUME, EFI, BOOT, BOOTx64.EFI
- After the live system is up, flashing happens automatically to recover the device.
- A Terminal window pops up with progress displayed; no manual interaction is needed. If errors occur, repeat from the first step. A result.log is generated on the USB drive.
- Wait for the flashing process to complete (~6 minutes). After that, your device will automatically reboot.
- Your device is now restored, so remove the USB flash drive.
Boot up
- Press the power button to boot into the vanilla Ubuntu 16.04 LTS desktop.
- PROTIP: If that blue light is annoying, cover the buttons with black electrical tape or white double-sided foam tape.
- Type in the default “aws_cam” password (the account name doubles as the password). Twice.
  To change the password, see https://gist.github.com/willh/5982310b4742c104855221211516e8d3
- Type in a new password. Twice. PROTIP: Write it down somewhere so you don’t have to reset the password.
  To press Reset, use a straight pin (finer than a small paperclip) to push the recessed Reset button on the device.
Reset admin password
- At the Desktop, click the gear icon at the upper-right corner and select “Shut down…”.
- Keep holding down the Shift key while clicking “Restart” until the Recovery menu appears.
- Press the down key to “Advanced options for Ubuntu”. Press Enter.
- Press Enter to select “Ubuntu, with Linux 3.13.0-32-generic (recovery mode)”
- Select root.
- Type:
   mount -n -o remount,rw /
  and press Enter.
- Then type:
   passwd aws_cam
Wi-Fi, Browser
- Click the gear icon at the upper-right and select “System Settings”. That’s the same as clicking the gear in the left group of icons.
- Click “Network” to configure Wi-Fi. Type the network password. Click Connect.
Router
- Open the Firefox browser to http://192.168.0.1 (the address depends on your router).
- Navigate to view the devices connected and their names.
- Click the Firefox browser icon at the left of the screen.
- Type in the address of this page:
  https://wilsonmar.github.io/deeplens
Config Local Time
- Click the time at the upper-right.
- Click “Time & Date settings…”
- Click approximately where you are in the map. The location should change.
- Click the red X to dismiss the dialog.
Terminal
- Click the top icon at the left of the screen.
- Click the Terminal icon to open a Terminal window:
   aws_cam@Deepcam:~$
- Get the version:
   dpkg -l awscam
  Response:
   ||/ Name     Version   Architecture   Description
   +++-========-=========-==============-===============
   ii  awscam   1.1.17    amd64          awscam
   ii  awscam   1.3.3     amd64          awscam
- Get the time drift of your hardware clock:
   sudo hwclock --show
  The response:
   Tue 17 Jul 2018 08:16:41 PM MDT .388766 seconds
  Without sudo, you’ll see: hwclock: Sorry, only the superuser can use the hardware clock.
SSH
- In a Terminal, get the IP address:
   ifconfig
  ???
- From another machine on the same subnet, SSH into DeepLens:
   ssh aws_cam@<ip-address>
AWS Config and Upgrade
Do this on a machine where you have your passwords stored (in 1Password, LastPass, etc.).
- Right-click this URL to read Amazon’s doc on setting up the IAM user and roles for DeepLens:
  https://docs.aws.amazon.com/deeplens/latest/dg/deeplens-getting-started.html
TODO: Run a script to set up the IAM user and roles for DeepLens, as sketched below.
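Until then, here is a sketch of what such a script could look like in Python with boto3, matching the group and user names used later in this article. The policy ARN is an assumption based on the managed-policy name AWS documents; verify the exact ARN in the IAM console:

```python
# Sketch only: create the IAM group and user described below with boto3.
# The policy ARN is an assumption from the managed-policy name; verify
# the exact ARN in the IAM console before running.
import boto3

iam = boto3.client("iam")
iam.create_group(GroupName="DeepLens-config")
iam.attach_group_policy(
    GroupName="DeepLens-config",
    PolicyArn="arn:aws:iam::aws:policy/AWSDeepLensLambdaFunctionAccessPolicy",
)
iam.create_user(UserName="deeplens-config-01")
iam.add_user_to_group(GroupName="DeepLens-config",
                      UserName="deeplens-config-01")
```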
Upgrade software
- Upgrade per https://docs.aws.amazon.com/deeplens/latest/dg/deeplens-manual-updates.html :
   sudo apt-get update
   sudo apt-get install awscam
   sudo reboot
Sudo means you’ll be prompted for the password again.
- Press the Shift key while clicking Restart.
- Open the Firefox browser to https://aws.amazon.com/console/
- Open the Files icon.
- The awscam software is in folder “/opt/awscam”.
http://blog.jamescaple.com/aws-deeplens-hackathon-a-machine-learning-newbies-journey-into-the-abyss-part-2/
Configure for sound out
- To enable audio playback through Lambda, add two sound card resources for DeepLens from folder /dev/snd/: /dev/snd/pcmC0D0p and /dev/snd/controlC0.
- Define and run models on the camera data stream locally, in Python 2.7 (see the sketch below).
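For reference, inference Lambdas in Amazon’s sample projects follow roughly this shape. This is a simplified sketch based on those samples; the model path and model type below are placeholders that depend on the project you deploy:

```python
# Simplified sketch of a DeepLens on-device inference loop, based on
# the shape of Amazon's sample project Lambda functions (Python 2.7).
# The model path and model type are placeholders for your project.
import awscam

model_path = "/opt/awscam/artifacts/your-model.xml"  # placeholder
model = awscam.Model(model_path, {"GPU": 1})  # load onto the on-board GPU

while True:
    ret, frame = awscam.getLastFrame()  # latest frame from the camera
    if not ret:
        continue
    raw = model.doInference(frame)           # Intel Inference Engine call
    results = model.parseResult("ssd", raw)  # "ssd" for object detection
```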
Use Cases
Kesha Williams (Chick-fil-A Engineering Manager) created a video on ACloudGuru.com showing how to build a Soda-Theft Detector, which sends her an email when something disappears from the fridge.
Use Case 1: Read to Me
DeepLens can be made to read a children’s book held up to the DeepLens camera (by converting the video image to text, and then text to voice with Amazon Polly):
Click for the YouTube video, which shows it takes about 20 seconds.
The “ReadToMe” model above, by Alex Schultz, won 1st place in the #AWSDeepLensChallenge that Amazon held (with Intel and DevPost) at Amazon’s re:Invent conference in 2017. It’s also described at https://devpost.com/software/read-to-me.
This project is built using AWS Greengrass group deploys, Python 3.6, MXNet on AWS, OpenCV, Google’s Tesseract, and Amazon Polly.
Code for the project is at https://github.com/alexschultz/ReadToMe.
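The OCR half of that pipeline can be approximated as below. This is a rough sketch assuming opencv-python and pytesseract are installed; it is not the actual ReadToMe code (which is in the repo above):

```python
# Rough sketch of the image-to-text step, assuming opencv-python and
# pytesseract are installed. Not the actual ReadToMe code (see repo above).
import cv2
import pytesseract

frame = cv2.imread("book_page.jpg")             # stand-in for a camera frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)  # Tesseract works best on grayscale
text = pytesseract.image_to_string(gray)        # OCR the page
print(text)  # this text would then be sent to Amazon Polly for speech
```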
Use Case 2: Sign Language
If an ASL letter is detected, a corresponding MP3 file is played through the speaker. See https://aws.amazon.com/deeplens/community-projects/ASLens/ and https://aws.amazon.com/deeplens/community-projects/deeplens_asl/
Early developers note that they needed to optimize the Amazon SageMaker model to run on the AWS DeepLens GPU, and then crop and scale each frame. Once resized, each video frame is run against the model.
Use Case 3: Customers waiting
https://aws.amazon.com/deeplens/community-projects/Customer_Counter/
The limitation for object recognition now is about 20 object classes (cats, dogs, etc.).
Activity detection recognizes what activity is occurring.
Train your own models.
OpenCV
Use OpenCV to draw bounding boxes.
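For example, here is a minimal sketch of drawing one labeled box. The coordinates and label are placeholders for what a DeepLens Lambda would get from its parsed detection results:

```python
# Minimal sketch: draw a labeled bounding box on a frame with OpenCV.
# Coordinates and label are placeholders; in a DeepLens Lambda they come
# from the model's parsed detections, scaled to the frame size.
import cv2

frame = cv2.imread("frame.jpg")
xmin, ymin, xmax, ymax = 50, 40, 220, 300       # placeholder detection box
cv2.rectangle(frame, (xmin, ymin), (xmax, ymax), (0, 255, 0), 2)
cv2.putText(frame, "dog: 0.92", (xmin, ymin - 8),
            cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)
cv2.imwrite("frame_annotated.jpg", frame)
```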
When calling doInference, the Intel Inference Engine layer makes predictions optimized for the Intel chip.
A project consists of a model and a Lambda function.
Greengrass
Greengrass Core runs on top of IoT devices (such as a Raspberry Pi) that run Amazon FreeRTOS or have the AWS IoT Device SDK installed. Greengrass Core enables Lambda functions to run locally on IoT devices. See:
- https://aws.amazon.com/greengrass/
- https://console.aws.amazon.com/iot/home#/greengrassIntro
Greengrass runs local machine learning inference using models built and trained by Amazon SageMaker in the cloud. See https://aws.amazon.com/greengrass/ml/
Greengrass keeps device data encrypted and synchronized with the AWS cloud via the MQTT protocol. Greengrass communicates securely with other devices within a Greengrass Group. Greengrass authenticates and encrypts data using the security and access management capabilities of AWS IoT Core. Greengrass can filter the device data it transmits back to the cloud.
A Greengrass Group coordinates communication among up to 200 devices installed with Greengrass Core.
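Inside a Greengrass Lambda, publishing device data to the cloud goes through the Greengrass Core SDK. A minimal sketch (the topic name is an arbitrary example):

```python
# Minimal sketch: publish an MQTT message from a Greengrass Lambda
# using the Greengrass Core SDK. The topic name is an arbitrary example.
import json
import greengrasssdk

client = greengrasssdk.client("iot-data")
client.publish(
    topic="deeplens/inference",  # arbitrary example topic
    payload=json.dumps({"label": "person", "probability": 0.92}),
)
```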
Limit User in IAM
PROTIP: Use two different browser programs. Use Firefox for Administrator work setting IAM for worker accounts. Use Chrome for worker use. Switch quickly between them using command+Tab.
PROTIP: Before pressing “Register Device”, use IAM to create a new User with an IAM Role just for DeepLens work (such as “deeplens-config”), then log in with that user for your device registration/configuration. If you’re configuring for non-technical customers, you will need yet another user, with roles and permissions for usage rather than configuration.
https://docs.aws.amazon.com/deeplens/latest/dg/deeplens-prerequisites.html
- Using Firefox, sign in as an administrator. This account should be set up with two-factor authentication using Google Authenticator on your smartphone.
- Use the IAM service to create an IAM Group “DeepLens-config” with these policies:
- AWSDeepLensLambdaFunctionAccessPolicy
- AWSDeepLensServiceRolePolicy
- sagemaker:ListTrainingJobs to train models in the AWS cloud
- Amazon Rekognition for advanced image analysis in the AWS cloud
- Amazon Polly to create speech-enabled projects
- AWS Greengrass to connect your AWS DeepLens device to the AWS Cloud
- Create a User name such as “deeplens-config-01”.
- Attach the policy for “AWSLambdaFullAccess”.
- Click menu Roles. Click Create Role. Select DeepLens as the service, then Use Case: DeepLens. Click “Next: Permissions” to see [AWS Managed Policy] AWSDeepLensServiceRolePolicy, which “Grants AWS DeepLens access to AWS Services, resources and roles needed by DeepLens and its dependencies including IoT, S3, GreenGrass and AWS Lambda.” Click “Next: Review”. Type “DeepLens1”. Click Create role.
- Repeat for “DeepLens - GreenGrass Lambda”, which “Allows DeepLens to access administrative Lambda functions that run on a DeepLens device on your behalf.” Click “Next: Permissions”, then “Next: Review” for [AWS Managed Policy] AWSDeepLensLambdaFunctionAccessPolicy. This policy specifies permissions required by DeepLens administrative Lambda functions that run on a DeepLens device. Type “DeepLensGreenGrass”. Click Create role.
These are the roles involved:
- AWSDeepLensServiceRole - An IAM role for the AWS DeepLens service to access dependent AWS services, including IoT, S3, GreenGrass and Lambda.
- AWSDeepLensLambdaRole - An IAM role passed to the AWS Greengrass service for creating and accessing required AWS services, including deploying Lambda inference functions to a DeepLens device for on-device execution.
- AWSDeepLensGreengrassRole - An IAM role passed to the AWS Greengrass service for creating and accessing required AWS services, including deploying Lambda inference functions to a DeepLens device for on-device execution.
- AWSDeepLensGreengrassGroupRole - An IAM role passed to AWS Greengrass device groups for allowing DeepLens administrative Lambda functions to access dependent AWS services.
In order to get sound to play on the DeepLens, you will need to grant GreenGrass permission to use the Audio Card.
- Save the credentials in a spreadsheet.
- Sign in using the “Console Login link”, such as https://123456789123.signin.aws.amazon.com/console
Register Device
- Go to https://aws.amazon.com/deeplens
PROTIP: Note that N. Virginia (us-east-1) is the only region allowed at this point.
- Click “Devices” in the left Resources pane.
- Type a device name. I named mine “DL1”. Click Next.
- Log in to the AWS Console using any AWS account, so we can look around:
  https://console.aws.amazon.com/console/home
Create Project
- Click https://console.aws.amazon.com/deeplens/home?region=us-east-1#projects/create
DeepLens comes with several project templates. The “Not a Hot Dog” app is a parody of the app featured in HBO’s Silicon Valley TV show. But it’s a starting point for other recognition apps, such as recognizing dog breeds.
- Use case 2: Select “Artistic Style Transfer” and click Next to turn what the camera sees into how Van Gogh would have painted it.
Amazon’s facial recognition program is controversial: rights groups have raised concerns that the service could be used in ways that violate civil liberties.
- Click “Create”.
- Click the radio button.
  BLAH: “Deploy to device” grayed out?
- Open a browser window to https://aws.amazon.com/deeplens/community-projects/
Others:
- Recognize mushrooms - the percentage chance that one is poisonous or edible.
DeepLens processes visual imagery based on CNN models created and validated using Amazon SageMaker in the AWS cloud, then downloaded to DeepLens.
AWS DeepLens integrates with Amazon Rekognition for advanced image analysis and Amazon SageMaker for training models. The device also connects securely to AWS IoT, Amazon SQS, Amazon SNS, Amazon S3, Amazon DynamoDB, and more.
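For example, a detection event could be fanned out through Amazon SNS (which is how an email alert like the soda-theft detector above can be wired up). A minimal boto3 sketch; the topic ARN is a placeholder you would create first:

```python
# Minimal sketch: push a detection event to Amazon SNS, which can fan
# out to email/SMS. The topic ARN is a placeholder; create the topic
# and an email subscription first.
import boto3

sns = boto3.client("sns", region_name="us-east-1")
sns.publish(
    TopicArn="arn:aws:sns:us-east-1:123456789012:deeplens-alerts",  # placeholder
    Subject="DeepLens alert",
    Message="A tracked object left the scene.",
)
```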
Contact real people
https://twitter.com/hashtag/deeplens?lang=en
https://aws.amazon.com/deeplens is the marketing home page.
The Terms and Conditions point to Amazon Customer Support at 1-877-375-9365.
Amazon’s Discussion Forum for DeepLens is at https://forums.aws.amazon.com/forum.jspa?forumID=275
Documentation begins at https://docs.aws.amazon.com/deeplens/latest/dg/what-is-deeplens.html
https://aws.amazon.com/blogs/machine-learning/category/artificial-intelligence/aws-deeplens
https://aws.amazon.com/deeplens/community-projects/
Kinesis
https://aws.amazon.com/blogs/machine-learning/video-analytics-in-the-cloud-and-at-the-edge-with-aws-deeplens-and-kinesis-video-streams/
MXNet
AWS DeepLens comes pre-installed with the Apache MXNet framework and the Gluon interface, with models in Jupyter Python 3 notebooks.
This is largely because (unlike TensorFlow) MXNet can be used from more languages: Python, R, Perl, Matlab, Scala, and C++. MXNet itself is written in C++.
MXNet can use 32 or 16 bit weights and activations for smaller and faster models.
However, AWS developers can run any deep learning framework, including TensorFlow and Caffe.
Resources on MXNet:
- https://becominghuman.ai/an-introduction-to-the-mxnet-api-part-1-848febdcf8ab
- http://gluon.mxnet.io/chapter08_computer-vision/object-detection.html?highlight=ssd
- https://github.com/apache/incubator-mxnet/blob/master/example/image-classification/README.md
- https://github.com/zhreshold/mxnet-ssd
- https://pythonprogramming.net/introduction-use-tensorflow-object-detection-api-tutorial/
- https://aws.amazon.com/mxnet/
AWS Gluon
Gluon (pronounced like “glue-on”), at https://gluon.mxnet.io, is an open-source interface that gets around the “black box” of training deep learning models. Gluon uses imperative programming, aka “define by run”: “You can use the Python debugger. You can stop the training process. You have full control over the training loop.”
You can get started with a wide range of computer vision tutorials for Gluon, including full notebooks ready to run on SageMaker.
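To make “define by run” concrete, here is a toy training loop in Gluon (a sketch assuming mxnet is installed; the data is synthetic). You can set a breakpoint anywhere inside the loop and inspect live tensors:

```python
# Toy sketch of Gluon's imperative "define by run" style (assumes mxnet).
import mxnet as mx
from mxnet import autograd, gluon, nd

net = gluon.nn.Dense(1)  # one-layer regression model
net.initialize()
loss_fn = gluon.loss.L2Loss()
trainer = gluon.Trainer(net.collect_params(), "sgd", {"learning_rate": 0.1})

X = nd.random.normal(shape=(100, 2))   # synthetic inputs
y = 2 * X[:, 0] - 3.4 * X[:, 1] + 4.2  # synthetic linear target

for epoch in range(10):
    with autograd.record():            # record ops for backprop
        loss = loss_fn(net(X), y)
    loss.backward()                    # full control over the training loop
    trainer.step(batch_size=X.shape[0])
    print(epoch, loss.mean().asscalar())
```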
SageMaker
First create a model in Amazon SageMaker. Then deploy the model to your DeepLens, where the model optimizer will automatically optimize it for the best performance on the device (see the sketch below).
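On the device, that optimization step is exposed through the DeepLens “mo” (model optimizer) Python module. Here is a sketch based on how AWS documents it; the model name and input dimensions are placeholders:

```python
# Sketch of the on-device model optimizer step, based on the DeepLens
# "mo" module as AWS documents it. Name and input size are placeholders.
import mo  # available on the DeepLens device

error, model_path = mo.optimize("your-model-name", 224, 224)  # placeholders
if error == 0:
    print("Optimized artifacts at: " + model_path)
```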
Convolutional Deep Neural Networks: http://gluon.mxnet.io/chapter04_convolutional-neural-networks/cnn-gluon.html
https://gluon.mxnet.io/chapter04_convolutional-neural-networks/cnn-scratch.html
Object detection using convolutional neural networks: http://gluon.mxnet.io/chapter08_computer-vision/object-detection.html
Visual Question Answering in Gluon: http://gluon.mxnet.io/chapter08_computer-vision/visual-question-answer.html
To learn more:
- Gluon support for AWS DeepLens: https://aws.amazon.com/blogs/machine-learning/deploy-gluon-models-to-aws-deeplens-using-a-simple-python-api/
- Gluon in general, on the AWS Machine Learning blog: https://aws.amazon.com/blogs/aws/introducing-gluon-a-new-library-for-machine-learning-from-aws-and-microsoft/
- AWS DeepLens Gluon documentation: https://docs.aws.amazon.com/deeplens/latest/dg/deeplens-supported-frameworks.html#deeplens-supported-frameworks-gluon
- Try it through the AWS DeepLens console: https://console.aws.amazon.com/deeplens
- To learn more about AWS DeepLens, visit the AWS DeepLens website: https://aws.amazon.com/deeplens/
YouTube videos
Julien Simon is the AWS Evangelist for AI/ML (including DeepLens) in EMEA:
- Deep Learning at the edge with AWS DeepLens, at PAPIs.io, May 9, 2018
- An Introduction to Deep Learning: Theory, Use Cases and Tools, Apr 25, 2018
- AWS Tel Aviv Summit 2018: Machine Learning State of the Union, Mar 30, 2018
- An overview of Amazon SageMaker, Nov 30, 2017
https://devblogs.nvidia.com/deep-learning-nutshell-history-training/
- https://aws.amazon.com/evangelists/julien-simon/
- https://www.linkedin.com/in/juliensimon/
- https://twitter.com/julsimon?lang=en
- https://www.youtube.com/juliensimonfr
- Articles: https://medium.com/@julsimon
More on cloud
This is one of a series on cloud computing:
- Dockerize apps
- Kubernetes container engine
- Hashicorp Vault and Consul for keeping secrets
- Hashicorp Terraform
- Ansible server configuration automation
- Serverless software app development
- Terraform (declarative IaC)
- Build load-balanced servers in AWS EC2
- AWS On-boarding (CLI install)
- AWS macOS instances in the cloud
- AWS Certifications
- AWS IAM admin.
- AWS Data Tools
- AWS Security
- AWS VPC Networking
- AWS X-Ray tracing
- AWS server deployment options
- AWS Lambda
- AWS CloudFormation
- AWS Lightsail
- AWS Deeplens AI
- AWS Load Balanced Servers using CloudFormation
- Microtrader (sample microservices CI/CD to production Docker within AWS)
- AWS Data Processing: Databases, Big Data, Data Warehouse, Data Lakehouse
- Google Cloud Platform
- Bash on Windows using Microsoft’s WSL (Windows Subsystem for Linux)
- Azure cloud introduction
- Azure Cloud Onramp (Subscriptions, Portal GUI, CLI)
- Azure Cloud Powershell
- PowerShell GitHub API programming
- PowerShell DSC (Desired State Configuration)
- PowerShell Modules
- Microsoft AI in Azure cloud
- Azure cloud DevOps
- Azure Networking
- Azure Storage
- Azure Compute
- Dynatrace cloud monitoring
- Digital Ocean
- Cloud Foundry