AWS DeepLens comes with a little computer and a big AWS AI cloud
- Hardware Power Up
- Use Case : Read to me
- Limit User in IAM
- Contact real people
- YouTube videos
- More on cloud
This article provides a step-by-step tutorial, with commentary, on configuring the AWS DeepLens Wi-Fi camera, its local computer, and the big AWS AI cloud behind it.
AI engineers have not needed a device like DeepLens before because they have been uploading media to servers for processing.
What DeepLens provides is real-time processing of scenes (visual imagery and sounds from the built-in 2D microphone array) on the little computer and in Amazon’s cloud.
AWS Greengrass Core running in the white box runs AWS Lambda functions that invoke neural network models trained by Machine Learning software.
Outputs from DeepLens:
The “Device stream” is passed through the device without processing.
The “Project stream” output is the result of processing video frames through AWS Lambda functions on-board, referencing a “CNN deep learning inference model”. Projects can be speech-enabled (using the Amazon Polly API service).
Hardware Power Up
One can run TensorFlow on iOS and Android smartphones because smartphones today have hardware similar to DeepLens, which has an Intel Atom chip with 8GB RAM. DeepLens is a way for Amazon to monetize the AI hype.
You might need a power extension cord: the supplied power cord is only 3 feet long.
Attach the American plug to the base. Slide to lock it in.
The adapter sends 5V at 4 amps. QUESTION: A portable battery?
TODO: Measure amps used and heat under load.
- Ground yourself before handling the SD chip. It holds 32 GB.
Insert the micro SD chip into the device until it’s flush with the surface and a mechanical click is heard.
DeepLens can use a chip up to 64 GB.
Get a 4-port USB hub and plug in a USB mouse and keyboard. This leaves a port free for a USB drive.
Get a 32GB or 64GB USB flash drive to use in resetting DeepLens (described below).
Get a micro HDMI adapter or cable to connect to a monitor:
$5.99 from https://www.amazon.com/AmazonBasics-High-Speed-Micro-HDMI-HDMI-Cable/dp/B014I8U33I/
$7.99 from https://www.amazon.com/UGREEN-Adapter-Support-Ethernet-Zenbook/dp/B00B2HORKE/
The bottom of the device has a screw-hole for a tripod mount like GoPro cameras.
A bracket to hold the camera steady above a work table would make recognition faster since the image moves less. Contact me. I can make one for you.
Remove the clear lens cover. The 4MP camera outputs H.264 encoding at 1080p resolution.
- Plug in a portable speaker to the audio-out jack marked by a headphone icon.
Prepare USB boot drive
The usual Ubuntu OS steps for reaching the Recovery menu by pressing Shift during reboot do not work on DeepLens. This is because DeepLens, unlike other Ubuntu machines, does not come with a recovery partition.
Save the several days it took me to figure out how to get a Boot USB for DeepLens and buy one from me for $99.
An AWS Evangelist wrote this pdf, but it assumes that you have both a working Ubuntu machine and a Windows PC available.
On a separate Windows laptop, create two partitions on a USB flash drive:
- FAT 2GB for the recovery partition
- NTFS, >16GB, containing the factory restore package from https://s3.amazonaws.com/deeplens-public/factory-restore/DeepLensFactoryRestore.zip unzipped into a .bin folder and a flash.sh script file.
BLAH: I wasn’t able to use Etcher on either Mac or Windows because I couldn’t get it to work with multiple partitions.
Download https://s3.amazonaws.com/deeplens-public/factory-restore/Ubuntu-Live-16.04.3-Recovery.iso and use it to make the USB flash drive bootable.
To restore a DeepLens device to factory settings, wiping out all data:
Press the on/off button at the front of the DeepLens device and enter BIOS by repeatedly pressing ESC.
- Select “Boot From File”, USB VOLUME, EFI, BOOT, BOOTx64.EFI
- After the live system is up, flashing starts automatically to restore the device.
- A Terminal window pops up displaying progress; no manual interaction is needed. If errors occur, repeat from the first step. A result.log file will be generated on the USB drive.
Wait for the flashing process to complete (~ 6min). After that, your device will automatically reboot.
Your device is now restored, so remove the USB flash drive.
Press the power button for the vanilla Ubuntu OS 16.04 LTS desktop.
PROTIP: If that blue light is annoying, cover the buttons with black electrical tape or white double-sided foam tape.
Type in the “aws_cam” password (account name = password). Twice.
To change the password, see https://gist.github.com/willh/5982310b4742c104855221211516e8d3
Type in a new password. Twice. PROTIP: Write it down somewhere so you don’t have to reset the password.
To press Reset, use a straight pin (finer than a small paperclip) to push the recessed Reset button on the device.
Reset admin password
- At the Desktop, click the gear icon at the upper-right corner and select “Shut down…”.
- Keep holding down the Shift key while clicking “Reset” until the Recovery menu appears.
- Press down key for “Advanced options for Ubuntu”. Press Enter.
- Press Enter to select “Ubuntu, with Linux 3.13.0-32-generic (recovery mode)”
- Select root.
mount -n -o remount,rw /
- Press Enter
SSH into the device from another machine on the same subnet:
- Click the gear icon at the upper-right and select “System Settings”. That’s the same as clicking the gear at the left group of icons.
Click “Network” to configure Wi-fi. Type the network password. Click Connect.
- Open Firefox browser to your router's admin page (such as http://192.168.1.1, depending on your router).
Navigate to view the devices connected and their names.
- Click the Desktop icon to open a Terminal.
Get the version:
dpkg -l awscam
<pre>
||/ Name           Version     Architecture    Description
+++-==============-===========-===============-===============
ii  awscam         1.1.17      amd64           awscam
</pre>
Get the time of your hardware clock:
sudo hwclock -r
Set your local time zone:
cp /usr/share/zoneinfo/America/La_Paz /etc/localtime
Set the hardware clock automatically from an NTP clock on the public internet, then write it to the hardware clock:
sudo ntpdate pool.ntp.org
sudo hwclock --systohc
Alternatively, set the time manually, for example:
hwclock --set --date="2019-04-19 16:45:05" --localtime
Upgrade per https://docs.aws.amazon.com/deeplens/latest/dg/deeplens-manual-updates.html
sudo apt-get update
sudo apt-get install awscam
sudo reboot
Sudo means you’ll be prompted for the password again.
Press Shift key while clicking Restart.
Open Firefox browser to https://aws.amazon.com/console/
- Open the File icon.
The awscam software is in folder “/opt/awscam”.
Configure for sound out
To enable Audio playback through Lambda, add two sound card resources for DeepLens in folder /dev/snd/: /dev/snd/pcmC0D0p and /dev/snd/controlC0
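As a hedged sketch, the two resources might look like this inside a Greengrass group resource definition (the field names follow the Greengrass API's LocalDeviceResourceData shape; the Id and Name values here are illustrative):

```json
{
  "Resources": [
    {
      "Id": "sound-pcm",
      "Name": "sound-pcm",
      "ResourceDataContainer": {
        "LocalDeviceResourceData": {
          "SourcePath": "/dev/snd/pcmC0D0p",
          "GroupOwnerSetting": { "AutoAddGroupOwner": true }
        }
      }
    },
    {
      "Id": "sound-control",
      "Name": "sound-control",
      "ResourceDataContainer": {
        "LocalDeviceResourceData": {
          "SourcePath": "/dev/snd/controlC0",
          "GroupOwnerSetting": { "AutoAddGroupOwner": true }
        }
      }
    }
  ]
}
```

AutoAddGroupOwner lets the Lambda's OS group access the device file without a manual chown.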
Define and run models on the camera data stream locally in Python 2.7.
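A minimal sketch of what such a local model looks like in code, assuming the on-device awscam module (its Model, getLastFrame, doInference, and parseResult calls); the model path and the parsed-output shape below are illustrative and vary by project:

```python
# Sketch of a DeepLens inference loop. The awscam module exists only on the
# device, so it is imported inside the loop function; top_detections is a
# pure-Python helper that can be tested anywhere.
import json

def top_detections(parsed, threshold=0.5, k=3):
    """Pick the k most probable results above a confidence threshold from a
    parseResult-style list of {'label': ..., 'prob': ...} dicts (an assumed
    shape -- the real format varies by model type)."""
    hits = [d for d in parsed if d.get('prob', 0.0) >= threshold]
    return sorted(hits, key=lambda d: d['prob'], reverse=True)[:k]

def inference_loop(model_path='/opt/awscam/artifacts/model.xml'):  # illustrative path
    import awscam  # available only on the DeepLens device
    model = awscam.Model(model_path, {'GPU': 1})
    while True:
        ret, frame = awscam.getLastFrame()
        if not ret:
            continue
        raw = model.doInference(frame)
        parsed = model.parseResult('classification', raw)['output']
        print(json.dumps(top_detections(parsed)))
```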
Use Case : Read to me
DeepLens can be made to read a children’s book held up to the DeepLens camera (by converting video image to text and then text to voice by Amazon Polly):
The “ReadToMe” model above by Alex Schultz won 1st place in the #AWSDeepLensChallenge Amazon held (with Intel and DevPost) at Amazon’s Re:Invent conference in 2017. It’s also described at https://devpost.com/software/read-to-me.
This project is built using AWS GreenGrass group deploys, Python 3.6, MXNet on AWS, OpenCV, Google’s Tesseract, and AWS Polly.
Code for the project is at https://github.com/alexschultz/ReadToMe.
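A hedged sketch of the text-to-speech step in such a pipeline, assuming Polly's synthesize_speech call and its per-request text limit of roughly 3,000 characters (so long page text is chunked at sentence boundaries first); the function names and output path are illustrative:

```python
# Sketch: chunk OCR'd page text, then synthesize each chunk with Amazon Polly.
def chunk_text(text, limit=3000):
    """Split text into chunks no longer than `limit`, breaking on sentence
    boundaries where possible (a single over-long sentence is not split)."""
    chunks, current = [], ''
    for sentence in text.replace('\n', ' ').split('. '):
        piece = sentence if sentence.endswith('.') else sentence + '. '
        if len(current) + len(piece) > limit and current:
            chunks.append(current.strip())
            current = ''
        current += piece
    if current.strip():
        chunks.append(current.strip())
    return chunks

def speak(text, voice='Joanna'):
    import boto3  # requires AWS credentials configured on the device
    polly = boto3.client('polly')
    for chunk in chunk_text(text):
        resp = polly.synthesize_speech(Text=chunk, OutputFormat='mp3', VoiceId=voice)
        with open('/tmp/speech.mp3', 'wb') as f:
            f.write(resp['AudioStream'].read())
        # play /tmp/speech.mp3 through the speaker before the next chunk
```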
Use Case 2 : Sign Language
If an ASL letter is detected, a corresponding MP3 file is played through the speaker. See https://aws.amazon.com/deeplens/community-projects/ASLens/ and https://aws.amazon.com/deeplens/community-projects/deeplens_asl/
Early developers note that they needed to optimize the AWS SageMaker model to run on the AWS DeepLens GPU, and then crop and scale each frame. Once resized, the video frame is run against the model.
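The cropping geometry can be sketched in a few lines, assuming the model expects a square input cut from the center of each 1080p frame (the helper name is illustrative):

```python
# Sketch: largest centered square inside a frame, for crop-then-resize.
def center_crop_box(width, height):
    """Return (x, y, side) of the largest centered square inside a frame."""
    side = min(width, height)
    return ((width - side) // 2, (height - side) // 2, side)

# With OpenCV, the crop-and-scale step would then be something like:
#   x, y, side = center_crop_box(1920, 1080)
#   small = cv2.resize(frame[y:y+side, x:x+side], (224, 224))
```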
Use Case 3 : Customers waiting
The limitation for object recognition now is about 20 objects (cats, dogs, etc.).
Activity detection recognizes what activity is occurring.
Train your own models.
Use OpenCV to draw bounding boxes.
When calling doInference, the Intel Inference Engine layer makes predictions optimized for the Intel chip.
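A sketch of mapping a detection box to pixel coordinates before drawing, assuming SSD-style output gives fractional xmin/ymin/xmax/ymax values (an assumption; coordinate formats vary by model):

```python
# Sketch: convert a normalized detection box to integer pixel coordinates
# suitable for drawing with OpenCV.
def to_pixels(box, frame_w, frame_h):
    """Map a box of fractional coords {'xmin','ymin','xmax','ymax'} to ints."""
    return (int(box['xmin'] * frame_w), int(box['ymin'] * frame_h),
            int(box['xmax'] * frame_w), int(box['ymax'] * frame_h))

# Drawing with OpenCV would then be:
#   x1, y1, x2, y2 = to_pixels(box, 1920, 1080)
#   cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
```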
A project consists of a model and a Lambda function.
https://aws.amazon.com/greengrass/
https://console.aws.amazon.com/iot/home#/greengrassIntro
Greengrass Core runs on top of IoT devices (such as Raspberry Pi) that run Amazon FreeRTOS or have the AWS IoT Device SDK installed. Greengrass Core enables Lambda functions to run locally on IoT devices.
Greengrass runs local machine learning inference using models built and trained by AWS Sagemaker in the cloud. See https://aws.amazon.com/greengrass/ml/
Greengrass keeps device data encrypted and synchronized with the AWS cloud via the MQTT protocol. Greengrass communicates securely with other devices within a Greengrass Group. Greengrass authenticates and encrypts data using the security and access management capabilities of AWS IoT Core. Greengrass can filter the device data it transmits back to the cloud.
A Greengrass Group coordinates communication among up to 200 devices installed with Greengrass Core.
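A sketch of how an on-device Lambda might publish inference results back over MQTT with the Greengrass SDK (the greengrasssdk module exists only inside a Greengrass Lambda; the topic name is illustrative):

```python
# Sketch: serialize one inference result and publish it over MQTT.
import json

def make_payload(label, prob):
    """Serialize one inference result as a compact JSON message."""
    return json.dumps({'label': label, 'prob': round(prob, 3)})

def publish(label, prob, topic='deeplens/inference'):  # illustrative topic
    import greengrasssdk  # available inside a Greengrass Lambda
    client = greengrasssdk.client('iot-data')
    client.publish(topic=topic, payload=make_payload(label, prob))
```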
Limit User in IAM
PROTIP: Use two different browser programs. Use Firefox for Administrator work setting IAM for worker accounts. Use Chrome for worker use. Switch quickly between them using command+Tab.
PROTIP: Before pressing “Register Device”, in IAM create a new User with IAM Role just for Deeplens work (such as “deeplens-config”), then login with that for your device registration/configuration. If you’re configuring for non-technical customers, you will need to use yet another user with roles and permissions for usage rather than configuration.
Using Firefox, sign in as an administrator.
This account should be setup with Two-Factor Authentication using Google Authenticator on your smart phone.
Use the IAM service to create an IAM Group “DeepLens-config” with Policy:
- sagemaker:ListTrainingJobs to train models in the AWS cloud
- Amazon Rekognition for advanced image analysis in the AWS cloud
- Amazon Polly to create speech-enabled projects
- AWS Greengrass to connect your AWS DeepLens device to the AWS Cloud
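A minimal sketch of such a policy document (sagemaker:ListTrainingJobs is from the list above; the other action names are illustrative examples of each service's permissions, so tighten them for production):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "sagemaker:ListTrainingJobs",
        "rekognition:DetectLabels",
        "polly:SynthesizeSpeech",
        "greengrass:*"
      ],
      "Resource": "*"
    }
  ]
}
```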
Create a User name such as “deeplens-config-01”.
Attach the policy for “AWSLambdaFullAccess”.
Click menu Roles. Create Role. Select DeepLens. Click Use Case: DeepLens. Next: Permissions. This attaches the [AWS Managed Policy] AWSDeepLensServiceRolePolicy, which grants AWS DeepLens access to the AWS services, resources, and roles needed by DeepLens and its dependencies, including IoT, S3, GreenGrass, and AWS Lambda.
Type “DeepLens1”. Click Create role.
Select “DeepLens - GreenGrass Lambda” which “Allows DeepLens to access administrative Lambda functions that run on a DeepLens device on your behalf.” Next: Permissions.
Click “Next: Review” for the [AWS Managed Policy] AWSDeepLensLambdaFunctionAccessPolicy. This policy specifies permissions required by DeepLens administrative Lambda functions that run on a DeepLens device.
AWSDeepLensServiceRole - An IAM role for the AWS DeepLens service to access dependent AWS services, including IoT, S3, GreenGrass and Lambda.
AWSDeepLensLambdaRole - An IAM role passed to the AWS Greengrass service for creating and accessing required AWS services, including deploying Lambda inference functions to a DeepLens device for on-device execution.
AWSDeepLensGreengrassRole - An IAM role passed to the AWS Greengrass service for creating and accessing required AWS services, including deploying Lambda inference functions to a DeepLens device for on-device execution.
AWSDeepLensGreengrassGroupRole - An IAM role passed to AWS Greengrass device groups for allowing DeepLens administrative Lambda functions to access dependent AWS services.
In order to get sound to play on the DeepLens, you will need to grant GreenGrass permission to use the Audio Card.
Save the credentials in a spreadsheet.
Sign in using the “Console Login link” such as https://123456789123.signin.aws.amazon.com/console
Go to URL https://aws.amazon.com/deeplens
PROTIP: Note that N. Virginia (East) is the only region allowed at this point.
Click “Devices” on the left Resources pane.
I named mine “DL1”. Next.
Login to the AWS Console using any AWS account, so we can look around.
Click on https://console.aws.amazon.com/deeplens/home?region=us-east-1#projects/create
DeepLens comes with several project templates. The “Not a Hot Dog” app is a parody of an app featured in HBO’s Silicon Valley TV show. But it’s a starting point for other recognition apps, such as identifying dog breeds.
Use case 2: Select “Artistic Style Transfer” and Next to turn what the camera sees into how Van Gogh would have painted. *
Amazon’s facial recognition program is controversial. Rights groups raised concerns that the service could be used in ways that could violate civil liberties.
- Click “Create”.
Click the radio button.
BLAH: “Deploy to device” grayed out?
Open a browser window to https://aws.amazon.com/deeplens/community-projects/
- Recognize mushrooms - the percentage likelihood each is poisonous or edible.
DeepLens processes visual imagery based on CNN models created and validated using Amazon SageMaker in the AWS cloud, then downloaded to Deeplens.
AWS DeepLens integrates with Amazon Rekognition for advanced image analysis, Amazon SageMaker for training models. The device also connects securely to AWS IoT, Amazon SQS, Amazon SNS, Amazon S3, Amazon DynamoDB, and more.
Contact real people
https://aws.amazon.com/deeplens is the marketing home page.
The Terms and Conditions point to Amazon Customer Support at 1-877-375-9365.
Amazon’s Discussion Forum for Deeplens is at
Documentation begins at https://docs.aws.amazon.com/deeplens/latest/dg/what-is-deeplens.html
AWS favors the MXNet framework largely because (unlike TensorFlow) MXNet can be used with more languages: Python, R, Perl, Matlab, Scala, C++. MXNet itself is written in C++.
MXNet can use 32 or 16 bit weights and activations for smaller and faster models.
However, AWS developers can run any deep learning framework, including TensorFlow and Caffe.
Resources on MXNet:
Gluon (pronounced like “Glue-on”) at https://gluon.mxnet.io is an open source interface to get around the “black box” of training deep learning models semantically. Gluon uses imperative programming aka “define by run”. “You can use the Python debugger. You can stop the training process. You have full control over the training loop” *
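The "define by run" idea can be sketched without any framework: the loop below is plain Python gradient descent fitting y = w*x, and because the training loop is ordinary code, you can print, break, or attach a debugger at any line of it (a framework-agnostic illustration, not Gluon's actual API):

```python
# Sketch of an imperative training loop: gradient descent on squared error
# for a one-parameter linear model y = w * x.
def train(pairs, lr=0.1, epochs=50):
    w = 0.0
    for epoch in range(epochs):
        for x, y in pairs:
            grad = 2 * (w * x - y) * x   # d/dw of (w*x - y)^2
            w -= lr * grad
        # full control over the loop: inspect, log, or stop early here
    return w

w = train([(1.0, 2.0), (2.0, 4.0)])  # converges toward w = 2.0
```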
You can get started with a wide range of computer vision tutorials for Gluon, including full notebooks ready to run on SageMaker.
First create a model in AWS SageMaker.
Deploy the model to your DeepLens, where the model optimizer will automatically optimize it for the best performance on the device.
Convolutional Deep Neural Networks: http://gluon.mxnet.io/chapter04_convolutional-neural-networks/cnn-gluon.html
Object detection using convolutional neural networks: http://gluon.mxnet.io/chapter08_computer-vision/object-detection.html
Visual Question Answering in Gluon: http://gluon.mxnet.io/chapter08_computer-vision/visual-question-answer.html
To learn more about Gluon support for AWS DeepLens, read the blog post at https://aws.amazon.com/blogs/machine-learning/deploy-gluon-models-to-aws-deeplens-using-a-simple-python-api/
To learn more about Gluon in general, visit the AWS Machine Learning blog at https://aws.amazon.com/blogs/aws/introducing-gluon-a-new-library-for-machine-learning-from-aws-and-microsoft/
Read the AWS DeepLens Gluon documentation at https://docs.aws.amazon.com/deeplens/latest/dg/deeplens-supported-frameworks.html#deeplens-supported-frameworks-gluon and try it through the AWS DeepLens console at https://console.aws.amazon.com/deeplens
To learn more about AWS DeepLens and pre-order your device, visit https://aws.amazon.com/deeplens/
Julien Simon is the AWS Evangelist on DeepLens AI ML in EMEA:
Deep Learning at the edge with AWS DeepLens at PAPIs.io May 9, 2018
An overview of Amazon SageMaker Nov 30, 2017
- https://medium.com/@julsimon articles
More on cloud
This is one of a series on cloud computing:
- Serverless software app development
- Dockerize apps
- Kubernetes container engine
- Hashicorp Vault and Consul for keeping secrets
- Hashicorp Terraform
- Elastic Stack Ecosystem
- Dynatrace cloud monitoring
- RDP client to access servers
- AWS IAM
- AWS IoT
- AWS On-boarding
- AWS DevOps (CodeCommit, CodePipeline, CodeDeploy)
- AWS Lambda
- API Management by Amazon
- AWS server deployment options
- Azure cloud introduction
- Azure cloud on-ramp
- Azure cloud certifications