Protecting your pets with AWS Panorama

Corgi Puppies
Image By Author

AWS re:Invent 2020 was virtual last year, but that didn’t mean there weren’t still plenty of new and exciting releases.

AWS Panorama Appliance

One such release was AWS Panorama, an enterprise device which brings machine learning to existing cameras.

For example, Panorama can be used to detect defects on items moving through an assembly line, or to inform grocery store employees when store shelves need to be restocked.

As long as your cameras are ONVIF compliant, the appliance can leverage your existing camera system.

Panorama is what’s known as an “edge” device, in that the machine learning inferencing actually happens on the device “at the edge” as opposed to sending camera streams to the cloud to do the machine learning heavy lifting. It does this using a built-in Nvidia Jetson Xavier GPU which is built for running optimized deep learning models at the edge. That means that not only is Panorama able to quickly process multiple camera streams in parallel, it can also operate without direct access to the internet.

For more technical information about the device, check out the official AWS Panorama documentation.

Panorama applications are structurally very simple. They consist of one or more machine learning models plus AWS Lambda functions that run via AWS IoT Greengrass. Within the Lambda, the code runs an infinite loop that processes incoming image frames from one or more camera streams and uses the provided ML model to perform inference on each frame. The results of the inference can then be used to perform any action you want, such as sending messages via AWS SNS or MQTT.

If you’ve used AWS DeepLens in the past, then this application structure should be familiar.

AWS DeepLens

It’s worth mentioning, however, that this is not simply a DeepLens V2. Panorama is definitely an enterprise-level device, designed to operate in harsh industrial settings with its rugged, water-resistant, dustproof enclosure. And while DeepLens contains a single embedded camera, Panorama’s GPU is capable of handling multiple input camera streams in parallel.

For the remainder of this post, I am going to walk you through a sample application to show how easy it is to get started.

DISCLAIMER: This project references a number of third-party dependencies, including the code used for training a model as well as alternative methods for testing this project. Always be sure you are following proper licensing requirements for your use case, and be aware that the implementation described here might change as the dependencies change over time.

This demo application uses a YoloV5 object detection model written in PyTorch. The goal of this project is to use machine learning to alert pet owners when predators (e.g., coyotes) are around, so that the pets can be safely brought back indoors to avoid getting eaten.

Wild Coyote

The Model

To start off, we’re going to need a model. The Panorama appliance is compatible with several ML frameworks and has been tested with several different models; however, at the time of this writing, the AWS console limits you to MXNet, PyTorch, or TensorFlow.

One important thing to consider when planning out your application: while the developer edition of the Panorama appliance allows you to SSH in and manually deploy whatever model or code you want, the production version of the appliance does not allow SSH access. In a production setting, you will need to deploy your project through the console (or possibly the CLI, if and when support is added), so make sure your workflow is compatible with the console.

Specific instructions pertaining to the model training process can be found within the project repository, so I won’t be covering them here.

As part of the training process for YoloV5, you have the option of choosing from several different model sizes: a larger/slower model with better accuracy, or a smaller/faster model with somewhat lower accuracy.


While the hardware on Panorama is more capable than most edge devices, it is still an edge device, and as such I opted for the smaller YoloV5s model in this case.

NOTE: If you decide to use a larger model such as YoloV5m or YoloV5l, you will likely need to make additional code changes to support it.

Once we have a model trained, we then need to export it to TorchScript using the export utility found within the YoloV5 repo.

python models/export.py --weights runs/train/exp/weights/best.pt --img 640


NOTE: The export script will create a TorchScript model with the extension “.pt”. The deployment process expects the model to have a “.pth” extension, so you will need to rename it to work with Panorama.

mv best.torchscript.pt best.torchscript.pth

To get the model onto the Panorama appliance, the deployment process expects the model to be stored in an S3 bucket whose name contains the word “panorama”.


Before uploading, you’ll need to tar and gzip the model file (.pth) and then upload it to S3.

tar -czvf wildlife.tar.gz best.torchscript.pth

NOTE: It’s a good idea to use sub-directories within the S3 bucket to prevent conflicts with versioning or naming in the future as you add more models.
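The upload itself can be done with the AWS CLI. The bucket name and prefix below are illustrative (note that the bucket name contains “panorama”):

aws s3 cp wildlife.tar.gz s3://my-panorama-models/wildlife/wildlife.tar.gz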

Also, be aware that during the deployment process, Amazon will grab the compressed model file from S3 and compile it for TensorRT using SageMaker Neo. This optimizes the model to run on the Panorama hardware. The optimized model is then placed in the same bucket and subdirectory as the original.


The Lambda

For this project, we have two files: “yolov5s_lambda.py” and “utils.py”.

  • yolov5s_lambda.py contains the actual workflow of our application, including the Panorama subclass that we will be doing all the work in.
  • utils.py contains some helper functions and image processing logic, so we’ll skip it in this tutorial, but feel free to check it out on your own.

Within yolov5s_lambda.py, we start off by declaring our variables and import statements at the top of the file.

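As a rough sketch, the top of the file might look something like this (the constant names and values here are illustrative, not taken from the original code):

import numpy as np
import panoramasdk

import utils  # helper functions and image processing logic

# Illustrative constants -- actual names and values may differ
DETECTION_THRESHOLD = 0.5  # minimum confidence to count a detection
COYOTE_CLASS = 1           # index of the "coyote" class in the trained model
CONSECUTIVE_FRAMES = 5     # coyote frames in a row required before alerting
ALERT_ENDPOINT = "http://raspberrypi.local:5000/alert"  # hypothetical buzzer endpoint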

The YOLOv5 class definition has a few methods that we need to implement, starting with interface(), which returns the parameters that we’ll be using throughout the rest of the Lambda.

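A sketch of what interface() might look like, following the pattern of the preview-era panoramasdk samples (the parameter names are illustrative):

class YOLOv5(panoramasdk.base):

    def interface(self):
        return {
            "parameters": (
                ("float", "threshold", "Detection confidence threshold", DETECTION_THRESHOLD),
                ("model", "yolov5s", "Model name as deployed in the console", "wildlife"),
                ("int", "batch_size", "Model batch size", 1),
            ),
            "inputs": (
                ("media[]", "video_in", "Camera input stream"),
            ),
            "outputs": (
                ("media[video_in]", "video_out", "Camera output stream"),
            ),
        }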

We also define a method called run_inference() to accept the image and run inference using our model.

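Continuing the same class, run_inference() might look something like this; utils.preprocess() and utils.postprocess() are stand-ins for the helpers in utils.py:

    def run_inference(self, image):
        # Resize and normalize the frame to the model's expected 640x640 input
        x = utils.preprocess(image)

        # Feed the batch to the model and wait for the result
        self.model.batch(0, x)
        self.model.flush()
        result = self.model.get_result()

        # Copy the raw output tensor out of the result set, then release it
        result.get(0).get(0, self.output_array)
        self.model.release_result(result)

        # Decode the YoloV5 output into (boxes, scores, classes)
        return utils.postprocess(self.output_array, self.threshold)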

init() sets up our Lambda’s parameters, including the model inputs and outputs.

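A sketch of init(), again following the preview SDK samples:

    def init(self, parameters, inputs, outputs):
        try:
            self.threshold = parameters.threshold
            self.consecutive_coyote_frames = 0

            # Open the compiled model by the name it was given in the console
            self.model = panoramasdk.model()
            self.model.open(parameters.yolov5s, parameters.batch_size)

            # Pre-allocate an array matching the model's output shape
            info = self.model.get_output(0)
            self.output_array = np.empty(info.get_dims(), dtype=info.get_type())
            return True
        except Exception as e:
            print("init failed: {}".format(e))
            return False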

entry() is what gets called to handle each image frame that comes in from the camera streams. Notice the call to self.run_inference().

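A sketch of entry(); check_for_coyote() and draw_detections() are hypothetical helper methods sketched further below:

    def entry(self, inputs, outputs):
        # Handle each incoming camera stream
        for i in range(len(inputs.video_in)):
            stream = inputs.video_in[i]

            # Run the model on the current frame
            boxes, scores, classes = self.run_inference(stream.image)

            # Alert logic and overlay drawing (sketched below)
            self.check_for_coyote(classes)
            self.draw_detections(stream, boxes, scores, classes)

            # Pass the annotated frame through to the output stream
            outputs.video_out[i] = stream
        return True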

Because this model has a tendency to mistakenly detect a corgi as a coyote for one or two frames (corgis are pretty vicious-looking dogs…), I implemented a check which ensures we don’t alert the user until we’ve seen a certain number of consecutive image frames containing coyotes, so we can be sure we actually saw one rather than alerting on a false positive.

If we do happen to see a coyote for a predefined number of consecutive frames, then we call a utility function which hits an HTTP endpoint exposed on a Raspberry Pi that I set up, connected to a buzzer.

Raspberry Pi Alert system
Image By Author

This is to alert the pet owner to the presence of a coyote in the yard so they know to bring the pets indoors! In your situation, you might choose to publish a message to MQTT or AWS SNS instead.

On the flip side, if we don’t see a coyote for a given frame, then we reset the consecutive frame counter to start the count over again.

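Together, the alert and the reset might look like this (check_for_coyote() is the hypothetical helper referenced in entry() above; utils.send_alert() stands in for the HTTP call to the Raspberry Pi):

    def check_for_coyote(self, classes):
        if COYOTE_CLASS in classes:
            self.consecutive_coyote_frames += 1
            if self.consecutive_coyote_frames >= CONSECUTIVE_FRAMES:
                # Enough consecutive sightings -- trip the buzzer on the Pi
                utils.send_alert(ALERT_ENDPOINT)
                self.consecutive_coyote_frames = 0
        else:
            # No coyote in this frame -- start the count over
            self.consecutive_coyote_frames = 0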

Finally, we draw the detections on the frame and pass it to the output stream so we can see them on the output image.

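A sketch of that drawing step; in the preview SDK samples, media streams expose add_rect() and add_label(), which take coordinates normalized to [0, 1]:

    def draw_detections(self, stream, boxes, scores, classes):
        for (x1, y1, x2, y2), score, cls in zip(boxes, scores, classes):
            stream.add_rect(x1, y1, x2, y2)
            stream.add_label("{} {:.2f}".format(cls, score), x1, y1)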

To learn more about setting up your own Lambdas, check out the AWS Panorama developer documentation.

Deployment Process

The deployment process is fairly straightforward. Once you have your Lambda and model in place, you can create a new application using the console’s “Create application” wizard.

Panorama Application Dashboard

Simply follow the prompts in the wizard.


To specify your model, pick the framework, then provide the S3 path and the model name as it is referenced in the Lambda code.


Specify the model input layer shape and name.


Finally, specify your Lambda.


Once you finish creating the application, you can deploy the application to the device.

Full instructions for setting up and deploying your own project can be found in the AWS Panorama documentation.

Testing the Application

If you already have ONVIF-compliant cameras on your network, you can register them with the Panorama and use them for your application testing. If you don’t have cameras installed, however, there are a couple of ways to simulate a camera stream. Ultimately, what the Panorama application is looking for is an RTSP stream accessible on the same network as the appliance.

For example, you can use the “RtspSimpleServer” project, which is available on GitHub, to stream video files from your computer and expose them over RTSP.
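Once the simulator is running, you can push a looping video file to it with ffmpeg. The file name and stream path here are illustrative:

ffmpeg -re -stream_loop -1 -i backyard.mp4 -c copy -f rtsp rtsp://localhost:8554/camera1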


There are also multiple phone apps which are capable of streaming video from your phone’s camera over RTSP. For this project, I used an Android RTSP camera app and it worked great.

Conclusion

Hopefully this guide has shown you how easy it is to get up and running with AWS Panorama. You can find the full example for this demo application in the project repository.

For even more use cases and examples, you can check out the officially maintained list of AWS-provided sample Panorama applications on GitHub.
