At WebRTC.ventures, we have worked on several projects implementing live streaming camera applications with a Raspberry Pi. For example, we implemented a motion detection camera that allows a user to watch their camera live and to watch pre-recorded videos that were generated when motion was detected. After doing this, we wanted to take it further with the hot technology of the year: artificial intelligence (AI).

While there are many ways to do that, a Raspberry Pi by itself is quite limited for heavy ML video processing, especially in real time. So we decided to build a demo that offloads the heavy processing to a server.

There are many options and frameworks for streaming video from a Raspberry Pi. Here is a comparison table for some of them on a Raspberry Pi 3:

Using direct WebRTC from the RPi seems to be the best choice in terms of latency. However, because of direct WebRTC’s high CPU usage, we’re opting for GStreamer this time. We’ll use GStreamer to send an RTP stream to a media server that will handle the distribution to the viewers using WebRTC, as sketched below. This is a more generic approach that can be applied to less powerful devices or to RTP streaming cameras.
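For reference, here’s a minimal sketch of the sender side on the Pi, piping the camera’s H264 output into a GStreamer RTP pipeline. SERVER_IP, the port, and the encoding settings are placeholders to adapt to your setup:

```bash
# Capture H264 from the Pi camera and send it as RTP over UDP
raspivid -t 0 -w 640 -h 480 -fps 30 -b 1000000 -o - | \
  gst-launch-1.0 fdsrc ! h264parse ! rtph264pay config-interval=1 pt=96 ! \
  udpsink host=SERVER_IP port=5000
```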

Because we love Node.js, we thought it’d be great to build this video processing service using Node.js. We used the opencv4nodejs bindings, which can be found on GitHub.

Although you can easily play with images and save video files using OpenCV, piping the modified video stream back out of OpenCV is not as easy.

Installing OpenCV with GStreamer

To integrate with GStreamer, you will need to build OpenCV manually. You can follow the official OpenCV installation guide for most of the steps. Remember to include -D WITH_GSTREAMER=ON when running CMake:
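Here’s a minimal sketch of such a build on a Debian-based system; the dependency list is trimmed and the flags are illustrative, so adapt them to your environment:

```bash
# Install the GStreamer development packages (trimmed list)
sudo apt-get install libgstreamer1.0-dev libgstreamer-plugins-base1.0-dev

# Build OpenCV from source with GStreamer support enabled
git clone https://github.com/opencv/opencv.git
cd opencv && mkdir build && cd build
cmake -D CMAKE_BUILD_TYPE=RELEASE -D WITH_GSTREAMER=ON ..
make -j4
sudo make install
```

To have opencv4nodejs use this system build instead of compiling its own copy, set OPENCV4NODEJS_DISABLE_AUTOBUILD=1 before running npm install.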

Once installed, we’re able to open any RTP stream with GStreamer commands within OpenCV:
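For instance, with opencv4nodejs (a sketch; the trailing appsink is what makes OpenCV treat the string as a GStreamer pipeline rather than a file path):

```javascript
const cv = require('opencv4nodejs');

// Open a GStreamer test source through OpenCV's VideoCapture
const vCap = new cv.VideoCapture('videotestsrc ! videoconvert ! appsink');
```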

The pipeline above will just produce a test video. To grab a UDP stream, you’ll need something like this:
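A sketch, assuming the Pi sends RTP-wrapped H264 to port 5000; the port and the caps are placeholders that must match your sender:

```javascript
// udpsrc caps must describe the incoming RTP stream; values here are placeholders
const pipeline =
  'udpsrc port=5000 caps="application/x-rtp,media=video,encoding-name=H264,payload=96" ' +
  '! rtph264depay ! h264parse ! avdec_h264 ! videoconvert ! appsink';

const vCap = new cv.VideoCapture(pipeline);
```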

Now we’ll be able to play with OpenCV and apply ML image processing to the video. Here you can find a whole example.

In our case, we want to live stream to dozens of viewers. Vanilla WebRTC won’t be able to handle it, so we’re going to use the Janus WebRTC gateway, and specifically its Streaming plugin. In the end, the high-level architecture looks something like this: the Raspberry Pi sends RTP over UDP to the Node.js/OpenCV service, which processes the frames and re-streams RTP to Janus, and Janus distributes the stream to the viewers over WebRTC.
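On the Janus side, the Streaming plugin needs a mountpoint listening for our RTP stream. A sketch of one, in the jcfg format used by recent Janus versions; the port, payload type, and fmtp line are placeholders that must match the VideoWriter pipeline shown later:

```
# janus.plugin.streaming.jcfg (sketch; values are illustrative)
rpi-opencv: {
        type = "rtp"
        id = 1
        description = "Processed Raspberry Pi stream"
        audio = false
        video = true
        videoport = 8004
        videopt = 96
        videortpmap = "H264/90000"
        videofmtp = "profile-level-id=42e01f;packetization-mode=1"
}
```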

Basic colormap filter on video using OpenCV

We’ll keep getting the video frames and applying the filter to each one:
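A sketch of the processing loop; the frame rate is a placeholder, and applyColorMap is assumed to be bound in your opencv4nodejs version:

```javascript
const FPS = 30; // placeholder; should match the source frame rate

setInterval(() => {
  let frame = vCap.read();
  if (frame.empty) return; // the stream may not have produced a frame yet

  // Assumed binding: applyColorMap may not exist in every opencv4nodejs release
  frame = frame.applyColorMap(cv.COLORMAP_AUTUMN);

  // writer.write(frame); // wired up in the next section
}, 1000 / FPS);
```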

That’s it! Now we just need to feed this to Janus.

Video writing using GStreamer magic

This part requires some understanding of both OpenCV and GStreamer: there are a few caveats in the GStreamer pipeline that, if missed, lead to missing video keyframes or other encoding issues.

In my case, I was sending H264 video from the Raspberry Pi and needed to add format=I420 explicitly in OpenCV’s VideoWriter pipeline. With a test source, I had to specify the format being used explicitly as well (e.g., BGR). Finally, after modifying the video, you can use OpenCV’s VideoWriter:
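A sketch of the writer, encoding to H264 and sending RTP to Janus; the host, port, encoder settings, and frame size are placeholders that must match your Janus mountpoint and the frames you actually produce:

```javascript
// fourcc 0 defers codec selection to the GStreamer pipeline itself
const writer = new cv.VideoWriter(
  'appsrc ! videoconvert ' +
  '! video/x-raw,format=I420 ' +                        // explicit format avoids encoding issues
  '! x264enc speed-preset=ultrafast tune=zerolatency key-int-max=30 ' +
  '! rtph264pay config-interval=1 pt=96 ' +
  '! udpsink host=127.0.0.1 port=8004',                 // must match the Janus mountpoint
  0,
  FPS,
  new cv.Size(640, 480),                                // must match the frames you write
  true                                                  // isColor: we write BGR frames
);
```

With this in place, uncommenting writer.write(frame) in the loop above pushes each processed frame into the pipeline and on to Janus.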

Result

Although this is definitely not a heavy processing task, we were able to stream video from a Raspberry Pi, apply a filter, and stream it live to dozens of users. You can find the demo project here.

In the second part of this project, we’ll go a bit deeper and do something more complex with OpenCV and TensorFlow. Stay tuned!
