An introduction to system-to-system pipeline development

Software & Hardware

  • Server: Linux Machine
      • OS: Ubuntu 22.04.1 LTS (Jammy Jellyfish)
      • GStreamer: 1.20.3
  • Client: Linux Machine
      • OS: Ubuntu 22.04.1 LTS (Jammy Jellyfish)
      • GStreamer: 1.20.3

1. Introduction

(Image of the two machines connected over the network here)

This tutorial demonstrates the development of two separate pipelines that are connected over the network, specifically over a LAN between two machines.

In order to make this tutorial work, we need to ensure that both machines are set up properly. Both machines need to have GStreamer installed (see the Official GStreamer Installation Guide).
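On Ubuntu 22.04 the quickest route is usually the distribution packages; the selection below is an assumption that should cover every node used in this tutorial (the command-line tools plus the base and good plugin sets):

  sudo apt-get install gstreamer1.0-tools gstreamer1.0-plugins-base gstreamer1.0-plugins-good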

If required, we can also install DgiRemote on the machines to enable the extra features DgiStreamer provides when working with remote machines (see Install DgiRemote).

Finally, we need a machine running DgiStreamer to develop our pipelines; this can be either of the two machines.

2. Developing the pipeline

We need to develop two pipelines: one to stream a video source and a second one to receive the stream and display it on the machine. However, before continuing we need to introduce some technicalities on how User Datagram Protocol (UDP) streaming in GStreamer works.

GStreamer and UDP streaming

In GStreamer, UDP streaming can work in two ways:

  • broadcast: by providing a valid broadcast IP
  • point-to-point: where the streaming device streams directly to the client’s IP.

In this tutorial, we will use the point-to-point method, so we need to recover the IP address of the client machine, for example with the ifconfig command in a terminal (see the alternatives below).
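On Ubuntu 22.04, ifconfig may not be installed by default (it is part of the net-tools package); the iproute2 tools that ship with the system give the same information:

  ip addr show    # lists every network interface with its assigned addresses
  hostname -I     # prints only the assigned IP addresses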
However, streaming raw video over the network is not advisable, since packet loss or delivery errors can corrupt the received stream entirely. Moreover, raw UDP carries no synchronization information, so continuity issues can arise on the receiving end. GStreamer therefore encapsulates the video in RTP (Real-time Transport Protocol) payloads, a protocol designed for real-time media. GStreamer provides a dedicated payloader for each supported encoding, so you can choose the encoding according to your liking.
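If you want to check which RTP payloaders your GStreamer installation provides, the plugin registry can be filtered from the command line (the grep pattern is just one convenient filter):

  gst-inspect-1.0 | grep rtp | grep pay    # lists the RTP payloader and depayloader nodes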

Note: it should also be possible to stream in a more conservative manner, by binding the streaming machine to the 0.0.0.0 address and giving that machine's IP to the client(s). However, this approach has some known issues, so we leave it out of the scope of this tutorial.

Source pipeline (streaming pipeline)

(Image of the pipeline here)

  • v4l2src: this node is available only on Linux. It is part of the video4linux2 plugin, which reads video from Video4Linux devices such as cameras. By default, the video acquired by the first camera connected to the computer is exposed through the /dev/video0 device file (see the quick check after this list).
  • videotestsrc: this is an alternative node for cases where a camera is not available; it generates a synthetic test video pattern.
  • jpegenc: this node encodes the raw video with the JPEG encoder, generating a JPEG-encoded video stream.
  • rtpjpegpay: this node then wraps the JPEG-encoded video in RTP packets so that it is ready for streaming.
  • udpsink: this is the final node, which serves the video for the client to connect to and receive the stream. We haven’t modified anything here; however, note that by default this node streams to the localhost IP (127.0.0.1) and has the auto-multicast setting enabled, which lets it automatically join multicast groups. For the point-to-point setup described above, the host property should be set to the client machine’s IP.
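Before launching the source pipeline, it can be worth checking that the camera device actually exists and reviewing udpsink’s defaults; the commands below assume a standard Ubuntu installation:

  ls /dev/video*            # lists the Video4Linux device files detected by the kernel
  gst-inspect-1.0 udpsink   # shows the default host, port and auto-multicast properties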

(Download link to the pipeline file)

Sink pipeline (Reading pipeline)

(Image to the receiving pipeline)

  • udpsrc: a network source node that reads UDP packets from the network. We have set port=0 so that the node picks a free port automatically, and enforced caps=application/x-rtp so that it only listens for the expected RTP input; this can be useful if you have more than one stream on the same network.
  • capsfilter: UDP only delivers raw bytes with no description of their content, so we add a capsfilter to specify the format of the received stream. This information can be recovered from the payloader node used on the source side, in our case rtpjpegpay (see the inspection example after this list). The filter is application/x-rtp,encoding-name=JPEG,payload=96: application/x-rtp states that we are receiving an RTP stream, encoding-name=JPEG tells the depayloader that the encoding used is JPEG, and payload=96 is the RTP payload type number (a dynamic payload type) emitted by the rtpjpegpay node on the streaming server.
  • rtpjpegdepay: this node extracts the JPEG-encoded video from the RTP packets and handles protocol-level bookkeeping such as the timestamps used for synchronization.
  • jpegdec: this node decodes JPEG-encoded video.
  • videoscale: scales the raw, decoded video so it can be displayed at a convenient size. Usually, streaming the video at a low resolution and scaling it back up on the receiver improves streaming speed while maintaining good quality. The default scaling method is bilinear (see the resolution example after this list).
  • autovideosink: is used to show the result; it automatically searches for a valid GStreamer node to show the video on the current machine.
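To confirm the caps mentioned in the capsfilter bullet, you can inspect the payloader on the source machine; its src pad template lists the media type, encoding-name and payload range it produces:

  gst-inspect-1.0 rtpjpegpay

If you want videoscale to output a specific resolution instead of whatever the sink negotiates, a caps filter can be placed after it; the 1280x720 value below is only an example:

  ... ! jpegdec ! videoscale ! video/x-raw,width=1280,height=720 ! autovideosink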

(Download link to the pipeline file)

3. Running the pipeline

We now have to run the two pipelines separately: first launch the streaming pipeline on the source machine, then the client pipeline on the second machine. We can do this in two ways: (1) open two instances of DgiStreamer and connect to the two machines, or (2) export the pipelines in GStreamer format using the Preview Button and launch them from the terminal:

  1. Source Pipeline (on the streaming machine; replace <CLIENT_IP> with the client address found earlier):
       • Using a camera on Linux: gst-launch-1.0 v4l2src ! jpegenc ! rtpjpegpay ! udpsink host=<CLIENT_IP>
       • Using the test video source: gst-launch-1.0 videotestsrc ! jpegenc ! rtpjpegpay ! udpsink host=<CLIENT_IP>
  2. Client Pipeline (on the receiving machine): gst-launch-1.0 udpsrc caps='application/x-rtp,encoding-name=JPEG,payload=96' ! rtpjpegdepay ! jpegdec ! videoscale ! autovideosink
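If no video appears, a useful sanity check is to run both pipelines on the same machine first: with udpsink’s default host of 127.0.0.1 no extra configuration is needed, and the -v flag makes gst-launch-1.0 print the negotiated caps so you can spot mismatches:

  gst-launch-1.0 -v videotestsrc ! jpegenc ! rtpjpegpay ! udpsink
  gst-launch-1.0 -v udpsrc caps='application/x-rtp,encoding-name=JPEG,payload=96' ! rtpjpegdepay ! jpegdec ! videoscale ! autovideosink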

After a few seconds, you should be able to see your video!