September 1, 2020 - First Linux Meetup

45 people signed up and 17 attended. Due to the US health situation, the meetup was held online only, using Zoom Pro. It ran 20 minutes past the planned 60 minutes due to active questions.

In addition to answering configuration questions about libuvc-theta and libuvc-theta-sample, we showed two Linux streaming demos from a Jetson Nano with real-time processing using Python OpenCV.
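
As a rough sketch of the kind of real-time processing shown in the demos (not the exact demo code), the following reads equirectangular frames from the v4l2loopback device with Python OpenCV and runs a simple edge detector. The device index and the Canny step are assumptions for illustration only.

import cv2

# Assumption: the THETA appears on /dev/video0 via v4l2loopback; adjust the index as needed
cap = cv2.VideoCapture(0)

while True:
    ok, frame = cap.read()                       # one equirectangular frame
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 100, 200)            # placeholder real-time processing step
    cv2.imshow("THETA edges", edges)
    if cv2.waitKey(1) & 0xFF == ord("q"):        # press q to quit
        break

cap.release()
cv2.destroyAllWindows()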

Slides

A copy of the slides used in the meetup is available here

Community Projects - ROS and OpenCV

ROS package for RICOH THETA S

RICOH THETA S usage package with Ubuntu 16.04 and ROS Kinetic

Robotic Intelligence Lab (RobInLab) ROS package

rgbd stereo 360 camera

rgbd_stereo_360camera

  • convert fisheye images to equirectangular projection
  • obtain a depth image from a stereo equirectangular image pair, using the basic SGBM algorithm to estimate disparity
  • refine the depth information using a WLS filter in addition to the SGBM algorithm (see the sketch after this list)
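
As a rough sketch of the SGBM plus WLS approach described above (not the RobInLab code itself), the following uses OpenCV's StereoSGBM matcher and the ximgproc WLS disparity filter on an equirectangular stereo pair. The file names and parameter values are placeholders, and opencv-contrib-python is assumed for the ximgproc module.

import cv2

# Placeholders: a stereo pair of equirectangular images from the two cameras
left = cv2.imread("equirect_left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("equirect_right.png", cv2.IMREAD_GRAYSCALE)

# Basic SGBM matcher; numDisparities and blockSize are illustrative values
left_matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=9)
right_matcher = cv2.ximgproc.createRightMatcher(left_matcher)

disp_left = left_matcher.compute(left, right)
disp_right = right_matcher.compute(right, left)

# WLS filter refines the raw SGBM disparity
wls = cv2.ximgproc.createDisparityWLSFilter(left_matcher)
wls.setLambda(8000.0)
wls.setSigmaColor(1.5)
filtered = wls.filter(disp_left, left, None, disp_right)

# Scale to 8-bit for viewing or saving
out = cv2.normalize(filtered, None, 0, 255, cv2.NORM_MINMAX).astype("uint8")
cv2.imwrite("disparity_wls.png", out)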

Q and A

How do I load v4l2loopback automatically when I reboot?

In the file /etc/modules-load.d/modules.conf, add a new line containing v4l2loopback.
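
For example, assuming a standard Ubuntu setup (adjust the path if your distribution uses a different modules-load.d layout):

echo v4l2loopback | sudo tee -a /etc/modules-load.d/modules.conf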

How do I save video to file with the USB API?

If you are using the USB API to save video to file (not streaming), this command works with the THETA V and Z1.

ptpcam -R 0x101c,0,0,1
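
If the command does not respond, you can first confirm the camera is visible over USB (this assumes ptpcam from libptp is installed):

ptpcam --info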

Can I use OpenVSLAM?

It works with video from a file. We didn't test it with a live stream in the past because we didn't have a way to get live video onto Linux. Please test it now and report back. Discussion.

Demo 3 in the video below uses a RICOH THETA V at 1920x960 and 10 fps. Video

How do I access the camera's acceleration or change of position?

You need to use a plug-in to access camera sensor data. Discussion

Related info is in this video of a real-time demo, with an article summary in English and the GitHub repo here. The demo is integrated with TensorFlow Lite using the internal camera OS NDK. There is also a related article here that shows camera yaw and pitch on the OLED of the Z1, along with a gist.

Do you have plans for additional deep learning work?

We may, based on additional feedback. A good next step is to assess the VOC-360 image dataset, which can be used to train models for 360 images. More information on the model and how to download the dataset is here. A TensorFlow demo running inside the camera is here. FDDB-360 contains 17,052 fisheye-looking images and a total of 26,640 annotated faces.

Did you use DetectNet on 4K frames?

(from Craig) For DetectNet, I was only able to test it at 2K on the Jetson Nano due to limited resources. I believe it works at 4K on the Jetson Xavier, but I don't have a Jetson Xavier due to cost.

Does MotionJPEG have a higher per-frame bitrate?

Craig: MotionJPEG runs at a lower frame rate with higher latency, but it is convenient. A demo video of MotionJPEG is at https://youtu.be/5eSdqEudu5s and the code is available for testing. MotionJPEG is not recommended for AI processing, but feel free to give it a try.

Can we use other THETAs, or only the V?

Craig: The Z1 works.

Can the THETA S be used for live streaming?

From craig to Everyone: 10:39 AM
The V and Z1 are the only models that support live streaming of equirectangular video. The THETA S streams MotionJPEG; it should work with Linux in dual-fisheye. The V and Z1 stream in UVC 1.5 in equirectangular.

From john to Everyone: 10:41 AM
Yes, it works.

There is latency of 2-3 seconds with Ubuntu 20.04

John: Had latency of 2-3 seconds. Couldn't test it on Ubuntu 16 or 18. How can the latency be reduced, and what changes need to be made on Ubuntu 16/18?

Craig: I used Ubuntu 20 with a discrete GPU. John might be using software rendering (if the system cannot use the GPU). Craig uses x86 and did not use software rendering.

From john to Everyone: 10:55 AM
I saw your description in the link and tested it on Ubuntu 20.04, but with latency.

From john to Everyone: 10:55 AM
But I couldn't make it work on Ubuntu 16 and 18. Do you know what modification is required?

From craig to Everyone: 10:55 AM
I didn't have latency. Did you install the gstreamer full plug-in pack?
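
For reference, on Ubuntu the full set of gstreamer plug-in packages can usually be installed with the following (package names assume a standard Ubuntu/Debian setup):

sudo apt install gstreamer1.0-plugins-good gstreamer1.0-plugins-bad gstreamer1.0-plugins-ugly gstreamer1.0-libav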

After the meetup, we ran this test to show latency.

You mention it doesn't work with the Raspberry Pi right now. We were wondering why. Is it because of the OS or the CPU architecture (i.e. arm64)?

From Luke Lu to Everyone: 11:01 AM
Thanks for your tutorial. In your forum, you mention it doesn't work with the Raspberry Pi right now. We were wondering why. Is it because of the OS or the CPU architecture (i.e. arm64)?

Craig: Cut the resolution down and try again. Craig couldn't get it to work. The Nano has DDR4, the Pi 3 has DDR3, and hardware acceleration differs; the Pi's H.264 acceleration doesn't do decoding. Cut the stream down to 2K and try again. Try the Pi 4 (newest) at 2K and assess whether it works. Decoding speed is the limiting factor.

Do I need to recompile to change the resolution?

After the meetup, we ran this test using command-line arguments. Using the test, you can specify the resolution with:

$ ./gst_loopback --format 4K
start, hit any key to stop

$ ./gst_loopback --format 2K
start, hit any key to stop
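
If you are not sure which /dev/video device the loopback driver created, you can list the video devices first (assuming v4l-utils is installed):

$ v4l2-ctl --list-devices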

If the THETA is on /dev/video2, you can confirm resolution with:

$ v4l2-ctl --device /dev/video2 --list-formats-ext
ioctl: VIDIOC_ENUM_FMT
    Type: Video Capture

    [0]: 'YU12' (Planar YUV 4:2:0)
        Size: Discrete 1920x960
            Interval: Discrete 0.033s (30.000 fps)

Or if in 4K,

$ v4l2-ctl --device /dev/video2 --list-formats-ext
ioctl: VIDIOC_ENUM_FMT
    Type: Video Capture

    [0]: 'YU12' (Planar YUV 4:2:0)
        Size: Discrete 3840x1920
            Interval: Discrete 0.033s (30.000 fps)