I am starting a project and would like your opinion. The scenario is as follows: I am using an UltraZed board, which has a PL (FPGA) side and a PS (Linux) side. The PL receives raw video (640x480 at 30 fps) over Ethernet, performs pixel manipulation, and then writes the video to memory. After that, the PS takes the raw video data from memory and transfers it to SATA. What is the best and most reliable way to do this? I don't want any frame loss. Our hardware is a Xilinx UltraScale+ device, the application will run on PetaLinux, and the raw video will be saved to SATA.
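To make the PS side concrete, this is roughly what I have in mind if I end up reading the frames directly out of a reserved DDR region through /dev/mem and appending them to a file on the SATA drive. The physical base address, bytes per pixel, and output path below are only placeholders, and the handshake that tells the PS a new frame is complete is omitted:

```c
/*
 * Rough sketch of the PS side: map a reserved DDR region via /dev/mem and
 * append each completed frame to a file on the SATA drive. Address, frame
 * size/format, and output path are placeholder assumptions.
 */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#define FRAME_PHYS_ADDR 0x70000000UL          /* assumed reserved-memory base  */
#define FRAME_SIZE      (640 * 480 * 2)       /* assuming 16 bits per pixel    */
#define OUT_PATH        "/mnt/sata/video.raw" /* assumed SATA mount point      */

int main(void)
{
    int memfd = open("/dev/mem", O_RDONLY | O_SYNC);
    if (memfd < 0) { perror("open /dev/mem"); return 1; }

    uint8_t *frame = mmap(NULL, FRAME_SIZE, PROT_READ, MAP_SHARED,
                          memfd, FRAME_PHYS_ADDR);
    if (frame == MAP_FAILED) { perror("mmap"); return 1; }

    FILE *out = fopen(OUT_PATH, "wb");
    if (!out) { perror("fopen"); return 1; }

    for (;;) {
        /* TODO: block here until the PL signals that a new frame is ready
         * (status register polled via another mapping, UIO interrupt, ...). */
        if (fwrite(frame, 1, FRAME_SIZE, out) != FRAME_SIZE) {
            perror("fwrite");
            break;
        }
    }

    fclose(out);
    munmap(frame, FRAME_SIZE);
    close(memfd);
    return 0;
}
```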
My preference, though, would be to use the V4L2 library and driver infrastructure. But I don't understand how: there is no device node such as /dev/video0 on my system. How can I capture the video stream (from memory) using V4L2?
Is there a way to read memory with V4L2?
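For reference, below is my understanding of the normal V4L2 capture sequence, assuming some driver (for example, a Xilinx video DMA / framebuffer-read driver configured through the device tree) exposed the PL's output as /dev/video0. The pixel format and output path are assumptions on my part; is this the kind of setup I should be aiming for, and if so, what do I need in the PL design and device tree to get that /dev/video0 node?

```c
/*
 * Standard V4L2 mmap capture sequence, assuming a driver exposes the PL's
 * DMA output as /dev/video0. Pixel format and output path are assumptions.
 */
#include <fcntl.h>
#include <linux/videodev2.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/dev/video0", O_RDWR);
    if (fd < 0) { perror("open /dev/video0"); return 1; }

    /* Configure 640x480 raw capture (YUYV is an assumed format). */
    struct v4l2_format fmt;
    memset(&fmt, 0, sizeof(fmt));
    fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    fmt.fmt.pix.width = 640;
    fmt.fmt.pix.height = 480;
    fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_YUYV;
    fmt.fmt.pix.field = V4L2_FIELD_NONE;
    if (ioctl(fd, VIDIOC_S_FMT, &fmt) < 0) { perror("VIDIOC_S_FMT"); return 1; }

    /* Request one mmap'ed buffer to keep the example short. */
    struct v4l2_requestbuffers req;
    memset(&req, 0, sizeof(req));
    req.count = 1;
    req.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    req.memory = V4L2_MEMORY_MMAP;
    if (ioctl(fd, VIDIOC_REQBUFS, &req) < 0) { perror("VIDIOC_REQBUFS"); return 1; }

    struct v4l2_buffer buf;
    memset(&buf, 0, sizeof(buf));
    buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    buf.memory = V4L2_MEMORY_MMAP;
    buf.index = 0;
    if (ioctl(fd, VIDIOC_QUERYBUF, &buf) < 0) { perror("VIDIOC_QUERYBUF"); return 1; }

    void *mem = mmap(NULL, buf.length, PROT_READ | PROT_WRITE, MAP_SHARED,
                     fd, buf.m.offset);
    if (mem == MAP_FAILED) { perror("mmap"); return 1; }

    if (ioctl(fd, VIDIOC_QBUF, &buf) < 0) { perror("VIDIOC_QBUF"); return 1; }
    enum v4l2_buf_type type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    if (ioctl(fd, VIDIOC_STREAMON, &type) < 0) { perror("VIDIOC_STREAMON"); return 1; }

    /* Dequeue one frame and append it to a file on the SATA drive. */
    if (ioctl(fd, VIDIOC_DQBUF, &buf) < 0) { perror("VIDIOC_DQBUF"); return 1; }
    FILE *out = fopen("/mnt/sata/frame.raw", "ab"); /* assumed mount point */
    if (out) {
        fwrite(mem, 1, buf.bytesused, out);
        fclose(out);
    }

    ioctl(fd, VIDIOC_STREAMOFF, &type);
    munmap(mem, buf.length);
    close(fd);
    return 0;
}
```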