Using the FFmpeg libraries to encode video frames captured with OpenCV 3 to H.264 via x264 (an H.264 encoder) on Linux involves multiple steps. The process can be broadly divided into the following stages:
- Environment Setup: Ensure that OpenCV and FFmpeg libraries, including the x264 encoder, are installed on the Linux system.
- Writing Code: Use the C++ OpenCV API to capture video frames, then use FFmpeg's libav* libraries to encode the frames as H.264.
- Compilation and Execution: Compile the C++ program and run it on Linux, ensuring the video is correctly encoded and stored.
Detailed Steps:
1. Environment Setup:
- First, install OpenCV and FFmpeg on the Linux system. Use a package manager such as apt (Debian/Ubuntu) or dnf (Fedora) for installation.
```bash
sudo apt-get install libopencv-dev
sudo apt-get install ffmpeg libavcodec-dev libavformat-dev libavutil-dev libswscale-dev
```
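Before writing any code, it is worth confirming that the installed FFmpeg build actually includes the x264 encoder and that the development libraries are visible at link time (a quick sanity check; exact package names vary by distribution):

```shell
# List FFmpeg's encoders and filter for x264; a line such as
# "V....D libx264 ..." means the encoder is available.
ffmpeg -hide_banner -encoders | grep libx264

# Confirm pkg-config can locate the libav* development packages.
pkg-config --exists libavcodec libavformat libswscale && echo "libav* dev packages found"
```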
2. Writing Code:
- Create a C++ program that uses OpenCV to capture video frames and then uses FFmpeg's API to encode them with x264. The following is a simplified code example:
```cpp
#include <opencv2/opencv.hpp>
extern "C" {
#include <libavcodec/avcodec.h>
#include <libswscale/swscale.h>
}
#include <cstdio>
#include <cstdlib>

// Send a frame (or NULL to flush) to the encoder and write every packet it
// produces. This uses the send/receive API that replaced the deprecated
// avcodec_encode_video2, which was removed in FFmpeg 5.0.
static void encode(AVCodecContext *c, AVFrame *frame, AVPacket *pkt, FILE *f) {
    if (avcodec_send_frame(c, frame) < 0) {
        fprintf(stderr, "Error sending frame to encoder\n");
        exit(1);
    }
    for (;;) {
        int ret = avcodec_receive_packet(c, pkt);
        if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
            return;
        if (ret < 0) {
            fprintf(stderr, "Error during encoding\n");
            exit(1);
        }
        fwrite(pkt->data, 1, pkt->size, f);
        av_packet_unref(pkt);
    }
}

int main() {
    cv::VideoCapture cap(0);                     // open the default camera
    if (!cap.isOpened())
        return -1;

    const AVCodec *codec = avcodec_find_encoder(AV_CODEC_ID_H264);
    if (!codec) { fprintf(stderr, "Codec not found\n"); exit(1); }

    AVCodecContext *c = avcodec_alloc_context3(codec);
    if (!c) { fprintf(stderr, "Could not allocate video codec context\n"); exit(1); }

    c->bit_rate = 400000;
    c->width = 640;
    c->height = 480;
    c->time_base = (AVRational){1, 25};
    c->framerate = (AVRational){25, 1};
    c->gop_size = 10;
    c->max_b_frames = 1;
    c->pix_fmt = AV_PIX_FMT_YUV420P;

    if (avcodec_open2(c, codec, NULL) < 0) {
        fprintf(stderr, "Could not open codec\n");
        exit(1);
    }

    FILE *f = fopen("output.h264", "wb");        // raw H.264 elementary stream
    if (!f) { fprintf(stderr, "Could not open output file\n"); exit(1); }

    AVFrame *frame = av_frame_alloc();
    if (!frame) { fprintf(stderr, "Could not allocate video frame\n"); exit(1); }
    frame->format = c->pix_fmt;
    frame->width  = c->width;
    frame->height = c->height;
    if (av_frame_get_buffer(frame, 0) < 0) {
        fprintf(stderr, "Could not allocate raw picture buffer\n");
        exit(1);
    }

    AVPacket *pkt = av_packet_alloc();
    if (!pkt) { fprintf(stderr, "Could not allocate packet\n"); exit(1); }

    // Converter from OpenCV's packed BGR to the planar YUV420P x264 expects.
    SwsContext *sws = sws_getContext(c->width, c->height, AV_PIX_FMT_BGR24,
                                     c->width, c->height, AV_PIX_FMT_YUV420P,
                                     SWS_BILINEAR, NULL, NULL, NULL);

    cv::Mat image;
    int64_t pts = 0;
    while (cap.read(image)) {
        cv::resize(image, image, cv::Size(c->width, c->height));
        const uint8_t *src[1] = { image.data };
        const int src_stride[1] = { static_cast<int>(image.step[0]) };
        av_frame_make_writable(frame);
        sws_scale(sws, src, src_stride, 0, c->height,
                  frame->data, frame->linesize);
        frame->pts = pts++;                      // timestamps in time_base units
        encode(c, frame, pkt, f);
    }

    encode(c, NULL, pkt, f);                     // flush delayed frames
    const uint8_t endcode[] = { 0, 0, 1, 0xb7 }; // MPEG end-of-sequence code
    fwrite(endcode, 1, sizeof(endcode), f);
    fclose(f);

    sws_freeContext(sws);
    av_packet_free(&pkt);
    av_frame_free(&frame);
    avcodec_free_context(&c);
    return 0;
}
```
3. Compilation and Execution:
- Compile the above code using g++, linking OpenCV and FFmpeg libraries.
```bash
g++ -o video_capture video_capture.cpp `pkg-config --cflags --libs opencv4` \
    -lavcodec -lavformat -lavutil -lswscale
```
- Run the program:
```bash
./video_capture
```
Notes:
- The code example keeps error handling and resource cleanup minimal; in real applications, every API return value should be checked and resources released on all exit paths to ensure robustness.
- Color space conversion is necessary because OpenCV captures frames in packed BGR format, while x264 expects planar YUV (typically YUV420P); libswscale's sws_scale performs this conversion.
August 15, 2024, 00:05