c – Adding OpenCV processing to a GStreamer application

I am trying to do the following: receive a video stream with GStreamer and process it with OpenCV. I have found a few solutions, one of which is to write the video (from GStreamer) into a FIFO and then read it with OpenCV (OPTION3 here: MJPEG streaming and decoding). The problem is that I cannot open the pipe: cvCreateFileCapture never returns. Here is part of the code I wrote:
if(mkfifo("fifo.avi",S_IRUSR| S_IWUSR) == -1)
{
    cout<<"Can't create fifo"<<endl;
    cout<<errno<<endl;
}

loop = g_main_loop_new(NULL,false);

fsink = gst_element_factory_make("filesink","fsink");
g_object_set(G_OBJECT(fsink),"location","fifo.avi",NULL);

playbin = gst_element_factory_make("playbin2","play");    
g_object_set(G_OBJECT(playbin),"uri",uri.c_str(),NULL);
g_object_set(G_OBJECT(playbin),"video-sink",fsink,NULL);

bus = gst_pipeline_get_bus(GST_PIPELINE(playbin));
gst_bus_add_signal_watch(bus);

g_signal_connect(bus,"message::buffering",G_CALLBACK(&set_playbin_state),playbin);
gst_object_unref(bus);

cvNamedWindow("output",CV_WINDOW_AUTOSIZE);
capture = cvCreateFileCapture("fifo.avi");

The program gets stuck on the last line.
P.S. I am using OpenCV 2.3.1.
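A note on the likely cause of the hang (my reading, not from the original post): open() on a FIFO blocks until the other end is opened, and the code above never sets the pipeline to PLAYING before calling cvCreateFileCapture, so filesink never opens the writing end. Moreover, playbin2 hands its video-sink raw decoded frames rather than an AVI stream, so the FIFO would not contain a parseable container even once both ends were open. A hedged sketch of the ordering issue only, with the pipeline started from a helper thread (run_pipeline and the g_thread_new call are illustrative additions):

//illustrative sketch: start the pipeline from a helper thread so the FIFO's
//writer (filesink) and reader (OpenCV) can open concurrently
static gpointer run_pipeline(gpointer data)
{
    gst_element_set_state(GST_ELEMENT(data),GST_STATE_PLAYING);
    return NULL;
}

//...then, instead of calling cvCreateFileCapture directly:
g_thread_new("pipeline",run_pipeline,playbin); //GLib >= 2.32; older GLib: g_thread_create
capture = cvCreateFileCapture("fifo.avi");
g_main_loop_run(loop);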

Solution

So, after searching for a while, I found a solution that involves retrieving the data from the buffer. The idea is to create a playbin and set an appsink as its "video-sink". Here is a code sample:
cout<<"Creating appsink"<<endl;
appsink = gst_element_factory_make("appsink","asink");
gst_app_sink_set_emit_signals((GstAppSink*)appsink,true);
gst_app_sink_set_drop((GstAppSink*)appsink,true);
gst_app_sink_set_max_buffers((GstAppSink*)appsink,1);

//creating and initialising pipeline

g_object_set(G_OBJECT(playbin),"video-sink",appsink,NULL);

g_signal_connect(appsink,"new-buffer",G_CALLBACK(DisplayFrame),(gpointer) mark);
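For completeness, a minimal sketch of the elided "creating and initialising pipeline" part, assuming the same playbin2/uri/loop variables as in the question (an assumption on my part, since the original post omits it):

//before the g_object_set call above:
playbin = gst_element_factory_make("playbin2","play");
g_object_set(G_OBJECT(playbin),"uri",uri.c_str(),NULL);
//...and after connecting the new-buffer signal:
gst_element_set_state(playbin,GST_STATE_PLAYING);
g_main_loop_run(loop);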

//callback function looks like this

gboolean Core::DisplayFrame(GstAppSink *fks,gpointer mark)
{
    static bool init = false;
    static IplImage *frame;
    GstBuffer* buf;
    Mark* mk = (Mark*) mark;

    //allocate the OpenCV image once, on the first callback
    if(!init)
    {
        init = true;
        frame = cvCreateImage(cvSize(mk->frame_w,mk->frame_h),IPL_DEPTH_8U,1);
    }
    buf = gst_app_sink_pull_buffer(fks);
    //wrap the buffer data without copying it
    frame->imageData = (char*)GST_BUFFER_DATA(buf);

    ProcessFrame(frame);
    gst_buffer_unref(buf);
    return true;
}
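The Mark type above is the author's own and its definition is not shown; judging from the members used in this post, a hypothetical minimal version could look like this:

//hypothetical reconstruction, covering only the members this post touches
struct Mark
{
    int frame_w;            //negotiated frame width
    int frame_h;            //negotiated frame height
    void GeneratePoints();  //used in the UPDATE section below
};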

This approach works.
P.S. There is a lot of information about this approach out there, but it took me a lot of time to find it, so I decided to post it here to at least provide some search keywords.

UPDATE.
Some more on connecting GStreamer and OpenCV, this time about converting the buffer into an IplImage.
First, we need to receive an RGB buffer so that the conversion is as simple as possible. To do that, we replace the plain appsink with an appsink linked behind an ffmpegcolorspace element:

cout<<"Creating appsink"<<endl;
appsink = gst_element_factory_make("appsink","asink");
csp = gst_element_factory_make("ffmpegcolorspace","csp");
sinkpipe = gst_pipeline_new("sinkp");
gst_bin_add_many(GST_BIN(sinkpipe),csp,appsink,NULL);
//force RGB on the ffmpegcolorspace -> appsink link
gst_element_link_filtered(csp,appsink,gst_caps_new_simple("video/x-raw-rgb",NULL));
pad = gst_element_get_static_pad(csp,"sink");
//expose csp's sink pad as the bin's own pad so playbin can link to it
gst_element_add_pad(sinkpipe,gst_ghost_pad_new("ghost",pad));
gst_object_unref(pad);

//...

g_object_set(G_OBJECT(playbin),"video-sink",sinkpipe,NULL);

//...

//user data is the preallocated IplImage created below
g_signal_connect(appsink,"new-buffer",G_CALLBACK(GetFrame),(gpointer) frame);

//...

//caps_struct can be retrieved by attaching a data probe;
//see the GStreamer manual for details
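//alternative sketch (my addition): in GStreamer 0.10 every buffer carries
//its negotiated caps, so caps_struct can also be taken from the first
//prerolled buffer once the pipeline is PAUSED or PLAYING
GstBuffer* first = gst_app_sink_pull_preroll(GST_APP_SINK(appsink));
GstStructure* caps_struct = gst_caps_get_structure(GST_BUFFER_CAPS(first),0);
//read width/height/depth below before gst_buffer_unref(first), since the
//caps (and hence caps_struct) belong to the buffer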

cout<<"Getting frame resolution"<<endl;
int depth = 0;
gst_structure_get_int(caps_struct,"width",&(mark->frame_w));
gst_structure_get_int(caps_struct,"height",&(mark->frame_h));
gst_structure_get_int(caps_struct,"depth",&depth);

mark->GeneratePoints();
//"depth" in video/x-raw-rgb caps is the total bit depth across all three
//channels (typically 24), so depth/3 is the per-channel depth (8)
frame = cvCreateImage(cvSize(mark->frame_w,mark->frame_h),depth/3,3);


//callback function

gboolean Core::GetFrame(GstAppSink *fks,gpointer frame)
{
    IplImage* frame_temp = (IplImage*) frame;
    IplImage* frame_temp_two = cvCloneImage(frame_temp);

    GstBuffer* buf;
    buf = gst_app_sink_pull_buffer(fks);
    //point the clone at the raw RGB data of the buffer (no copy)
    frame_temp_two->imageData = (char*) GST_BUFFER_DATA(buf);
    //copy into the target image, swapping R and B into OpenCV's BGR order
    cvConvertImage(frame_temp_two,frame_temp,CV_CVTIMG_SWAP_RB);
    ProcessFrame(frame_temp);
    gst_buffer_unref(buf);
    return true;
}
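One caveat about the callback above: cvCloneImage allocates a full pixel copy on every frame, and that copy is never released (after imageData has been repointed at the GStreamer buffer, cvReleaseImage could not free it safely anyway). A leak-free variation of my own, using an image header around the buffer data instead of a clone:

gboolean Core::GetFrame(GstAppSink *fks,gpointer frame)
{
    IplImage* frame_temp = (IplImage*) frame;
    //header only: no pixel allocation, it just describes the buffer layout
    IplImage* frame_temp_two = cvCreateImageHeader(cvGetSize(frame_temp),
                                                   frame_temp->depth,
                                                   frame_temp->nChannels);
    GstBuffer* buf = gst_app_sink_pull_buffer(fks);
    frame_temp_two->imageData = (char*) GST_BUFFER_DATA(buf);
    cvConvertImage(frame_temp_two,frame_temp,CV_CVTIMG_SWAP_RB);
    ProcessFrame(frame_temp);
    cvReleaseImageHeader(&frame_temp_two); //frees only the header
    gst_buffer_unref(buf);
    return true;
}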

I hope this will help someone.

