I want to continue processing frames as BufferedImages while using OpenCV (in C) to supply the video. Using OpenCV's frames alone, I found that OpenCV returns HD webcam frames and draws them to the screen very efficiently.
I assumed it would be straightforward to feed this data to Java with the same efficiency. I have just finished writing a JNI DLL that copies this data into a BufferedImage and returns it to Java. However, I found that the data copying I'm doing is really hindering performance. My goal is 30 FPS, but copying the data from the char array OpenCV returns into a Java BufferedImage takes roughly 100 ms, so instead I'm seeing about 2-5 FPS.
When a frame capture is returned, OpenCV provides a pointer to a 1D char array. This data needs to be provided to Java, and apparently I don't have the time to copy it.
I need a better solution for getting these frame captures into a BufferedImage. I'm considering a few solutions, none of which I think is very good (and I'm fairly sure they would also perform poorly):
(1) Override BufferedImage and return pixel data from its various methods by making native calls into the DLL. (Instead of copying the array all at once, I would return individual pixels as the calling code requests them.) Note that calling code typically needs all of the pixels in the image in order to paint or process it, so the individual pixel-grab operations would end up running inside a 2D for loop.
(2) Instruct the BufferedImage to use a java.nio.ByteBuffer to somehow directly access the data in the char array returned by OpenCV. Any tips on how this could be done would be appreciated.
(3) Do everything in C and forget Java. OK, yes, this sounds like the most logical solution, but I would never have the time to start this multi-month project from scratch.
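For what it's worth, the closest I can picture for option (2) on the Java side is sketched below. A true zero-copy BufferedImage backed by native memory isn't possible with the stock classes, but the native side could hand back a direct ByteBuffer (e.g. created with JNI's NewDirectByteBuffer over OpenCV's frame data) and Java could drain it into a TYPE_3BYTE_BGR image's backing array in a single bulk get(). This is only a sketch under assumptions: the class and method names are mine, and allocateDirect stands in for the buffer a native method would return.

```java
import java.awt.image.BufferedImage;
import java.awt.image.DataBufferByte;
import java.nio.ByteBuffer;

public class DirectBufferFrame {

    // Hypothetical helper: bulk-copies a BGR direct buffer into a BufferedImage.
    public static BufferedImage toImage(ByteBuffer frame, int width, int height) {
        // TYPE_3BYTE_BGR matches OpenCV's BGR byte order, so no per-pixel swizzling is needed
        BufferedImage img = new BufferedImage(width, height, BufferedImage.TYPE_3BYTE_BGR);
        byte[] dst = ((DataBufferByte) img.getRaster().getDataBuffer()).getData();
        frame.rewind();
        frame.get(dst); // one bulk copy instead of width*height setPixel calls
        return img;
    }

    public static void main(String[] args) {
        int w = 4, h = 2;
        // stand-in for the direct buffer a native method would return via NewDirectByteBuffer
        ByteBuffer fake = ByteBuffer.allocateDirect(w * h * 3);
        for (int i = 0; i < w * h * 3; i++) {
            fake.put((byte) i);
        }
        BufferedImage img = toImage(fake, w, h);
        System.out.println(img.getWidth() + "x" + img.getHeight()); // prints "4x2"
    }
}
```

The single get() still copies, but it is one memmove-style operation rather than a JNI round-trip per pixel, which is where the 100 ms seems likely to be going.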
So far, my JNI code has been written to return the BufferedImage, but at this point I'm willing to accept the return of a 1D char array and then put it into a BufferedImage myself.
By the way, the question here is: what is the most efficient way to copy the image data in a 1D char array into a BufferedImage?
Provided below is the (inefficient) code I use to source the image from OpenCV and copy it into a BufferedImage:
```cpp
JNIEXPORT jobject JNICALL Java_graphicanalyzer_ImageFeedOpenCV_getFrame
  (JNIEnv *env, jobject jThis, jobject camera)
{
    // get the memory address of the CvCapture device, the value of which
    // is encapsulated in the camera jobject
    jclass cameraClass = env->FindClass("graphicanalyzer/Camera");
    jfieldID fid = env->GetFieldID(cameraClass, "pCvCapture", "I");

    // get the address of the CvCapture device
    int a_pCvCapture = (int)env->GetIntField(camera, fid);

    // get a pointer to the CvCapture device
    CvCapture *capture = (CvCapture*)a_pCvCapture;

    // get a frame from the CvCapture device
    IplImage *frame = cvQueryFrame(capture);

    // get a handle on the BufferedImage class
    jclass bufferedImageClass = env->FindClass("java/awt/image/BufferedImage");
    if (bufferedImageClass == NULL)
    {
        return NULL;
    }

    // get a handle on the BufferedImage(int width, int height, int imageType) constructor
    jmethodID bufferedImageConstructor =
        env->GetMethodID(bufferedImageClass, "<init>", "(III)V");

    // get the field ID of BufferedImage.TYPE_INT_RGB
    jfieldID imageTypeFieldID =
        env->GetStaticFieldID(bufferedImageClass, "TYPE_INT_RGB", "I");

    // get the int value from the BufferedImage.TYPE_INT_RGB field
    jint imageTypeIntRGB = env->GetStaticIntField(bufferedImageClass, imageTypeFieldID);

    // create a new BufferedImage
    jobject ret = env->NewObject(bufferedImageClass, bufferedImageConstructor,
                                 (jint)frame->width, (jint)frame->height, imageTypeIntRGB);

    // get a handle on the method BufferedImage.getRaster()
    jmethodID getWritableRasterID =
        env->GetMethodID(bufferedImageClass, "getRaster",
                         "()Ljava/awt/image/WritableRaster;");

    // call the BufferedImage.getRaster() method
    jobject writableRaster = env->CallObjectMethod(ret, getWritableRasterID);

    // get a handle on the WritableRaster class
    jclass writableRasterClass = env->FindClass("java/awt/image/WritableRaster");

    // get a handle on the void setPixel(int x, int y, int[] rgb) method
    jmethodID setPixelID = env->GetMethodID(writableRasterClass, "setPixel", "(II[I)V");

    // iterate through the frame we got above and set each pixel within the WritableRaster
    jintArray rgbArray = env->NewIntArray(3);
    jint rgb[3];
    char *px;
    for (jint x = 0; x < frame->width; x++)
    {
        for (jint y = 0; y < frame->height; y++)
        {
            px = frame->imageData + (frame->widthStep * y + x * frame->nChannels);
            rgb[0] = px[2] & 0xFF; // OpenCV returns BGR byte order
            rgb[1] = px[1] & 0xFF; // OpenCV returns BGR byte order
            rgb[2] = px[0] & 0xFF; // OpenCV returns BGR byte order

            // copy the jint array into the jintArray; SetIntArrayRegion takes
            // (array, start, len, buf)
            env->SetIntArrayRegion(rgbArray, 0, 3, rgb);

            // call setPixel() - this is a copy operation
            env->CallVoidMethod(writableRaster, setPixelID, x, y, rgbArray);
        }
    }

    return ret; // return the BufferedImage
}
```