I have an app that converts images to video, and in Google Play I see the following crash (the only detail I get is the function name; I don't understand the rest):
backtrace:
#00 pc 0000cc78 /data/app-lib/com.myapp-1/libswscale.so (sws_scale+204)
#01 pc 000012af /data/app-lib/com.myapp-1/libffmpeg.so (OpenImage+322)
code around pc:
79065c58 e58d8068 e58d2070 e58d3074 059d00b0
The backtrace points to the function sws_scale. The code works almost all the time on my own device (a Nexus 5), but I see many crash reports, even from that same device model. Any idea why this happens?
AVFrame* OpenImage(const char* imageFileName,int W_VIDEO,int H_VIDEO,int* numBytes)
{
AVFormatContext *pFormatCtx;
AVCodecContext *pCodecCtx;
AVCodec *pCodec;
AVFrame *pFrame;
int frameFinished;
uint8_t *buffer;
AVPacket packet;
int srcBytes;
AVFrame* frame2 = NULL;// scaled frame
uint8_t* frame2_buffer;
struct SwsContext *resize;
if(av_open_input_file(&pFormatCtx,imageFileName,NULL,NULL)!=0)
{
LOGI("Can't open image file '%s'\n",imageFileName);
return NULL;
}
//dump_format(pFormatCtx,0);
if (av_find_stream_info(pFormatCtx) < 0)
{
LOGI("Can't find stream info.");
return NULL;
}
pCodecCtx = pFormatCtx->streams[0]->codec;
pCodecCtx->pix_fmt = PIX_FMT_YUV420P;
// Find the decoder for the video stream
pCodec = avcodec_find_decoder(pCodecCtx->codec_id);
if (!pCodec)
{
LOGI("Codec not found\n");
return NULL;
}
// Open codec
if(avcodec_open(pCodecCtx,pCodec)<0)
{
LOGI("Could not open codec\n");
return NULL;
}
pFrame = avcodec_alloc_frame();
if (!pFrame)
{
LOGI("Can't allocate memory for AVFrame\n");
return NULL;
}
// Determine required buffer size and allocate buffer
srcBytes = avpicture_get_size(PIX_FMT_YUV420P,pCodecCtx->width,pCodecCtx->height);
buffer = (uint8_t *) av_malloc(srcBytes * sizeof(uint8_t));
avpicture_fill((AVPicture *) pFrame,buffer,PIX_FMT_YUV420P,pCodecCtx->width,pCodecCtx->height);
// Read frame
if (av_read_frame(pFormatCtx,&packet) >= 0)
{
int ret;
// if(packet.stream_index != 0)
// continue;
ret = avcodec_decode_video2(pCodecCtx,pFrame,&frameFinished,&packet);
if (ret > 0)
{
//LOGI("Frame is decoded,size %d\n",ret);
pFrame->quality = 4;
// Create another frame for resized result
frame2 = avcodec_alloc_frame();
*numBytes = avpicture_get_size(PIX_FMT_YUV420P,W_VIDEO,H_VIDEO);
frame2_buffer = (uint8_t *)av_malloc(*numBytes * sizeof(uint8_t));
avpicture_fill((AVPicture*)frame2,frame2_buffer,PIX_FMT_YUV420P,W_VIDEO,H_VIDEO);
// Get resize context
resize = sws_getContext(pCodecCtx->width,pCodecCtx->height,PIX_FMT_YUV420P,W_VIDEO,H_VIDEO,PIX_FMT_YUV420P,SWS_BICUBIC,NULL,NULL,NULL);
// frame2 should be filled with resized samples
ret = sws_scale(resize,(const uint8_t* const*)pFrame->data,pFrame->linesize,0,pCodecCtx->height,frame2->data,frame2->linesize);
sws_freeContext(resize);
}
else
LOGI("Error [%d] while decoding frame: %s\n",ret,strerror(AVERROR(ret)));
}
av_free(pFrame);
av_free_packet(&packet);
avcodec_close(pCodecCtx);
//av_free(pCodecCtx);
av_close_input_file(pFormatCtx);
return frame2;
}
Best answer
After your avcodec_decode_video2 call, don't check only ret. You also need to check frameFinished: if frameFinished == 0, you must not use your frame, because it has not been filled. I don't know about images specifically, but when decoding video this happens frequently; you need to read the next packet and make the next avcodec_decode_video2 call.
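A minimal sketch of that read-and-decode loop, written against the same deprecated FFmpeg API the question's code uses (error handling trimmed; not a drop-in replacement):

```c
// Keep reading packets until the decoder reports a complete frame.
// frameFinished only becomes non-zero once pFrame actually holds picture data.
int frameFinished = 0;
AVPacket packet;
while (!frameFinished && av_read_frame(pFormatCtx, &packet) >= 0) {
    if (packet.stream_index == 0) {   // only feed packets from our stream
        int ret = avcodec_decode_video2(pCodecCtx, pFrame, &frameFinished, &packet);
        if (ret < 0) {
            LOGI("Decode error %d\n", ret);
            av_free_packet(&packet);
            break;
        }
    }
    av_free_packet(&packet);
}
if (frameFinished) {
    // Only now is pFrame safe to pass to sws_scale().
}
```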
Side note: why do you force pCodecCtx->pix_fmt = PIX_FMT_YUV420P? It is automatically set to the correct format by av_find_stream_info, and you should use that value as the sws_getContext source-format parameter.
One last thing: there is no need to fill your pFrame with avpicture_fill. You only need to allocate it; avcodec_decode_video2 takes care of filling it.
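Putting the side notes together, the scaling setup could look roughly like this (a sketch assuming the same FFmpeg version as the question; only the destination buffer still needs avpicture_fill):

```c
// Let the decoder fill pFrame itself; no avpicture_fill on the source side.
AVFrame *pFrame = avcodec_alloc_frame();

// Use the pixel format the demuxer actually detected as the source format,
// instead of forcing pCodecCtx->pix_fmt to PIX_FMT_YUV420P beforehand.
struct SwsContext *resize = sws_getContext(
    pCodecCtx->width, pCodecCtx->height, pCodecCtx->pix_fmt,  // source
    W_VIDEO, H_VIDEO, PIX_FMT_YUV420P,                        // destination
    SWS_BICUBIC, NULL, NULL, NULL);

if (frameFinished) {  // only scale a frame the decoder actually filled
    sws_scale(resize, (const uint8_t * const *)pFrame->data, pFrame->linesize,
              0, pCodecCtx->height, frame2->data, frame2->linesize);
}
sws_freeContext(resize);
```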