Hello,

I'd like to convert an HBITMAP to a video stream using libavcodec. I get my HBITMAP using:

HBITMAP hCaptureBitmap = CreateCompatibleBitmap(hDesktopDC, nScreenWidth, nScreenHeight);
SelectObject(hCaptureDC, hCaptureBitmap);
BitBlt(hCaptureDC, 0, 0, nScreenWidth, nScreenHeight, hDesktopDC, 0, 0, SRCCOPY);

And I'd like to convert it to YUV (which is required by the codec I'm using). For that I use:

SwsContext *fooContext = sws_getContext(c->width, c->height, PIX_FMT_BGR32,
                                        c->width, c->height, PIX_FMT_YUV420P,
                                        SWS_FAST_BILINEAR, NULL, NULL, NULL);

uint8_t *movie_dib_bits = reinterpret_cast<uint8_t *>(bm.bmBits) + bm.bmWidthBytes * (bm.bmHeight - 1);

int dibrowbytes = -bm.bmWidthBytes;

uint8_t* data_out[1];
int stride_out[1];
data_out[0] = movie_dib_bits;
stride_out[0] = dibrowbytes;

sws_scale(fooContext, data_out, stride_out, 0, c->height, picture->data, picture->linesize);
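The negative stride is there to flip the bottom-up DIB: data_out[0] points at the last row in memory and sws_scale then steps backwards through it. A minimal sketch of that pointer arithmetic, on a tiny hypothetical buffer:

```c
#include <stdint.h>

/* Return a pointer to visual row r (0 = top) of a bottom-up image,
 * using the same last-row-plus-negative-stride trick as
 * movie_dib_bits / dibrowbytes above. */
static const uint8_t *top_down_row(const uint8_t *bottom_up_bits,
                                   int width_bytes, int height, int r)
{
    const uint8_t *top = bottom_up_bits + width_bytes * (height - 1);
    int stride = -width_bytes;   /* negative: next visual row is earlier in memory */
    return top + stride * r;
}
```

Row 0 is read from the end of the buffer and row height-1 from the start, which is exactly the traversal the negative stride makes sws_scale perform.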

But this is not working at all... Any idea why? Or how could I do it differently?

Thank you!

A:

I am not familiar with the stuff you are using to get the bitmap, but assuming it is correct and you have a pointer to the BGR 32-bit/pixel data, try something like this:

uint8_t* inbuffer;
int in_width, in_height, out_width, out_height;

//here, make sure inbuffer points to the input BGR32 data, 
//and the input and output dimensions are set correctly.

//calculate the bytes needed for the output image
int nbytes = avpicture_get_size(PIX_FMT_YUV420P, out_width, out_height);

//create buffer for the output image
uint8_t* outbuffer = (uint8_t*)av_malloc(nbytes);

//create ffmpeg frame structures.  These do not allocate space for image data, 
//just the pointers and other information about the image.
AVFrame* inpic = avcodec_alloc_frame();
AVFrame* outpic = avcodec_alloc_frame();

//this will set the pointers in the frame structures to the right points in 
//the input and output buffers.
avpicture_fill((AVPicture*)inpic, inbuffer, PIX_FMT_BGR32, in_width, in_height);
avpicture_fill((AVPicture*)outpic, outbuffer, PIX_FMT_YUV420P, out_width, out_height);

//create the conversion context
SwsContext* fooContext = sws_getContext(in_width, in_height, PIX_FMT_BGR32, out_width, out_height, PIX_FMT_YUV420P, SWS_FAST_BILINEAR, NULL, NULL, NULL);

//perform the conversion
sws_scale(fooContext, inpic->data, inpic->linesize, 0, in_height, outpic->data, outpic->linesize);

//encode the frame here...

//free memory
av_free(outbuffer);
av_free(inpic);
av_free(outpic);
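For what it's worth, with even dimensions and no row padding, avpicture_fill lays the YUV420P planes out as Y, then U, then V, with linesizes of out_width, out_width/2, out_width/2. A sketch of the offsets it computes (my own arithmetic as an illustration, not the library's code):

```c
#include <stddef.h>

/* Plane offsets and linesizes that avpicture_fill produces for a
 * packed YUV420P buffer (assumes even width/height, no row padding). */
static size_t yuv420p_layout(int width, int height,
                             size_t off[3], int linesize[3])
{
    size_t luma   = (size_t)width * height;
    size_t chroma = (size_t)(width / 2) * (height / 2);
    off[0] = 0;               /* Y plane                */
    off[1] = luma;            /* U plane follows Y      */
    off[2] = luma + chroma;   /* V plane follows U      */
    linesize[0] = width;
    linesize[1] = linesize[2] = width / 2;
    return luma + 2 * chroma; /* total bytes, matching avpicture_get_size */
}
```

For 640x480 this gives chroma offsets of 307200 and 384000 and 460800 bytes total, i.e. width*height*3/2.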

Of course, if you are going to be converting a sequence of frames, just make your allocations once at the beginning and deallocations once at the end.
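As an aside, the per-pixel math behind the RGB-to-YUV step looks roughly like the following. These are the full-range (JPEG) BT.601 coefficients, shown only for illustration; swscale's YUV420P output is limited-range by default, so its exact values will differ slightly:

```c
#include <stdint.h>

static uint8_t clamp_u8(double x)
{
    return x < 0.0 ? 0 : x > 255.0 ? 255 : (uint8_t)(x + 0.5);
}

/* Full-range BT.601 (JPEG) RGB -> YUV, for illustration only. */
static void rgb_to_yuv(uint8_t r, uint8_t g, uint8_t b,
                       uint8_t *y, uint8_t *u, uint8_t *v)
{
    *y = clamp_u8( 0.299000 * r + 0.587000 * g + 0.114000 * b);
    *u = clamp_u8(-0.168736 * r - 0.331264 * g + 0.500000 * b + 128.0);
    *v = clamp_u8( 0.500000 * r - 0.418688 * g - 0.081312 * b + 128.0);
}
```

On top of this, 4:2:0 subsamples U and V by 2 in both directions, so each chroma sample in the output covers a 2x2 block of input pixels.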

Jason