I need to burn a time code into a video, and I'm wondering whether this is something ffmpeg is capable of.
Short answer: no.
Long answer: yes, but not without using a separate library to create the frames with the rendered time code on them, with transparency filling the rest of the frame, and then using FFmpeg to overlay those frames on the existing video. Off the top of my head I don't know how to do this, but I'm sure if you're creative you can figure it out.
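The overlay step described above could be sketched with the ffmpeg command-line tool roughly as below. This is only an illustration: the synthetic inputs, file names, and overlay position are all assumptions standing in for your real video and for the transparent frames a rendering library would produce.

```shell
# Stand-ins for the real inputs: a 1-second test clip and a sequence of small,
# semi-transparent PNGs (a rendering library would draw time code text here).
ffmpeg -y -f lavfi -i testsrc=s=320x240:r=25:d=1 -c:v mpeg4 input.mp4
ffmpeg -y -f lavfi -i color=c=black@0.5:s=120x24:r=25:d=1 -pix_fmt rgba tc_%04d.png

# Overlay the PNG sequence on the video, frame for frame, at position (10,10).
# mpeg4 is used for output only to avoid depending on a particular H.264 build.
ffmpeg -y -i input.mp4 -framerate 25 -i tc_%04d.png \
  -filter_complex "[0:v][1:v]overlay=10:10" \
  -c:v mpeg4 -c:a copy output.mp4
```

The key point is that the PNG sequence must match the video's frame rate so each rendered time code lines up with the right frame.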
Edit: I've been working on this problem because it is an interesting question/project for me. I've come a little further with the solution by writing a Perl script that will generate an .srt file with the time code embedded in it, for any video file whose metadata FFmpeg is configured to read. It uses the Video::FFmpeg library to read the duration and saves the subtitle file as ${video}.srt. This makes MPlayer render it automatically if you insert the following lines into your ~/.mplayer/config:
# select subtitle files automatically in the current directory, all files
# matching the basename of the current playing file
sub-fuzziness=1
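For reference, the generated file would be ordinary SubRip format; a hypothetical excerpt (the exact cue layout is up to the script) might look like:

```
1
00:00:00,000 --> 00:00:01,000
00:00:00

2
00:00:01,000 --> 00:00:02,000
00:00:01
```

Each one-second cue simply displays the time code of the moment it covers.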
I'm still working out how to position and overlay the rendered subtitles on the video and re-encode it in the same format. I'll update this post as I learn more.
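One likely route for the burn-in step is FFmpeg's subtitles filter, which renders an .srt onto the picture during re-encoding. This is a sketch, not a tested recipe: it assumes an FFmpeg build with libass support, and the inputs are synthetic stand-ins for a real video and its generated subtitle file.

```shell
# Stand-ins: a 1-second test clip and a one-cue subtitle file.
ffmpeg -y -f lavfi -i testsrc=s=320x240:r=25:d=1 -c:v mpeg4 video.mp4
printf '1\n00:00:00,000 --> 00:00:01,000\n00:00:00\n' > video.srt

# Burn the subtitles into the picture and re-encode the video track;
# any audio would be copied through untouched. Requires libass support.
ffmpeg -y -i video.mp4 -vf "subtitles=video.srt" -c:v mpeg4 -c:a copy burned.mp4
```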
I believe FFmbc, a broadcast-oriented fork of FFmpeg, can help you with that:
http://code.google.com/p/ffmbc/
But I don't know much about it.
FFmpeg will be able to do much of the work, but it won't all be packaged up for you. Using FFmpeg, you can have it decode all of the frames in sequence and give you the "presentation time stamp" (additional time-related metadata may be available in some formats, but PTS is what you'll want to look for to get started).

Then you are on your own to actually draw the text onto the decoded frame yourself. I use Qt for similar things, by using QPainter on a QImage built from the frame data, but there may be some other API for drawing on an image that you find more obvious. Then use the FFmpeg API to write a compressed video containing your newly drawn-on frames.

It'll be a little more complicated if you also want audio. My own work doesn't really care about audio, so I haven't bothered to learn the audio aspects of the API. Basically, as you run your read loop getting packets out of the file, some of them will be audio. Instead of discarding them like I do, you'll need to keep them and write them into the output file as you get them.
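Before writing any API code, you can get a feel for the PTS values mentioned above with ffprobe, which ships with FFmpeg. The short clip generated here is just a placeholder for your own file.

```shell
# Stand-in input: a short synthetic clip (1 second at 25 fps).
ffmpeg -y -f lavfi -i testsrc=s=320x240:r=25:d=1 -c:v mpeg4 input.mp4

# Print the presentation time stamp (in seconds) of every video packet,
# one value per line.
ffprobe -v error -select_streams v:0 \
  -show_entries packet=pts_time -of csv=p=0 input.mp4
```

These are the same timestamps your decode loop would see on each packet, which is what you'd format and draw onto the frames.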
I've only used the C API, rather than C#, so I don't know if there are any special gotchas there to be worried about.