Building animations from image series with ffmpeg

When working with Vlasiator simulation results, Analysator usually produces directories full of .png files. FFmpeg, the animation Swiss army knife, can turn these into video files. (FFmpeg is available on most relevant systems through module load ffmpeg or apt install ffmpeg.)

A typical command line to do so looks like this:

ffmpeg -y -f image2 -start_number 1234 -framerate 5 -i ABC_Potato_%07d.png -vf "fps=5,scale=trunc(iw/2)*2:trunc(ih/2)*2" -c:v libx264 -preset slow -profile:v baseline -qp 18 -pix_fmt yuv420p outputfile.mp4

Watch out: the ordering of command line options is important. Options placed before an -i apply to that input; options placed after it apply to the output (see the example below).
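
As a quick illustration with placeholder file names, the generic -r rate option means different things depending on where it sits:

ffmpeg -r 5 -i input.mp4 output.mp4   # before -i: reinterpret the input as 5 fps
ffmpeg -i input.mp4 -r 5 output.mp4   # after -i: resample the output to 5 fps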

The individual parts of this command line are:

  • -y: Answer "yes" to every question, in particular the one about overwriting an existing output file
  • -f image2 -start_number 1234 -framerate 5 -i ABC_Potato_%07d.png: This is the input specification. Read input files with names "ABC_Potato_0001234.png, ABC_Potato_0001235.png, ...", assuming an input framerate of 5 fps
  • -vf "fps=5,scale=trunc(iw/2)*2:trunc(ih/2)*2": Build a filter chain that enforces a 5 fps framerate and scales the image to even pixel dimensions. Some devices (hello, iPhone!) won't play videos with an odd width or height.
  • -c:v libx264 -preset slow: Create video with the H.264 codec, with a bunch of parameters set to reasonable defaults with the "slow" preset (For slightly smaller files, use the "veryslow" preset, for real-time video encoding, consider using "fast" or "veryfast").
  • -profile:v baseline: Only use codec features of the "baseline" feature set of H.264. This ensures that the video is playable on all devices. You get significantly smaller files using the "high" profile, but many mobile devices then refuse to play it.
  • -qp 18: Quantization parameter set to 18. This number determines the quality of the resulting video. Smaller numbers give better quality: -qp 0 is perfectly lossless, -qp 10 is visually indistinguishable even on great screens, and -qp 24 is about YouTube quality. (A loop for comparing settings is sketched after this list.)
  • -pix_fmt yuv420p: Encode the resulting video in the YUV420P colour space (a horrible digital descendant of the NTSC colour space, with reduced chroma resolution). Unfortunately, this is the only colour space reliably supported on all consumer hardware. If you leave this parameter out, ffmpeg will instead pick a higher-fidelity pixel format (typically yuv444p), which is much better in terms of scientific reproduction of the results (since it won't distort plotting colour scales), but no longer plays on Apple hardware.
  • outputfile.mp4: The output file that should be written to.
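
To find a good -qp value for your data, one option is to encode the same frames at a few settings and compare quality and file size; here is a minimal bash sketch reusing the input specification from above:

for qp in 10 18 24; do
  ffmpeg -y -f image2 -start_number 1234 -framerate 5 -i ABC_Potato_%07d.png \
    -vf "fps=5,scale=trunc(iw/2)*2:trunc(ih/2)*2" \
    -c:v libx264 -preset slow -profile:v baseline -qp $qp -pix_fmt yuv420p test_qp$qp.mp4
done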

Note:

  • You can specify different framerates for input and output. Since the input, in this case, is "just a bunch of files", that number can be pretty arbitrary. The filter chain defines the output framerate.
  • For a long video, if you have trouble jumping around in playback, try adding more keyframes: ffmpeg -y -f image2 -start_number 1234 -framerate 5 -i ABC_Potato_%07d.png -vf "fps=5,scale=trunc(iw/2)*2:trunc(ih/2)*2" -c:v libx264 -preset slow -profile:v baseline -qp 18 -x264-params keyint=20 -pix_fmt yuv420p outputfile.mp4
  • If you add -movflags +faststart to the command line, the resulting mp4 files have their index at the beginning of the file, so browsers can start playing them before they are fully downloaded.
  • If a resulting video is too huge to be played in a browser (because, for example, your simulation has ridiculously high resolution), you can subsequently create lower-resolution or lower-quality versions of the video by re-encoding: ffmpeg -i highres.mp4 -vf "scale=trunc(iw/8)*2:trunc(ih/8)*2" -c:v libx264 -preset slow -qp 24 lores.mp4 gives a quarter-resolution, low-quality version, for example. (The trunc(...)*2 again keeps both dimensions even, and quoting the filter stops the shell from interpreting the parentheses.)
  • Instead of -start_number 1234 -i formatted_file_%07d.png one can use regular glob patterns: -pattern_type glob -i 'some_imgs_*.png'. This does not fail if some frame is missing.
  • To crop a video that's too large, prepend a crop=width:height[:xoffset:yoffset] filter to the existing filter chain, e.g. -vf "crop=1000:800,fps=5,...". Note that the crop filter separates its parameters with colons, and that all filters have to go into a single -vf option (a second -vf would override the first). If you don't specify xoffset and yoffset, the cropped section will be taken from the middle of the source video. Otherwise, xoffset and yoffset specify the coordinates of the top left corner.
  • There is a huge selection of other filters you can put into the filter chain. Some of them can be useful for our purposes:
    • drawbox
    • drawtext (also see its "timecode" parameter to overlay simulation time, if you forgot to render it into the frames); this requires specific ffmpeg build options though (see below for an ImageMagick solution). A sketch follows after this list.
    • drawgrid
    • overlay can be used both to put videos above each other and to put a Vlasiator logo over the video; see the sketch after this list.
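
If your ffmpeg build includes libfreetype, drawtext can stamp the frame number onto the video; a minimal sketch, in which the font path is a placeholder:

ffmpeg -y -f image2 -start_number 1234 -framerate 5 -i ABC_Potato_%07d.png \
  -vf "fps=5,scale=trunc(iw/2)*2:trunc(ih/2)*2,drawtext=fontfile=/path/to/font.ttf:text='frame %{n}':fontsize=40:fontcolor=black:x=10:y=10" \
  -c:v libx264 -preset slow -profile:v baseline -qp 18 -pix_fmt yuv420p outputfile.mp4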
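
Similarly, overlay can put a logo onto an existing video; logo.png and the output file name here are placeholders:

ffmpeg -i outputfile.mp4 -i logo.png -filter_complex "overlay=main_w-overlay_w-10:10" \
  -c:v libx264 -preset slow -profile:v baseline -qp 18 -pix_fmt yuv420p branded.mp4

The expression main_w-overlay_w-10:10 places the logo in the top right corner with a 10-pixel margin.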

See also

ImageMagick is handy for pre-processing animation frames. Here's a snippet that draws a time string onto a bunch of frames. Font selection is a bit picky, so providing a font file explicitly seems a good idea.

module load ImageMagick # needed on Turso, at least
for filename in path_to_images/*.png; do
  # Keep the file name, write into a different directory.
  outfn=output_path/${filename##*/}
  # Extract the 7-digit frame number from the file name (the offset 92 depends
  # on the length of your path!) and offset it to get the simulation time in seconds.
  tstr=$((10#${filename:92:7}+1300))
  convert "$filename" -font /proj/mjalho/visit_scripts/NewCM10-Regular.otf \
   -fill black -pointsize 40 -gravity NorthWest -draw "text 132,60 't = $tstr s'" "$outfn"
done
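
The annotated frames can then be encoded with the glob-pattern variant of the ffmpeg command from the top of this page (output_path and the output file name follow the snippet above):

ffmpeg -y -framerate 5 -pattern_type glob -i 'output_path/*.png' -vf "fps=5,scale=trunc(iw/2)*2:trunc(ih/2)*2" -c:v libx264 -preset slow -profile:v baseline -qp 18 -pix_fmt yuv420p annotated.mp4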

The FFmpeg FAQ also has a section about making movies from images: https://ffmpeg.org/faq.html