FFPLAY-ALL(1) FFPLAY-ALL(1)

ffplay - FFplay media player

ffplay [options] [input_url]

FFplay is a very simple and portable media player using the FFmpeg libraries and the SDL library. It is mostly used as a testbed for the various FFmpeg APIs.

All the numerical options, if not specified otherwise, accept a string representing a number as input, which may be followed by one of the SI unit prefixes, for example: 'K', 'M', or 'G'.

If 'i' is appended to the SI unit prefix, the complete prefix will be interpreted as a unit prefix for binary multiples, which are based on powers of 1024 instead of powers of 1000. Appending 'B' to the SI unit prefix multiplies the value by 8. This allows using, for example: 'KB', 'MiB', 'G' and 'B' as number suffixes.

Options which do not take arguments are boolean options, and set the corresponding value to true. They can be set to false by prefixing the option name with "no". For example using "-nofoo" will set the boolean option with name "foo" to false.

Options that take arguments support a special syntax where the argument given on the command line is interpreted as a path to the file from which the actual argument value is loaded. To use this feature, add a forward slash '/' immediately before the option name (after the leading dash). E.g.

ffmpeg -i INPUT -/filter:v filter.script OUTPUT

will load a filtergraph description from the file named filter.script.

Some options are applied per-stream, e.g. bitrate or codec. Stream specifiers are used to precisely specify which stream(s) a given option belongs to.

A stream specifier is a string generally appended to the option name and separated from it by a colon. E.g. "-codec:a:1 ac3" contains the "a:1" stream specifier, which matches the second audio stream. Therefore, it would select the ac3 codec for the second audio stream.

A stream specifier can match several streams, so that the option is applied to all of them. E.g. the stream specifier in "-b:a 128k" matches all audio streams.

An empty stream specifier matches all streams. For example, "-codec copy" or "-codec: copy" would copy all the streams without reencoding.

Possible forms of stream specifiers are:

Matches the stream with this index. E.g. "-threads:1 4" would set the thread count for the second stream to 4. If stream_index is used as an additional stream specifier (see below), then it selects stream number stream_index from the matching streams. Stream numbering is based on the order of the streams as detected by libavformat except when a stream group specifier or program ID is also specified. In this case it is based on the ordering of the streams in the group or program.
stream_type is one of following: 'v' or 'V' for video, 'a' for audio, 's' for subtitle, 'd' for data, and 't' for attachments. 'v' matches all video streams, 'V' only matches video streams which are not attached pictures, video thumbnails or cover arts. If additional_stream_specifier is used, then it matches streams which both have this type and match the additional_stream_specifier. Otherwise, it matches all streams of the specified type.
Matches streams which are in the group with the specifier group_specifier. If additional_stream_specifier is used, then it matches streams which both are part of the group and match the additional_stream_specifier. group_specifier may be one of the following:
Match the stream with this group index.
#group_id or i:group_id
Match the stream with this group id.
Matches streams which are in the program with the id program_id. If additional_stream_specifier is used, then it matches streams which both are part of the program and match the additional_stream_specifier.
#stream_id or i:stream_id
Match the stream by stream id (e.g. PID in MPEG-TS container).
Matches streams with the metadata tag key having the specified value. If value is not given, matches streams that contain the given tag with any value.
Matches streams with the given disposition(s). dispositions is a list of one or more dispositions (as printed by the -dispositions option) joined with '+'.
Matches streams with a usable configuration: the codec must be defined and essential information such as video dimensions or audio sample rate must be present.
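For instance, a hedged sketch combining these forms (file names are illustrative):

ffmpeg -i INPUT -map 0:V -map 0:m:language:eng -c copy OUTPUT

copies all video streams that are not attached pictures, plus every stream whose metadata tag "language" is set to "eng".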

Note that in ffmpeg, matching by metadata will only work properly for input files.

These options are shared amongst the ff* tools.

Show license.
Show help. An optional parameter may be specified to print help about a specific item. If no argument is specified, only basic (non advanced) tool options are shown.

Possible values of arg are:

Print advanced tool options in addition to the basic tool options.
Print complete list of options, including shared and private options for encoders, decoders, demuxers, muxers, filters, etc.
Print detailed information about the decoder named decoder_name. Use the -decoders option to get a list of all decoders.
Print detailed information about the encoder named encoder_name. Use the -encoders option to get a list of all encoders.
Print detailed information about the demuxer named demuxer_name. Use the -formats option to get a list of all demuxers and muxers.
Print detailed information about the muxer named muxer_name. Use the -formats option to get a list of all muxers and demuxers.
Print detailed information about the filter named filter_name. Use the -filters option to get a list of all filters.
Print detailed information about the bitstream filter named bitstream_filter_name. Use the -bsfs option to get a list of all bitstream filters.
Print detailed information about the protocol named protocol_name. Use the -protocols option to get a list of all protocols.
Show version.
Show the build configuration, one option per line.
Show available formats (including devices).
Show available demuxers.
Show available muxers.
Show available devices.
Show all codecs known to libavcodec.

Note that the term 'codec' is used throughout this documentation as a shortcut for what is more correctly called a media bitstream format.

Show available decoders.
Show all available encoders.
Show available bitstream filters.
Show available protocols.
Show available libavfilter filters.
Show available pixel formats.
Show available sample formats.
Show channel names and standard channel layouts.
Show stream dispositions.
Show recognized color names.
Show autodetected sources of the input device. Some devices may provide system-dependent source names that cannot be autodetected. The returned list cannot be assumed to be always complete.
ffmpeg -sources pulse,server=192.168.0.4
Show autodetected sinks of the output device. Some devices may provide system-dependent sink names that cannot be autodetected. The returned list cannot be assumed to be always complete.
ffmpeg -sinks pulse,server=192.168.0.4
Set logging level and flags used by the library.

The optional flags prefix can consist of the following values:

Indicates that repeated log output should not be compressed to the first line and the "Last message repeated n times" line will be omitted.
Indicates that log output should add a "[level]" prefix to each message line. This can be used as an alternative to log coloring, e.g. when dumping the log to file.

Flags can also be used alone by adding a '+'/'-' prefix to set/reset a single flag without affecting other flags or changing loglevel. When setting both flags and loglevel, a '+' separator is expected between the last flags value and before loglevel.

loglevel is a string or a number containing one of the following values:

Show nothing at all; be silent.
Only show fatal errors which could lead the process to crash, such as an assertion failure. This is not currently used for anything.
Only show fatal errors. These are errors after which the process absolutely cannot continue.
Show all errors, including ones which can be recovered from.
Show all warnings and errors. Any message related to possibly incorrect or unexpected events will be shown.
Show informative messages during processing. This is in addition to warnings and errors. This is the default value.
Same as "info", except more verbose.
Show everything, including debugging information.

For example to enable repeated log output, add the "level" prefix, and set loglevel to "verbose":

ffmpeg -loglevel repeat+level+verbose -i input output

Another example that enables repeated log output without affecting current state of "level" prefix flag or loglevel:

ffmpeg [...] -loglevel +repeat

By default the program logs to stderr. If coloring is supported by the terminal, colors are used to mark errors and warnings. Log coloring can be disabled by setting the environment variable AV_LOG_FORCE_NOCOLOR, or forced by setting the environment variable AV_LOG_FORCE_COLOR.

Dump full command line and log output to a file named "program-YYYYMMDD-HHMMSS.log" in the current directory. This file can be useful for bug reports. It also implies "-loglevel debug".

Setting the environment variable FFREPORT to any value has the same effect. If the value is a ':'-separated key=value sequence, these options will affect the report; option values must be escaped if they contain special characters or the options delimiter ':' (see the ``Quoting and escaping'' section in the ffmpeg-utils manual).

The following options are recognized:

file
set the file name to use for the report; %p is expanded to the name of the program, %t is expanded to a timestamp, "%%" is expanded to a plain "%"
level
set the log verbosity level using a numerical value (see "-loglevel").

For example, to output a report to a file named ffreport.log using a log level of 32 (alias for log level "info"):

FFREPORT=file=ffreport.log:level=32 ffmpeg -i input output

Errors in parsing the environment variable are not fatal, and will not appear in the report.
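As a further sketch, the report file name may use the %p and %t expansions described above:

FFREPORT=file=%p-%t.log:level=32 ffmpeg -i input output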

Suppress printing banner.

All FFmpeg tools will normally show a copyright notice, build options and library versions. This option can be used to suppress printing this information.

Allows setting and clearing CPU flags. This option is intended for testing. Do not use it unless you know what you're doing.
ffmpeg -cpuflags -sse+mmx ...
ffmpeg -cpuflags mmx ...
ffmpeg -cpuflags 0 ...

Possible flags for this option are:

Override detection of CPU count. This option is intended for testing. Do not use it unless you know what you're doing.
ffmpeg -cpucount 2
Set the maximum size limit for allocating a block on the heap by ffmpeg's family of malloc functions. Exercise extreme caution when using this option. Do not use it if you do not understand the full consequences of doing so. Default is INT_MAX.

These options are provided directly by the libavformat, libavdevice and libavcodec libraries. To see the list of available AVOptions, use the -help option. They are separated into two categories:

These options can be set for any container, codec or device. Generic options are listed under AVFormatContext options for containers/devices and under AVCodecContext options for codecs.
These options are specific to the given container, device or codec. Private options are listed under their corresponding containers/devices/codecs.

For example to write an ID3v2.3 header instead of a default ID3v2.4 to an MP3 file, use the id3v2_version private option of the MP3 muxer:

ffmpeg -i input.flac -id3v2_version 3 out.mp3

All codec AVOptions are per-stream, and thus a stream specifier should be attached to them:

ffmpeg -i multichannel.mxf -map 0:v:0 -map 0:a:0 -map 0:a:0 -c:a:0 ac3 -b:a:0 640k -ac:a:1 2 -c:a:1 aac -b:2 128k out.mp4

In the above example, a multichannel audio stream is mapped twice for output. The first instance is encoded with codec ac3 and bitrate 640k. The second instance is downmixed to 2 channels and encoded with codec aac. A bitrate of 128k is specified for it using absolute index of the output stream.

Note: the -nooption syntax cannot be used for boolean AVOptions, use -option 0/-option 1.

Note: the old undocumented way of specifying per-stream AVOptions by prepending v/a/s to the options name is now obsolete and will be removed soon.

Force displayed width.
Force displayed height.
Start in fullscreen mode.
Disable audio.
Disable video.
Disable subtitles.
Seek to pos. Note that in most formats it is not possible to seek exactly, so ffplay will seek to the nearest seek point to pos.

pos must be a time duration specification, see the Time duration section in the ffmpeg-utils(1) manual.

Play duration seconds of audio/video.

duration must be a time duration specification, see the Time duration section in the ffmpeg-utils(1) manual.

Seek by bytes.
Set custom interval, in seconds, for seeking using left/right keys. Default is 10 seconds.
Disable graphical display.
Borderless window.
Window always on top. Available on: X11 with SDL >= 2.0.5, Windows with SDL >= 2.0.6.
-volume
Set the startup volume. 0 means silence, 100 means no volume reduction or amplification. Negative values are treated as 0, values above 100 are treated as 100.
Force format.
Set window title (default is the input filename).
Set the x position for the left of the window (default is a centered window).
Set the y position for the top of the window (default is a centered window).
-loop number
Loops movie playback <number> times. 0 means forever.
Set the show mode to use. Available values for mode are:
0, video
show video
1, waves
show audio waves
2, rdft
show audio frequency band using RDFT ((Inverse) Real Discrete Fourier Transform)

Default value is "video", if video is not present or cannot be played "rdft" is automatically selected.

You can interactively cycle through the available show modes by pressing the key w.

Create the filtergraph specified by filtergraph and use it to filter the video stream.

filtergraph is a description of the filtergraph to apply to the stream, and must have a single video input and a single video output. In the filtergraph, the input is associated to the label "in", and the output to the label "out". See the ffmpeg-filters manual for more information about the filtergraph syntax.

You can specify this parameter multiple times and cycle through the specified filtergraphs along with the show modes by pressing the key w.
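For example, a minimal sketch (input name hypothetical) that provides two filtergraphs to cycle through with the w key:

ffplay -vf hflip -vf negate input.mp4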

filtergraph is a description of the filtergraph to apply to the input audio. Use the option "-filters" to show all the available filters (including sources and sinks).
Read input_url.

Print several playback statistics, in particular show the stream duration, the codec parameters, the current position in the stream and the audio/video synchronisation drift. It is shown by default, unless the log level is lower than "info". Its display can be forced by manually specifying this option. To disable it, you need to specify "-nostats".
Non-spec-compliant optimizations.
Generate pts.
Set the master clock to audio ("type=audio"), video ("type=video") or external ("type=ext"). Default is audio. The master clock is used to control audio-video synchronization. Most media players use audio as master clock, but in some cases (streaming or high quality broadcast) it is necessary to change that. This option is mainly used for debugging purposes.
Select the desired audio stream using the given stream specifier. The stream specifiers are described in the Stream specifiers chapter. If this option is not specified, the "best" audio stream is selected in the program of the already selected video stream.
Select the desired video stream using the given stream specifier. The stream specifiers are described in the Stream specifiers chapter. If this option is not specified, the "best" video stream is selected.
Select the desired subtitle stream using the given stream specifier. The stream specifiers are described in the Stream specifiers chapter. If this option is not specified, the "best" subtitle stream is selected in the program of the already selected video or audio stream.
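For example, a hedged sketch (input name hypothetical) selecting the second audio stream and the first subtitle stream:

ffplay -ast a:1 -sst s:0 input.mkv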
Exit when video is done playing.
Exit if any key is pressed.
Exit if any mouse button is pressed.
Force a specific decoder implementation for the stream identified by media_specifier, which can assume the values "a" (audio), "v" (video), and "s" (subtitle).
Force a specific audio decoder.
Force a specific video decoder.
Force a specific subtitle decoder.
Automatically rotate the video according to file metadata. Enabled by default, use -noautorotate to disable it.
Drop video frames if video is out of sync. Enabled by default if the master clock is not set to video. Use this option to enable frame dropping for all master clock sources, use -noframedrop to disable it.
Do not limit the input buffer size, read as much data as possible from the input as soon as possible. Enabled by default for realtime streams, where data may be dropped if not read in time. Use this option to enable infinite buffers for all inputs, use -noinfbuf to disable it.
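For instance, a hedged sketch for low-latency playback of a realtime stream (URL hypothetical):

ffplay -framedrop -infbuf rtsp://example.com/live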
Defines how many threads are used to process a filter pipeline. Each pipeline will produce a thread pool with this many threads available for parallel processing. The default is 0 which means that the thread count will be determined by the number of available CPUs.
Use the Vulkan renderer rather than the SDL built-in renderer. Depends on libplacebo.
Vulkan configuration using a list of key=value pairs separated by ":".
Use HW accelerated decoding. Enabling this option will enable the Vulkan renderer automatically.
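A hedged invocation sketch using the renderer option above (input name hypothetical; vulkan_params keys depend on the libplacebo build and are not listed here):

ffplay -enable_vulkan input.mp4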

Quit.
Toggle full screen.
Pause.
Toggle mute.
9, 0
/, *
Decrease and increase volume respectively.
Cycle audio channel in the current program.
Cycle video channel.
Cycle subtitle channel in the current program.
Cycle program.
Cycle video filters or show modes.
Step to the next frame.

Pause if the stream is not already paused, step to the next video frame, and pause.

Seek backward/forward 10 seconds.
Seek backward/forward 1 minute.
Seek to the previous/next chapter, or if there are no chapters, seek backward/forward 10 minutes.
Seek to percentage in file corresponding to fraction of width.
Toggle full screen.

This section documents the syntax and formats employed by the FFmpeg libraries and tools.

FFmpeg adopts the following quoting and escaping mechanism, unless explicitly specified. The following rules are applied:

  • ' and \ are special characters (respectively used for quoting and escaping). In addition to them, there might be other special characters depending on the specific syntax where the escaping and quoting are employed.
  • A special character is escaped by prefixing it with a \.
  • All characters enclosed between '' are included literally in the parsed string. The quote character ' itself cannot be quoted, so you may need to close the quote and escape it.
  • Leading and trailing whitespaces, unless escaped or quoted, are removed from the parsed string.

Note that you may need to add a second level of escaping when using the command line or a script, which depends on the syntax of the adopted shell language.

The function "av_get_token" defined in libavutil/avstring.h can be used to parse a token quoted or escaped according to the rules defined above.

The tool tools/ffescape in the FFmpeg source tree can be used to automatically quote or escape a string in a script.

Examples

  • Escape the string "Crime d'Amour" containing the "'" special character:
    Crime d\'Amour
    
  • The string above contains a quote, so the "'" needs to be escaped when quoting it:
    'Crime d'\''Amour'
    
  • Include leading or trailing whitespaces using quoting:
    '  this string starts and ends with whitespaces  '
    
  • Escaping and quoting can be mixed together:
    ' The string '\'string\'' is a string '
    
  • To include a literal \ you can use either escaping or quoting:
    'c:\foo' can be written as c:\\foo
    

The accepted syntax is:

[(YYYY-MM-DD|YYYYMMDD)[T|t| ]]((HH:MM:SS[.m...])|(HHMMSS[.m...]))[Z]
now

If the value is "now" it takes the current time.

Time is local time unless Z is appended, in which case it is interpreted as UTC. If the year-month-day part is not specified it takes the current year-month-day.
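For example, the following are all valid date specifications under the syntax above (values illustrative):

2024-03-05 10:30:00
20240305T103000Z
now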

There are two accepted syntaxes for expressing time duration.

[-][<HH>:]<MM>:<SS>[.<m>...]

HH expresses the number of hours, MM the number of minutes for a maximum of 2 digits, and SS the number of seconds for a maximum of 2 digits. The m at the end expresses decimal value for SS.

or

[-]<S>+[.<m>...][s|ms|us]

S expresses the number of seconds, with the optional decimal part m. The optional literal suffixes s, ms or us indicate to interpret the value as seconds, milliseconds or microseconds, respectively.

In both expressions, the optional - indicates negative duration.

Examples

The following examples are all valid time duration:

55
55 seconds
0.2
0.2 seconds
200ms
200 milliseconds, that's 0.2s
200000us
200000 microseconds, that's 0.2s
12:03:45
12 hours, 03 minutes and 45 seconds
23.189
23.189 seconds

Specify the size of the sourced video; it may be a string of the form widthxheight, or the name of a size abbreviation.

The following abbreviations are recognized:

ntsc
720x480
pal
720x576
qntsc
352x240
qpal
352x288
sntsc
640x480
spal
768x576
film
352x240
ntsc-film
352x240
sqcif
128x96
qcif
176x144
cif
352x288
4cif
704x576
16cif
1408x1152
qqvga
160x120
qvga
320x240
vga
640x480
svga
800x600
xga
1024x768
uxga
1600x1200
qxga
2048x1536
sxga
1280x1024
qsxga
2560x2048
hsxga
5120x4096
wvga
852x480
wxga
1366x768
wsxga
1600x1024
wuxga
1920x1200
woxga
2560x1600
wqsxga
3200x2048
wquxga
3840x2400
whsxga
6400x4096
whuxga
7680x4800
cga
320x200
ega
640x350
hd480
852x480
hd720
1280x720
hd1080
1920x1080
2k
2048x1080
2kflat
1998x1080
2kscope
2048x858
4k
4096x2160
4kflat
3996x2160
4kscope
4096x1716
nhd
640x360
hqvga
240x160
wqvga
400x240
fwqvga
432x240
hvga
480x320
qhd
960x540
2kdci
2048x1080
4kdci
4096x2160
uhd2160
3840x2160
uhd4320
7680x4320

Specify the frame rate of a video, expressed as the number of frames generated per second. It has to be a string in the format frame_rate_num/frame_rate_den, an integer number, a float number or a valid video frame rate abbreviation.

The following abbreviations are recognized:

ntsc
30000/1001
pal
25/1
qntsc
30000/1001
qpal
25/1
sntsc
30000/1001
spal
25/1
film
24/1
ntsc-film
24000/1001
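As a usage sketch, both the size and the rate abbreviations can be exercised with the lavfi testsrc2 source (part of libavfilter):

ffplay -f lavfi -i testsrc2=size=hd720:rate=ntsc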

A ratio can be expressed as an expression, or in the form numerator:denominator.

Note that a ratio with infinite (1/0) or negative value is considered valid, so you should check on the returned value if you want to exclude those values.

The undefined value can be expressed using the "0:0" string.

It can be the name of a color as defined below (case insensitive match) or a "[0x|#]RRGGBB[AA]" sequence, possibly followed by @ and a string representing the alpha component.

The alpha component may be a string composed of "0x" followed by a hexadecimal number, or a decimal number between 0.0 and 1.0, which represents the opacity value (0x00 or 0.0 means completely transparent, 0xff or 1.0 completely opaque). If the alpha component is not specified then 0xff is assumed.

The string random will result in a random color.
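For example, the following are all valid color specifications under the syntax above:

red
0x00FF00
0x00FF00@0x80
white@0.5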

The following names of colors are recognized:

AliceBlue
0xF0F8FF
AntiqueWhite
0xFAEBD7
Aqua
0x00FFFF
Aquamarine
0x7FFFD4
Azure
0xF0FFFF
Beige
0xF5F5DC
Bisque
0xFFE4C4
Black
0x000000
BlanchedAlmond
0xFFEBCD
Blue
0x0000FF
BlueViolet
0x8A2BE2
Brown
0xA52A2A
BurlyWood
0xDEB887
CadetBlue
0x5F9EA0
Chartreuse
0x7FFF00
Chocolate
0xD2691E
Coral
0xFF7F50
CornflowerBlue
0x6495ED
Cornsilk
0xFFF8DC
Crimson
0xDC143C
Cyan
0x00FFFF
DarkBlue
0x00008B
DarkCyan
0x008B8B
DarkGoldenRod
0xB8860B
DarkGray
0xA9A9A9
DarkGreen
0x006400
DarkKhaki
0xBDB76B
DarkMagenta
0x8B008B
DarkOliveGreen
0x556B2F
Darkorange
0xFF8C00
DarkOrchid
0x9932CC
DarkRed
0x8B0000
DarkSalmon
0xE9967A
DarkSeaGreen
0x8FBC8F
DarkSlateBlue
0x483D8B
DarkSlateGray
0x2F4F4F
DarkTurquoise
0x00CED1
DarkViolet
0x9400D3
DeepPink
0xFF1493
DeepSkyBlue
0x00BFFF
DimGray
0x696969
DodgerBlue
0x1E90FF
FireBrick
0xB22222
FloralWhite
0xFFFAF0
ForestGreen
0x228B22
Fuchsia
0xFF00FF
Gainsboro
0xDCDCDC
GhostWhite
0xF8F8FF
Gold
0xFFD700
GoldenRod
0xDAA520
Gray
0x808080
Green
0x008000
GreenYellow
0xADFF2F
HoneyDew
0xF0FFF0
HotPink
0xFF69B4
IndianRed
0xCD5C5C
Indigo
0x4B0082
Ivory
0xFFFFF0
Khaki
0xF0E68C
Lavender
0xE6E6FA
LavenderBlush
0xFFF0F5
LawnGreen
0x7CFC00
LemonChiffon
0xFFFACD
LightBlue
0xADD8E6
LightCoral
0xF08080
LightCyan
0xE0FFFF
LightGoldenRodYellow
0xFAFAD2
LightGreen
0x90EE90
LightGrey
0xD3D3D3
LightPink
0xFFB6C1
LightSalmon
0xFFA07A
LightSeaGreen
0x20B2AA
LightSkyBlue
0x87CEFA
LightSlateGray
0x778899
LightSteelBlue
0xB0C4DE
LightYellow
0xFFFFE0
Lime
0x00FF00
LimeGreen
0x32CD32
Linen
0xFAF0E6
Magenta
0xFF00FF
Maroon
0x800000
MediumAquaMarine
0x66CDAA
MediumBlue
0x0000CD
MediumOrchid
0xBA55D3
MediumPurple
0x9370D8
MediumSeaGreen
0x3CB371
MediumSlateBlue
0x7B68EE
MediumSpringGreen
0x00FA9A
MediumTurquoise
0x48D1CC
MediumVioletRed
0xC71585
MidnightBlue
0x191970
MintCream
0xF5FFFA
MistyRose
0xFFE4E1
Moccasin
0xFFE4B5
NavajoWhite
0xFFDEAD
Navy
0x000080
OldLace
0xFDF5E6
Olive
0x808000
OliveDrab
0x6B8E23
Orange
0xFFA500
OrangeRed
0xFF4500
Orchid
0xDA70D6
PaleGoldenRod
0xEEE8AA
PaleGreen
0x98FB98
PaleTurquoise
0xAFEEEE
PaleVioletRed
0xD87093
PapayaWhip
0xFFEFD5
PeachPuff
0xFFDAB9
Peru
0xCD853F
Pink
0xFFC0CB
Plum
0xDDA0DD
PowderBlue
0xB0E0E6
Purple
0x800080
Red
0xFF0000
RosyBrown
0xBC8F8F
RoyalBlue
0x4169E1
SaddleBrown
0x8B4513
Salmon
0xFA8072
SandyBrown
0xF4A460
SeaGreen
0x2E8B57
SeaShell
0xFFF5EE
Sienna
0xA0522D
Silver
0xC0C0C0
SkyBlue
0x87CEEB
SlateBlue
0x6A5ACD
SlateGray
0x708090
Snow
0xFFFAFA
SpringGreen
0x00FF7F
SteelBlue
0x4682B4
Tan
0xD2B48C
Teal
0x008080
Thistle
0xD8BFD8
Tomato
0xFF6347
Turquoise
0x40E0D0
Violet
0xEE82EE
Wheat
0xF5DEB3
White
0xFFFFFF
WhiteSmoke
0xF5F5F5
Yellow
0xFFFF00
YellowGreen
0x9ACD32

A channel layout specifies the spatial disposition of the channels in a multi-channel audio stream. To specify a channel layout, FFmpeg makes use of a special syntax.

Individual channels are identified by an id, as given by the table below:

FL
front left
FR
front right
FC
front center
LFE
low frequency
BL
back left
BR
back right
FLC
front left-of-center
FRC
front right-of-center
BC
back center
SL
side left
SR
side right
TC
top center
TFL
top front left
TFC
top front center
TFR
top front right
TBL
top back left
TBC
top back center
TBR
top back right
DL
downmix left
DR
downmix right
WL
wide left
WR
wide right
SDL
surround direct left
SDR
surround direct right
LFE2
low frequency 2

Standard channel layout compositions can be specified by using the following identifiers:

mono
FC
stereo
FL+FR
2.1
FL+FR+LFE
3.0
FL+FR+FC
3.0(back)
FL+FR+BC
4.0
FL+FR+FC+BC
quad
FL+FR+BL+BR
quad(side)
FL+FR+SL+SR
3.1
FL+FR+FC+LFE
5.0
FL+FR+FC+BL+BR
5.0(side)
FL+FR+FC+SL+SR
4.1
FL+FR+FC+LFE+BC
5.1
FL+FR+FC+LFE+BL+BR
5.1(side)
FL+FR+FC+LFE+SL+SR
6.0
FL+FR+FC+BC+SL+SR
6.0(front)
FL+FR+FLC+FRC+SL+SR
3.1.2
FL+FR+FC+LFE+TFL+TFR
hexagonal
FL+FR+FC+BL+BR+BC
6.1
FL+FR+FC+LFE+BC+SL+SR
6.1(back)
FL+FR+FC+LFE+BL+BR+BC
6.1(front)
FL+FR+LFE+FLC+FRC+SL+SR
7.0
FL+FR+FC+BL+BR+SL+SR
7.0(front)
FL+FR+FC+FLC+FRC+SL+SR
7.1
FL+FR+FC+LFE+BL+BR+SL+SR
7.1(wide)
FL+FR+FC+LFE+BL+BR+FLC+FRC
7.1(wide-side)
FL+FR+FC+LFE+FLC+FRC+SL+SR
5.1.2
FL+FR+FC+LFE+BL+BR+TFL+TFR
octagonal
FL+FR+FC+BL+BR+BC+SL+SR
cube
FL+FR+BL+BR+TFL+TFR+TBL+TBR
5.1.4
FL+FR+FC+LFE+BL+BR+TFL+TFR+TBL+TBR
7.1.2
FL+FR+FC+LFE+BL+BR+SL+SR+TFL+TFR
7.1.4
FL+FR+FC+LFE+BL+BR+SL+SR+TFL+TFR+TBL+TBR
7.2.3
FL+FR+FC+LFE+BL+BR+SL+SR+TFL+TFR+TBC+LFE2
9.1.4
FL+FR+FC+LFE+BL+BR+FLC+FRC+SL+SR+TFL+TFR+TBL+TBR
hexadecagonal
FL+FR+FC+BL+BR+BC+SL+SR+WL+WR+TBL+TBR+TBC+TFC+TFL+TFR
downmix
DL+DR
22.2
FL+FR+FC+LFE+BL+BR+FLC+FRC+BC+SL+SR+TC+TFL+TFC+TFR+TBL+TBC+TBR+LFE2+TSL+TSR+BFC+BFL+BFR

A custom channel layout can be specified as a sequence of terms, separated by '+'. Each term can be:

the name of a single channel (e.g. FL, FR, FC, LFE, etc.), each optionally containing a custom name after a '@', (e.g. FL@Left, FR@Right, FC@Center, LFE@Low_Frequency, etc.)

A standard channel layout can be specified by the following:

  • the name of a single channel (e.g. FL, FR, FC, LFE, etc.)
  • the name of a standard channel layout (e.g. mono, stereo, 4.0, quad, 5.0, etc.)
  • a number of channels, in decimal, followed by 'c', yielding the default channel layout for that number of channels (see the function "av_channel_layout_default"). Note that not all channel counts have a default layout.
  • a number of channels, in decimal, followed by 'C', yielding an unknown channel layout with the specified number of channels. Note that not all channel layout specification strings support unknown channel layouts.
  • a channel layout mask, in hexadecimal starting with "0x" (see the "AV_CH_*" macros in libavutil/channel_layout.h).

Before libavutil version 53 the trailing character "c" to specify a number of channels was optional, but now it is required, while a channel layout mask can also be specified as a decimal number (if and only if not followed by "c" or "C").
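For example, the following strings (values illustrative) all describe valid channel layouts under the rules above:

stereo
FL+FR+LFE
FL@Left+FR@Right
5.1(side)
6c
0x3F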

See also the function "av_channel_layout_from_string" defined in libavutil/channel_layout.h.

When evaluating an arithmetic expression, FFmpeg uses an internal formula evaluator, implemented through the libavutil/eval.h interface.

An expression may contain unary and binary operators, constants, and functions.

Two expressions expr1 and expr2 can be combined to form another expression "expr1;expr2". expr1 and expr2 are evaluated in turn, and the new expression evaluates to the value of expr2.

The following binary operators are available: "+", "-", "*", "/", "^".

The following unary operators are available: "+", "-".

Some internal variables can be used to store and load intermediary results. They can be accessed using the "ld" and "st" functions with an index argument varying from 0 to 9 to specify which internal variable to access.

The following functions are available:

Compute absolute value of x.
Compute arccosine of x.
Compute arcsine of x.
Compute arctangent of x.
Compute principal value of the arc tangent of y/x.
Return 1 if x is greater than or equal to min and less than or equal to max, 0 otherwise.
Compute bitwise and/or operation on x and y.

The results of the evaluation of x and y are converted to integers before executing the bitwise operation.

Note that both the conversion to integer and the conversion back to floating point can lose precision. Beware of unexpected results for large numbers (usually 2^53 and larger).

Round the value of expression expr upwards to the nearest integer. For example, "ceil(1.5)" is "2.0".
Return the value of x clipped between min and max.
Compute cosine of x.
Compute hyperbolic cosine of x.
Return 1 if x and y are equivalent, 0 otherwise.
Compute exponential of x (with base "e", Euler's number).
Round the value of expression expr downwards to the nearest integer. For example, "floor(-1.5)" is "-2.0".
Compute Gauss function of x, corresponding to "exp(-x*x/2) / sqrt(2*PI)".
Return the greatest common divisor of x and y. If both x and y are 0 or either or both are less than zero then behavior is undefined.
Return 1 if x is greater than y, 0 otherwise.
Return 1 if x is greater than or equal to y, 0 otherwise.
This function is similar to the C function with the same name; it returns "sqrt(x*x + y*y)", the length of the hypotenuse of a right triangle with sides of length x and y, or the distance of the point (x, y) from the origin.
Evaluate x, and if the result is non-zero return the result of the evaluation of y, return 0 otherwise.
Evaluate x, and if the result is non-zero return the evaluation result of y, otherwise the evaluation result of z.
Evaluate x, and if the result is zero return the result of the evaluation of y, return 0 otherwise.
Evaluate x, and if the result is zero return the evaluation result of y, otherwise the evaluation result of z.
Return 1.0 if x is +/-INFINITY, 0.0 otherwise.
Return 1.0 if x is NAN, 0.0 otherwise.
Load the value of the internal variable with index idx, which was previously stored with st(idx, expr). The function returns the loaded value.
Return linear interpolation between x and y by amount of z.
Compute natural logarithm of x.
Return 1 if x is less than y, 0 otherwise.
Return 1 if x is less than or equal to y, 0 otherwise.
Return the maximum between x and y.
Return the minimum between x and y.
Compute the remainder of division of x by y.
Return 1.0 if expr is zero, 0.0 otherwise.
Compute x raised to the power of y; it is equivalent to "(x)^(y)".
Print the value of expression t with loglevel l. If l is not specified then a default log level is used. Return the value of the expression printed.
Return a pseudo random value between 0.0 and 1.0. idx is the index of the internal variable used to save the seed/state, which can be previously stored with st(idx).

To initialize the seed, you need to store the seed value as a 64-bit unsigned integer in the internal variable with index idx.

For example, to store the seed with value 42 in the internal variable with index 0 and print a few random values:

st(0,42); print(random(0)); print(random(0)); print(random(0))
Return a pseudo random value in the interval between min and max. idx is the index of the internal variable which will be used to save the seed/state, which can be previously stored with st(idx).

To initialize the seed, you need to store the seed value as a 64-bit unsigned integer in the internal variable with index idx.

Find an input value for which the function represented by expr with argument ld(0) is 0 in the interval 0..max.

The expression in expr must denote a continuous function or the result is undefined.

ld(0) is used to represent the function input value, which means that the given expression will be evaluated multiple times with various input values that the expression can access through ld(0). When the expression evaluates to 0 then the corresponding input value will be returned.
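For example, a sketch under the semantics above: cos(x) is continuous and crosses zero at PI/2 within [0, 2], so

root(cos(ld(0)), 2)

returns approximately 1.5708.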

Round the value of expression expr to the nearest integer. For example, "round(1.5)" is "2.0".
Compute sign of x.
Compute sine of x.
Compute hyperbolic sine of x.
Compute the square root of expr. This is equivalent to "(expr)^.5".
Compute expression "1/(1 + exp(4*x))".
Store the value of the expression expr in an internal variable. idx specifies the index of the variable where to store the value, and it is a value ranging from 0 to 9. The function returns the value stored in the internal variable.

The stored value can be retrieved with ld(idx).

Note: variables are currently not shared between expressions.

Compute tangent of x.
Compute hyperbolic tangent of x.
Evaluate a Taylor series at x, given an expression representing the ld(idx)-th derivative of a function at 0.

When the series does not converge the result is undefined.

ld(idx) is used to represent the derivative order in expr, which means that the given expression will be evaluated multiple times with various input values that the expression can access through ld(idx). If idx is not specified then 0 is assumed.

Note, when you have the derivatives at y instead of 0, "taylor(expr, x-y)" can be used.
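For example, a sketch under the semantics above: every derivative of exp(x) at 0 equals 1, so

taylor(1, 1)

sums the series x^n/n! at x=1 and returns approximately exp(1), i.e. 2.718.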

time(0)
Return the current (wallclock) time in seconds.
Round the value of expression expr towards zero to the nearest integer. For example, "trunc(-1.5)" is "-1.0".
Evaluate expression expr while the expression cond is non-zero, and return the value of the last expr evaluation, or NAN if cond was always false.

The following constants are available:

PI
area of the unit disc, approximately 3.14
E
exp(1) (Euler's number), approximately 2.718
PHI
golden ratio (1+sqrt(5))/2, approximately 1.618

Assuming that an expression is considered "true" if it has a non-zero value, note that:

"*" works like AND

"+" works like OR

For example the construct:

if (A AND B) then C

is equivalent to:

if(A*B, C)
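As a usage sketch, such expressions drive many filter options; for instance the volume filter (part of libavfilter) can evaluate an expression per frame, here using lt() and the filter's t variable to halve the volume for the first ten seconds (input name hypothetical):

ffplay -af volume='if(lt(t,10),0.5,1)':eval=frame input.mp3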

In your C code, you can extend the list of unary and binary functions, and define recognized constants, so that they are available for your expressions.

The evaluator also recognizes the International System unit prefixes. If 'i' is appended after the prefix, binary prefixes are used, which are based on powers of 1024 instead of powers of 1000. The 'B' postfix multiplies the value by 8, and can be appended after a unit prefix or used alone. This allows using, for example, 'KB', 'MiB', 'G' and 'B' as number postfixes.

The list of available International System prefixes follows, with indication of the corresponding powers of 10 and of 2.

10^-24 / 2^-80
10^-21 / 2^-70
10^-18 / 2^-60
10^-15 / 2^-50
10^-12 / 2^-40
10^-9 / 2^-30
10^-6 / 2^-20
10^-3 / 2^-10
10^-2
10^-1
10^2
10^3 / 2^10
10^3 / 2^10
10^6 / 2^20
10^9 / 2^30
10^12 / 2^40
10^15 / 2^50
10^18 / 2^60
10^21 / 2^70
10^24 / 2^80

libavcodec provides some generic global options, which can be set on all the encoders and decoders. In addition, each codec may support so-called private options, which are specific for a given codec.

Sometimes, a global option may only affect a specific kind of codec, and may be nonsensical or ignored by another, so you need to be aware of the meaning of the specified options. Also some options are meant only for decoding or encoding.

Options may be set by specifying -option value in the FFmpeg tools, or by setting the value explicitly in the "AVCodecContext" options or using the libavutil/opt.h API for programmatic use.

The list of supported options follows:

Set bitrate in bits/s. Default value is 200K.
Set audio bitrate (in bits/s). Default value is 128K.
Set video bitrate tolerance (in bits/s). In 1-pass mode, bitrate tolerance specifies how far ratecontrol is willing to deviate from the target average bitrate value. This is not related to min/max bitrate. Lowering tolerance too much has an adverse effect on quality.
Set generic flags.

Possible values:

Use four motion vectors per macroblock (mpeg4).
Use 1/4 pel motion compensation.
loop
Use loop filter.
Use fixed qscale.
Use internal 2pass ratecontrol in first pass mode.
Use internal 2pass ratecontrol in second pass mode.
Only decode/encode grayscale.
psnr
Set error[?] variables during encoding.
Input bitstream might be randomly truncated.
Don't output frames whose parameters differ from first decoded frame in stream. Error AVERROR_INPUT_CHANGED is returned when a frame is dropped.
Use interlaced DCT.
Force low delay.
Place global headers in extradata instead of every keyframe.
Only write platform-, build- and time-independent data (except (I)DCT). This ensures that file and data checksums are reproducible and match between platforms. Its primary use is for regression testing.
Apply H263 advanced intra coding / mpeg4 ac prediction.
Apply interlaced motion estimation.
Use closed gop.
Output even potentially corrupted frames.
Set codec time base.

It is the fundamental unit of time (in seconds) in terms of which frame timestamps are represented. For fixed-fps content, timebase should be "1 / frame_rate" and timestamp increments should be identically 1.

Set the group of picture (GOP) size. Default value is 12.
Set audio sampling rate (in Hz).
Set number of audio channels.
Set cutoff bandwidth. (Supported only by selected encoders, see their respective documentation sections.)
Set audio frame size.

Each submitted frame except the last must contain exactly frame_size samples per channel. May be 0 when the codec has CODEC_CAP_VARIABLE_FRAME_SIZE set, in that case the frame size is not restricted. It is set by some decoders to indicate constant frame size.

Set the frame number.
Set video quantizer scale compression (VBR). It is used as a constant in the ratecontrol equation. Recommended range for default rc_eq: 0.0-1.0.
Set video quantizer scale blur (VBR).
Set min video quantizer scale (VBR). Must be between -1 and 69, default value is 2.
Set max video quantizer scale (VBR). Must be between -1 and 1024, default value is 31.
Set max difference between the quantizer scale (VBR).
Set max number of B frames between non-B-frames.

Must be an integer between -1 and 16. 0 means that B-frames are disabled. If a value of -1 is used, it will choose an automatic value depending on the encoder.

Default value is 0.
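A hedged encoding sketch for this option (encoder and file names illustrative):

ffmpeg -i input.mp4 -c:v mpeg4 -bf 2 output.mp4

allows up to two B-frames between non-B-frames.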

Set qp factor between P and B frames.
Work around encoder bugs that are not autodetected.

Possible values:

Xvid interlacing bug (autodetected if fourcc==XVIX)
(autodetected if fourcc==UMP4)
padding bug (autodetected)
old standard qpel (autodetected per fourcc/version)
direct-qpel-blocksize bug (autodetected per fourcc/version)
edge padding bug (autodetected per fourcc/version)
Work around various bugs in Microsoft's broken decoders.
truncated frames
Specify how strictly to follow the standards.

Possible values:

strictly conform to an older more strict version of the spec or reference software
strictly conform to all the things in the spec no matter what consequences
allow unofficial extensions
allow non standardized experimental things, experimental (unfinished/work in progress/not well tested) decoders and encoders. Note: experimental decoders can pose a security risk, do not use this for decoding untrusted input.
Set QP offset between P and B frames.
Set error detection flags.

Possible values:

verify embedded CRCs
detect bitstream specification deviations
buffer
detect improper bitstream length
abort decoding on minor error detection
ignore decoding errors, and continue decoding. This is useful if you want to analyze the content of a video and thus want everything to be decoded no matter what. This option will not result in a video that is pleasing to watch in case of errors.
consider things that violate the spec and have not been seen in the wild as errors
consider all spec non-compliances as errors
consider things that a sane encoder should not do as an error
Set max bitrate tolerance (in bits/s). Requires bufsize to be set.
Set min bitrate tolerance (in bits/s). Most useful in setting up a CBR encode. It is of little use otherwise.
Set ratecontrol buffer size (in bits).
Set QP factor between P and I frames.
Set QP offset between P and I frames.
Set DCT algorithm.

Possible values:

autoselect a good one (default)
fast integer
accurate integer
floating point AAN DCT
Compress bright areas more strongly than medium ones.
Set temporal complexity masking.
Set spatial complexity masking.
Set inter masking.
Compress dark areas more strongly than medium ones.
Select IDCT implementation.

Possible values:

Automatically pick an IDCT compatible with the simple one
floating point AAN IDCT
Set error concealment strategy.

Possible values:

iterative motion vector (MV) search (slow)
deblock
use strong deblock filter for damaged MBs
favor predicting from the previous frame instead of the current
Set sample aspect ratio.
Set sample aspect ratio. Alias to aspect.
Print specific debug info.

Possible values:

picture info
rate control
macroblock (MB) type
qp
per-block quantization parameter (QP)
display complexity metadata for the upcoming frame, GoP or for a given duration.
error recognition
memory management control operations (H.264)
picture buffer allocations
threading operations
skip motion compensation
Set full pel me compare function.

Possible values:

sum of absolute differences, fast (default)
sum of squared errors
sum of absolute Hadamard transformed differences
sum of absolute DCT transformed differences
psnr
sum of squared quantization errors (avoid, low quality)
number of bits needed for the block
rate distortion optimal, slow
0
sum of absolute vertical differences
sum of squared vertical differences
noise preserving sum of squared differences
5/3 wavelet, only used in snow
9/7 wavelet, only used in snow
Set sub pel me compare function.

Possible values:

sum of absolute differences, fast (default)
sum of squared errors
sum of absolute Hadamard transformed differences
sum of absolute DCT transformed differences
psnr
sum of squared quantization errors (avoid, low quality)
number of bits needed for the block
rate distortion optimal, slow
0
sum of absolute vertical differences
sum of squared vertical differences
noise preserving sum of squared differences
5/3 wavelet, only used in snow
9/7 wavelet, only used in snow
Set macroblock compare function.

Possible values:

sum of absolute differences, fast (default)
sum of squared errors
sum of absolute Hadamard transformed differences
sum of absolute DCT transformed differences
psnr
sum of squared quantization errors (avoid, low quality)
number of bits needed for the block
rate distortion optimal, slow
0
sum of absolute vertical differences
sum of squared vertical differences
noise preserving sum of squared differences
5/3 wavelet, only used in snow
9/7 wavelet, only used in snow
Set interlaced dct compare function.

Possible values:

sum of absolute differences, fast (default)
sum of squared errors
sum of absolute Hadamard transformed differences
sum of absolute DCT transformed differences
psnr
sum of squared quantization errors (avoid, low quality)
number of bits needed for the block
rate distortion optimal, slow
0
sum of absolute vertical differences
sum of squared vertical differences
noise preserving sum of squared differences
5/3 wavelet, only used in snow
9/7 wavelet, only used in snow
Set diamond type & size for motion estimation.
(1024, INT_MAX)
full motion estimation(slowest)
(768, 1024]
umh motion estimation
(512, 768]
hex motion estimation
(256, 512]
l2s diamond motion estimation
[2,256]
var diamond motion estimation
(-1, 2)
small diamond motion estimation
-1
funny diamond motion estimation
(INT_MIN, -1)
sab diamond motion estimation
Set amount of motion predictors from the previous frame.
Set pre motion estimation compare function.

Possible values:

sum of absolute differences, fast (default)
sum of squared errors
sum of absolute Hadamard transformed differences
sum of absolute DCT transformed differences
psnr
sum of squared quantization errors (avoid, low quality)
number of bits needed for the block
rate distortion optimal, slow
0
sum of absolute vertical differences
sum of squared vertical differences
noise preserving sum of squared differences
5/3 wavelet, only used in snow
9/7 wavelet, only used in snow
Set diamond type & size for motion estimation pre-pass.
Set sub pel motion estimation quality.
Set limit motion vectors range (1023 for DivX player).
Set macroblock decision algorithm (high quality mode).

Possible values:

use mbcmp (default)
use fewest bits
use best rate distortion
Set number of bits which should be loaded into the rc buffer before decoding starts.
Possible values:
Allow non-spec-compliant speedup tricks.
Skip bitstream encoding.
Ignore cropping information from SPS.
Place global headers at every keyframe instead of in extradata.
Frame data might be split into multiple chunks.
Show all frames before the first keyframe.
Export motion vectors into frame side-data (see "AV_FRAME_DATA_MOTION_VECTORS") for codecs that support it. See also doc/examples/export_mvs.c.
Do not skip samples and export skip information as frame side data.
Do not reset ASS ReadOrder field on flush.
Generate/parse embedded ICC profiles from/to colorimetry tags.
Possible values:
Export motion vectors into frame side-data (see "AV_FRAME_DATA_MOTION_VECTORS") for codecs that support it. See also doc/examples/export_mvs.c.
Export encoder Producer Reference Time into packet side-data (see "AV_PKT_DATA_PRFT") for codecs that support it.
Export video encoding parameters through frame side data (see "AV_FRAME_DATA_VIDEO_ENC_PARAMS") for codecs that support it. At present, those are H.264 and VP9.
Export film grain parameters through frame side data (see "AV_FRAME_DATA_FILM_GRAIN_PARAMS"). Supported at present by AV1 decoders.
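As a usage sketch for the motion-vector side data above, the exported vectors can be visualized with the codecview filter (file names illustrative):

ffmpeg -flags2 +export_mvs -i input.mp4 -vf codecview=mv=pf+bf+bb output.mp4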
Set the number of threads to be used, in case the selected codec implementation supports multi-threading.

Possible values:

automatically select the number of threads to set

Default value is auto.

Set intra_dc_precision.
Set nsse weight.
Set number of macroblock rows at the top which are skipped.
Set number of macroblock rows at the bottom which are skipped.
Set encoder codec profile. Default value is unknown. Encoder specific profiles are documented in the relevant encoder documentation.
Set the encoder level. This level depends on the specific codec, and might correspond to the profile level. It is set by default to unknown.

Possible values:

Decode at 1=1/2, 2=1/4, 3=1/8 resolutions.
Set min macroblock lagrange factor (VBR).
Set max macroblock lagrange factor (VBR).
Make decoder discard processing depending on the frame type selected by the option value.

skip_loop_filter skips frame loop filtering, skip_idct skips frame IDCT/dequantization, skip_frame skips decoding.

Possible values:

Discard no frame.
Discard useless frames like 0-sized frames.
Discard all non-reference frames.
Discard all bidirectional frames.
Discard all frames except keyframes.
Discard all frames except I frames.
Discard all frames.

Default value is default.
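For example, a minimal sketch (input name hypothetical) that decodes only keyframes during playback:

ffplay -skip_frame nokey input.mp4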

Refine the two motion vectors used in bidirectional macroblocks.
Set minimum interval between IDR-frames.
Set reference frames to consider for motion compensation.
Set rate-distortion optimal quantization.
See the Channel Layout section in the ffmpeg-utils(1) manual for the required syntax.
Possible values:
BT.709
BT.470 M
BT.470 BG
SMPTE 170 M
SMPTE 240 M
Film
BT.2020
SMPTE ST 428-1
SMPTE 431-2
SMPTE 432-1
JEDEC P22
Possible values:
BT.709
BT.470 M
BT.470 BG
SMPTE 170 M
SMPTE 240 M
Linear
Log
Log square root
IEC 61966-2-4
BT.1361
IEC 61966-2-1
BT.2020 - 10 bit
BT.2020 - 12 bit
SMPTE ST 2084
SMPTE ST 428-1
ARIB STD-B67
colorspace integer (decoding/encoding,video)
Possible values:
RGB
BT.709
FCC
BT.470 BG
SMPTE 170 M
SMPTE 240 M
YCOCG
BT.2020 NCL
BT.2020 CL
SMPTE 2085
Chroma-derived NCL
Chroma-derived CL
ICtCp
If used as input parameter, it serves as a hint to the decoder about the color_range of the input. Possible values:
MPEG (219*2^(n-8))
JPEG (2^n-1)
Possible values:
Set the log level offset.
Number of slices, used in parallelized encoding.
Select which multithreading methods to use.

Use of frame will increase decoding delay by one frame per thread, so clients which cannot provide future frames should not use it.

Possible values:

Decode more than one part of a single frame at once.

Multithreading using slices works only when the video was encoded with slices.

Decode more than one frame at once.

Default value is slice+frame.

Set audio service type.

Possible values:

Main Audio Service
Effects
Visually Impaired
Hearing Impaired
Dialogue
Commentary
Emergency
Voice Over
Karaoke
Set sample format audio decoders should prefer. Default value is "none".
Set the input subtitles character encoding.
Set/override the field order of the video. Possible values:
Progressive video
Interlaced video, top field coded and displayed first
Interlaced video, bottom field coded and displayed first
Interlaced video, top coded first, bottom displayed first
Interlaced video, bottom coded first, top displayed first
Set to 1 to disable processing alpha (transparency). This works like the gray flag in the flags option which skips chroma information instead of alpha. Default is 0.
"," separated list of allowed decoders. By default all are allowed.
Separator used to separate the fields printed on the command line about the Stream parameters. For example, to separate the fields with newlines and indentation:
ffprobe -dump_separator "
                          "  -i ~/videos/matrixbench_mpeg2.mpg
Maximum number of pixels per image. This value can be used to avoid out of memory failures due to large images.
Enable cropping if cropping parameters are multiples of the required alignment for the left and top parameters. If the alignment is not met the cropping will be partially applied to maintain alignment. Default is 1 (enabled). Note: The required alignment depends on whether "AV_CODEC_FLAG_UNALIGNED" is set and on the CPU. "AV_CODEC_FLAG_UNALIGNED" cannot be changed from the command line. Also, hardware decoders will not apply left/top cropping.

Decoders are configured elements in FFmpeg which allow the decoding of multimedia streams.

When you configure your FFmpeg build, all the supported native decoders are enabled by default. Decoders requiring an external library must be enabled manually via the corresponding "--enable-lib" option. You can list all available decoders using the configure option "--list-decoders".

You can disable all the decoders with the configure option "--disable-decoders" and selectively enable / disable single decoders with the options "--enable-decoder=DECODER" / "--disable-decoder=DECODER".

The option "-decoders" of the ff* tools will display the list of enabled decoders.

A description of some of the currently available video decoders follows.

AOMedia Video 1 (AV1) decoder.

Options

Select an operating point of a scalable AV1 bitstream (0 - 31). Default is 0.

HEVC (AKA ITU-T H.265 or ISO/IEC 23008-2) decoder.

The decoder supports MV-HEVC multiview streams with at most two views. Views to be output are selected by supplying a list of view IDs to the decoder (the view_ids option). This option may be set either statically before decoder init, or from the get_format() callback - useful for the case when the view count or IDs change dynamically during decoding.

Only the base layer is decoded by default.

Note that if you are using the "ffmpeg" CLI tool, you should be using view specifiers as documented in its manual, rather than the options documented here.

Options

Specify a list of view IDs that should be output. This option can also be set to a single '-1', which will cause all views defined in the VPS to be decoded and output.
This option may be read by the caller to retrieve an array of view IDs available in the active VPS. The array is empty for single-layer video.

The value of this option is guaranteed to be accurate when read from the get_format() callback. It may also be set at other times (e.g. after opening the decoder), but the value is informational only and may be incorrect (e.g. when the stream contains multiple distinct VPS NALUs).

This option may be read by the caller to retrieve an array of view positions (left, right, or unspecified) available in the active VPS, as "AVStereo3DView" values. When the array is available, its elements apply to the corresponding elements of view_ids_available, i.e. "view_pos_available[i]" contains the position of view with ID "view_ids_available[i]".

Same validity restrictions as for view_ids_available apply to this option.

Raw video decoder.

This decoder decodes rawvideo streams.

Options

Specify the assumed field type of the input video.
-1
the video is assumed to be progressive (default)
0
bottom-field-first is assumed
1
top-field-first is assumed

dav1d AV1 decoder.

libdav1d allows libavcodec to decode the AOMedia Video 1 (AV1) codec. Requires the presence of the libdav1d headers and library during configuration. You need to explicitly configure the build with "--enable-libdav1d".

Options

The following options are supported by the libdav1d wrapper.

Set amount of frame threads to use during decoding. The default value is 0 (autodetect). This option is deprecated for libdav1d >= 1.0 and will be removed in the future. Use the option "max_frame_delay" and the global option "threads" instead.
Set amount of tile threads to use during decoding. The default value is 0 (autodetect). This option is deprecated for libdav1d >= 1.0 and will be removed in the future. Use the global option "threads" instead.
Set max amount of frames the decoder may buffer internally. The default value is 0 (autodetect).
Apply film grain to the decoded video if present in the bitstream. Defaults to the internal default of the library. This option is deprecated and will be removed in the future. See the global option "export_side_data" to export Film Grain parameters instead of applying it.
Select an operating point of a scalable AV1 bitstream (0 - 31). Defaults to the internal default of the library.
Output all spatial layers of a scalable AV1 bitstream. The default value is false.
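A hedged invocation sketch forcing this wrapper instead of the native AV1 decoder (file names illustrative):

ffmpeg -c:v libdav1d -i input.ivf output.mp4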

AVS2-P2/IEEE1857.4 video decoder wrapper.

This decoder allows libavcodec to decode AVS2 streams with davs2 library.

AVS3-P2/IEEE1857.10 video decoder.

libuavs3d allows libavcodec to decode AVS3 streams. Requires the presence of the libuavs3d headers and library during configuration. You need to explicitly configure the build with "--enable-libuavs3d".

Options

The following option is supported by the libuavs3d wrapper.

Set amount of frame threads to use during decoding. The default value is 0 (autodetect).

eXtra-fast Essential Video Decoder (XEVD) MPEG-5 EVC decoder wrapper.

This decoder requires the presence of the libxevd headers and library during configuration. You need to explicitly configure the build with --enable-libxevd.

The xevd project website is at https://github.com/mpeg5/xevd.

Options

The following options are supported by the libxevd wrapper. The xevd-equivalent options or values are listed in parentheses for easy migration.

To get a more accurate and extensive documentation of the libxevd options, invoke the command "xevd_app --help" or consult the libxevd documentation.

Force to use a specific number of threads

The family of Intel QuickSync Video decoders (VC1, MPEG-2, H.264, HEVC, JPEG/MJPEG, VP8, VP9, AV1, VVC).

Common Options

The following options are supported by all qsv decoders.

Internal parallelization depth, the higher the value the higher the latency.
A GPU-accelerated copy between video and system memory

HEVC Options

Extra options for hevc_qsv.

A user plugin to load in an internal session
A :-separated list of hexadecimal plugin UIDs to load in an internal session

Uncompressed 4:2:2 10-bit decoder.

Options

Set the line size of the v210 data in bytes. The default value is 0 (autodetect). You can use the special -1 value for a strideless v210 as seen in BOXX files.

A description of some of the currently available audio decoders follows.

AC-3 audio decoder.

This decoder implements part of ATSC A/52:2010 and ETSI TS 102 366, as well as the undocumented RealAudio 3 (a.k.a. dnet).

AC-3 Decoder Options

Dynamic Range Scale Factor. The factor to apply to dynamic range values from the AC-3 stream. This factor is applied exponentially. The default value is 1. There are 3 notable scale factor ranges:
drc_scale == 0
DRC disabled. Produces full range audio.
0 < drc_scale <= 1
DRC enabled. Applies a fraction of the stream DRC value. Audio reproduction is between full range and full compression.
drc_scale > 1
DRC enabled. Applies drc_scale asymmetrically. Loud sounds are fully compressed. Soft sounds are enhanced.
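For example, a sketch disabling dynamic range compression during playback (input name hypothetical):

ffplay -drc_scale 0 input.ac3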

FLAC audio decoder.

This decoder aims to implement the complete FLAC specification from Xiph.

FLAC Decoder options

The lavc FLAC encoder used to produce buggy streams with high lpc values (like the default value). This option makes it possible to decode such streams correctly by using lavc's old buggy lpc logic for decoding.

Internal wave synthesizer.

This decoder generates wave patterns according to predefined sequences. Its use is purely internal and the format of the data it accepts is not publicly documented.

libcelt decoder wrapper.

libcelt allows libavcodec to decode the Xiph CELT ultra-low delay audio codec. Requires the presence of the libcelt headers and library during configuration. You need to explicitly configure the build with "--enable-libcelt".

libgsm decoder wrapper.

libgsm allows libavcodec to decode the GSM full rate audio codec. Requires the presence of the libgsm headers and library during configuration. You need to explicitly configure the build with "--enable-libgsm".

This decoder supports both the ordinary GSM and the Microsoft variant.

libilbc decoder wrapper.

libilbc allows libavcodec to decode the Internet Low Bitrate Codec (iLBC) audio codec. Requires the presence of the libilbc headers and library during configuration. You need to explicitly configure the build with "--enable-libilbc".

Options

The following option is supported by the libilbc wrapper.

Enable the enhancement of the decoded audio when set to 1. The default value is 0 (disabled).

libopencore-amrnb decoder wrapper.

libopencore-amrnb allows libavcodec to decode the Adaptive Multi-Rate Narrowband audio codec. Using it requires the presence of the libopencore-amrnb headers and library during configuration. You need to explicitly configure the build with "--enable-libopencore-amrnb".

An FFmpeg native decoder for AMR-NB exists, so users can decode AMR-NB without this library.

libopencore-amrwb decoder wrapper.

libopencore-amrwb allows libavcodec to decode the Adaptive Multi-Rate Wideband audio codec. Using it requires the presence of the libopencore-amrwb headers and library during configuration. You need to explicitly configure the build with "--enable-libopencore-amrwb".

An FFmpeg native decoder for AMR-WB exists, so users can decode AMR-WB without this library.

libopus decoder wrapper.

libopus allows libavcodec to decode the Opus Interactive Audio Codec. Requires the presence of the libopus headers and library during configuration. You need to explicitly configure the build with "--enable-libopus".

An FFmpeg native decoder for Opus exists, so users can decode Opus without this library.

ARIB STD-B24 caption decoder.

Implements profiles A and C of the ARIB STD-B24 standard.

libaribb24 Decoder Options

Sets the base path for the libaribb24 library. It is used for reading configuration files (for custom unicode conversions) and for dumping non-text symbols as images under that location.

Unset by default.

Tells the decoder wrapper to skip text blocks that contain half-height ruby text.

Enabled by default.

Yet another ARIB STD-B24 caption decoder using the external libaribcaption library.

Implements profiles A and C of the Japanese ARIB STD-B24 standard, the Brazilian ABNT NBR 15606-1, and the Philippine version of ISDB-T.

Requires the presence of the libaribcaption headers and library (https://github.com/xqq/libaribcaption) during configuration. You need to explicitly configure the build with "--enable-libaribcaption". If both libaribb24 and libaribcaption are enabled, the libaribcaption decoder takes precedence.

libaribcaption Decoder Options

Specifies the format of the decoded subtitles.
bitmap
Graphical image.
ass
ASS formatted text.
text
Simple text based output without formatting.

The default is ass, the same as the libaribb24 decoder. Some players (e.g., mpv) expect ASS format for ARIB captions.

Specifies the encoding scheme of the input subtitle text.
auto
Automatically detect text encoding (default).
jis
8bit-char JIS encoding defined in ARIB STD-B24. This encoding is used in Japan for ISDB captions.
utf8
UTF-8 encoding defined in ARIB STD-B24. This encoding is used in the Philippines for ISDB-T captions.
latin
Latin character encoding defined in ABNT NBR 15606-1. This encoding is used in South America for SBTVD / ISDB-Tb captions.
Specify a comma-separated list of font family names to be used for bitmap or ass type subtitle rendering. Only the first font name is used for ass type subtitles.

If not specified, an internally defined default font family is used.

ARIB STD-B24 specifies that some captions may be displayed at different positions at a time (multi-rectangle subtitle). Since some players (e.g., old mpv) can't handle multiple ASS rectangles in a single AVSubtitle, or multiple ASS rectangles of indeterminate duration with the same start timestamp, this option can change the behavior so that all the texts are displayed in a single ASS rectangle.

The default is false.

If your player cannot handle AVSubtitles with multiple ASS rectangles properly, set this option to true or define ASS_SINGLE_RECT=1 to change default behavior at compilation.

Specify whether to always render outline text for all characters regardless of the indication by character style.

The default is false.

Specify width for outline text, in dots (relative).

The default is 1.5.

Specify whether to ignore background color rendering.

The default is false.

Specify whether to ignore rendering for ruby-like (furigana) characters.

The default is false.

Specify whether to render replaced DRCS characters as Unicode characters.

The default is true.

Specify whether to replace MSZ (Middle Size; half width) fullwidth alphanumerics with halfwidth alphanumerics.

The default is true.

Specify whether to replace some MSZ (Middle Size; half width) fullwidth Japanese special characters with halfwidth ones.

The default is true.

Specify whether to replace MSZ (Middle Size; half width) characters with halfwidth glyphs if the font supports it. This option works under the FreeType or DirectWrite renderer with Adobe-Japan1 compliant fonts, e.g., IBM Plex Sans JP, Morisawa BIZ UDGothic, Morisawa BIZ UDMincho, Yu Gothic, Yu Mincho, and Meiryo.

The default is true.

Specify the resolution of the canvas to render subtitles to; usually, this should be the frame size of the input video. This only applies when "-subtitle_type" is set to bitmap.

The libaribcaption decoder assumes the following input frame sizes for bitmap rendering:

1. PROFILE_A : 1440 x 1080 with SAR (PAR) 4:3
2. PROFILE_C : 320 x 180 with SAR (PAR) 1:1

If the actual frame size of the input video does not match the above assumptions, the rendered captions may be distorted. To render the captions undistorted, add the "-canvas_size" option to specify the actual input video size.

Note that the "-canvas_size" option is not required for video with a different size but the same aspect ratio. In such cases, the caption will be stretched or shrunk to the actual video size if the "-canvas_size" option is not specified. If the "-canvas_size" option is specified with a different size, the caption will be stretched or shrunk to the specified size with a calculated SAR.

libaribcaption decoder usage examples

Display an MPEG-TS file with ARIB subtitles using the "ffplay" tool:

ffplay -sub_type bitmap MPEG.TS

Display an MPEG-TS file with an input frame size of 1920x1080 using the "ffplay" tool:

ffplay -sub_type bitmap -canvas_size 1920x1080 MPEG.TS

Embed ARIB subtitles in transcoded video:

ffmpeg -sub_type bitmap -i src.m2t -filter_complex "[0:v][0:s]overlay" -vcodec h264 dest.mp4

Options

compute_clut
-2
Compute CLUT once if no matching CLUT is in the stream.
-1
Compute CLUT if no matching CLUT is in the stream.
0
Never compute CLUT.
1
Always compute CLUT and override the one provided in the stream.
dvb_substream
Selects the DVB substream, or all substreams if -1, which is the default.

This codec decodes the bitmap subtitles used in DVDs; the same subtitles can also be found in VobSub file pairs and in some Matroska files.

Options

Specify the global palette used by the bitmaps. When stored in VobSub, the palette is normally specified in the index file; in Matroska, the palette is stored in the codec extra-data in the same format as in VobSub. In DVDs, the palette is stored in the IFO file, and therefore not available when reading from dumped VOB files.

The format for this option is a string containing 16 24-bit hexadecimal numbers (without 0x prefix) separated by commas, for example "0d00ee, ee450d, 101010, eaeaea, 0ce60b, ec14ed, ebff0b, 0d617a, 7b7b7b, d1d1d1, 7b2a0e, 0d950c, 0f007b, cf0dec, cfa80c, 7c127b".

Specify the IFO file from which the global palette is obtained. (experimental)
Only decode subtitle entries marked as forced. Some titles have forced and non-forced subtitles in the same track. Setting this flag to 1 will only keep the forced subtitles. Default value is 0.
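
For example, a sketch that plays a file while keeping only the forced subtitle entries (using the forced_subs_only flag described above; the file name is a placeholder):

ffplay -forced_subs_only 1 INPUT.mkv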

Libzvbi allows libavcodec to decode DVB teletext pages and DVB teletext subtitles. Requires the presence of the libzvbi headers and library during configuration. You need to explicitly configure the build with "--enable-libzvbi".

Options

List of teletext page numbers to decode. Pages that do not match the specified list are dropped. You may use the special "*" string to match all pages, or "subtitle" to match all subtitle pages. Default value is *.
Set default character set used for decoding, a value between 0 and 87 (see ETS 300 706, Section 15, Table 32). Default value is -1, which does not override the libzvbi default. This option is needed for some legacy level 1.0 transmissions which cannot signal the proper charset.
Discards the top teletext line. Default value is 1.
Specifies the format of the decoded subtitles.
bitmap
The default format, you should use this for teletext pages, because certain graphics and colors cannot be expressed in simple text or even ASS.
text
Simple text based output without formatting.
ass
Formatted ASS output; subtitle pages and teletext pages are returned in different styles, subtitle pages are stripped down to text, but an effort is made to keep the text alignment and the formatting.
X offset of generated bitmaps, default is 0.
Y offset of generated bitmaps, default is 0.
Chops leading and trailing spaces and removes empty lines from the generated text. This option is useful for teletext based subtitles where empty spaces may be present at the start or at the end of the lines or empty lines may be present between the subtitle lines because of double-sized teletext characters. Default value is 1.
Sets the display duration of the decoded teletext pages or subtitles in milliseconds. Default value is -1 which means infinity or until the next subtitle event comes.
Force transparent background of the generated teletext bitmaps. Default value is 0 which means an opaque background.
Sets the opacity (0-255) of the teletext background. If txt_transparent is not set, it only affects characters between a start box and an end box, typically subtitles. Default value is 0 if txt_transparent is set, 255 otherwise.
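
For example, a sketch that decodes teletext page 777 as plain text (assuming the wrapper options are named txt_page and txt_format; the file name is a placeholder):

ffplay -txt_page 777 -txt_format text INPUT.ts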

When you configure your FFmpeg build, all the supported bitstream filters are enabled by default. You can list all available ones using the configure option "--list-bsfs".

You can disable all the bitstream filters using the configure option "--disable-bsfs", and selectively enable any bitstream filter using the option "--enable-bsf=BSF", or you can disable a particular bitstream filter using the option "--disable-bsf=BSF".

The option "-bsfs" of the ff* tools will display the list of all the supported bitstream filters included in your build.

The ff* tools have a -bsf option applied per stream, taking a comma-separated list of filters, whose parameters follow the filter name after a '='.

ffmpeg -i INPUT -c:v copy -bsf:v filter1[=opt1=str1:opt2=str2][,filter2] OUTPUT

Below is a description of the currently available bitstream filters, with their parameters, if any.

Convert MPEG-2/4 AAC ADTS to an MPEG-4 Audio Specific Configuration bitstream.

This filter creates an MPEG-4 AudioSpecificConfig from an MPEG-2/4 ADTS header and removes the ADTS header.

This filter is required for example when copying an AAC stream from a raw ADTS AAC or an MPEG-TS container to MP4A-LATM, to an FLV file, or to MOV/MP4 files and related formats such as 3GP or M4A. Please note that it is auto-inserted for MP4A-LATM and MOV/MP4 and related formats.

Modify metadata embedded in an AV1 stream.

Insert or remove temporal delimiter OBUs in all temporal units of the stream.
Insert a TD at the beginning of every TU which does not already have one.
Remove the TD from the beginning of every TU which has one.
Set the color description fields in the stream (see AV1 section 6.4.2).
Set the color range in the stream (see AV1 section 6.4.2; note that this cannot be set for streams using BT.709 primaries, sRGB transfer characteristic and identity (RGB) matrix coefficients).
Limited range.
Full range.
Set the chroma sample location in the stream (see AV1 section 6.4.2). This can only be set for 4:2:0 streams.
Left position (matching the default in MPEG-2 and H.264).
Top-left position.
Set the tick rate (time_scale / num_units_in_display_tick) in the timing info in the sequence header.
Set the number of ticks in each picture, to indicate that the stream has a fixed framerate. Ignored if tick_rate is not also set.
Deletes Padding OBUs.
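
For example, a sketch that inserts temporal delimiter OBUs and signals limited ("tv") color range while copying the stream (assuming the option names td and color_range; file names are placeholders):

ffmpeg -i INPUT -c:v copy -bsf:v av1_metadata=td=insert:color_range=tv OUTPUT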

Remove zero padding at the end of a packet.

Extract the core from a DCA/DTS stream, dropping extensions such as DTS-HD.
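
For example, a sketch that keeps only the DTS core while copying the audio (file names are placeholders):

ffmpeg -i INPUT.dts -c:a copy -bsf:a dca_core OUTPUT.dts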

Manipulate Dolby Vision metadata in a HEVC/AV1 bitstream, optionally enabling metadata compression.

If enabled, strip all Dolby Vision metadata (configuration record + RPU data blocks) from the stream.
Which compression level to enable.
No metadata compression.
Limited metadata compression scheme. Should be compatible with most devices. This is the default.
Extended metadata compression. Devices are not required to support this. Note that this level currently behaves the same as limited in libavcodec.
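
For example, a sketch that strips all Dolby Vision metadata while copying the video (assuming the option name strip; file names are placeholders):

ffmpeg -i INPUT -c:v copy -bsf:v dovi_rpu=strip=1 OUTPUT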

Add extradata to the beginning of the filtered packets except when said packets already exactly begin with the extradata that is intended to be added.

The additional argument specifies which packets should be filtered. It accepts the values:
k
keyframe
add extradata to all key packets
e
all
add extradata to all packets

If not specified it is assumed k.

For example the following ffmpeg command forces a global header (thus disabling individual packet headers) in the H.264 packets generated by the "libx264" encoder, but corrects them by adding the header stored in extradata to the key packets:

ffmpeg -i INPUT -map 0 -flags:v +global_header -c:v libx264 -bsf:v dump_extra out.ts

Blocks in DV which are marked as damaged are replaced by blocks of the specified color.

The color to replace damaged blocks by.
A 16-bit mask which specifies which of the 16 possible error status values are to be replaced by colored blocks. The default is 0xFFFE, which replaces all non-zero error status values.
No error, no concealment
Error, No concealment
Reserved
Error or concealment
Not reserved
The specific error status code

Extract the core from an E-AC-3 stream, dropping extra channels.

Extract the in-band extradata.

Certain codecs allow the long-term headers (e.g. MPEG-2 sequence headers, or H.264/HEVC (VPS/)SPS/PPS) to be transmitted either "in-band" (i.e. as a part of the bitstream containing the coded frames) or "out of band" (e.g. on the container level). This latter form is called "extradata" in FFmpeg terminology.

This bitstream filter detects the in-band headers and makes them available as extradata.

When this option is enabled, the long-term headers are removed from the bitstream after extraction.

Remove units with types in or not in a given set from the stream.

List of unit types or ranges of unit types to pass through while removing all others. This is specified as a '|'-separated list of unit type values or ranges of values with '-'.
Identical to pass_types, except the units in the given set are removed and all others passed through.

The types used by pass_types and remove_types correspond to NAL unit types (nal_unit_type) in H.264, HEVC and H.266 (see Table 7-1 in the H.264 and HEVC specifications or Table 5 in the H.266 specification), to marker values for JPEG (without 0xFF prefix) and to start codes without start code prefix (i.e. the byte following the 0x000001) for MPEG-2. For VP8 and VP9, every unit has type zero.

Extradata is unchanged by this transformation, but note that if the stream contains inline parameter sets then the output may be unusable if they are removed.

For example, to remove all non-VCL NAL units from an H.264 stream:

ffmpeg -i INPUT -c:v copy -bsf:v 'filter_units=pass_types=1-5' OUTPUT

To remove all AUDs, SEI and filler from an H.265 stream:

ffmpeg -i INPUT -c:v copy -bsf:v 'filter_units=remove_types=35|38-40' OUTPUT

To remove all user data from an MPEG-2 stream, including Closed Captions:

ffmpeg -i INPUT -c:v copy -bsf:v 'filter_units=remove_types=178' OUTPUT

To remove all SEI from an H.264 stream, including Closed Captions:

ffmpeg -i INPUT -c:v copy -bsf:v 'filter_units=remove_types=6' OUTPUT

To remove all prefix and suffix SEI from an HEVC stream, including Closed Captions and dynamic HDR:

ffmpeg -i INPUT -c:v copy -bsf:v 'filter_units=remove_types=39|40' OUTPUT

Extract the RGB or alpha part of a HAPQA file, without recompression, in order to create an HAPQ or an HAPAlphaOnly file.

Specifies the texture to keep.

Convert HAPQA to HAPQ

ffmpeg -i hapqa_inputfile.mov -c copy -bsf:v hapqa_extract=texture=color -tag:v HapY -metadata:s:v:0 encoder="HAPQ" hapq_file.mov

Convert HAPQA to HAPAlphaOnly

ffmpeg -i hapqa_inputfile.mov -c copy -bsf:v hapqa_extract=texture=alpha -tag:v HapA -metadata:s:v:0 encoder="HAPAlpha Only" hapalphaonly_file.mov

Modify metadata embedded in an H.264 stream.

Insert or remove AUD NAL units in all access units of the stream.

Default is pass.

Set the sample aspect ratio of the stream in the VUI parameters. See H.264 table E-1.
Set whether the stream is suitable for display using overscan or not (see H.264 section E.2.1).
Set the video format in the stream (see H.264 section E.2.1 and table E-2).
Set the colour description in the stream (see H.264 section E.2.1 and tables E-3, E-4 and E-5).
Set the chroma sample location in the stream (see H.264 section E.2.1 and figure E-1).
Set the tick rate (time_scale / num_units_in_tick) in the VUI parameters. This is the smallest time unit representable in the stream, and in many cases represents the field rate of the stream (double the frame rate).
Set whether the stream has fixed framerate - typically this indicates that the framerate is exactly half the tick rate, but the exact meaning is dependent on interlacing and the picture structure (see H.264 section E.2.1 and table E-6).
Zero constraint_set4_flag and constraint_set5_flag in the SPS. These bits were reserved in a previous version of the H.264 spec, and thus some hardware decoders require these to be zero. The result of zeroing this is still a valid bitstream.
Set the frame cropping offsets in the SPS. These values will replace the current ones if the stream is already cropped.

These fields are set in pixels. Note that some sizes may not be representable if the chroma is subsampled or the stream is interlaced (see H.264 section 7.4.2.1.1).

Insert a string as SEI unregistered user data. The argument must be of the form UUID+string, where the UUID is given as hex digits, possibly separated by hyphens, and the string can be anything.

For example, 086f3693-b7b3-4f2c-9653-21492feee5b8+hello will insert the string "hello" associated with the given UUID.

Deletes both filler NAL units and filler SEI messages.
Insert, extract or remove Display orientation SEI messages. See H.264 section D.1.27 and D.2.27 for syntax and semantics.

Default is pass.

Insert mode works in conjunction with "rotate" and "flip" options. Any pre-existing Display orientation messages will be removed in insert or remove mode. Extract mode attaches the display matrix to the packet as side data.

rotate
Set rotation in display orientation SEI (anticlockwise angle in degrees). Range is -360 to +360. Default is NaN.
flip
Set flip in display orientation SEI.

Default is unset.

Set the level in the SPS. Refer to H.264 section A.3 and tables A-1 to A-5.

The argument must be the name of a level (for example, 4.2), a level_idc value (for example, 42), or the special name auto indicating that the filter should attempt to guess the level from the input stream properties.
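
For example, a sketch that rewrites the signalled level to 4.1 while copying the stream (file names are placeholders):

ffmpeg -i INPUT -c:v copy -bsf:v h264_metadata=level=4.1 OUTPUT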

Convert an H.264 bitstream from length prefixed mode to start code prefixed mode (as defined in the Annex B of the ITU-T H.264 specification).

This is required by some streaming formats, typically the MPEG-2 transport stream format (muxer "mpegts").

For example to remux an MP4 file containing an H.264 stream to mpegts format with ffmpeg, you can use the command:

ffmpeg -i INPUT.mp4 -codec copy -bsf:v h264_mp4toannexb OUTPUT.ts

Please note that this filter is auto-inserted for MPEG-TS (muxer "mpegts") and raw H.264 (muxer "h264") output formats.

This filter applies a specific fixup to some Blu-ray streams which contain redundant PPSs modifying irrelevant parameters of the stream, which in turn confuses other transformations that require correct extradata.

Modify metadata embedded in an HEVC stream.

Insert or remove AUD NAL units in all access units of the stream.
Set the sample aspect ratio in the stream in the VUI parameters.
Set the video format in the stream (see H.265 section E.3.1 and table E.2).
Set the colour description in the stream (see H.265 section E.3.1 and tables E.3, E.4 and E.5).
Set the chroma sample location in the stream (see H.265 section E.3.1 and figure E.1).
Set the tick rate in the VPS and VUI parameters (time_scale / num_units_in_tick). Combined with num_ticks_poc_diff_one, this can set a constant framerate in the stream. Note that it is likely to be overridden by container parameters when the stream is in a container.
Set poc_proportional_to_timing_flag in VPS and VUI and use this value to set num_ticks_poc_diff_one_minus1 (see H.265 sections 7.4.3.1 and E.3.1). Ignored if tick_rate is not also set.
Set the conformance window cropping offsets in the SPS. These values will replace the current ones if the stream is already cropped.

These fields are set in pixels. Note that some sizes may not be representable if the chroma is subsampled (H.265 section 7.4.3.2.1).

Set width and height after crop.
Set the level in the VPS and SPS. See H.265 section A.4 and tables A.6 and A.7.

The argument must be the name of a level (for example, 5.1), a general_level_idc value (for example, 153 for level 5.1), or the special name auto indicating that the filter should attempt to guess the level from the input stream properties.
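
For example, a sketch that rewrites the signalled level to 5.1 while copying the stream (file names are placeholders):

ffmpeg -i INPUT -c:v copy -bsf:v hevc_metadata=level=5.1 OUTPUT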

Convert an HEVC/H.265 bitstream from length prefixed mode to start code prefixed mode (as defined in the Annex B of the ITU-T H.265 specification).

This is required by some streaming formats, typically the MPEG-2 transport stream format (muxer "mpegts").

For example to remux an MP4 file containing an HEVC stream to mpegts format with ffmpeg, you can use the command:

ffmpeg -i INPUT.mp4 -codec copy -bsf:v hevc_mp4toannexb OUTPUT.ts

Please note that this filter is auto-inserted for MPEG-TS (muxer "mpegts") and raw HEVC/H.265 (muxer "h265" or "hevc") output formats.

Modifies the bitstream to fit in MOV and to be usable by the Final Cut Pro decoder. This filter only applies to the mpeg2video codec, and is likely not needed for Final Cut Pro 7 and newer with the appropriate -tag:v.

For example, to remux 30 MB/sec NTSC IMX to MOV:

ffmpeg -i input.mxf -c copy -bsf:v imxdump -tag:v mx3n output.mov

Convert MJPEG/AVI1 packets to full JPEG/JFIF packets.

MJPEG is a video codec wherein each video frame is essentially a JPEG image. The individual frames can be extracted without loss, e.g. by

ffmpeg -i ../some_mjpeg.avi -c:v copy frames_%d.jpg

Unfortunately, these chunks are incomplete JPEG images, because they lack the DHT segment required for decoding. Quoting from http://www.digitalpreservation.gov/formats/fdd/fdd000063.shtml:

Avery Lee, writing in the rec.video.desktop newsgroup in 2001, commented that "MJPEG, or at least the MJPEG in AVIs having the MJPG fourcc, is restricted JPEG with a fixed -- and *omitted* -- Huffman table. The JPEG must be YCbCr colorspace, it must be 4:2:2, and it must use basic Huffman encoding, not arithmetic or progressive. . . . You can indeed extract the MJPEG frames and decode them with a regular JPEG decoder, but you have to prepend the DHT segment to them, or else the decoder won't have any idea how to decompress the data. The exact table necessary is given in the OpenDML spec."

This bitstream filter patches the header of frames extracted from an MJPEG stream (carrying the AVI1 header ID and lacking a DHT segment) to produce fully qualified JPEG images.

ffmpeg -i mjpeg-movie.avi -c:v copy -bsf:v mjpeg2jpeg frame_%d.jpg
exiftran -i -9 frame*.jpg
ffmpeg -i frame_%d.jpg -c:v copy rotated.avi

Add an MJPEG A header to the bitstream, to enable decoding by Quicktime.

Extract a representable text file from MOV subtitles, stripping the metadata header from each subtitle packet.

See also the text2movsub filter.

Modify metadata embedded in an MPEG-2 stream.

Set the display aspect ratio in the stream.

The following fixed values are supported:

4/3
16/9
221/100

Any other value will result in square pixels being signalled instead (see H.262 section 6.3.3 and table 6-3).

Set the frame rate in the stream. This is constructed from a table of known values combined with a small multiplier and divisor - if the supplied value is not exactly representable, the nearest representable value will be used instead (see H.262 section 6.3.3 and table 6-4).
Set the video format in the stream (see H.262 section 6.3.6 and table 6-6).
Set the colour description in the stream (see H.262 section 6.3.6 and tables 6-7, 6-8 and 6-9).
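
For example, a sketch that signals a 16:9 display aspect ratio while copying the stream (assuming the option name display_aspect_ratio; file names are placeholders):

ffmpeg -i INPUT -c:v copy -bsf:v mpeg2_metadata=display_aspect_ratio=16/9 OUTPUT.ts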

Unpack DivX-style packed B-frames.

DivX-style packed B-frames are not valid MPEG-4 and were only a workaround for the broken Video for Windows subsystem. They use more space, can cause minor AV sync issues, require more CPU power to decode (unless the player has a decoded picture queue to compensate for the 2,0,2,0 frames-per-packet style) and cause trouble if copied into a standard container like mp4 or mpeg-ps/ts, because MPEG-4 decoders may not be able to decode them, since they are not valid MPEG-4.

For example to fix an AVI file containing an MPEG-4 stream with DivX-style packed B-frames using ffmpeg, you can use the command:

ffmpeg -i INPUT.avi -codec copy -bsf:v mpeg4_unpack_bframes OUTPUT.avi

Damages the contents of packets or simply drops them without damaging the container. Can be used for fuzzing or testing error resilience/concealment.

Parameters:

Accepts an expression whose evaluation per-packet determines how often bytes in that packet will be modified. A value below 0 will result in a variable frequency. Default is 0 which results in no modification. However, if neither amount nor drop is specified, amount will be set to -1. See below for accepted variables.
Accepts an expression evaluated per-packet whose value determines whether that packet is dropped. Evaluation to a positive value results in the packet being dropped. Evaluation to a negative value results in a variable chance of it being dropped, roughly inverse in proportion to the magnitude of the value. Default is 0 which results in no drops. See below for accepted variables.
Accepts a non-negative integer, which assigns a variable chance of it being dropped, roughly inverse in proportion to the value. Default is 0 which results in no drops. This option is kept for backwards compatibility and is equivalent to setting drop to a negative value with the same magnitude i.e. "dropamount=4" is the same as "drop=-4". Ignored if drop is also specified.

Both "amount" and "drop" accept expressions containing the following variables:

The index of the packet, starting from zero.
The timebase for packet timestamps.
Packet presentation timestamp.
Packet decoding timestamp.
Constant representing AV_NOPTS_VALUE.
First non-AV_NOPTS_VALUE PTS seen in the stream.
First non-AV_NOPTS_VALUE DTS seen in the stream.
Packet duration, in timebase units.
Packet position in input; may be -1 when unknown or not set.
Packet size, in bytes.
Whether packet is marked as a keyframe.
A pseudo random integer, primarily derived from the content of packet payload.

Examples

Apply modification to every byte but don't drop any packets.

ffmpeg -i INPUT -c copy -bsf noise=1 output.mkv

Drop every video packet not marked as a keyframe after timestamp 30s but do not modify any of the remaining packets.

ffmpeg -i INPUT -c copy -bsf:v noise=drop='gt(t\,30)*not(key)' output.mkv

Drop one second of audio every 10 seconds and add some random noise to the rest.

ffmpeg -i INPUT -c copy -bsf:a noise=amount=-1:drop='between(mod(t\,10)\,9\,10)' output.mkv

This bitstream filter passes the packets through unchanged.

Repacketize PCM audio to a fixed number of samples per packet or a fixed packet rate per second. This is similar to the asetnsamples audio filter but works on audio packets instead of audio frames.

Set the number of samples per output audio packet. The number is intended as the number of samples per channel. Default value is 1024.
If set to 1, the filter will pad the last audio packet with silence, so that it will contain the same number of samples (or roughly the same number of samples, see frame_rate) as the previous ones. Default value is 1.
This option makes the filter output a fixed number of packets per second instead of a fixed number of samples per packet. If the audio sample rate is not divisible by the frame rate then the number of samples will not be constant but will vary slightly so that each packet will start as close to the frame boundary as possible. Using this option has precedence over nb_out_samples.

You can generate the well-known 1602-1601-1602-1601-1602 pattern of 48kHz audio for the NTSC frame rate using the frame_rate option.

ffmpeg -f lavfi -i sine=r=48000:d=1 -c pcm_s16le -bsf pcm_rechunk=r=30000/1001 -f framecrc -

Merge a sequence of PGS Subtitle segments ending with an "end of display set" segment into a single packet.

This is required by some containers that support PGS subtitles (muxer "matroska").
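
For example, a sketch that merges PGS segments while remuxing to Matroska (file names are placeholders):

ffmpeg -i INPUT -c copy -bsf:s pgs_frame_merge OUTPUT.mkv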

Modify color property metadata embedded in a ProRes stream.

Set the color primaries. Available values are:
Keep the same color primaries property (default).
BT601 625
BT601 525
DCI P3
P3 D65
Set the color transfer. Available values are:
Keep the same transfer characteristics property (default).
BT 601, BT 709, BT 2020
SMPTE ST 2084
ARIB STD-B67
Set the matrix coefficient. Available values are:
Keep the same colorspace property (default).
BT 601

Set Rec709 colorspace for each frame of the file

ffmpeg -i INPUT -c copy -bsf:v prores_metadata=color_primaries=bt709:color_trc=bt709:colorspace=bt709 output.mov

Set Hybrid Log-Gamma parameters for each frame of the file

ffmpeg -i INPUT -c copy -bsf:v prores_metadata=color_primaries=bt2020:color_trc=arib-std-b67:colorspace=bt2020nc output.mov

Remove extradata from packets.

It accepts the following parameter:

Set which frame types to remove extradata from.
Remove extradata from non-keyframes only.
Remove extradata from keyframes only.
Remove extradata from all frames.
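
For example, a sketch that removes extradata from all frames while copying (assuming the parameter is named freq and accepts the value all; file names are placeholders):

ffmpeg -i INPUT -c copy -bsf remove_extra=freq=all OUTPUT.mkv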

Set PTS and DTS in packets.

It accepts the following parameters:

Set expressions for PTS, DTS or both.
Set expression for duration.
Set output time base.

The expressions are evaluated through the eval API and can contain the following constants:

The count of the input packet, starting from 0.
The demux timestamp in input in case of "ts" or "dts" option or presentation timestamp in case of "pts" option.
The original position of the packet in the file, or undefined if it is not set for the current packet.
The demux timestamp in input.
The presentation timestamp in input.
The duration in input.
The DTS of the first packet.
The PTS of the first packet.
The previous input DTS.
The previous input PTS.
The previous input duration.
The previous output DTS.
The previous output PTS.
The previous output duration.
The next input DTS.
The next input PTS.
The next input duration.
The timebase of the stream the packet belongs to.
The output timebase.
The sample rate of the stream the packet belongs to.
The AV_NOPTS_VALUE constant.

For example, to set PTS equal to DTS (not recommended if B-frames are involved):

ffmpeg -i INPUT -c:a copy -bsf:a setts=pts=DTS out.mkv

Log basic packet information. Mainly useful for testing, debugging, and development.

Convert text subtitles to MOV subtitles (as used by the "mov_text" codec) with metadata headers.

See also the mov2textsub filter.

Log trace output containing all syntax elements in the coded stream headers (everything above the level of individual coded blocks). This can be useful for debugging low-level stream issues.

Supports AV1, H.264, H.265, (M)JPEG, MPEG-2 and VP9, but depending on the build only a subset of these may be available.
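
For example, a sketch that traces the headers of a video stream without writing an output file:

ffmpeg -i INPUT -c:v copy -bsf:v trace_headers -f null -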

Extract the core from a TrueHD stream, dropping ATMOS data.

Modify metadata embedded in a VP9 stream.

Set the color space value in the frame header. Note that any frame set to RGB will be implicitly set to PC range and that RGB is incompatible with profiles 0 and 2.
Set the color range value in the frame header. Note that any value imposed by the color space will take precedence over this value.
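
For example, a sketch that marks the stream as BT.709 while copying it (assuming the option name color_space with the value bt709; file names are placeholders):

ffmpeg -i INPUT -c:v copy -bsf:v vp9_metadata=color_space=bt709 OUTPUT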

Merge VP9 invisible (alt-ref) frames back into VP9 superframes. This fixes merging of split/segmented VP9 streams where the alt-ref frame was split from its visible counterpart.

Split VP9 superframes into single frames.

Given a VP9 stream with correct timestamps but possibly out of order, insert additional show-existing-frame packets to correct the ordering.

The libavformat library provides some generic global options, which can be set on all the muxers and demuxers. In addition each muxer or demuxer may support so-called private options, which are specific for that component.

Options may be set by specifying -option value in the FFmpeg tools, or by setting the value explicitly in the "AVFormatContext" options or using the libavutil/opt.h API for programmatic use.

The list of supported options follows:

Possible values:
direct
Reduce buffering.
Set probing size in bytes, i.e. the size of the data to analyze to get stream information. A higher value will enable detecting more information in case it is dispersed into the stream, but will increase latency. Must be an integer not less than 32. It is 5000000 by default.
Set the maximum number of buffered packets when probing a codec. Default is 2500 packets.
Set packet size.
Set format flags. Some are implemented for a limited number of formats. A combined usage sketch is shown after the value lists below.

Possible values for input files:

Discard corrupted packets.
Enable fast, but inaccurate seeks for some formats.
Generate missing PTS if DTS is present.
Ignore DTS if PTS is also set. In case the PTS is set, the DTS value is set to NOPTS. This is ignored when the "nofillin" flag is set.
Ignore index.
Reduce the latency introduced by buffering during initial input streams analysis.
Do not fill in missing values in packet fields that can be exactly calculated.
Disable AVParsers; this needs "+nofillin" too.
Try to interleave output packets by DTS. At present, available only for AVIs with an index.

Possible values for output files:

Automatically apply bitstream filters as required by the output format. Enabled by default.
Only write platform-, build- and time-independent data. This ensures that file and data checksums are reproducible and match between platforms. Its primary use is for regression testing.
Write out packets immediately.
Stop muxing at the end of the shortest stream. It may be needed to increase max_interleave_delta to avoid flushing the longer streams before EOF.
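
Flags are added with a '+' prefix and removed with a '-' prefix. As a combined usage sketch (file names are placeholders), the following generates missing PTS on the input and requests bitexact data on the output:

ffmpeg -fflags +genpts -i INPUT -fflags +bitexact -c copy OUTPUT.mkv
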
Allow seeking to non-keyframes on demuxer level when supported if set to 1. Default is 0.
Specify how many microseconds are analyzed to probe the input. A higher value will enable detecting more accurate information, but will increase latency. It defaults to 5,000,000 microseconds = 5 seconds.
Set decryption key.
Set max memory used for timestamp index (per stream).
Set max memory used for buffering real-time frames.
Print specific debug info.

Possible values:

ts

Set maximum muxing or demuxing delay in microseconds.
Set number of frames used to probe fps.
Set microseconds by which audio packets should be interleaved earlier.
Set microseconds for each chunk.
Set size in bytes for each chunk.
Set error detection flags. "f_err_detect" is deprecated and should be used only via the ffmpeg tool.

Possible values:

crccheck
Verify embedded CRCs.
bitstream
Detect bitstream specification deviations.
buffer
Detect improper bitstream length.
explode
Abort decoding on minor error detection.
careful
Consider things that violate the spec and have not been seen in the wild as errors.
compliant
Consider all spec non-compliances as errors.
aggressive
Consider things that a sane encoder should not do as an error.
Set maximum buffering duration for interleaving. The duration is expressed in microseconds, and defaults to 10000000 (10 seconds).

To ensure all the streams are interleaved correctly, libavformat will wait until it has at least one packet for each stream before actually writing any packets to the output file. When some streams are "sparse" (i.e. there are large gaps between successive packets), this can result in excessive buffering.

This field specifies the maximum difference between the timestamps of the first and the last packet in the muxing queue, above which libavformat will output a packet regardless of whether it has queued a packet for all the streams.

If set to 0, libavformat will continue buffering packets until it has a packet for each stream, regardless of the maximum timestamp difference between the buffered packets.

Use wallclock as timestamps if set to 1. Default is 0.
Possible values:
make_non_negative
Shift timestamps to make them non-negative. Also note that this affects only leading negative timestamps, and not non-monotonic negative timestamps.
make_zero
Shift timestamps so that the first timestamp is 0.
auto (default)
Enables shifting when required by the target format.
disabled
Disables shifting of timestamps.

When shifting is enabled, all output timestamps are shifted by the same amount. Audio, video, and subtitles desynching and relative timestamp differences are preserved compared to how they would have been without shifting.
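
For example, a sketch that shifts timestamps so the output starts at zero (assuming the option is named avoid_negative_ts; file names are placeholders):

ffmpeg -i INPUT -c copy -avoid_negative_ts make_zero OUTPUT.mp4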

Set the number of bytes to skip before reading the header and frames. Default is 0.
Correct single timestamp overflows if set to 1. Default is 1.
Flush the underlying I/O stream after each packet. Default is -1 (auto), which means that the underlying protocol will decide; 1 enables it, which has the effect of reducing latency; 0 disables it and may increase IO throughput in some cases.
Set the output time offset.

offset must be a time duration specification, see the Time duration section in the ffmpeg-utils(1) manual.

The offset is added by the muxer to the output timestamps.

Specifying a positive offset means that the corresponding streams are delayed by the time duration specified in offset. Default value is 0 (meaning that no offset is applied).
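
For example, a sketch that delays all output timestamps by 10 seconds (assuming the muxer option is named output_ts_offset; file names are placeholders):

ffmpeg -i INPUT -c copy -output_ts_offset 10 OUTPUT.mkv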

"," separated list of allowed demuxers. By default all are allowed.
Separator used to separate the fields printed on the command line about the Stream parameters. For example, to separate the fields with newlines and indentation:
ffprobe -dump_separator "
                          "  -i ~/videos/matrixbench_mpeg2.mpg
Specifies the maximum number of streams. This can be used to reject files that would require too many resources due to a large number of streams.
Skip estimation of input duration if it requires an additional probing for PTS at end of file. At present, applicable for MPEG-PS and MPEG-TS.
Set probing size, in bytes, for input duration estimation when it actually requires an additional probing for PTS at end of file (at present: MPEG-PS and MPEG-TS). It is aimed at users interested in better duration probing, either for its own sake or indirectly, for example when using the concat demuxer. The typical use case is an MPEG-TS CBR with a high bitrate, high video buffering and a clean ending with similar PTS for video and audio: in such a scenario, the large physical gap between the last video packet and the last audio packet makes it necessary to read many bytes in order to get the video stream duration. Another use case is where the default probing behaviour only reaches a single video frame which is not the last one of the stream due to frame reordering, so the duration is not accurate. Setting this option has a performance impact even for small files because the probing size is fixed. Default behaviour is a general purpose trade-off, largely adaptive, but the probing size will not be extended to get stream durations at all costs. Must be an integer not less than 1, or 0 for default behaviour.
Specify how strictly to follow the standards. "f_strict" is deprecated and should be used only via the ffmpeg tool.

Possible values:

very
strictly conform to an older more strict version of the spec or reference software
strict
strictly conform to all the things in the spec no matter what consequences
unofficial
allow unofficial extensions
experimental
allow non standardized experimental things, experimental (unfinished/work in progress/not well tested) decoders and encoders. Note: experimental decoders can pose a security risk, do not use this for decoding untrusted input.

Format stream specifiers allow selection of one or more streams that match specific properties.

The exact semantics of stream specifiers is defined by the avformat_match_stream_specifier() function declared in the libavformat/avformat.h header and documented in the Stream specifiers section in the ffmpeg(1) manual.

Demuxers are configured elements in FFmpeg that can read the multimedia streams from a particular type of file.

When you configure your FFmpeg build, all the supported demuxers are enabled by default. You can list all available ones using the configure option "--list-demuxers".

You can disable all the demuxers using the configure option "--disable-demuxers", and selectively enable a single demuxer with the option "--enable-demuxer=DEMUXER", or disable it with the option "--disable-demuxer=DEMUXER".

The option "-demuxers" of the ff* tools will display the list of enabled demuxers. Use "-formats" to view a combined list of enabled demuxers and muxers.

The description of some of the currently available demuxers follows.

Audible Format 2, 3, and 4 demuxer.

This demuxer is used to demux Audible Format 2, 3, and 4 (.aa) files.

Raw Audio Data Transport Stream AAC demuxer.

This demuxer is used to demux an ADTS input containing a single AAC stream along with any ID3v1/2 or APE tags in it.

Animated Portable Network Graphics demuxer.

This demuxer is used to demux APNG files. All headers, except the PNG signature, up to (but not including) the first fcTL chunk are transmitted as extradata. Frames are then split as being all the chunks between two fcTL ones, or between the last fcTL and IEND chunks.

Ignore the loop variable in the file if set. Default is enabled.
Maximum framerate in frames per second. Default of 0 imposes no limit.
Default framerate in frames per second when none is specified in the file (0 meaning as fast as possible). Default is 15.

Advanced Systems Format demuxer.

This demuxer is used to demux ASF files and MMS network streams.

Do not try to resynchronize by looking for a certain optional start code.

Virtual concatenation script demuxer.

This demuxer reads a list of files and other directives from a text file and demuxes them one after the other, as if all their packets had been muxed together.

The timestamps in the files are adjusted so that the first file starts at 0 and each next file starts where the previous one finishes. Note that it is done globally and may cause gaps if all streams do not have exactly the same length.

All files must have the same streams (same codecs, same time base, etc.).

The duration of each file is used to adjust the timestamps of the next file: if the duration is incorrect (because it was computed using the bit-rate or because the file is truncated, for example), it can cause artifacts. The "duration" directive can be used to override the duration stored in each file.

Syntax

The script is a text file in extended-ASCII, with one directive per line. Empty lines, leading spaces and lines starting with '#' are ignored. The following directives are recognized:

"file path"
Path to a file to read; special characters and spaces must be escaped with backslash or single quotes.

All subsequent file-related directives apply to that file.

"ffconcat version 1.0"
Identify the script type and version.

To make FFmpeg recognize the format automatically, this directive must appear exactly as is (no extra space or byte-order-mark) on the very first line of the script.

"duration dur"
Duration of the file. This information can be specified from the file; specifying it here may be more efficient or help if the information from the file is not available or accurate.

If the duration is set for all files, then it is possible to seek in the whole concatenated video.

"inpoint timestamp"
In point of the file. When the demuxer opens the file it instantly seeks to the specified timestamp. Seeking is done so that all streams can be presented successfully at In point.

This directive works best with intra frame codecs, because for non-intra frame ones you will usually get extra packets before the actual In point and the decoded content will most likely contain frames before In point too.

For each file, packets before the file In point will have timestamps less than the calculated start timestamp of the file (negative in case of the first file), and the duration of the files (if not specified by the "duration" directive) will be reduced based on their specified In point.

Because of potential packets before the specified In point, packet timestamps may overlap between two concatenated files.

"outpoint timestamp"
Out point of the file. When the demuxer reaches the specified decoding timestamp in any of the streams, it handles it as an end of file condition and skips the current and all the remaining packets from all streams.

Out point is exclusive, which means that the demuxer will not output packets with a decoding timestamp greater or equal to Out point.

This directive works best with intra frame codecs and formats where all streams are tightly interleaved. For non-intra frame codecs you will usually get additional packets with presentation timestamp after Out point, therefore the decoded content will most likely contain frames after Out point too. If your streams are not tightly interleaved you may not get all the packets from all streams before Out point, and you may only be able to decode the earliest stream until Out point.

The duration of the files (if not specified by the "duration" directive) will be reduced based on their specified Out point.

"file_packet_metadata key=value"
Metadata of the packets of the file. The specified metadata will be set for each file packet. You can specify this directive multiple times to add multiple metadata entries. This directive is deprecated, use "file_packet_meta" instead.
"file_packet_meta key value"
Metadata of the packets of the file. The specified metadata will be set for each file packet. You can specify this directive multiple times to add multiple metadata entries.
"option key value"
Option to access, open and probe the file. Can be present multiple times.
"stream"
Introduce a stream in the virtual file. All subsequent stream-related directives apply to the last introduced stream. Some streams properties must be set in order to allow identifying the matching streams in the subfiles. If no streams are defined in the script, the streams from the first file are copied.
"exact_stream_id id"
Set the id of the stream. If this directive is given, the stream with the corresponding id in the subfiles will be used. This is especially useful for MPEG-PS (VOB) files, where the order of the streams is not reliable.
"stream_meta key value"
Metadata for the stream. Can be present multiple times.
"stream_codec value"
Codec for the stream.
"stream_extradata hex_string"
Extradata for the stream, encoded in hexadecimal.
"chapter id start end"
Add a chapter. id is a unique identifier, possibly small and consecutive.

Options

This demuxer accepts the following option:

If set to 1, reject unsafe file paths and directives. A file path is considered safe if it does not contain a protocol specification and is relative and all components only contain characters from the portable character set (letters, digits, period, underscore and hyphen) and have no period at the beginning of a component.

If set to 0, any file name is accepted.

The default is 1.

If set to 1, try to perform automatic conversions on packet data to make the streams concatenable. The default is 1.

Currently, the only conversion is adding the h264_mp4toannexb bitstream filter to H.264 streams in MP4 format. This is necessary in particular if there are resolution changes.

If set to 1, every packet will contain the lavf.concat.start_time and the lavf.concat.duration packet metadata values which are the start_time and the duration of the respective file segments in the concatenated output expressed in microseconds. The duration metadata is only set if it is known based on the concat file. The default is 0.

Examples

  • Use absolute filenames and include some comments:
    # my first filename
    file /mnt/share/file-1.wav
    # my second filename including whitespace
    file '/mnt/share/file 2.wav'
    # my third filename including whitespace plus single quote
    file '/mnt/share/file 3'\''.wav'
    
  • Allow for input format auto-probing, use safe filenames and set the duration of the first file:
    ffconcat version 1.0
    
    file file-1.wav
    duration 20.0
    
    file subdir/file-2.wav
    

Dynamic Adaptive Streaming over HTTP demuxer.

This demuxer presents all AVStreams found in the manifest. By setting the discard flags on AVStreams the caller can decide which streams to actually receive. Each stream mirrors the "id" and "bandwidth" properties from the "<Representation>" as metadata keys named "id" and "variant_bitrate" respectively.

Options

This demuxer accepts the following option:

16-byte key, in hex, to decrypt files encrypted using ISO Common Encryption (CENC/AES-128 CTR; ISO/IEC 23001-7).

DVD-Video demuxer, powered by libdvdnav and libdvdread.

Can directly ingest DVD titles, specifically sequential PGCs, into a conversion pipeline. Menu assets, such as background video or audio, can also be demuxed given the menu's coordinates (at best effort). Seeking is not supported at this time.

Block devices (DVD drives), ISO files, and directory structures are accepted. Activate with "-f dvdvideo" in front of one of these inputs.

This demuxer does NOT have decryption code of any kind. You are on your own working with encrypted DVDs, and should not expect support on the matter.

Underlying playback is handled by libdvdnav, and structure parsing by libdvdread. FFmpeg must be built with GPL library support available as well as the configure switches "--enable-libdvdnav" and "--enable-libdvdread".

You will need to provide either the desired "title number" or exact PGC/PG coordinates. Many open-source DVD players and tools can aid in providing this information. If not specified, the demuxer will default to title 1 which works for many discs. However, due to the flexibility of the format, it is recommended to check manually. There are many discs that are authored strangely or with invalid headers.

If the input is a real DVD drive, please note that there are some drives which may silently fail on reading bad sectors from the disc, returning random bits instead which is effectively corrupt data. This is especially prominent on aging or rotting discs. A second pass and integrity checks would be needed to detect the corruption. This is not an FFmpeg issue.

Background

DVD-Video is not a directly accessible, linear container format in the traditional sense. Instead, it allows for complex and programmatic playback of carefully muxed MPEG-PS streams that are stored in headerless VOB files. To the end-user, these streams are known simply as "titles", but the actual logical playback sequence is defined by one or more "PGCs", or Program Group Chains, within the title. The PGC is in turn composed of multiple "PGs", or "Programs", which are the actual video segments (and for a typical video feature, sequentially ordered). The PGC structure, along with stream layout and metadata, is stored in IFO files that need to be parsed. PGCs can be thought of as playlists in easier terms.

An actual DVD player relies on user GUI interaction via menus and an internal VM to drive the direction of demuxing. Generally, the user would either navigate (via menus) or automatically be redirected to the PGC of their choice. During this process and the subsequent playback, the DVD player's internal VM also maintains a state and executes instructions that can create jumps to different sectors during playback. This is why libdvdnav is involved, as a linear read of the MPEG-PS blobs on the disc (VOBs) is not enough to produce the right sequence in many cases.

There are many other DVD structures (a long subject) that will not be discussed here. NAV packets, in particular, are handled by this demuxer to build accurate timing but not emitted as a stream. For a good high-level understanding, refer to: https://code.videolan.org/videolan/libdvdnav/-/blob/master/doc/dvd_structures

Options

This demuxer accepts the following options:

The title number to play. Must be set if pgc and pg are not set. Not applicable to menus. Default is 0 (auto), which currently only selects the first available title (title 1) and notifies the user about the implications.
The chapter, or PTT (part-of-title), number to start at. Not applicable to menus. Default is 1.
The chapter, or PTT (part-of-title), number to end at. Not applicable to menus. Default is 0, which is a special value to signal end at the last possible chapter.
The video angle number, referring to what is essentially an additional video stream that is composed from alternate frames interleaved in the VOBs. Not applicable to menus. Default is 1.
The region code to use for playback. Some discs may use this to default playback at a particular angle in different regions. This option will not affect the region code of a real DVD drive, if used as an input. Not applicable to menus. Default is 0, "world".
Demux menu assets instead of navigating a title. Requires exact coordinates of the menu (menu_lu, menu_vts, pgc, pg). Default is false.
The menu language to demux. In DVD, menus are grouped by language. Default is 0, the first language unit.
The VTS where the menu lives, or 0 if it is a VMG menu (root-level). Default is 0, VMG menu.
The entry PGC to start playback, in conjunction with pg. Alternative to setting title. Chapter markers are not supported at this time. Must be explicitly set for menus. Default is 0, automatically resolve from value of title.
The entry PG to start playback, in conjunction with pgc. Alternative to setting title. Chapter markers are not supported at this time. Default is 0, automatically resolve from value of title, or start from the beginning (PG 1) of the menu.
Enable this to have accurate chapter (PTT) markers and duration measurement, which requires a slow second pass read in order to index the chapter marker timestamps from NAV packets. This is non-ideal extra work for real optical drives. It is recommended and faster to use this option with a backup of the DVD structure stored on a hard drive. Not compatible with pgc and pg. Not applicable to menus. Default is 0, false.
trim bool
Skip padding cells (i.e. cells shorter than 1 second) from the beginning. There exist many discs with filler segments at the beginning of the PGC, often with junk data intended for controlling a real DVD player's buffering speed and with no other material data value. Not applicable to menus. Default is 1, true.

Examples

  • Open title 3 from a given DVD structure:
    ffmpeg -f dvdvideo -title 3 -i <path to DVD> ...
    
  • Open chapters 3-6 from title 1 from a given DVD structure:
    ffmpeg -f dvdvideo -chapter_start 3 -chapter_end 6 -title 1 -i <path to DVD> ...
    
  • Open only chapter 5 from title 1 from a given DVD structure:
    ffmpeg -f dvdvideo -chapter_start 5 -chapter_end 5 -title 1 -i <path to DVD> ...
    
  • Demux menu with language 1 from VTS 1, PGC 1, starting at PG 1:
    ffmpeg -f dvdvideo -menu 1 -menu_lu 1 -menu_vts 1 -pgc 1 -pg 1 -i <path to DVD> ...
    

Electronic Arts Multimedia format demuxer.

This format is used by various Electronic Arts games.

Options

Normally the VP6 alpha channel (if it exists) is returned as a secondary video stream; by setting this option you can make the demuxer return a single video stream which contains the alpha channel in addition to the ordinary video.

Interoperable Master Format demuxer.

This demuxer presents audio and video streams found in an IMF Composition, as specified in https://doi.org/10.5594/SMPTE.ST2067-2.2020.

ffmpeg [-assetmaps <path of ASSETMAP1>,<path of ASSETMAP2>,...] -i <path of CPL> ...

If "-assetmaps" is not specified, the demuxer looks for a file called ASSETMAP.xml in the same directory as the CPL.
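
For example, to transcode a Composition while specifying an asset map explicitly (the file names here are hypothetical):

ffmpeg -assetmaps /path/to/ASSETMAP.xml -i /path/to/CPL.xml output.mxf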

Adobe Flash Video Format demuxer.

This demuxer is used to demux FLV files and RTMP network streams. In case of live network streams, if you force the format, you may use the live_flv option instead of flv to survive timestamp discontinuities. KUX is an FLV variant used on the Youku platform.

ffmpeg -f flv -i myfile.flv ...
ffmpeg -f live_flv -i rtmp://<any.server>/anything/key ....
Allocate the streams according to the onMetaData array content.
Ignore the size of previous tag value.
Output all context of the onMetadata.

Animated GIF demuxer.

It accepts the following options:

Set the minimum valid delay between frames in hundredths of seconds. Range is 0 to 6000. Default value is 2.
Set the maximum valid delay between frames in hundredths of seconds. Range is 0 to 65535. Default value is 65535 (nearly eleven minutes), the maximum value allowed by the specification.
Set the default delay between frames in hundredths of seconds. Range is 0 to 6000. Default value is 10.
GIF files can contain information to loop a certain number of times (or infinitely). If ignore_loop is set to 1, then the loop setting from the input will be ignored and looping will not occur. If set to 0, then looping will occur and will cycle the number of times according to the GIF. Default value is 1.

For example, with the overlay filter, place an infinitely looping GIF over another video:

ffmpeg -i input.mp4 -ignore_loop 0 -i input.gif -filter_complex overlay=shortest=1 out.mkv

Note that in the above example the shortest option for overlay filter is used to end the output video at the length of the shortest input file, which in this case is input.mp4 as the GIF in this example loops infinitely.

HLS demuxer

Apple HTTP Live Streaming demuxer.

This demuxer presents all AVStreams from all variant streams. The id field is set to the bitrate variant index number. By setting the discard flags on AVStreams (by pressing 'a' or 'v' in ffplay), the caller can decide which variant streams to actually receive. The total bitrate of the variant that the stream belongs to is available in a metadata key named "variant_bitrate".

It accepts the following options:

Segment index to start live streams at (negative values are from the end).
Prefer to use #EXT-X-START if it's in the playlist instead of live_start_index.
A ','-separated list of file extensions that hls is allowed to access.
Maximum number of times an insufficient list is attempted to be reloaded. Default value is 1000.
The maximum number of times to load m3u8 when it refreshes without new segments. Default value is 1000.
Use persistent HTTP connections. Applicable only for HTTP streams. Enabled by default.
Use multiple HTTP connections for downloading HTTP segments. Enabled by default for HTTP/1.1 servers.
Use HTTP partial requests for downloading HTTP segments. 0 = disable, 1 = enable, -1 = auto. Default is auto.
Set options for the demuxer of media segments using a list of key=value pairs separated by ":".
Maximum number of times to reload a segment on error, useful when segment skip on network error is not desired. Default value is 0.
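
For example, to start playback of a live stream three segments from the end, using the live_start_index option described above (the URL is hypothetical):

ffplay -live_start_index -3 https://example.com/live/stream.m3u8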

Image file demuxer.

This demuxer reads from a list of image files specified by a pattern. The syntax and meaning of the pattern is specified by the option pattern_type.

The pattern may contain a suffix which is used to automatically determine the format of the images contained in the files.

The size, the pixel format, and the format of each image must be the same for all the files in the sequence.

This demuxer accepts the following options:

framerate
Set the frame rate for the video stream. It defaults to 25.
loop
If set to 1, loop over the input. Default value is 0.
Select the pattern type used to interpret the provided filename.

pattern_type accepts one of the following values.

Disable pattern matching, therefore the video will only contain the specified image. You should use this option if you do not want to create sequences from multiple images and your filenames may contain special pattern characters.
Select a sequence pattern type, used to specify a sequence of files indexed by sequential numbers.

A sequence pattern may contain the string "%d" or "%0Nd", which specifies the position of the characters representing a sequential number in each filename matched by the pattern. If the form "%0Nd" is used, the string representing the number in each filename is 0-padded and N is the total number of 0-padded digits representing the number. The literal character '%' can be specified in the pattern with the string "%%".

If the sequence pattern contains "%d" or "%0Nd", the first filename of the file list specified by the pattern must contain a number inclusively contained between start_number and start_number+start_number_range-1, and all the following numbers must be sequential.

For example the pattern "img-%03d.bmp" will match a sequence of filenames of the form img-001.bmp, img-002.bmp, ..., img-010.bmp, etc.; the pattern "i%%m%%g-%d.jpg" will match a sequence of filenames of the form i%m%g-1.jpg, i%m%g-2.jpg, ..., i%m%g-10.jpg, etc.

Note that the pattern does not necessarily have to contain "%d" or "%0Nd"; for example, to convert a single image file img.jpeg you can employ the command:

ffmpeg -i img.jpeg img.png
Select a glob wildcard pattern type.

The pattern is interpreted like a glob() pattern. This is only selectable if libavformat was compiled with globbing support.

Select a mixed glob wildcard/sequence pattern.

If your version of libavformat was compiled with globbing support, and the provided pattern contains at least one glob meta character among "%*?[]{}" that is preceded by an unescaped "%", the pattern is interpreted like a glob() pattern, otherwise it is interpreted like a sequence pattern.

All glob special characters "%*?[]{}" must be prefixed with "%". To escape a literal "%" you shall use "%%".

For example the pattern "foo-%*.jpeg" will match all the filenames prefixed by "foo-" and terminating with ".jpeg", and "foo-%?%?%?.jpeg" will match all the filenames prefixed with "foo-", followed by a sequence of three characters, and terminating with ".jpeg".

This pattern type is deprecated in favor of glob and sequence.

Default value is glob_sequence.

Set the pixel format of the images to read. If not specified the pixel format is guessed from the first image file in the sequence.
Set the index of the file matched by the image file pattern to start to read from. Default value is 0.
Set the index interval range to check when looking for the first image file in the sequence, starting from start_number. Default value is 5.
If set to 1, will set the frame timestamp to the modification time of the image file. Note that monotonicity of timestamps is not provided: images go in the same order as without this option. Default value is 0. If set to 2, will set the frame timestamp to the modification time of the image file in nanosecond precision.
Set the video size of the images to read. If not specified the video size is guessed from the first image file in the sequence.
If set to 1, will add two extra fields to the metadata found in input, making them also available for other filters (see drawtext filter for examples). Default value is 0. The extra fields are described below:
Corresponds to the full path to the input file being read.
Corresponds to the name of the file being read.

Examples

  • Use ffmpeg for creating a video from the images in the file sequence img-001.jpeg, img-002.jpeg, ..., assuming an input frame rate of 10 frames per second:
    ffmpeg -framerate 10 -i 'img-%03d.jpeg' out.mkv
    
  • As above, but start by reading from a file with index 100 in the sequence:
    ffmpeg -framerate 10 -start_number 100 -i 'img-%03d.jpeg' out.mkv
    
  • Read images matching the "*.png" glob pattern, that is, all the files terminating with the ".png" suffix:
    ffmpeg -framerate 10 -pattern_type glob -i "*.png" out.mkv
    

The Game Music Emu library is a collection of video game music file emulators.

See https://bitbucket.org/mpyne/game-music-emu/overview for more information.

It accepts the following options:

Set the index of which track to demux. The demuxer can only export one track. Track indexes start at 0. Default is to pick the first track. Number of tracks is exported as tracks metadata entry.
Set the sampling rate of the exported track. Range is 1000 to 999999. Default is 44100.
The demuxer buffers the entire file into memory. Adjust this value to set the maximum buffer size, which in turn, acts as a ceiling for the size of files that can be read. Default is 50 MiB.

ModPlug based module demuxer

See https://github.com/Konstanty/libmodplug

It will export one 2-channel 16-bit 44.1 kHz audio stream. Optionally, a "pal8" 16-color video stream can be exported with or without printed metadata.

It accepts the following options:

Apply a simple low-pass filter. Can be 1 (on) or 0 (off). Default is 0.
Set amount of reverb. Range 0-100. Default is 0.
Set delay in ms, clamped to 40-250 ms. Default is 0.
Apply bass expansion a.k.a. XBass or megabass. Range is 0 (quiet) to 100 (loud). Default is 0.
Set cutoff i.e. upper-bound for bass frequencies. Range is 10-100 Hz. Default is 0.
Apply a Dolby Pro-Logic surround effect. Range is 0 (quiet) to 100 (heavy). Default is 0.
Set surround delay in ms, clamped to 5-40 ms. Default is 0.
The demuxer buffers the entire file into memory. Adjust this value to set the maximum buffer size, which in turn, acts as a ceiling for the size of files that can be read. Range is 0 to 100 MiB. 0 removes buffer size limit (not recommended). Default is 5 MiB.
String which is evaluated using the eval API to assign colors to the generated video stream. Variables which can be used are "x", "y", "w", "h", "t", "speed", "tempo", "order", "pattern" and "row".
Generate video stream. Can be 1 (on) or 0 (off). Default is 0.
Set video frame width in 'chars' where one char indicates 8 pixels. Range is 20-512. Default is 30.
Set video frame height in 'chars' where one char indicates 8 pixels. Range is 20-512. Default is 30.
Print metadata on video stream. Includes "speed", "tempo", "order", "pattern", "row" and "ts" (time in ms). Can be 1 (on) or 0 (off). Default is 1.

libopenmpt based module demuxer

See https://lib.openmpt.org/libopenmpt/ for more information.

Some files have multiple subsongs (tracks); the subsong to demux can be selected with the subsong option.

It accepts the following options:

Set the subsong index. This can be either 'all', 'auto', or the index of the subsong. Subsong indexes start at 0. The default is 'auto', which lets libopenmpt choose.

Set the channel layout. Valid values are 1, 2, and 4 channel layouts. The default value is STEREO.
Set the sample rate for libopenmpt to output. Range is from 1000 to INT_MAX. The default value is 48000.
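
For example, to extract the second subsong (index 1, since subsong indexes start at 0) to a WAV file (the file names are hypothetical):

ffmpeg -subsong 1 -i music.it second_subsong.wav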

Demuxer for QuickTime File Format & ISO/IEC Base Media File Format (ISO/IEC 14496-12 or MPEG-4 Part 12, ISO/IEC 15444-12 or JPEG 2000 Part 12).

Registered extensions: mov, mp4, m4a, 3gp, 3g2, mj2, psp, m4b, ism, ismv, isma, f4v

Options

This demuxer accepts the following options:

Enable loading of external tracks, disabled by default. Enabling this can theoretically leak information in some use cases.
Allows loading of external tracks via absolute paths, disabled by default. Enabling this poses a security risk. It should only be enabled if the source is known to be non-malicious.
When seeking, identify the closest point in each stream individually and demux packets in that stream from the identified point. This can lead to a different sequence of packets compared to demuxing linearly from the beginning. Default is true.
Ignore any edit list atoms. The demuxer, by default, modifies the stream index to reflect the timeline described by the edit list. Default is false.
Modify the stream index to reflect the timeline described by the edit list. "ignore_editlist" must be set to false for this option to be effective. If both "ignore_editlist" and this option are set to false, then only the start of the stream index is modified to reflect initial dwell time or starting timestamp described by the edit list. Default is true.
Don't parse chapters. This includes GoPro 'HiLight' tags/moments. Note that chapters are only parsed when input is seekable. Default is false.
For seekable fragmented input, set fragment's starting timestamp from media fragment random access box, if present.

The following options are available:

auto
Auto-detect whether to set mfra timestamps as PTS or DTS (default)
dts
Set mfra timestamps as DTS
pts
Set mfra timestamps as PTS
0
Don't use mfra box to set timestamps
For fragmented input, set fragment's starting timestamp to "baseMediaDecodeTime" from the "tfdt" box. Default is enabled, which will prefer to use the "tfdt" box to set DTS. Disable to use the "earliest_presentation_time" from the "sidx" box. In either case, the timestamp from the "mfra" box will be used if it's available and "use_mfra_for" is set to pts or dts.
Export unrecognized boxes within the udta box as metadata entries. The first four characters of the box type are set as the key. Default is false.
Export entire contents of XMP_ box and uuid box as a string with key "xmp". Note that if "export_all" is set and this option isn't, the contents of XMP_ box are still exported but with key "XMP_". Default is false.
4-byte key required to decrypt Audible AAX and AAX+ files. See Audible AAX subsection below.
Fixed key used for handling Audible AAX/AAX+ files. It has been pre-set so should not be necessary to specify.
16-byte key, in hex, to decrypt files encrypted using ISO Common Encryption (CENC/AES-128 CTR; ISO/IEC 23001-7).
Very high sample deltas written in a trak's stts box may occasionally be intended but usually they are written in error or used to store a negative value for dts correction when treated as signed 32-bit integers. This option lets the user set an upper limit, beyond which the delta is clamped to 1. Values greater than the limit, if negative when cast to int32, are used to adjust onward dts.

Unit is the track time scale. Range is 0 to UINT_MAX. Default is "UINT_MAX - 48000*10" which allows up to a 10 second dts correction for 48 kHz audio streams while accommodating 99.9% of "uint32" range.

Interleave packets from multiple tracks at demuxer level. For badly interleaved files, this prevents playback issues caused by large gaps between packets in different tracks, as MOV/MP4 do not have packet placement requirements. However, this can cause excessive seeking on very badly interleaved files, due to seeking between tracks, so disabling it may prevent I/O issues, at the expense of playback.

Audible AAX

Audible AAX files are encrypted M4B files, and they can be decrypted by specifying a 4-byte activation secret.

ffmpeg -activation_bytes 1CEB00DA -i test.aax -vn -c:a copy output.mp4

MPEG-2 transport stream demuxer.

This demuxer accepts the following options:

Set size limit for looking up a new synchronization. Default value is 65536.
Skip PMTs for programs not defined in the PAT. Default value is 0.
Override teletext packet PTS and DTS values with the timestamps calculated from the PCR of the first program which the teletext stream is part of and is not discarded. Default value is 1, set this option to 0 if you want your teletext packet PTS and DTS values untouched.
Output option carrying the raw packet size in bytes. Shows the detected raw packet size; it cannot be set by the user.
Scan and combine all PMTs. The value is an integer with value from -1 to 1 (-1 means automatic setting, 1 means enabled, 0 means disabled). Default value is -1.
Re-use existing streams when a PMT's version is updated and elementary streams move to different PIDs. Default value is 0.
Set maximum size, in bytes, of packet emitted by the demuxer. Payloads above this size are split across multiple packets. Range is 1 to INT_MAX/2. Default is 204800 bytes.

MJPEG encapsulated in multi-part MIME demuxer.

This demuxer allows reading of MJPEG, where each frame is represented as a part of a multipart/x-mixed-replace stream.

The default implementation applies a relaxed standard to multi-part MIME boundary detection, to prevent regressions with the numerous existing endpoints that do not generate a proper MIME MJPEG stream. Turning this option on by setting it to 1 will result in a stricter check of the boundary value.

Raw video demuxer.

This demuxer allows one to read raw video data. Since there is no header specifying the assumed video parameters, the user must specify them in order to be able to decode the data correctly.

This demuxer accepts the following options:

framerate
Set input video frame rate. Default value is 25.
Set the input video pixel format. Default value is "yuv420p".
Set the input video size. This value must be specified explicitly.

For example to read a rawvideo file input.raw with ffplay, assuming a pixel format of "rgb24", a video size of "320x240", and a frame rate of 10 images per second, use the command:

ffplay -f rawvideo -pixel_format rgb24 -video_size 320x240 -framerate 10 input.raw

RCWT (Raw Captions With Time) is a format native to ccextractor, a commonly used open source tool for processing 608/708 Closed Captions (CC) sources. For more information on the format, see the ccextractor documentation.

This demuxer implements the specification as of March 2024, which has been stable and unchanged since April 2014.

Examples

  • Render CC to ASS using the built-in decoder:
    ffmpeg -i CC.rcwt.bin CC.ass
    

    Note that if your output appears to be empty, you may have to manually set the decoder's data_field option to pick the desired CC substream.

  • Convert an RCWT backup to Scenarist (SCC) format:
    ffmpeg -i CC.rcwt.bin -c:s copy CC.scc
    

    Note that the SCC format does not support all of the possible CC extensions that can be stored in RCWT (such as EIA-708).

SBaGen script demuxer.

This demuxer reads the script language used by SBaGen http://uazu.net/sbagen/ to generate binaural beats sessions. An SBG script looks like this:

-SE
a: 300-2.5/3 440+4.5/0
b: 300-2.5/0 440+4.5/3
off: -
NOW      == a
+0:07:00 == b
+0:14:00 == a
+0:21:00 == b
+0:30:00    off

An SBG script can mix absolute and relative timestamps. If the script uses either only absolute timestamps (including the script start time) or only relative ones, then its layout is fixed, and the conversion is straightforward. On the other hand, if the script mixes both kinds of timestamps, then the NOW reference for relative timestamps will be taken from the current time of day at the time the script is read, and the script layout will be frozen according to that reference. That means that if the script is directly played, the actual times will match the absolute timestamps up to the sound controller's clock accuracy, but if the user somehow pauses the playback or seeks, all times will be shifted accordingly.
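
For example, to play such a script directly (a sketch; assumes the demuxer name is sbg, forced here in case the file is not auto-detected):

ffplay -f sbg session.sbg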

JSON captions used for http://www.ted.com/.

TED does not provide links to the captions, but they can be guessed from the page. The file tools/bookmarklets.html from the FFmpeg source tree contains a bookmarklet to expose them.

This demuxer accepts the following option:

Set the start time of the TED talk, in milliseconds. The default is 15000 (15s). It is used to sync the captions with the downloadable videos, because they include a 15s intro.

Example: convert the captions to a format most players understand:

ffmpeg -i http://www.ted.com/talks/subtitles/id/1/lang/en talk1-en.srt

Vapoursynth wrapper.

Due to security concerns, Vapoursynth scripts will not be autodetected so the input format has to be forced. For ff* CLI tools, add "-f vapoursynth" before the input "-i yourscript.vpy".
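
For example, to transcode a script with ffmpeg:

ffmpeg -f vapoursynth -i yourscript.vpy output.mkv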

This demuxer accepts the following option:

The demuxer buffers the entire script into memory. Adjust this value to set the maximum buffer size, which in turn, acts as a ceiling for the size of scripts that can be read. Default is 1 MiB.

Sony Wave64 Audio demuxer.

This demuxer accepts the following options:

See the same option for the wav demuxer.

RIFF Wave Audio demuxer.

This demuxer accepts the following options:

Specify the maximum packet size in bytes for the demuxed packets. By default this is set to 0, which means that a sensible value is chosen based on the input format.

FFmpeg is able to dump metadata from media files into a simple UTF-8-encoded INI-like text file and then load it back using the metadata muxer/demuxer.

The file format is as follows:

1.
A file consists of a header and a number of metadata tags divided into sections, each on its own line.
2.
The header is a ;FFMETADATA string, followed by a version number (now 1).
3.
Metadata tags are of the form key=value
4.
Immediately after the header follows the global metadata
5.
After global metadata there may be sections with per-stream/per-chapter metadata.
6.
A section starts with the section name in uppercase (i.e. STREAM or CHAPTER) in brackets ([, ]) and ends with next section or end of file.
7.
At the beginning of a chapter section there may be an optional timebase to be used for start/end values. It must be in form TIMEBASE=num/den, where num and den are integers. If the timebase is missing then start/end times are assumed to be in nanoseconds.

Next a chapter section must contain chapter start and end times in form START=num, END=num, where num is a positive integer.

8.
Empty lines and lines starting with ; or # are ignored.
9.
Metadata keys or values containing special characters (=, ;, #, \ and a newline) must be escaped with a backslash \.
10.
Note that whitespace in metadata (e.g. foo = bar) is considered to be part of the tag (in the example above the key is "foo " with a trailing space, and the value is " bar" with a leading space).

A ffmetadata file might look like this:

;FFMETADATA1
title=bike\\shed
;this is a comment
artist=FFmpeg troll team

[CHAPTER]
TIMEBASE=1/1000
START=0
#chapter ends at 0:01:00
END=60000
title=chapter \#1
[STREAM]
title=multi\
line

By using the ffmetadata muxer and demuxer it is possible to extract metadata from an input file to an ffmetadata file, and then transcode the file into an output file with the edited ffmetadata file.

Extracting an ffmetadata file with ffmpeg goes as follows:

ffmpeg -i INPUT -f ffmetadata FFMETADATAFILE

Reinserting edited metadata information from the FFMETADATAFILE file can be done as:

ffmpeg -i INPUT -i FFMETADATAFILE -map_metadata 1 -codec copy OUTPUT

The libavformat library provides some generic global options, which can be set on all the protocols. In addition each protocol may support so-called private options, which are specific for that component.

Options may be set by specifying -option value in the FFmpeg tools, or by setting the value explicitly in the "AVFormatContext" options or using the libavutil/opt.h API for programmatic use.

The list of supported options follows:

Set a ","-separated list of allowed protocols. "ALL" matches all protocols. Protocols prefixed by "-" are disabled. All protocols are allowed by default but protocols used by an another protocol (nested protocols) are restricted to a per protocol subset.

Protocols are configured elements in FFmpeg that enable access to resources that require specific protocols.

When you configure your FFmpeg build, all the supported protocols are enabled by default. You can list all available ones using the configure option "--list-protocols".

You can disable all the protocols using the configure option "--disable-protocols", and selectively enable a protocol using the option "--enable-protocol=PROTOCOL", or you can disable a particular protocol using the option "--disable-protocol=PROTOCOL".

The option "-protocols" of the ff* tools will display the list of supported protocols.

All protocols accept the following options:

Maximum time to wait for (network) read/write operations to complete, in microseconds.

A description of the currently available protocols follows.

Advanced Message Queueing Protocol (AMQP) version 0-9-1 is a broker based publish-subscribe communication protocol.

FFmpeg must be compiled with --enable-librabbitmq to support AMQP. A separate AMQP broker must also be run. An example open-source AMQP broker is RabbitMQ.

After starting the broker, an FFmpeg client may stream data to the broker using the command:

ffmpeg -re -i input -f mpegts amqp://[[user]:[password]@]hostname[:port][/vhost]

Where hostname and port (default is 5672) form the address of the broker. The client may also set a user/password for authentication; the default for both fields is "guest". The name of the virtual host on the broker can be set with vhost; the default value is "/".

Multiple subscribers may stream from the broker using the command:

ffplay amqp://[[user]:[password]@]hostname[:port][/vhost]

In RabbitMQ all data published to the broker flows through a specific exchange, and each subscribing client has an assigned queue/buffer. When a packet arrives at an exchange, it may be copied to a client's queue depending on the exchange and routing_key fields.

The following options are supported:

Sets the exchange to use on the broker. RabbitMQ has several predefined exchanges: "amq.direct" is the default exchange, where the publisher and subscriber must have a matching routing_key; "amq.fanout" is the same as a broadcast operation (i.e. the data is forwarded to all queues on the fanout exchange independent of the routing_key); and "amq.topic" is similar to "amq.direct", but allows for more complex pattern matching (refer to the RabbitMQ documentation).
Sets the routing key. The default value is "amqp". The routing key is used on the "amq.direct" and "amq.topic" exchanges to decide whether packets are written to the queue of a subscriber.
Maximum size of each packet sent/received to the broker. Default is 131072. Minimum is 4096 and max is any large value (representable by an int). When receiving packets, this sets an internal buffer size in FFmpeg. It should be equal to or greater than the size of the published packets to the broker. Otherwise the received message may be truncated causing decoding errors.
The timeout in seconds during the initial connection to the broker. The default value is rw_timeout, or 5 seconds if rw_timeout is not set.
Sets the delivery mode of each message sent to broker. The following values are accepted:
Delivery mode set to "persistent" (2). This is the default value. Messages may be written to the broker's disk depending on its setup.
Delivery mode set to "non-persistent" (1). Messages will stay in broker's memory unless the broker is under memory pressure.

Asynchronous data filling wrapper for input stream.

Fill data in a background thread, to decouple I/O operation from demux thread.

async:<URL>
async:http://host/resource
async:cache:http://host/resource

Read BluRay playlist.

The accepted options are:

BluRay angle
Start chapter (1...N)
Playlist to read (BDMV/PLAYLIST/?????.mpls)

Examples:

Read longest playlist from BluRay mounted to /mnt/bluray:

bluray:/mnt/bluray

Read angle 2 of playlist 4 from BluRay mounted to /mnt/bluray, start from chapter 2:

-playlist 4 -angle 2 -chapter 2 bluray:/mnt/bluray

Caching wrapper for input stream.

Cache the input stream to a temporary file. It brings seeking capability to live streams.

The accepted options are:

Amount in bytes that may be read ahead when seeking isn't supported. Range is -1 to INT_MAX. -1 for unlimited. Default is 65536.

URL Syntax is

cache:<URL>
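
For example, to add seek support on top of a live HTTP stream (the URL is hypothetical):

ffplay cache:http://example.com/live/stream.ts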

Physical concatenation protocol.

Read and seek from many resources in sequence as if they were a unique resource.

A URL accepted by this protocol has the syntax:

concat:<URL1>|<URL2>|...|<URLN>

where URL1, URL2, ..., URLN are the URLs of the resources to be concatenated, each one possibly specifying a distinct protocol.

For example to read a sequence of files split1.mpeg, split2.mpeg, split3.mpeg with ffplay use the command:

ffplay concat:split1.mpeg\|split2.mpeg\|split3.mpeg

Note that you may need to escape the character "|" which is special for many shells.

Physical concatenation protocol using a line break delimited list of resources.

Read and seek from many resources in sequence as if they were a unique resource.

A URL accepted by this protocol has the syntax:

concatf:<URL>

where URL is the url containing a line break delimited list of resources to be concatenated, each one possibly specifying a distinct protocol. Special characters must be escaped with backslash or single quotes. See the "Quoting and escaping" section in the ffmpeg-utils(1) manual.

For example to read a sequence of files split1.mpeg, split2.mpeg, split3.mpeg listed in separate lines within a file split.txt with ffplay use the command:

ffplay concatf:split.txt

Where split.txt contains the lines:

split1.mpeg
split2.mpeg
split3.mpeg

AES-encrypted stream reading protocol.

The accepted options are:

Set the AES decryption key binary block from given hexadecimal representation.
Set the AES decryption initialization vector binary block from given hexadecimal representation.

Accepted URL formats:

crypto:<URL>
crypto+<URL>

Data in-line in the URI. See http://en.wikipedia.org/wiki/Data_URI_scheme.

For example, to convert a GIF file given inline with ffmpeg:

ffmpeg -i "data:image/gif;base64,R0lGODdhCAAIAMIEAAAAAAAA//8AAP//AP///////////////ywAAAAACAAIAAADF0gEDLojDgdGiJdJqUX02iB4E8Q9jUMkADs=" smiley.png

File descriptor access protocol.

The accepted syntax is:

fd: -fd <file_descriptor>

If fd is not specified, by default the stdout file descriptor will be used for writing and stdin for reading. Unlike the pipe protocol, the fd protocol has seek support if the file descriptor corresponds to a regular file. For security reasons, the fd protocol does not support passing a file descriptor via the URL.

This protocol accepts the following options:

Set I/O operation maximum block size, in bytes. Default value is "INT_MAX", which results in not limiting the requested block size. Setting this value reasonably low improves user termination request reaction time, which is valuable if data transmission is slow.
fd
Set file descriptor.
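
For example, to read input from file descriptor 3 opened by the shell (a sketch; the redirection is POSIX shell syntax and the file names are hypothetical):

ffmpeg -fd 3 -i fd: -c copy output.mkv 3<input.mkv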

File access protocol.

Read from or write to a file.

A file URL can have the form:

file:<filename>

where filename is the path of the file to read.

A URL that does not have a protocol prefix will be assumed to be a file URL. Depending on the build, a URL that looks like a Windows path with the drive letter at the beginning will also be assumed to be a file URL (usually not the case in builds for unix-like systems).

For example to read from a file input.mpeg with ffmpeg use the command:

ffmpeg -i file:input.mpeg output.mpeg

This protocol accepts the following options:

Truncate existing files on write, if set to 1. A value of 0 prevents truncating. Default value is 1.
Set I/O operation maximum block size, in bytes. Default value is "INT_MAX", which results in not limiting the requested block size. Setting this value reasonably low improves user termination request reaction time, which is valuable for files on slow medium.
If set to 1, the protocol will retry reading at the end of the file, allowing files that are still being written to be read. For this to terminate, you either need to use the rw_timeout option, or use the interrupt callback (for API users).
Controls if seekability is advertised on the file. 0 means non-seekable, -1 means auto (seekable for normal files, non-seekable for named pipes).

Many demuxers handle seekable and non-seekable resources differently, overriding this might speed up opening certain files at the cost of losing some features (e.g. accurate seeking).
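
For example, to keep reading a file that is still being written, giving up after five seconds without new data (a sketch; assumes the retry option is named follow, combined with the rw_timeout option mentioned above, in microseconds):

ffmpeg -follow 1 -rw_timeout 5000000 -i file:live.ts -c copy output.mkv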

FTP (File Transfer Protocol).

Read from or write to remote resources using FTP protocol.

The following syntax is required:

ftp://[user[:password]@]server[:port]/path/to/remote/resource.mpeg

This protocol accepts the following options.

Set timeout in microseconds of socket I/O operations used by the underlying low level operation. By default it is set to -1, which means that the timeout is not specified.
Set a user to be used for authenticating to the FTP server. This is overridden by the user in the FTP URL.
Set a password to be used for authenticating to the FTP server. This is overridden by the password in the FTP URL, or by ftp-anonymous-password if no user is set.
Password used when logging in as an anonymous user. Typically an e-mail address should be used.
Control seekability of connection during encoding. If set to 1 the resource is supposed to be seekable, if set to 0 it is assumed not to be seekable. Default value is 0.

NOTE: The protocol can be used as output, but it is not recommended unless special care is taken (tests, customized server configuration, etc.). Different FTP servers behave in different ways during seek operations, so the ff* tools may produce incomplete content due to server limitations.

Gopher protocol.

Gophers protocol.

The Gopher protocol with TLS encapsulation.

Read Apple HTTP Live Streaming compliant segmented stream as a uniform one. The M3U8 playlists describing the segments can be remote HTTP resources or local files, accessed using the standard file protocol. The nested protocol is declared by specifying "+proto" after the hls URI scheme name, where proto is either "file" or "http".

hls+http://host/path/to/remote/resource.m3u8
hls+file://path/to/local/resource.m3u8

Using this protocol is discouraged - the hls demuxer should work just as well (if not, please report the issues) and is more complete. To use the hls demuxer instead, simply use the direct URLs to the m3u8 files.

HTTP (Hyper Text Transfer Protocol).

This protocol accepts the following options:

Control seekability of connection. If set to 1 the resource is supposed to be seekable, if set to 0 it is assumed not to be seekable, if set to -1 it will try to autodetect if it is seekable. Default value is -1.
If set to 1, use chunked Transfer-Encoding for posts. Default is 1.
Set the HTTP proxy to tunnel through, e.g. http://example.com:1234.
Set custom HTTP headers, which can override the built-in default headers. The value must be a string encoding the headers.
Set a specific content type for the POST messages or for listen mode.
Override the User-Agent header. If not specified the protocol will use a string describing the libavformat build. ("Lavf/<version>")
Set the Referer header. Include 'Referer: URL' header in HTTP request.
Use persistent connections if set to 1, default is 0.
Set custom HTTP post data.
Export the MIME type.
Exports the HTTP response version number. Usually "1.0" or "1.1".
Set the cookies to be sent in future requests. The format of each cookie is the same as the value of a Set-Cookie HTTP response field. Multiple cookies can be delimited by a newline character.
If set to 1 request ICY (SHOUTcast) metadata from the server. If the server supports this, the metadata has to be retrieved by the application by reading the icy_metadata_headers and icy_metadata_packet options. The default is 1.
If the server supports ICY metadata, this contains the ICY-specific HTTP reply headers, separated by newline characters.
If the server supports ICY metadata, and icy was set to 1, this contains the last non-empty metadata packet sent by the server. It should be polled in regular intervals by applications interested in mid-stream metadata updates.
Set an exported dictionary containing Icecast metadata from the bitstream, if present. Only useful with the C API.
Set HTTP authentication type. No option for Digest, since this method requires getting nonce parameters from the server first and can't be used straight away like Basic.
Choose the HTTP authentication type automatically. This is the default.
Choose the HTTP basic authentication.

Basic authentication sends a Base64-encoded string that contains a user name and password for the client. Base64 is not a form of encryption and should be considered the same as sending the user name and password in clear text (Base64 is a reversible encoding). If a resource needs to be protected, strongly consider using an authentication scheme other than basic authentication. HTTPS/TLS should be used with basic authentication. Without these additional security enhancements, basic authentication should not be used to protect sensitive or valuable information.

Send an Expect: 100-continue header for POST. If set to 1 it will send, if set to 0 it won't, if set to -1 it will try to send if it is applicable. Default value is -1.
An exported dictionary containing the content location. Only useful with the C API.
Set initial byte offset.
Try to limit the request to bytes preceding this offset.
When used as a client option it sets the HTTP method for the request.

When used as a server option it sets the HTTP method that is going to be expected from the client(s). If the expected and the received HTTP method do not match the client will be given a Bad Request response. When unset the HTTP method is not checked for now. This will be replaced by autodetection in the future.

Reconnect automatically when disconnected before EOF is hit.
If set then eof is treated like an error and causes reconnection, this is useful for live / endless streams.
Reconnect automatically in case of TCP/TLS errors during connect.
A comma separated list of HTTP status codes to reconnect on. The list can include specific status codes (e.g. '503') or the strings '4xx' / '5xx'.
If set then even streamed/non seekable streams will be reconnected on errors.
Set the maximum delay in seconds after which to give up reconnecting.
Set the maximum number of times to retry a connection. Default unset.
Set the maximum total delay in seconds after which to give up reconnecting.
If enabled, and a Retry-After header is encountered, its requested reconnection delay will be honored, rather than using exponential backoff. Useful for 429 and 503 errors. Default enabled.
If set to 1 enables experimental HTTP server. This can be used to send data when used as an output option, or read data from a client with HTTP POST when used as an input option. If set to 2 enables experimental multi-client HTTP server. This is not yet implemented in ffmpeg.c and thus must not be used as a command line option.
# Server side (sending):
ffmpeg -i somefile.ogg -c copy -listen 1 -f ogg http://<server>:<port>

# Client side (receiving):
ffmpeg -i http://<server>:<port> -c copy somefile.ogg

# Client can also be done with wget:
wget http://<server>:<port> -O somefile.ogg

# Server side (receiving):
ffmpeg -listen 1 -i http://<server>:<port> -c copy somefile.ogg

# Client side (sending):
ffmpeg -i somefile.ogg -chunked_post 0 -c copy -f ogg http://<server>:<port>

# Client can also be done with wget:
wget --post-file=somefile.ogg http://<server>:<port>
The resource requested by a client, when the experimental HTTP server is in use.
The HTTP code returned to the client, when the experimental HTTP server is in use.
Set the threshold, in bytes, for when a readahead should be preferred over a seek and a new HTTP request. This is useful, for example, to make sure the same connection is used for reading large video packets with small audio packets in between.
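
For example, to make a client survive brief outages of a live HTTP source (a sketch; assumes the reconnection options described above are named reconnect, reconnect_streamed and reconnect_delay_max):

ffmpeg -reconnect 1 -reconnect_streamed 1 -reconnect_delay_max 10 -i https://example.com/live.ts -c copy output.mkv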

HTTP Cookies

Some HTTP requests will be denied unless cookie values are passed in with the request. The cookies option allows these cookies to be specified. At the very least, each cookie must specify a value along with a path and domain. HTTP requests that match both the domain and path will automatically include the cookie value in the HTTP Cookie header field. Multiple cookies can be delimited by a newline.

The required syntax to play a stream specifying a cookie is:

ffplay -cookies "nlqptid=nltid=tsn; path=/; domain=somedomain.com;" http://somedomain.com/somestream.m3u8

Icecast protocol (stream to Icecast servers)

This protocol accepts the following options:

Set the stream genre.
Set the stream name.
Set the stream description.
Set the stream website URL.
Set if the stream should be public. The default is 0 (not public).
Override the User-Agent header. If not specified a string of the form "Lavf/<version>" will be used.
Set the Icecast mountpoint password.
Set the stream content type. This must be set if it is different from audio/mpeg.
This enables support for Icecast versions < 2.4.0, which do not support the HTTP PUT method but only the SOURCE method.
tls
Establish a TLS (HTTPS) connection to Icecast.
icecast://[<username>[:<password>]@]<server>:<port>/<mountpoint>
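
For example, to stream an Ogg file to a mountpoint (a sketch; the hostname and credentials are hypothetical, and it assumes the content type option is named content_type):

ffmpeg -re -i input.ogg -c copy -content_type application/ogg icecast://source:hackme@icecast.example.com:8000/stream.ogg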

InterPlanetary File System (IPFS) protocol support. One can access files stored on the IPFS network through so-called gateways, which are http(s) endpoints. This protocol wraps the IPFS native protocols (ipfs:// and ipns://) to be sent to such a gateway. Users can (and should) host their own node, which means this protocol will use one's local gateway to access files on the IPFS network.

This protocol accepts the following options:

Defines the gateway to use. When not set, the protocol will first try locating the local gateway by looking at $IPFS_GATEWAY, $IPFS_PATH and "$HOME/.ipfs/", in that order.

One can use this protocol in two ways. Using IPFS:

ffplay ipfs://<hash>

Or the IPNS protocol (IPNS is mutable IPFS):

ffplay ipns://<hash>

MMS (Microsoft Media Server) protocol over TCP.

MMS (Microsoft Media Server) protocol over HTTP.

The required syntax is:

mmsh://<server>[:<port>][/<app>][/<playpath>]

MD5 output protocol.

Computes the MD5 hash of the data to be written, and on close writes this to the designated output or stdout if none is specified. It can be used to test muxers without writing an actual file.

Some examples follow.

# Write the MD5 hash of the encoded AVI file to the file output.avi.md5.
ffmpeg -i input.flv -f avi -y md5:output.avi.md5

# Write the MD5 hash of the encoded AVI file to stdout.
ffmpeg -i input.flv -f avi -y md5:

Note that some formats (typically MOV) require the output protocol to be seekable, so they will fail with the MD5 output protocol.

UNIX pipe access protocol.

Read and write from UNIX pipes.

The accepted syntax is:

pipe:[<number>]

If fd isn't specified, number is the number corresponding to the file descriptor of the pipe (e.g. 0 for stdin, 1 for stdout, 2 for stderr). If number is not specified, by default the stdout file descriptor will be used for writing, stdin for reading.

For example to read from stdin with ffmpeg:

cat test.wav | ffmpeg -i pipe:0
# ...this is the same as...
cat test.wav | ffmpeg -i pipe:

For writing to stdout with ffmpeg:

ffmpeg -i test.wav -f avi pipe:1 | cat > test.avi
# ...this is the same as...
ffmpeg -i test.wav -f avi pipe: | cat > test.avi

This protocol accepts the following options:

Set I/O operation maximum block size, in bytes. Default value is "INT_MAX", which results in not limiting the requested block size. Setting this value reasonably low improves user termination request reaction time, which is valuable if data transmission is slow.
fd
Set file descriptor.

Note that some formats (typically MOV), require the output protocol to be seekable, so they will fail with the pipe output protocol.

Pro-MPEG Code of Practice #3 Release 2 FEC protocol.

The Pro-MPEG CoP#3 FEC is a 2D parity-check forward error correction mechanism for MPEG-2 Transport Streams sent over RTP.

This protocol must be used in conjunction with the "rtp_mpegts" muxer and the "rtp" protocol.

The required syntax is:

-f rtp_mpegts -fec prompeg=<option>=<val>... rtp://<hostname>:<port>

The destination UDP ports are "port + 2" for the column FEC stream and "port + 4" for the row FEC stream.

This protocol accepts the following options:

The number of columns (4-20, LxD <= 100)
The number of rows (4-20, LxD <= 100)

Example usage:

-f rtp_mpegts -fec prompeg=l=8:d=4 rtp://<hostname>:<port>

Reliable Internet Streaming Transport protocol

The accepted options are:

Set the profile. Supported values are simple, main (the default) and advanced.
Set internal RIST buffer size in milliseconds for retransmission of data. Default value is 0 which means the librist default (1 sec). Maximum value is 30 seconds.
Size of the librist receiver output fifo in number of packets. This must be a power of 2. Defaults to 8192 (vs the librist default of 1024).
Survive in case of librist fifo buffer overrun. Default value is 0.
Set maximum packet size for sending data. 1316 by default.
Set loglevel for RIST logging messages. You only need to set this if you explicitly want to enable debug level messages or packet loss simulation, otherwise the regular loglevel is respected.
Set override of encryption secret, by default is unset.
Set encryption type, by default is disabled. Acceptable values are 128 and 256.
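
For example, to send a transport stream to a RIST receiver (a sketch; assumes a rist:// URL of the form rist://address:port, with the address and port hypothetical):

ffmpeg -re -i input -f mpegts rist://192.168.1.10:1968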

Real-Time Messaging Protocol.

The Real-Time Messaging Protocol (RTMP) is used for streaming multimedia content across a TCP/IP network.

The required syntax is:

rtmp://[<username>:<password>@]<server>[:<port>][/<app>][/<instance>][/<playpath>]

The accepted parameters are:

An optional username (mostly for publishing).
An optional password (mostly for publishing).
The address of the RTMP server.
The number of the TCP port to use (by default is 1935).
It is the name of the application to access. It usually corresponds to the path where the application is installed on the RTMP server (e.g. /ondemand/, /flash/live/, etc.). You can override the value parsed from the URI through the "rtmp_app" option, too.
It is the path or name of the resource to play with reference to the application specified in app, may be prefixed by "mp4:". You can override the value parsed from the URI through the "rtmp_playpath" option, too.
Act as a server, listening for an incoming connection.
Maximum time to wait for the incoming connection. Implies listen.

Additionally, the following parameters can be set via command line options (or in code via "AVOption"s):

Name of application to connect on the RTMP server. This option overrides the parameter specified in the URI.
Set the client buffer time in milliseconds. The default is 3000.
Extra arbitrary AMF connection parameters, parsed from a string, e.g. like "B:1 S:authMe O:1 NN:code:1.23 NS:flag:ok O:0". Each value is prefixed by a single character denoting the type, B for Boolean, N for number, S for string, O for object, or Z for null, followed by a colon. For Booleans the data must be either 0 or 1 for FALSE or TRUE, respectively. Likewise for Objects the data must be 0 or 1 to end or begin an object, respectively. Data items in subobjects may be named, by prefixing the type with 'N' and specifying the name before the value (i.e. "NB:myFlag:1"). This option may be used multiple times to construct arbitrary AMF sequences.
Specify the list of codecs the client advertises to support in an enhanced RTMP stream. This option should be set to a comma separated list of fourcc values, like "hvc1,av01,vp09" for multiple codecs or "hvc1" for only one codec. The specified list will be presented in the "fourCcLive" property of the Connect Command Message.
Version of the Flash plugin used to run the SWF player. The default is LNX 9,0,124,2. (When publishing, the default is FMLE/3.0 (compatible; <libavformat version>).)
Number of packets flushed in the same request (RTMPT only). The default is 10.
Specify that the media is a live stream. No resuming or seeking in live streams is possible. The default value is "any", which means the subscriber first tries to play the live stream specified in the playpath. If a live stream of that name is not found, it plays the recorded stream. The other possible values are "live" and "recorded".
URL of the web page in which the media was embedded. By default no value will be sent.
Stream identifier to play or to publish. This option overrides the parameter specified in the URI.
Name of live stream to subscribe to. By default no value will be sent. It is only sent if the option is specified or if rtmp_live is set to live.
SHA256 hash of the decompressed SWF file (32 bytes).
Size of the decompressed SWF file, required for SWFVerification.
URL of the SWF player for the media. By default no value will be sent.
URL to player swf file, compute hash/size automatically.
URL of the target stream. Defaults to proto://host[:port]/app.
Set TCP_NODELAY to disable Nagle's algorithm. Default value is 0.

Remark: writing to the socket is currently not optimized to minimize system calls, which reduces the efficiency / effect of TCP_NODELAY.

For example to read with ffplay a multimedia resource named "sample" from the application "vod" from an RTMP server "myserver":

ffplay rtmp://myserver/vod/sample

To publish to a password protected server, passing the playpath and app names separately:

ffmpeg -re -i <input> -f flv -rtmp_playpath some/long/path -rtmp_app long/app/name rtmp://username:password@myserver/

Encrypted Real-Time Messaging Protocol.

The Encrypted Real-Time Messaging Protocol (RTMPE) is used for streaming multimedia content within standard cryptographic primitives, consisting of Diffie-Hellman key exchange and HMACSHA256, generating a pair of RC4 keys.

Real-Time Messaging Protocol over a secure SSL connection.

The Real-Time Messaging Protocol (RTMPS) is used for streaming multimedia content across an encrypted connection.

Real-Time Messaging Protocol tunneled through HTTP.

The Real-Time Messaging Protocol tunneled through HTTP (RTMPT) is used for streaming multimedia content within HTTP requests to traverse firewalls.

Encrypted Real-Time Messaging Protocol tunneled through HTTP.

The Encrypted Real-Time Messaging Protocol tunneled through HTTP (RTMPTE) is used for streaming multimedia content within HTTP requests to traverse firewalls.

Real-Time Messaging Protocol tunneled through HTTPS.

The Real-Time Messaging Protocol tunneled through HTTPS (RTMPTS) is used for streaming multimedia content within HTTPS requests to traverse firewalls.

libsmbclient permits one to manipulate CIFS/SMB network resources.

The following syntax is required:

smb://[[domain:]user[:password@]]server[/share[/path[/file]]]

This protocol accepts the following options.

Set timeout in milliseconds of socket I/O operations used by the underlying low level operation. By default it is set to -1, which means that the timeout is not specified.
Truncate existing files on write, if set to 1. A value of 0 prevents truncating. Default value is 1.
Set the workgroup used for making connections. By default workgroup is not specified.

For more information see: http://www.samba.org/.

Secure File Transfer Protocol via libssh

Read from or write to remote resources using SFTP protocol.

The following syntax is required:

sftp://[user[:password]@]server[:port]/path/to/remote/resource.mpeg

This protocol accepts the following options.

Set timeout of socket I/O operations used by the underlying low level operation. By default it is set to -1, which means that the timeout is not specified.
Truncate existing files on write, if set to 1. A value of 0 prevents truncating. Default value is 1.
Specify the path of the file containing private key to use during authorization. By default libssh searches for keys in the ~/.ssh/ directory.

Example: Play a file stored on remote server.

ffplay sftp://user:password@server_address:22/home/user/resource.mpeg

Real-Time Messaging Protocol and its variants supported through librtmp.

Requires the presence of the librtmp headers and library during configuration. You need to explicitly configure the build with "--enable-librtmp". If enabled this will replace the native RTMP protocol.

This protocol provides most client functions and a few server functions needed to support RTMP, RTMP tunneled in HTTP (RTMPT), encrypted RTMP (RTMPE), RTMP over SSL/TLS (RTMPS) and tunneled variants of these encrypted types (RTMPTE, RTMPTS).

The required syntax is:

<rtmp_proto>://<server>[:<port>][/<app>][/<playpath>] <options>

where rtmp_proto is one of the strings "rtmp", "rtmpt", "rtmpe", "rtmps", "rtmpte", "rtmpts" corresponding to each RTMP variant, and server, port, app and playpath have the same meaning as specified for the RTMP native protocol. options contains a list of space-separated options of the form key=val.

See the librtmp manual page (man 3 librtmp) for more information.

For example, to stream a file in real-time to an RTMP server using ffmpeg:

ffmpeg -re -i myfile -f flv rtmp://myserver/live/mystream

To play the same stream using ffplay:

ffplay "rtmp://myserver/live/mystream live=1"

Real-time Transport Protocol.

The required syntax for an RTP URL is: rtp://hostname[:port][?option=val...]

port specifies the RTP port to use.

The following URL options are supported:

Set the TTL (Time-To-Live) value (for multicast only).
Set the remote RTCP port to n.
Set the local RTP port to n.
Set the local RTCP port to n.
Set max packet size (in bytes) to n.
Set the maximum UDP socket buffer size in bytes.
Do a connect() on the UDP socket (if set to 1) or not (if set to 0).
List allowed source IP addresses.
List disallowed (blocked) source IP addresses.
Send packets to the source address of the latest received packet (if set to 1) or to a default remote address (if set to 0).
Set the local RTP port to n.
Local IP address of a network interface used for sending packets or joining multicast groups.
Set timeout (in microseconds) of socket I/O operations to n.

This is a deprecated option. Instead, localrtpport should be used.

Important notes:

1.
If rtcpport is not set the RTCP port will be set to the RTP port value plus 1.
2.
If localrtpport (the local RTP port) is not set any available port will be used for the local RTP and RTCP ports.
3.
If localrtcpport (the local RTCP port) is not set it will be set to the local RTP port value plus 1.
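
For example, to multicast a transport stream with a TTL of 16 and a fixed local RTP port, using the URL options listed above:

ffmpeg -re -i input -f rtp_mpegts "rtp://239.255.0.1:5004?ttl=16&localrtpport=6000"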

Real-Time Streaming Protocol.

RTSP is not technically a protocol handler in libavformat; it is a demuxer and muxer. The demuxer supports both normal RTSP (with data transferred over RTP; this is used by e.g. Apple and Microsoft) and Real-RTSP (with data transferred over RDT).

The muxer can be used to send a stream using RTSP ANNOUNCE to a server supporting it (currently Darwin Streaming Server and Mischa Spiegelmock's https://github.com/revmischa/rtsp-server).

The required syntax for an RTSP URL is:

rtsp://<hostname>[:<port>]/<path>

Options can be set on the ffmpeg/ffplay command line, or set in code via "AVOption"s or in "avformat_open_input".

Muxer

The following options are supported.

Set RTSP transport protocols.

It accepts the following values:

udp
Use UDP as lower transport protocol.
tcp
Use TCP (interleaving within the RTSP control channel) as lower transport protocol.

Default value is 0.

Set RTSP flags.

The following values are accepted:

Use MP4A-LATM packetization instead of MPEG4-GENERIC for AAC.
Use RFC 2190 packetization instead of RFC 4629 for H.263.
Don't send RTCP sender reports.
Use mode 0 for H.264 in RTP.
Send RTCP BYE packets when finishing.

Default value is 0.

Set minimum local UDP port. Default value is 5000.
Set maximum local UDP port. Default value is 65000.
Set the maximum socket buffer size in bytes.
Set max send packet size (in bytes). Default value is 1472.

Demuxer

The following options are supported.

Do not start playing the stream immediately if set to 1. Default value is 0.
Set RTSP transport protocols.

It accepts the following values:

udp
Use UDP as lower transport protocol.
tcp
Use TCP (interleaving within the RTSP control channel) as lower transport protocol.
Use UDP multicast as lower transport protocol.
http
Use HTTP tunneling as lower transport protocol, which is useful for passing proxies.
Use HTTPS tunneling as lower transport protocol, which is useful for passing proxies and widely used for security considerations.

Multiple lower transport protocols may be specified, in that case they are tried one at a time (if the setup of one fails, the next one is tried). For the muxer, only the tcp and udp options are supported.

Set RTSP flags.

The following values are accepted:

Accept packets only from negotiated peer address and port.
Act as a server, listening for an incoming connection.
Try TCP for RTP transport first, if TCP is available as RTSP RTP transport.
Export raw MPEG-TS stream instead of demuxing. The flag will simply write out the raw stream, with the original PAT/PMT/PIDs intact.

Default value is none.

Set media types to accept from the server.

The following flags are accepted:

By default it accepts all media types.

Set minimum local UDP port. Default value is 5000.
Set maximum local UDP port. Default value is 65000.
Set maximum timeout (in seconds) to establish an initial connection. Setting listen_timeout > 0 sets rtsp_flags to listen. Default is -1 which means an infinite timeout when listen mode is set.
Set number of packets to buffer for handling of reordered packets.
Set socket TCP I/O timeout in microseconds.
Override User-Agent header. If not specified, it defaults to the libavformat identifier string.
Set the maximum socket buffer size in bytes.

When receiving data over UDP, the demuxer tries to reorder received packets (since they may arrive out of order, or packets may get lost totally). This can be disabled by setting the maximum demuxing delay to zero (via the "max_delay" field of AVFormatContext).

When watching multi-bitrate Real-RTSP streams with ffplay, the streams to display can be chosen with "-vst" n and "-ast" n for video and audio respectively, and can be switched on the fly by pressing "v" and "a".

Examples

The following examples all make use of the ffplay and ffmpeg tools.

  • Watch a stream over UDP, with a max reordering delay of 0.5 seconds:
    ffplay -max_delay 500000 -rtsp_transport udp rtsp://server/video.mp4
    
  • Watch a stream tunneled over HTTP:
    ffplay -rtsp_transport http rtsp://server/video.mp4
    
  • Send a stream in realtime to a RTSP server, for others to watch:
    ffmpeg -re -i <input> -f rtsp -muxdelay 0.1 rtsp://server/live.sdp
    
  • Receive a stream in realtime:
    ffmpeg -rtsp_flags listen -i rtsp://ownaddress/live.sdp <output>
    

Session Announcement Protocol (RFC 2974). This is not technically a protocol handler in libavformat, it is a muxer and demuxer. It is used for signalling of RTP streams, by announcing the SDP for the streams regularly on a separate port.

Muxer

The syntax for a SAP url given to the muxer is:

sap://<destination>[:<port>][?<options>]

The RTP packets are sent to destination on port port, or to port 5004 if no port is specified. options is a "&"-separated list. The following options are supported:

Specify the destination IP address for sending the announcements to. If omitted, the announcements are sent to the commonly used SAP announcement multicast address 224.2.127.254 (sap.mcast.net), or ff0e::2:7ffe if destination is an IPv6 address.
Specify the port to send the announcements on, defaults to 9875 if not specified.
Specify the time to live value for the announcements and RTP packets, defaults to 255.
If set to 1, send all RTP streams on the same port pair. If zero (the default), all streams are sent on unique ports, with each stream on a port 2 numbers higher than the previous. VLC/Live555 requires this to be set to 1, to be able to receive the stream. The RTP stack in libavformat for receiving requires all streams to be sent on unique ports.

Example command lines follow.

To broadcast a stream on the local subnet, for watching in VLC:

ffmpeg -re -i <input> -f sap sap://224.0.0.255?same_port=1

Similarly, for watching in ffplay:

ffmpeg -re -i <input> -f sap sap://224.0.0.255

And for watching in ffplay, over IPv6:

ffmpeg -re -i <input> -f sap sap://[ff0e::1:2:3:4]

Demuxer

The syntax for a SAP url given to the demuxer is:

sap://[<address>][:<port>]

address is the multicast address to listen for announcements on, if omitted, the default 224.2.127.254 (sap.mcast.net) is used. port is the port that is listened on, 9875 if omitted.

The demuxer listens for announcements on the given address and port. Once an announcement is received, it tries to receive that particular stream.

Example command lines follow.

To play back the first stream announced on the normal SAP multicast address:

ffplay sap://

To play back the first stream announced on the default IPv6 SAP multicast address:

ffplay sap://[ff0e::2:7ffe]

Stream Control Transmission Protocol.

The accepted URL syntax is:

sctp://<host>:<port>[?<options>]

The protocol accepts the following options:

If set to any value, listen for an incoming connection. By default an outgoing connection is made.
Set the maximum number of streams. By default no limit is set.
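
Example command lines (a hedged sketch: mpegts is an arbitrary choice of stream format, and host/port are placeholders). Start a listening receiver, then point a sender at it:

ffplay sctp://<hostname>:<port>?listen=1
ffmpeg -re -i <input> -f mpegts sctp://<hostname>:<port>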

Haivision Secure Reliable Transport Protocol via libsrt.

The supported syntax for a SRT URL is:

srt://<hostname>:<port>[?<options>]

options contains a list of &-separated options of the form key=val.

or

<options> srt://<hostname>:<port>

options contains a list of '-key val' options.

This protocol accepts the following options.

Connection timeout; SRT cannot connect for RTT > 1500 msec (2 handshake exchanges) with the default connect timeout of 3 seconds. This option applies to the caller and rendezvous connection modes. The connect timeout is 10 times the value set for the rendezvous mode (which can be used as a workaround for this connection problem with earlier versions).
Flight Flag Size (Window Size), in bytes. FFS is actually an internal parameter and you should set it to not less than recv_buffer_size and mss. The default value is relatively large, therefore unless you set a very large receiver buffer, you do not need to change this option. Default value is 25600.
Sender nominal input rate, in bytes per second. Used along with oheadbw, when maxbw is set to relative (0), to calculate the maximum sending rate when recovery packets are sent along with the main media stream: inputbw * (100 + oheadbw) / 100. If inputbw is not set while maxbw is set to relative (0), the actual input rate is evaluated inside the library. Default value is 0.
IP Type of Service. Applies to sender only. Default value is 0xB8.
IP Time To Live. Applies to sender only. Default value is 64.
Timestamp-based Packet Delivery Delay. Used to absorb bursts of missed packet retransmissions. This flag sets both rcvlatency and peerlatency to the same value. Note that prior to version 1.3.0 this was the only flag to set the latency; it is effectively equivalent to setting peerlatency when the side is the sender and rcvlatency when the side is the receiver. Bidirectional stream sending is not supported.
Set socket listen timeout.
Maximum sending bandwidth, in bytes per second.

-1: infinite (CSRTCC limit is 30 Mbps)
0: relative to input rate (see inputbw)
>0: absolute limit value

Default value is 0 (relative).
Connection mode. caller opens a client connection. listener starts a server to listen for incoming connections. rendezvous uses Rendez-Vous connection mode. Default value is caller.
Maximum Segment Size, in bytes. Used for buffer allocation and rate calculation using a packet counter assuming fully filled packets. The smallest MSS between the peers is used. This is 1500 by default in the overall internet. This is the maximum size of the UDP packet and can be only decreased, unless you have some unusual dedicated network settings. Default value is 1500.
If set to 1, Receiver will send `UMSG_LOSSREPORT` messages periodically until a lost packet is retransmitted or intentionally dropped. Default value is 1.
Recovery bandwidth overhead above input rate, in percent. See inputbw. Default value is 25%.
HaiCrypt Encryption/Decryption Passphrase string, length from 10 to 79 characters. The passphrase is the shared secret between the sender and the receiver. It is used to generate the Key Encrypting Key using PBKDF2 (Password-Based Key Derivation Function). It is used only if pbkeylen is non-zero. It is used on the receiver only if the received data is encrypted. The configured passphrase cannot be recovered (write-only).
If true, both connection parties must have the same password set (including empty, that is, with no encryption). If the password doesn't match or only one side is unencrypted, the connection is rejected. Default is true.
The number of packets to be transmitted after which the encryption key is switched to a new key. Default is -1. -1 means auto (0x1000000 in srt library). The range for this option is integers in the 0 - "INT_MAX".
The interval between when a new encryption key is sent and when switchover occurs. This value also applies to the subsequent interval between when switchover occurs and when the old encryption key is decommissioned. Default is -1. -1 means auto (0x1000 in srt library). The range for this option is integers in the 0 - "INT_MAX".
The sender's extra delay before dropping packets. This delay is added to the default drop delay time interval value.

Special value -1: Do not drop packets on the sender at all.

Sets the maximum declared size of a packet transferred during the single call to the sending function in Live mode. Use 0 if this value isn't used (which is default in file mode). Default is -1 (automatic), which typically means MPEG-TS; if you are going to use SRT to send any different kind of payload, such as, for example, wrapping a live stream in very small frames, then you can use a bigger maximum frame size, though not greater than 1456 bytes.
Alias for payload_size.
The latency value (as described in rcvlatency) that is set by the sender side as a minimum value for the receiver.
Sender encryption key length, in bytes. Can only be set to 0, 16, 24 or 32. Enables sender encryption if not 0. Not required on the receiver (set to 0), since the key size is obtained from the sender in the HaiCrypt handshake. Default value is 0.
The time that should elapse since the moment when the packet was sent and the moment when it's delivered to the receiver application in the receiving function. This time should be a buffer time large enough to cover the time spent for sending, unexpectedly extended RTT time, and the time needed to retransmit the lost UDP packet. The effective latency value will be the maximum of this option's value and the value of peerlatency set by the peer side. Before version 1.3.0 this option was only available as latency.
Set UDP receive buffer size, expressed in bytes.
Set UDP send buffer size, expressed in bytes.
Set raise error timeouts for read, write and connect operations. Note that the SRT library has internal timeouts which can be controlled separately, the value set here is only a cap on those.
Too-late Packet Drop. When enabled on the receiver, it skips missing packets that have not been delivered in time and delivers the following packets to the application when their time-to-play has come. It also sends a fake ACK to the sender. When enabled on the sender and enabled on the receiving peer, the sender drops the older packets that have no chance of being delivered in time. It is automatically enabled in the sender if the receiver supports it.
Set send buffer size, expressed in bytes.
Set receive buffer size, expressed in bytes.

Receive buffer must not be greater than ffs.

The value up to which the Reorder Tolerance may grow. When Reorder Tolerance is > 0, then packet loss report is delayed until that number of packets come in. Reorder Tolerance increases every time a "belated" packet has come, but it wasn't due to retransmission (that is, when UDP packets tend to come out of order), with the difference between the latest sequence and this packet's sequence, and not more than the value of this option. By default it's 0, which means that this mechanism is turned off, and the loss report is always sent immediately upon experiencing a "gap" in sequences.
The minimum SRT version that is required from the peer. A connection to a peer that does not satisfy the minimum version requirement will be rejected.

The version format in hex is 0xXXYYZZ for x.y.z in human readable form.

A string limited to 512 characters that can be set on the socket prior to connecting. This stream ID can be retrieved by the listener side from the socket that is returned from srt_accept and was connected by a socket with that stream ID set. SRT does not enforce any special interpretation of the contents of this string. This option doesn't make sense in Rendezvous connection; the result might be that simply one side will override the value from the other side, and it is a matter of luck which one wins.
Alias for streamid to avoid conflict with ffmpeg command line option.
The type of Smoother used for the transmission for that socket, which is responsible for the transmission and congestion control. The Smoother type must be exactly the same on both connecting parties, otherwise the connection is rejected.
When set, this socket uses the Message API, otherwise it uses the Buffer API. Note that in live mode (see transtype) only the message API is available. In File mode you can choose one of two modes:

Stream API (default, when this option is false). In this mode you may send as much data as you wish with one sending instruction, or even use dedicated functions that read directly from a file. The internal facility will take care of any speed and congestion control. When receiving, you can also receive as much data as desired; the data not extracted will be waiting for the next call. There is no boundary between data portions in the Stream mode.

Message API. In this mode your single sending instruction passes exactly one piece of data that has boundaries (a message). Contrary to Live mode, this message may span across multiple UDP packets and the only size limitation is that it shall fit as a whole in the sending buffer. The receiver shall use as large a buffer as necessary to receive the message, otherwise the message will not be given up. When the message is not complete (not all packets received or there was a packet loss) it will not be given up.

Sets the transmission type for the socket, in particular, setting this option sets multiple other parameters to their default values as required for a particular transmission type.

live: Set options as for live transmission. In this mode, you should send, with one sending instruction, only as much data as fits in one UDP packet, limited to the value defined first in payload_size (1316 is default in this mode). There is no speed control in this mode, only the bandwidth control, if configured, in order to not exceed the bandwidth with the overhead transmission (retransmitted and control packets).

file: Set options as for non-live transmission. See messageapi for further explanations.

The number of seconds that the socket waits for unsent data when closing. Default is -1. -1 means auto (off with 0 seconds in live mode, on with 180 seconds in file mode). The range for this option is integers in the 0 - "INT_MAX".
When true, use Timestamp-based Packet Delivery mode. The default behavior depends on the transmission type: enabled in live mode, disabled in file mode.
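
As a hedged example (host, port and input are placeholders), a listener-mode receiver can be paired with a sender left in the default caller mode:

ffplay srt://<hostname>:<port>?mode=listener
ffmpeg -re -i <input> -f mpegts srt://<hostname>:<port>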

For more information see: https://github.com/Haivision/srt.

Secure Real-time Transport Protocol.

The accepted options are:

Select input and output encoding suites.

Supported values:

AES_CM_128_HMAC_SHA1_80
SRTP_AES128_CM_HMAC_SHA1_80
AES_CM_128_HMAC_SHA1_32
SRTP_AES128_CM_HMAC_SHA1_32

Set input and output encoding parameters, which are expressed by a base64-encoded representation of a binary block. The first 16 bytes of this binary block are used as master key, the following 14 bytes are used as master salt.
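
A hedged sketch of sending an RTP/MPEG-TS stream over SRTP: the suite value comes from the list above, the base64 key/salt string and host/port are placeholders, and the option names srtp_out_suite and srtp_out_params are assumed here to select the output suite and parameters:

ffmpeg -re -i <input> -f rtp_mpegts -srtp_out_suite AES_CM_128_HMAC_SHA1_80 -srtp_out_params <base64-key-and-salt> srtp://<hostname>:<port>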

Virtually extract a segment of a file or another stream. The underlying stream must be seekable.

Accepted options:

Start offset of the extracted segment, in bytes.
End offset of the extracted segment, in bytes. If set to 0, extract till end of file.

Examples:

Extract a chapter from a DVD VOB file (start and end sectors obtained externally and multiplied by 2048):

subfile,,start,153391104,end,268142592,,:/media/dvd/VIDEO_TS/VTS_08_1.VOB

Play an AVI file directly from a TAR archive:

subfile,,start,183241728,end,366490624,,:archive.tar

Play a MPEG-TS file from start offset till end:

subfile,,start,32815239,end,0,,:video.ts

Writes the output to multiple protocols. The individual outputs are separated by |.

tee:file://path/to/local/this.avi|file://path/to/local/that.avi

Transmission Control Protocol.

The required syntax for a TCP url is:

tcp://<hostname>:<port>[?<options>]

options contains a list of &-separated options of the form key=val.

The list of supported options follows.

Listen for an incoming connection. 0 disables listen, 1 enables listen in single client mode, 2 enables listen in multi-client mode. Default value is 0.
Local IP address of a network interface used for tcp socket connect.
Local port used for tcp socket connect.
Set raise error timeout, expressed in microseconds.

This option is only relevant in read mode: if no data arrived in more than this time interval, raise error.

Set listen timeout, expressed in milliseconds.
Set receive buffer size, expressed in bytes.
Set send buffer size, expressed in bytes.
Set TCP_NODELAY to disable Nagle's algorithm. Default value is 0.

Remark: Writing to the socket is currently not optimized to minimize system calls, which reduces the efficiency / effect of TCP_NODELAY.

Set maximum segment size for outgoing TCP packets, expressed in bytes.

The following example shows how to set up a listening TCP connection with ffmpeg, which is then accessed with ffplay:

ffmpeg -i <input> -f <format> tcp://<hostname>:<port>?listen
ffplay tcp://<hostname>:<port>

Transport Layer Security (TLS) / Secure Sockets Layer (SSL)

The required syntax for a TLS/SSL url is:

tls://<hostname>:<port>[?<options>]

The following parameters can be set via command line options (or in code via "AVOption"s):

A file containing certificate authority (CA) root certificates to treat as trusted. If the linked TLS library contains a default this might not need to be specified for verification to work, but not all libraries and setups have defaults built in. The file must be in OpenSSL PEM format.
If enabled, try to verify the peer that we are communicating with. Note, if using OpenSSL, this currently only makes sure that the peer certificate is signed by one of the root certificates in the CA database, but it does not validate that the certificate actually matches the host name we are trying to connect to. (With other backends, the host name is validated as well.)

This is disabled by default since it requires a CA database to be provided by the caller in many cases.

A file containing a certificate to use in the handshake with the peer. (When operating as server, in listen mode, this is more often required by the peer, while client certificates only are mandated in certain setups.)
A file containing the private key for the certificate.
If enabled, listen for connections on the provided port, and assume the server role in the handshake instead of the client role.
The HTTP proxy to tunnel through, e.g. "http://example.com:1234". The proxy must support the CONNECT method.

Example command lines:

To create a TLS/SSL server that serves an input stream:

ffmpeg -i <input> -f <format> tls://<hostname>:<port>?listen&cert=<server.crt>&key=<server.key>

To play back a stream from the TLS/SSL server using ffplay:

ffplay tls://<hostname>:<port>

User Datagram Protocol.

The required syntax for a UDP URL is:

udp://<hostname>:<port>[?<options>]

options contains a list of &-separated options of the form key=val.

In case threading is enabled on the system, a circular buffer is used to store the incoming data, which allows one to reduce loss of data due to UDP socket buffer overruns. The fifo_size and overrun_nonfatal options are related to this buffer.

The list of supported options follows.

Set the UDP maximum socket buffer size in bytes. This is used to set either the receive or send buffer size, depending on what the socket is used for. Default is 32 KB for output, 384 KB for input. See also fifo_size.
If set to nonzero, the output will have the specified constant bitrate if the input has enough packets to sustain it.
When using bitrate this specifies the maximum number of bits in packet bursts.
Override the local UDP port to bind with.
Local IP address of a network interface used for sending packets or joining multicast groups.
Set the size in bytes of UDP packets.
Explicitly allow or disallow reusing UDP sockets.
Set the time to live value (for multicast only).
Initialize the UDP socket with connect(). In this case, the destination address can't be changed with ff_udp_set_remote_url later. If the destination address isn't known at the start, this option can be specified in ff_udp_set_remote_url, too. This allows finding out the source address for the packets with getsockname, and makes writes return with AVERROR(ECONNREFUSED) if "destination unreachable" is received. For receiving, this gives the benefit of only receiving packets from the specified peer address/port.
Only receive packets sent from the specified addresses. In case of multicast, also subscribe to multicast traffic coming from these addresses only.
Ignore packets sent from the specified addresses. In case of multicast, also exclude the source addresses in the multicast subscription.
Set the UDP receiving circular buffer size, expressed as a number of packets with size of 188 bytes. If not specified, it defaults to 7*4096.
Survive in case of UDP receiving circular buffer overrun. Default value is 0.
Set raise error timeout, expressed in microseconds.

This option is only relevant in read mode: if no data arrived in more than this time interval, raise error.

Explicitly allow or disallow UDP broadcasting.

Note that broadcasting may not work properly on networks having a broadcast storm protection.

Examples

  • Use ffmpeg to stream over UDP to a remote endpoint:
    ffmpeg -i <input> -f <format> udp://<hostname>:<port>
    
  • Use ffmpeg to stream in mpegts format over UDP using 188 sized UDP packets, using a large input buffer:
    ffmpeg -i <input> -f mpegts udp://<hostname>:<port>?pkt_size=188&buffer_size=65535
    
  • Use ffmpeg to receive over UDP from a remote endpoint:
    ffmpeg -i udp://[<multicast-address>]:<port> ...
    

Unix local socket

The required syntax for a Unix socket URL is:

unix://<filepath>

The following parameters can be set via command line options (or in code via "AVOption"s):

Timeout in ms.
Create the Unix socket in listening mode.
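
A minimal hedged sketch (the path is a placeholder; a peer is assumed to be already listening on that socket, otherwise the listen option must be set on the receiving side):

ffmpeg -re -i <input> -f mpegts unix://<filepath>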

ZeroMQ asynchronous messaging using the libzmq library.

This library supports unicast streaming to multiple clients without relying on an external server.

The required syntax for streaming or connecting to a stream is:

zmq:tcp://ip-address:port

Example: Create a localhost stream on port 5555:

ffmpeg -re -i input -f mpegts zmq:tcp://127.0.0.1:5555

Multiple clients may connect to the stream using:

ffplay zmq:tcp://127.0.0.1:5555

Streaming to multiple clients is implemented using a ZeroMQ Pub-Sub pattern. The server side binds to a port and publishes data. Clients connect to the server (via IP address/port) and subscribe to the stream. The order in which the server and client start generally does not matter.

ffmpeg must be compiled with the --enable-libzmq option to support this protocol.

Options can be set on the ffmpeg/ffplay command line. The following options are supported:

Forces the maximum packet size for sending/receiving data. The default value is 131,072 bytes. On the server side, this sets the maximum size of sent packets via ZeroMQ. On the clients, it sets an internal buffer size for receiving packets. Note that pkt_size on the clients should be equal to or greater than pkt_size on the server. Otherwise the received message may be truncated causing decoding errors.

The libavdevice library provides the same interface as libavformat. Namely, an input device is considered like a demuxer, and an output device like a muxer, and the interface and generic device options are the same provided by libavformat (see the ffmpeg-formats manual).

In addition each input or output device may support so-called private options, which are specific for that component.

Options may be set by specifying -option value in the FFmpeg tools, or by setting the value explicitly in the device "AVFormatContext" options or using the libavutil/opt.h API for programmatic use.

Input devices are configured elements in FFmpeg which enable accessing the data coming from a multimedia device attached to your system.

When you configure your FFmpeg build, all the supported input devices are enabled by default. You can list all available ones using the configure option "--list-indevs".

You can disable all the input devices using the configure option "--disable-indevs", and selectively enable an input device using the option "--enable-indev=INDEV", or you can disable a particular input device using the option "--disable-indev=INDEV".

The option "-devices" of the ff* tools will display the list of supported input devices.
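
For example:

ffmpeg -devices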

A description of the currently available input devices follows.

ALSA (Advanced Linux Sound Architecture) input device.

To enable this input device during configuration you need libasound installed on your system.

This device allows capturing from an ALSA device. The name of the device to capture has to be an ALSA card identifier.

An ALSA identifier has the syntax:

hw:<CARD>[,<DEV>[,<SUBDEV>]]

where the DEV and SUBDEV components are optional.

The three arguments (in order: CARD,DEV,SUBDEV) specify card number or identifier, device number and subdevice number (-1 means any).

To see the list of cards currently recognized by your system check the files /proc/asound/cards and /proc/asound/devices.

For example to capture with ffmpeg from an ALSA device with card id 0, you may run the command:

ffmpeg -f alsa -i hw:0 alsaout.wav

For more information see: http://www.alsa-project.org/alsa-doc/alsa-lib/pcm.html

Options

Set the sample rate in Hz. Default is 48000.
Set the number of channels. Default is 2.
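
For example, a hedged variation of the capture command above that sets both options explicitly (the values are illustrative):

ffmpeg -f alsa -sample_rate 44100 -channels 1 -i hw:0 alsaout.wav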

Android camera input device.

This input device uses the Android Camera2 NDK API which is available on devices with API level 24+. The availability of android_camera is autodetected during configuration.

This device allows capturing from all cameras on an Android device, which are integrated into the Camera2 NDK API.

The available cameras are enumerated internally and can be selected with the camera_index parameter. The input file string is discarded.

Generally the back facing camera has index 0 while the front facing camera has index 1.

Options

Set the video size given as a string such as 640x480 or hd720. If the requested video size is not available (or none is requested), it falls back to the first available configuration reported by Android.
framerate
Set the video framerate. If the requested framerate is not available (or with the default value of -1), it falls back to the first available configuration reported by Android.
Set the index of the camera to use. Default is 0.
Set the maximum number of frames to buffer. Default is 5.
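
A hedged example (the size and rate values are illustrative, and the input string is a discarded placeholder as noted above):

ffmpeg -f android_camera -camera_index 1 -video_size hd720 -framerate 30 -i discarded out.mp4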

AVFoundation input device.

AVFoundation is Apple's currently recommended framework for stream grabbing on OSX >= 10.7 as well as on iOS.

The input filename has to be given in the following syntax:

-i "[[VIDEO]:[AUDIO]]"

The first entry selects the video input while the latter selects the audio input. The stream has to be specified by the device name or the device index as shown by the device list. Alternatively, the video and/or audio input device can be chosen by index using the

-video_device_index <INDEX>

and/or

-audio_device_index <INDEX>

options, overriding any device name or index given in the input filename.

All available devices can be enumerated by using -list_devices true, listing all device names and corresponding indices.

There are two device name aliases:

"default"
Select the AVFoundation default device of the corresponding type.
"none"
Do not record the corresponding media type. This is equivalent to specifying an empty device name or index.

Options

AVFoundation supports the following options:

If set to true, a list of all available input devices is given showing all device names and indices.
Specify the video device by its index. Overrides anything given in the input filename.
Specify the audio device by its index. Overrides anything given in the input filename.
Request the video device to use a specific pixel format. If the specified format is not supported, a list of available formats is given and the first one in this list is used instead. Available pixel formats are: "monob, rgb555be, rgb555le, rgb565be, rgb565le, rgb24, bgr24, 0rgb, bgr0, 0bgr, rgb0, bgr48be, uyvy422, yuva444p, yuva444p16le, yuv444p, yuv422p16, yuv422p10, yuv444p10, yuv420p, nv12, yuyv422, gray"
framerate
Set the grabbing frame rate. Default is "ntsc", corresponding to a frame rate of "30000/1001".
Set the video frame size.
Capture the mouse pointer. Default is 0.
Capture the screen mouse clicks. Default is 0.
Capture the raw device data. Default is 0. Using this option may result in receiving the underlying data delivered to the AVFoundation framework. E.g. for muxed devices that send raw DV data to the framework (like tape-based camcorders), setting this option to false results in extracted video frames captured in the designated pixel format only. Setting this option to true results in receiving the raw DV stream untouched.

Examples

  • Print the list of AVFoundation supported devices and exit:
    $ ffmpeg -f avfoundation -list_devices true -i ""
    
  • Record video from video device 0 and audio from audio device 0 into out.avi:
    $ ffmpeg -f avfoundation -i "0:0" out.avi
    
  • Record video from video device 2 and audio from audio device 1 into out.avi:
    $ ffmpeg -f avfoundation -video_device_index 2 -i ":1" out.avi
    
  • Record video from the system default video device using the pixel format bgr0 and do not record any audio into out.avi:
    $ ffmpeg -f avfoundation -pixel_format bgr0 -i "default:none" out.avi
    
  • Record raw DV data from a suitable input device and write the output into out.dv:
    $ ffmpeg -f avfoundation -capture_raw_data true -i "zr100:none" out.dv
    

BSD video input device. Deprecated and will be removed - please contact the developers if you are interested in maintaining it.

Options

framerate
Set the frame rate.
Set the video frame size. Default is "vga".
Available values are:

The decklink input device provides capture capabilities for Blackmagic DeckLink devices.

To enable this input device, you need the Blackmagic DeckLink SDK and you need to configure with the appropriate "--extra-cflags" and "--extra-ldflags". On Windows, you need to run the IDL files through widl.

DeckLink is very picky about the formats it supports. Pixel format of the input can be set with raw_format. Framerate and video size must be determined for your device with -list_formats 1. Audio sample rate is always 48 kHz and the number of channels can be 2, 8 or 16. Note that all audio channels are bundled in one single audio track.

Options

If set to true, print a list of devices and exit. Defaults to false. This option is deprecated, please use the "-sources" option of ffmpeg to list the available input devices.
If set to true, print a list of supported formats and exit. Defaults to false.
This sets the input video format to the format given by the FourCC. To see the supported values of your device(s) use list_formats. Note that there is a FourCC 'pal ' that can also be used as pal (3 letters). Default behavior is autodetection of the input video format, if the hardware supports it.
Set the pixel format of the captured video. Available values are:
This is the default which means 8-bit YUV 422 or 8-bit ARGB if format autodetection is used, 8-bit YUV 422 otherwise.
8-bit YUV 422.
10-bit YUV 422.
8-bit RGB.
8-bit RGB.
10-bit RGB.
If set to nonzero, an additional teletext stream will be captured from the vertical ancillary data. Both SD PAL (576i) and HD (1080i or 1080p) sources are supported. In case of HD sources, OP47 packets are decoded.

This option is a bitmask of the SD PAL VBI lines captured, specifically lines 6 to 22, and lines 318 to 335. Line 6 is the LSB in the mask. Selected lines which do not contain teletext information will be ignored. You can use the special all constant to select all possible lines, or standard to skip lines 6, 318 and 319, which are not compatible with all receivers.

For SD sources, ffmpeg needs to be compiled with "--enable-libzvbi". For HD sources, on older (pre-4K) DeckLink card models you have to capture in 10 bit mode.

Defines number of audio channels to capture. Must be 2, 8 or 16. Defaults to 2.
Sets the decklink device duplex/profile mode. Must be unset, half, full, one_sub_device_full, one_sub_device_half, two_sub_device_full, four_sub_device_half. Defaults to unset.

Note: DeckLink SDK 11.0 has replaced the duplex property by a profile property. For the DeckLink Duo 2 and DeckLink Quad 2, a profile is shared between any 2 sub-devices that utilize the same connectors. For the DeckLink 8K Pro, a profile is shared between all 4 sub-devices. So the DeckLink 8K Pro supports four profiles.

Valid profile modes for DeckLink 8K Pro (with DeckLink SDK >= 11.0): one_sub_device_full, one_sub_device_half, two_sub_device_full, four_sub_device_half

Valid profile modes for DeckLink Quad 2 and DeckLink Duo 2: half, full

Timecode type to include in the frame and video stream metadata. Must be none, rp188vitc, rp188vitc2, rp188ltc, rp188hfr, rp188any, vitc, vitc2, or serial. Defaults to none (not included).

In order to properly support 50/60 fps timecodes, the ordering of the queried timecode types for rp188any is HFR, VITC1, VITC2 and LTC for >30 fps content. Note that this is slightly different to the ordering used by the DeckLink API, which is HFR, VITC1, LTC, VITC2.

Sets the video input source. Must be unset, sdi, hdmi, optical_sdi, component, composite or s_video. Defaults to unset.
Sets the audio input source. Must be unset, embedded, aes_ebu, analog, analog_xlr, analog_rca or microphone. Defaults to unset.
Sets the video packet timestamp source. Must be video, audio, reference, wallclock or abs_wallclock. Defaults to video.
Sets the audio packet timestamp source. Must be video, audio, reference, wallclock or abs_wallclock. Defaults to audio.
If set to true, color bars are drawn in the event of a signal loss. Defaults to true. This option is deprecated, please use the "signal_loss_action" option.
Sets the action to take in the event of a signal loss. Accepts one of the following values:
1, none
Do nothing on signal loss. This usually results in black frames.
2, bars
Draw color bars on signal loss. Only supported for 8-bit input signals.
3, repeat
Repeat the last video frame on signal loss.

Defaults to bars.

Sets maximum input buffer size in bytes. If the buffering reaches this value, incoming frames will be dropped. Defaults to 1073741824.
Sets the audio sample bit depth. Must be 16 or 32. Defaults to 16.
If set to true, timestamps are forwarded as they are without removing the initial offset. Defaults to false.
Capture start time alignment in seconds. If set to nonzero, input frames are dropped till the system timestamp aligns with configured value. Alignment difference of up to one frame duration is tolerated. This is useful for maintaining input synchronization across N different hardware devices deployed for 'N-way' redundancy. The system time of different hardware devices should be synchronized with protocols such as NTP or PTP, before using this option. Note that this method is not foolproof. In some border cases input synchronization may not happen due to thread scheduling jitters in the OS. Either sync could go wrong by 1 frame or in a rarer case timestamp_align seconds. Defaults to 0.
Drop frames till a frame with timecode is received. Sometimes serial timecode isn't received with the first input frame. If that happens, the stored stream timecode will be inaccurate. If this option is set to true, input frames are dropped till a frame with timecode is received. Option timecode_format must be specified. Defaults to false.
If set to true, extracts KLV data from VANC and outputs KLV packets. KLV VANC packets are joined based on MID and PSC fields and aggregated into one KLV packet. Defaults to false.

Examples

  • List input devices:
    ffmpeg -sources decklink
    
  • List supported formats:
    ffmpeg -f decklink -list_formats 1 -i 'Intensity Pro'
    
  • Capture video clip at 1080i50:
    ffmpeg -format_code Hi50 -f decklink -i 'Intensity Pro' -c:a copy -c:v copy output.avi
    
  • Capture video clip at 1080i50 10 bit:
    ffmpeg -raw_format yuv422p10 -format_code Hi50 -f decklink -i 'UltraStudio Mini Recorder' -c:a copy -c:v copy output.avi
    
  • Capture video clip at 1080i50 with 16 audio channels:
    ffmpeg -channels 16 -format_code Hi50 -f decklink -i 'UltraStudio Mini Recorder' -c:a copy -c:v copy output.avi
    

Windows DirectShow input device.

DirectShow support is enabled when FFmpeg is built with the mingw-w64 project. Currently only audio and video devices are supported.

Multiple devices may be opened as separate inputs, but they may also be opened on the same input, which should improve synchronism between them.

The input name should be in the format:

<TYPE>=<NAME>[:<TYPE>=<NAME>]

where TYPE can be either audio or video, and NAME is the device's name or alternative name.

Options

If no options are specified, the device's defaults are used. If the device does not support the requested options, it will fail to open.

Set the video size in the captured video.
framerate
Set the frame rate in the captured video.
Set the sample rate (in Hz) of the captured audio.
Set the sample size (in bits) of the captured audio.
Set the number of channels in the captured audio.
If set to true, print a list of devices and exit.
If set to true, print a list of selected device's options and exit.
Set video device number for devices with the same name (starts at 0, defaults to 0).
Set audio device number for devices with the same name (starts at 0, defaults to 0).
Select pixel format to be used by DirectShow. This may only be set when the video codec is not set or set to rawvideo.
Set audio device buffer size in milliseconds (which can directly impact latency, depending on the device). Defaults to using the audio device's default buffer size (typically some multiple of 500ms). Setting this value too low can degrade performance. See also http://msdn.microsoft.com/en-us/library/windows/desktop/dd377582(v=vs.85).aspx
Select video capture pin to use by name or alternative name.
Select audio capture pin to use by name or alternative name.
Select video input pin number for crossbar device. This will be routed to the crossbar device's Video Decoder output pin. Note that changing this value can affect future invocations (sets a new default) until system reboot occurs.
Select audio input pin number for crossbar device. This will be routed to the crossbar device's Audio Decoder output pin. Note that changing this value can affect future invocations (sets a new default) until system reboot occurs.
If set to true, before capture starts, popup a display dialog to the end user, allowing them to change video filter properties and configurations manually. Note that for crossbar devices, adjusting values in this dialog may be needed at times to toggle between PAL (25 fps) and NTSC (29.97) input frame rates, sizes, interlacing, etc. Changing these values can enable different scan rates/frame rates and avoid green bars at the bottom, flickering scan lines, etc. Note that with some devices, changing these properties can also affect future invocations (sets new defaults) until system reboot occurs.
If set to true, before capture starts, popup a display dialog to the end user, allowing them to change audio filter properties and configurations manually.
If set to true, before capture starts, popup a display dialog to the end user, allowing them to manually modify crossbar pin routings, when it opens a video device.
If set to true, before capture starts, popup a display dialog to the end user, allowing them to manually modify crossbar pin routings, when it opens an audio device.
If set to true, before capture starts, popup a display dialog to the end user, allowing them to manually modify TV channels and frequencies.
If set to true, before capture starts, popup a display dialog to the end user, allowing them to manually modify TV audio (like mono vs. stereo, Language A,B or C).
Load an audio capture filter device from a file instead of searching for it by name. It may load additional parameters too, if the filter supports serialization of its properties. To use this, an audio capture source has to be specified, but it can be anything, even a fake one.
Save the currently used audio capture filter device and its parameters (if the filter supports it) to a file. If a file with the same name exists it will be overwritten.
Load a video capture filter device from a file instead of searching for it by name. It may load additional parameters too, if the filter supports serialization of its properties. To use this, a video capture source has to be specified, but it can be anything, even a fake one.
Save the currently used video capture filter device and its parameters (if the filter supports it) to a file. If a file with the same name exists it will be overwritten.
If set to false, the timestamp for video frames will be derived from the wallclock instead of the timestamp provided by the capture device. This allows working around devices that provide unreliable timestamps.

Examples

  • Print the list of DirectShow supported devices and exit:
    $ ffmpeg -list_devices true -f dshow -i dummy
    
  • Open video device Camera:
    $ ffmpeg -f dshow -i video="Camera"
    
  • Open second video device with name Camera:
    $ ffmpeg -f dshow -video_device_number 1 -i video="Camera"
    
  • Open video device Camera and audio device Microphone:
    $ ffmpeg -f dshow -i video="Camera":audio="Microphone"
    
  • Print the list of supported options in selected device and exit:
    $ ffmpeg -list_options true -f dshow -i video="Camera"
    
  • Specify pin names to capture by name or alternative name, specify alternative device name:
    $ ffmpeg -f dshow -audio_pin_name "Audio Out" -video_pin_name 2 -i video=video="@device_pnp_\\?\pci#ven_1a0a&dev_6200&subsys_62021461&rev_01#4&e2c7dd6&0&00e1#{65e8773d-8f56-11d0-a3b9-00a0c9223196}\{ca465100-deb0-4d59-818f-8c477184adf6}":audio="Microphone"
    
  • Configure a crossbar device, specifying crossbar pins, allow user to adjust video capture properties at startup:
    $ ffmpeg -f dshow -show_video_device_dialog true -crossbar_video_input_pin_number 0
         -crossbar_audio_input_pin_number 3 -i video="AVerMedia BDA Analog Capture":audio="AVerMedia BDA Analog Capture"
    

Linux framebuffer input device.

The Linux framebuffer is a graphic hardware-independent abstraction layer to show graphics on a computer monitor, typically on the console. It is accessed through a file device node, usually /dev/fb0.

For more detailed information read the file Documentation/fb/framebuffer.txt included in the Linux source tree.

See also http://linux-fbdev.sourceforge.net/, and fbset(1).

To record from the framebuffer device /dev/fb0 with ffmpeg:

ffmpeg -f fbdev -framerate 10 -i /dev/fb0 out.avi

You can take a single screenshot image with the command:

ffmpeg -f fbdev -framerate 1 -i /dev/fb0 -frames:v 1 screenshot.jpeg

Options

framerate
Set the frame rate. Default is 25.

Win32 GDI-based screen capture device.

This device allows you to capture a region of the display on Windows.

The input filename may take one of the following forms:

desktop

or

title=<window_title>

or

hwnd=<window_hwnd>

The first option will capture the entire desktop, or a fixed region of the desktop. The second and third options will instead capture the contents of a single window, regardless of its position on the screen.

For example, to grab the entire desktop using ffmpeg:

ffmpeg -f gdigrab -framerate 6 -i desktop out.mpg

Grab a 640x480 region at position "10,20":

ffmpeg -f gdigrab -framerate 6 -offset_x 10 -offset_y 20 -video_size vga -i desktop out.mpg

Grab the contents of the window named "Calculator":

ffmpeg -f gdigrab -framerate 6 -i title=Calculator out.mpg

Options

Specify whether to draw the mouse pointer. Use the value 0 to not draw the pointer. Default value is 1.
framerate
Set the grabbing frame rate. Default value is "ntsc", corresponding to a frame rate of "30000/1001".
Show grabbed region on screen.

If show_region is specified with 1, then the grabbing region will be indicated on screen. With this option, it is easy to know what is being grabbed if only a portion of the screen is grabbed.

Note that show_region is incompatible with grabbing the contents of a single window.

For example:

ffmpeg -f gdigrab -show_region 1 -framerate 6 -video_size cif -offset_x 10 -offset_y 20 -i desktop out.mpg

Set the video frame size. The default is to capture the full screen if desktop is selected, or the full window size if title=window_title is selected.
When capturing a region with video_size, set the distance from the left edge of the screen or desktop.

Note that the offset calculation is from the top left corner of the primary monitor on Windows. If you have a monitor positioned to the left of your primary monitor, you will need to use a negative offset_x value to move the region to that monitor.

When capturing a region with video_size, set the distance from the top edge of the screen or desktop.

Note that the offset calculation is from the top left corner of the primary monitor on Windows. If you have a monitor positioned above your primary monitor, you will need to use a negative offset_y value to move the region to that monitor.

FireWire DV/HDV input device using libiec61883.

To enable this input device, you need libiec61883, libraw1394 and libavc1394 installed on your system. Use the configure option "--enable-libiec61883" to compile with the device enabled.

The iec61883 capture device supports capturing from a video device connected via IEEE1394 (FireWire), using libiec61883 and the new Linux FireWire stack (juju). This is the default DV/HDV input method in Linux Kernel 2.6.37 and later, since the old FireWire stack was removed.

Specify the FireWire port to be used as input file, or "auto" to choose the first port connected.

Options

Override autodetection of DV/HDV. This should only be used if auto detection does not work, or if usage of a different device type should be prohibited. Treating a DV device as HDV (or vice versa) will not work and result in undefined behavior. The values auto, dv and hdv are supported.
Set maximum size of buffer for incoming data, in frames. For DV, this is an exact value. For HDV, it is not frame exact, since HDV does not have a fixed frame size.
Select the capture device by specifying its GUID. Capturing will only be performed from the specified device and fails if no device with the given GUID is found. This is useful to select the input if multiple devices are connected at the same time. Look at /sys/bus/firewire/devices to find out the GUIDs.

Examples

  • Grab and show the input of a FireWire DV/HDV device.
    ffplay -f iec61883 -i auto
    
  • Grab and record the input of a FireWire DV/HDV device, using a packet buffer of 100000 packets if the source is HDV.
    ffmpeg -f iec61883 -i auto -dvbuffer 100000 out.mpg
    

JACK input device.

To enable this input device during configuration you need libjack installed on your system.

A JACK input device creates one or more JACK writable clients, one for each audio channel, with name client_name:input_N, where client_name is the name provided by the application, and N is a number which identifies the channel. Each writable client will send the acquired data to the FFmpeg input device.

Once you have created one or more JACK readable clients, you need to connect them to one or more JACK writable clients.

To connect or disconnect JACK clients you can use the jack_connect and jack_disconnect programs, or do it through a graphical interface, for example with qjackctl.

To list the JACK clients and their properties you can invoke the command jack_lsp.

The following example shows how to capture a JACK readable client with ffmpeg.

# Create a JACK writable client with name "ffmpeg".
$ ffmpeg -f jack -i ffmpeg -y out.wav

# Start the sample jack_metro readable client.
$ jack_metro -b 120 -d 0.2 -f 4000

# List the current JACK clients.
$ jack_lsp -c
system:capture_1
system:capture_2
system:playback_1
system:playback_2
ffmpeg:input_1
metro:120_bpm

# Connect metro to the ffmpeg writable client.
$ jack_connect metro:120_bpm ffmpeg:input_1

For more information read: http://jackaudio.org/

Options

Set the number of channels. Default is 2.

KMS video input device.

Captures the KMS scanout framebuffer associated with a specified CRTC or plane as a DRM object that can be passed to other hardware functions.

Requires either DRM master or CAP_SYS_ADMIN to run.

If you don't understand what all of that means, you probably don't want this. Look at x11grab instead.

Options

DRM device to capture on. Defaults to /dev/dri/card0.
format
Pixel format of the framebuffer. This can be autodetected if you are running Linux 5.7 or later, but needs to be provided for earlier versions. Defaults to bgr0, which is the most common format used by the Linux console and Xorg X server.
Format modifier to signal on output frames. This is necessary to import correctly into some APIs. It can be autodetected if you are running Linux 5.7 or later, but will need to be provided explicitly when needed in earlier versions. See the libdrm documentation for possible values.
KMS CRTC ID to define the capture source. The first active plane on the given CRTC will be used.
KMS plane ID to define the capture source. Defaults to the first active plane found if neither crtc_id nor plane_id are specified.
framerate
Framerate to capture at. This is not synchronised to any page flipping or framebuffer changes - it just defines the interval at which the framebuffer is sampled. Sampling faster than the framebuffer update rate will generate independent frames with the same content. Defaults to 30.

Examples

  • Capture from the first active plane, download the result to normal frames and encode. This will only work if the framebuffer is both linear and mappable - if not, the result may be scrambled or fail to download.
    ffmpeg -f kmsgrab -i - -vf 'hwdownload,format=bgr0' output.mp4
    
  • Capture from CRTC ID 42 at 60fps, map the result to VAAPI, convert to NV12 and encode as H.264.
    ffmpeg -crtc_id 42 -framerate 60 -f kmsgrab -i - -vf 'hwmap=derive_device=vaapi,scale_vaapi=w=1920:h=1080:format=nv12' -c:v h264_vaapi output.mp4
    
  • To capture only part of a plane the output can be cropped - this can be used to capture a single window, as long as it has a known absolute position and size. For example, to capture and encode the middle quarter of a 1920x1080 plane:
    ffmpeg -f kmsgrab -i - -vf 'hwmap=derive_device=vaapi,crop=960:540:480:270,scale_vaapi=960:540:nv12' -c:v h264_vaapi output.mp4
    

Libavfilter input virtual device.

This input device reads data from the open output pads of a libavfilter filtergraph.

For each filtergraph open output, the input device will create a corresponding stream which is mapped to the generated output. The filtergraph is specified through the option graph.

Options

Specify the filtergraph to use as input. Each video open output must be labelled by a unique string of the form "outN", where N is a number starting from 0 corresponding to the mapped input stream generated by the device. The first unlabelled output is automatically assigned to the "out0" label, but all the others need to be specified explicitly.

The suffix "+subcc" can be appended to the output label to create an extra stream with the closed captions packets attached to that output (experimental; only for EIA-608 / CEA-708 for now). The subcc streams are created after all the normal streams, in the order of the corresponding stream. For example, if there is "out19+subcc", "out7+subcc" and up to "out42", the stream #43 is subcc for stream #7 and stream #44 is subcc for stream #19.

If not specified, it defaults to the filename specified for the input device.

Set the filename of the filtergraph to be read and sent to the other filters. Syntax of the filtergraph is the same as the one specified by the option graph.
Dump graph to stderr.

Examples

  • Create a color video stream and play it back with ffplay:
    ffplay -f lavfi -graph "color=c=pink [out0]" dummy
    
  • As the previous example, but use filename for specifying the graph description, and omit the "out0" label:
    ffplay -f lavfi color=c=pink
    
  • Create three different video test filtered sources and play them:
    ffplay -f lavfi -graph "testsrc [out0]; testsrc,hflip [out1]; testsrc,negate [out2]" test3
    
  • Read an audio stream from a file using the amovie source and play it back with ffplay:
    ffplay -f lavfi "amovie=test.wav"
    
  • Read an audio stream and a video stream and play it back with ffplay:
    ffplay -f lavfi "movie=test.avi[out0];amovie=test.wav[out1]"
    
  • Dump decoded frames to images and Closed Captions to an RCWT backup:
    ffmpeg -f lavfi -i "movie=test.ts[out0+subcc]" -map v frame%08d.png -map s -c copy -f rcwt subcc.bin
    

Audio-CD input device based on libcdio.

To enable this input device during configuration you need libcdio installed on your system. It requires the configure option "--enable-libcdio".

This device allows playing and grabbing from an Audio-CD.

For example to copy with ffmpeg the entire Audio-CD in /dev/sr0, you may run the command:

ffmpeg -f libcdio -i /dev/sr0 cd.wav

Options

Set drive reading speed. Default value is 0.

The speed is specified in CD-ROM speed units. The speed is set through the libcdio "cdio_cddap_speed_set" function. On many CD-ROM drives, specifying a value too large will result in using the fastest speed.

Set paranoia recovery mode flags. It accepts one of the following values: disable, verify, overlap, neverskip, full.

Default value is disable.

For more information about the available recovery modes, consult the paranoia project documentation.
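
For example, a hedged variation of the command above that sets the drive speed and a recovery mode (the value verify is taken from the list above):

ffmpeg -f libcdio -speed 4 -paranoia_mode verify -i /dev/sr0 cd.wav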

IIDC1394 input device, based on libdc1394 and libraw1394.

Requires the configure option "--enable-libdc1394".

Options

framerate
Set the frame rate. Default is "ntsc", corresponding to a frame rate of "30000/1001".
Select the pixel format. Default is "uyvy422".
Set the video size given as a string such as "640x480" or "hd720". Default is "qvga".

The OpenAL input device provides audio capture on all systems with a working OpenAL 1.1 implementation.

To enable this input device during configuration, you need OpenAL headers and libraries installed on your system, and need to configure FFmpeg with "--enable-openal".

OpenAL headers and libraries should be provided as part of your OpenAL implementation, or as an additional download (an SDK). Depending on your installation you may need to specify additional flags via the "--extra-cflags" and "--extra-ldflags" for allowing the build system to locate the OpenAL headers and libraries.

An incomplete list of OpenAL implementations follows:

Creative
The official Windows implementation, providing hardware acceleration with supported devices and software fallback. See http://openal.org/.
OpenAL Soft
Portable, open source (LGPL) software implementation. Includes backends for the most common sound APIs on the Windows, Linux, Solaris, and BSD operating systems. See http://kcat.strangesoft.net/openal.html.
Apple
OpenAL is part of Core Audio, the official Mac OS X Audio interface. See http://developer.apple.com/technologies/mac/audio-and-video.html

This device allows one to capture from an audio input device handled through OpenAL.

You need to specify the name of the device to capture in the provided filename. If the empty string is provided, the device will automatically select the default device. You can get the list of the supported devices by using the option list_devices.

Options

Set the number of channels in the captured audio. Only the values 1 (monaural) and 2 (stereo) are currently supported. Defaults to 2.
Set the sample size (in bits) of the captured audio. Only the values 8 and 16 are currently supported. Defaults to 16.
Set the sample rate (in Hz) of the captured audio. Defaults to 44.1k.
If set to true, print a list of devices and exit. Defaults to false.

Examples

Print the list of OpenAL supported devices and exit:

$ ffmpeg -list_devices true -f openal -i dummy out.ogg

Capture from the OpenAL device DR-BT101 via PulseAudio:

$ ffmpeg -f openal -i 'DR-BT101 via PulseAudio' out.ogg

Capture from the default device (note the empty string '' as filename):

$ ffmpeg -f openal -i '' out.ogg

Capture from two devices simultaneously, writing to two different files, within the same ffmpeg command:

$ ffmpeg -f openal -i 'DR-BT101 via PulseAudio' out1.ogg -f openal -i 'ALSA Default' out2.ogg

Note: not all OpenAL implementations support multiple simultaneous capture - try the latest OpenAL Soft if the above does not work.

Open Sound System input device.

The filename to provide to the input device is the device node representing the OSS input device, and is usually set to /dev/dsp.

For example to grab from /dev/dsp using ffmpeg use the command:

ffmpeg -f oss -i /dev/dsp /tmp/oss.wav

For more information about OSS see: http://manuals.opensound.com/usersguide/dsp.html

Options

Set the sample rate in Hz. Default is 48000.
Set the number of channels. Default is 2.
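
For example, a minimal sketch selecting mono capture at 44.1 kHz (assuming the option names sample_rate and channels):

ffmpeg -f oss -sample_rate 44100 -channels 1 -i /dev/dsp /tmp/oss.wav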

PulseAudio input device.

To enable this input device you need to configure FFmpeg with "--enable-libpulse".

The filename to provide to the input device is a source device or the string "default".

To list the PulseAudio source devices and their properties you can invoke the command pactl list sources.

More information about PulseAudio can be found on http://www.pulseaudio.org.

Options

Connect to a specific PulseAudio server, specified by an IP address. Default server is used when not provided.
Specify the application name PulseAudio will use when showing active clients, by default it is the "LIBAVFORMAT_IDENT" string.
Specify the stream name PulseAudio will use when showing active streams, by default it is "record".
Specify the sample rate in Hz. By default 48kHz is used.
Specify the number of channels in use. By default 2 (stereo) is set.
This option does nothing and is deprecated.
Specify the size in bytes of the minimal buffering fragment in PulseAudio; it will affect the audio latency. By default it is set to the amount of data corresponding to 50 ms.
Set the initial PTS using the current time. Default is 1.

Examples

Record a stream from default device:

ffmpeg -f pulse -i default /tmp/pulse.wav
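
As a further hedged sketch, record mono audio from a named source with a smaller fragment size (the source name is hypothetical; list real ones with pactl list sources):

ffmpeg -f pulse -sample_rate 44100 -channels 1 -fragment_size 4096 -i alsa_input.pci-0000_00_1b.0.analog-stereo /tmp/pulse_mono.wav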

sndio input device.

To enable this input device during configuration you need libsndio installed on your system.

The filename to provide to the input device is the device node representing the sndio input device, and is usually set to /dev/audio0.

For example to grab from /dev/audio0 using ffmpeg use the command:

ffmpeg -f sndio -i /dev/audio0 /tmp/oss.wav

Options

Set the sample rate in Hz. Default is 48000.
Set the number of channels. Default is 2.
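
For example, a minimal sketch with explicit capture parameters (assuming the option names sample_rate and channels):

ffmpeg -f sndio -sample_rate 44100 -channels 1 -i /dev/audio0 /tmp/sndio.wav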

Video4Linux2 input video device.

"v4l2" can be used as alias for "video4linux2".

If FFmpeg is built with v4l-utils support (by using the "--enable-libv4l2" configure option), it is possible to use it with the "-use_libv4l2" input device option.

The name of the device to grab is a file device node. Linux systems usually create such nodes automatically when the device (e.g. a USB webcam) is plugged into the system; the node has a name of the kind /dev/videoN, where N is a number associated with the device.

Video4Linux2 devices usually support a limited set of widthxheight sizes and frame rates. You can check which are supported using -list_formats all for Video4Linux2 devices. Some devices, like TV cards, support one or more standards. It is possible to list all the supported standards using -list_standards all.

The time base for the timestamps is 1 microsecond. Depending on the kernel version and configuration, the timestamps may be derived from the real time clock (origin at the Unix Epoch) or the monotonic clock (origin usually at boot time, unaffected by NTP or manual changes to the clock). The -timestamps abs or -ts abs option can be used to force conversion into the real time clock.

Some usage examples of the video4linux2 device with ffmpeg and ffplay:

  • List supported formats for a video4linux2 device:
    ffplay -f video4linux2 -list_formats all /dev/video0
    
  • Grab and show the input of a video4linux2 device:
    ffplay -f video4linux2 -framerate 30 -video_size hd720 /dev/video0
    
  • Grab and record the input of a video4linux2 device, leave the frame rate and size as previously set:
    ffmpeg -f video4linux2 -input_format mjpeg -i /dev/video0 out.mpeg
    

For more information about Video4Linux, check http://linuxtv.org/.

Options

Set the standard. Must be the name of a supported standard. To get a list of the supported standards, use the list_standards option.
Set the input channel number. Defaults to -1, which means using the previously selected channel.
Set the video frame size. The argument must be a string in the form WIDTHxHEIGHT or a valid size abbreviation.
Select the pixel format (only valid for raw video input).
Set the preferred pixel format (for raw video) or a codec name. This option allows one to select the input format, when several are available.
framerate
Set the preferred video frame rate.
List available formats (supported pixel formats, codecs, and frame sizes) and exit.

Available values are:

Show all available (compressed and non-compressed) formats.
Show only raw video (non-compressed) formats.
Show only compressed formats.
List supported standards and exit.

Available values are:

Show all supported standards.
Set type of timestamps for grabbed frames.

Available values are:

Use timestamps from the kernel.
Use absolute timestamps (wall clock).
Force conversion from monotonic to absolute timestamps.

Default value is "default".

Use libv4l2 (v4l-utils) conversion functions. Default is 0.
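
For example, a hedged sketch combining several of the options above, forcing wall-clock timestamps and requesting the MJPEG input format:

ffmpeg -f video4linux2 -ts abs -input_format mjpeg -framerate 25 -video_size 640x480 -i /dev/video0 out.mkv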

VfW (Video for Windows) capture input device.

The filename passed as input is the capture driver number, ranging from 0 to 9. You may use "list" as filename to print a list of drivers. Any other filename will be interpreted as device number 0.

Options

Set the video frame size.
framerate
Set the grabbing frame rate. Default value is "ntsc", corresponding to a frame rate of "30000/1001".
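
For example, a minimal sketch capturing from the first driver (device number 0):

ffmpeg -f vfwcap -framerate 25 -video_size 640x480 -i 0 out.avi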

X11 video input device.

To enable this input device during configuration you need libxcb installed on your system. It will be automatically detected during configuration.

This device allows one to capture a region of an X11 display.

The filename passed as input has the syntax:

[<hostname>]:<display_number>.<screen_number>[+<x_offset>,<y_offset>]

hostname:display_number.screen_number specifies the X11 display name of the screen to grab from. hostname can be omitted, and defaults to "localhost". The environment variable DISPLAY contains the default display name.

x_offset and y_offset specify the offsets of the grabbed area with respect to the top-left border of the X11 screen. They default to 0.

Check the X11 documentation (e.g. man X) for more detailed information.

Use the xdpyinfo program for getting basic information about the properties of your X11 display (e.g. grep for "name" or "dimensions").

For example to grab from :0.0 using ffmpeg:

ffmpeg -f x11grab -framerate 25 -video_size cif -i :0.0 out.mpg

Grab at position "10,20":

ffmpeg -f x11grab -framerate 25 -video_size cif -i :0.0+10,20 out.mpg

Options

Specify whether to select the grabbing area graphically using the pointer. A value of 1 prompts the user to select the grabbing area graphically by clicking and dragging. A single click with no dragging will select the whole screen. A region with zero width or height will also select the whole screen. This option overwrites the video_size, grab_x, and grab_y options. Default value is 0.
Specify whether to draw the mouse pointer. A value of 0 specifies not to draw the pointer. Default value is 1.
Make the grabbed area follow the mouse. The argument can be "centered" or a number of pixels PIXELS.

When it is specified with "centered", the grabbing region follows the mouse pointer and keeps the pointer at the center of the region; otherwise, the region follows only when the mouse pointer comes within PIXELS (greater than zero) of the edge of the region.

For example:

ffmpeg -f x11grab -follow_mouse centered -framerate 25 -video_size cif -i :0.0 out.mpg

To follow only when the mouse pointer comes within 100 pixels of the edge:

ffmpeg -f x11grab -follow_mouse 100 -framerate 25 -video_size cif -i :0.0 out.mpg
framerate
Set the grabbing frame rate. Default value is "ntsc", corresponding to a frame rate of "30000/1001".
Show grabbed region on screen.

If show_region is specified with 1, then the grabbing region will be indicated on screen. With this option, it is easy to know what is being grabbed if only a portion of the screen is grabbed.

Set the region border thickness if -show_region 1 is used. Range is 1 to 128 and default is 3 (XCB-based x11grab only).

For example:

ffmpeg -f x11grab -show_region 1 -framerate 25 -video_size cif -i :0.0+10,20 out.mpg

With follow_mouse:

ffmpeg -f x11grab -follow_mouse centered -show_region 1 -framerate 25 -video_size cif -i :0.0 out.mpg
Grab this window, instead of the whole screen. Default value is 0, which maps to the whole screen (root window).

The id of a window can be found using the xwininfo program, possibly with options -tree and -root.

If the window is later enlarged, the new area is not recorded. Video ends when the window is closed, unmapped (i.e., iconified) or shrunk beyond the video size (which defaults to the initial window size).

This option disables options follow_mouse and select_region.

Set the video frame size. Default is the full desktop or window.
Set the grabbing region coordinates. They are expressed as offset from the top left corner of the X11 window and correspond to the x_offset and y_offset parameters in the device name. The default value for both options is 0.
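
For example, a hedged sketch grabbing a single window by its id (the id below is hypothetical; obtain the real one with xwininfo):

ffmpeg -f x11grab -window_id 0x3400005 -framerate 25 -i :0.0 out.mpg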

The audio resampler supports the following named options.

Options may be set by specifying -option value in the FFmpeg tools, option=value for the aresample filter, by setting the value explicitly in the "SwrContext" options or using the libavutil/opt.h API for programmatic use.

Set used input channel layout. Default is unset. This option is only used for special remapping.
Set the input sample rate. Default value is 0.
Set the output sample rate. Default value is 0.
Specify the input sample format. It is set by default to "none".
Specify the output sample format. It is set by default to "none".
Set the internal sample format. Default value is "none". This will automatically be chosen when it is not explicitly set.
Set the input/output channel layout.

See the Channel Layout section in the ffmpeg-utils(1) manual for the required syntax.

Set the center mix level. It is a value expressed in deciBel, and must be in the interval [-32,32].
Set the surround mix level. It is a value expressed in deciBel, and must be in the interval [-32,32].
Set LFE mix into non LFE level. It is used when there is a LFE input but no LFE output. It is a value expressed in deciBel, and must be in the interval [-32,32].
Set rematrix volume. Default value is 1.0.
Set maximum output value for rematrixing. This can be used to prevent clipping vs. preventing volume reduction. A value of 1.0 prevents clipping.
Set flags used by the converter. Default value is 0.

It supports the following individual flags:

Force resampling: this flag forces resampling to be used even when the input and output sample rates match.
Set the dither scale. Default value is 1.
Set dither method. Default value is 0.

Supported values:

select rectangular dither
select triangular dither
select triangular dither with high pass
select Lipshitz noise shaping dither.
select Shibata noise shaping dither.
select low Shibata noise shaping dither.
select high Shibata noise shaping dither.
select f-weighted noise shaping dither
select modified-e-weighted noise shaping dither
select improved-e-weighted noise shaping dither
Set resampling engine. Default value is swr.

Supported values:

select the native SW Resampler; filter options precision and cheby are not applicable in this case.
select the SoX Resampler (where available); compensation, and filter options filter_size, phase_shift, exact_rational, filter_type & kaiser_beta, are not applicable in this case.
For swr only, set resampling filter size, default value is 32.
For swr only, set resampling phase shift, default value is 10, and must be in the interval [0,30].
Use linear interpolation when enabled (the default). Disable it if you want to preserve speed instead of quality when exact_rational fails.
For swr only, when enabled, try to use exact phase_count based on input and output sample rate. However, if it is larger than "1 << phase_shift", the phase_count will be "1 << phase_shift" as fallback. Default is enabled.
Set cutoff frequency (swr: 6dB point; soxr: 0dB point) ratio; must be a float value between 0 and 1. Default value is 0.97 with swr, and 0.91 with soxr (which, with a sample-rate of 44100, preserves the entire audio band to 20kHz).
For soxr only, the precision in bits to which the resampled signal will be calculated. The default value of 20 (which, with suitable dithering, is appropriate for a destination bit-depth of 16) gives SoX's 'High Quality'; a value of 28 gives SoX's 'Very High Quality'.
For soxr only, selects passband rolloff none (Chebyshev) & higher-precision approximation for 'irrational' ratios. Default value is 0.
async
For swr only, simple 1 parameter audio sync to timestamps using stretching, squeezing, filling and trimming. Setting this to 1 will enable filling and trimming; larger values represent the maximum amount in samples that the data may be stretched or squeezed for each second. Default value is 0, thus no compensation is applied to make the samples match the audio timestamps.
For swr only, assume the first pts should be this value. The time unit is 1 / sample rate. This allows for padding/trimming at the start of stream. By default, no assumption is made about the first frame's expected pts, so no padding or trimming is done. For example, this could be set to 0 to pad the beginning with silence if an audio stream starts after the video stream or to trim any samples with a negative pts due to encoder delay.
For swr only, set the minimum difference between timestamps and audio data (in seconds) to trigger stretching/squeezing/filling or trimming of the data to make it match the timestamps. The default is that stretching/squeezing/filling and trimming is disabled (min_comp = "FLT_MAX").
For swr only, set the minimum difference between timestamps and audio data (in seconds) to trigger adding/dropping samples to make it match the timestamps. This option effectively is a threshold to select between hard (trim/fill) and soft (squeeze/stretch) compensation. Note that all compensation is by default disabled through min_comp. The default is 0.1.
For swr only, set duration (in seconds) over which data is stretched/squeezed to make it match the timestamps. Must be a non-negative double float value, default value is 1.0.
For swr only, set maximum factor by which data is stretched/squeezed to make it match the timestamps. Must be a non-negative double float value, default value is 0.
Select matrixed stereo encoding.

It accepts the following values:

select none
select Dolby
select Dolby Pro Logic II

Default value is "none".

For swr only, select resampling filter type. This only affects resampling operations.

It accepts the following values:

select cubic
select Blackman Nuttall windowed sinc
select Kaiser windowed sinc
For swr only, set Kaiser window beta value. Must be a double float value in the interval [2,16], default value is 9.
For swr only, set number of used output sample bits for dithering. Must be an integer in the interval [0,64], default value is 0, which means it's not used.
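
For example, a hedged sketch resampling to 48 kHz with the SoX resampler at higher precision (assuming a soxr-enabled build):

ffmpeg -i in.wav -af aresample=osr=48000:resampler=soxr:precision=28 out.wav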

The video scaler supports the following named options.

Options may be set by specifying -option value in the FFmpeg tools, with a few API-only exceptions noted below. For programmatic use, they can be set explicitly in the "SwsContext" options or through the libavutil/opt.h API.

Set the scaler flags. This is also used to set the scaling algorithm. Only a single algorithm should be selected. Default value is bicubic.

It accepts the following values:

Select fast bilinear scaling algorithm.
Select bilinear scaling algorithm.
Select bicubic scaling algorithm.
Select experimental scaling algorithm.
Select nearest neighbor rescaling algorithm.
Select averaging area rescaling algorithm.
Select bicubic scaling algorithm for the luma component, bilinear for chroma components.
Select Gaussian rescaling algorithm.
sinc
Select sinc rescaling algorithm.
Select Lanczos rescaling algorithm. The default width (alpha) is 3 and can be changed by setting "param0".
Select natural bicubic spline rescaling algorithm.
Enable printing/debug logging.
Enable accurate rounding.
Enable full chroma interpolation.
Select full chroma input.
Enable bitexact output.
Set source width.
Set source height.
Set destination width.
Set destination height.
Set source pixel format (must be expressed as an integer).
Set destination pixel format (must be expressed as an integer).
If value is set to 1, indicates source is full range. Default value is 0, which indicates source is limited range.
If value is set to 1, enable full range for destination. Default value is 0, which enables limited range.
Set scaling algorithm parameters. The specified values are specific to some scaling algorithms and ignored by others. They are floating point values.
Set the dithering algorithm. Accepts one of the following values. Default value is auto.
automatic choice
no dithering
bayer dither
error diffusion dither
arithmetic dither, based on addition
arithmetic dither, based on xor (more random/less apparent patterning than a_dither).
Set the alpha blending to use when the input has alpha but the output does not. Default value is none.
Blend onto a uniform background color
Blend onto a checkerboard
No blending
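
For example, a minimal sketch selecting the Lanczos algorithm with accurate rounding for an explicit scale filter:

ffmpeg -i in.mp4 -vf scale=640:360:flags=lanczos+accurate_rnd out.mp4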

Filtering in FFmpeg is enabled through the libavfilter library.

In libavfilter, a filter can have multiple inputs and multiple outputs. To illustrate the sorts of things that are possible, we consider the following filtergraph.

                [main]
input --> split ---------------------> overlay --> output
            |                             ^
            |[tmp]                  [flip]|
            +-----> crop --> vflip -------+

This filtergraph splits the input stream into two streams, then sends one stream through the crop filter and the vflip filter, before merging it back with the other stream by overlaying it on top. You can use the following command to achieve this:

ffmpeg -i INPUT -vf "split [main][tmp]; [tmp] crop=iw:ih/2:0:0, vflip [flip]; [main][flip] overlay=0:H/2" OUTPUT

The result will be that the top half of the video is mirrored onto the bottom half of the output video.

Filters in the same linear chain are separated by commas, and distinct linear chains of filters are separated by semicolons. In our example, crop,vflip are in one linear chain, split and overlay are separately in another. The points where the linear chains join are labelled by names enclosed in square brackets. In the example, the split filter generates two outputs that are associated to the labels [main] and [tmp].

The stream sent to the second output of split, labelled as [tmp], is processed through the crop filter, which crops away the lower half part of the video, and then vertically flipped. The overlay filter takes in input the first unchanged output of the split filter (which was labelled as [main]), and overlays on its lower half the output generated by the crop,vflip filterchain.

Some filters take a list of parameters as input: they are specified after the filter name and an equal sign, and are separated from each other by a colon.

There exist so-called source filters that do not have an audio/video input, and sink filters that will not have audio/video output.

The graph2dot program included in the FFmpeg tools directory can be used to parse a filtergraph description and issue a corresponding textual representation in the dot language.

Invoke the command:

graph2dot -h

to see how to use graph2dot.

You can then pass the dot description to the dot program (from the graphviz suite of programs) and obtain a graphical representation of the filtergraph.

For example the sequence of commands:

echo <GRAPH_DESCRIPTION> | \
tools/graph2dot -o graph.tmp && \
dot -Tpng graph.tmp -o graph.png && \
display graph.png

can be used to create and display an image representing the graph described by the GRAPH_DESCRIPTION string. Note that this string must be a complete self-contained graph, with its inputs and outputs explicitly defined. For example if your command line is of the form:

ffmpeg -i infile -vf scale=640:360 outfile

your GRAPH_DESCRIPTION string will need to be of the form:

nullsrc,scale=640:360,nullsink

You may also need to set the nullsrc parameters and add a format filter in order to simulate a specific input file.

A filtergraph is a directed graph of connected filters. It can contain cycles, and there can be multiple links between a pair of filters. Each link has one input pad on one side connecting it to one filter from which it takes its input, and one output pad on the other side connecting it to one filter accepting its output.

Each filter in a filtergraph is an instance of a filter class registered in the application, which defines the features and the number of input and output pads of the filter.

A filter with no input pads is called a "source", and a filter with no output pads is called a "sink".

A filtergraph has a textual representation, which is recognized by the -filter/-vf/-af and -filter_complex options in ffmpeg and -vf/-af in ffplay, and by the avfilter_graph_parse_ptr() function defined in libavfilter/avfilter.h.

A filterchain consists of a sequence of connected filters, each one connected to the previous one in the sequence. A filterchain is represented by a list of ","-separated filter descriptions.

A filtergraph consists of a sequence of filterchains. A sequence of filterchains is represented by a list of ";"-separated filterchain descriptions.

A filter is represented by a string of the form: [in_link_1]...[in_link_N]filter_name@id=arguments[out_link_1]...[out_link_M]

filter_name is the name of the filter class of which the described filter is an instance, and has to be the name of one of the filter classes registered in the program, optionally followed by "@id". The name of the filter class is optionally followed by a string "=arguments".

arguments is a string which contains the parameters used to initialize the filter instance. It may have one of two forms:

  • A ':'-separated list of key=value pairs.
  • A ':'-separated list of values. In this case, the keys are assumed to be the option names in the order they are declared. E.g. the "fade" filter declares three options in this order -- type, start_frame and nb_frames. Then the parameter list in:0:30 means that the value in is assigned to the option type, 0 to start_frame and 30 to nb_frames.
  • A ':'-separated list of mixed direct values and long key=value pairs. The direct values must precede the key=value pairs, and follow the same order constraints as in the previous point. The following key=value pairs can be set in any preferred order.
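
For example, a minimal sketch: the following two invocations of the fade filter are equivalent, using positional values and key=value pairs respectively:

ffmpeg -i in.mp4 -vf fade=in:0:30 out.mp4
ffmpeg -i in.mp4 -vf fade=type=in:start_frame=0:nb_frames=30 out.mp4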

If the option value itself is a list of items (e.g. the "format" filter takes a list of pixel formats), the items in the list are usually separated by |.

The list of arguments can be quoted using the character ' as initial and ending mark, and the character \ for escaping the characters within the quoted text; otherwise the argument string is considered terminated when the next special character (belonging to the set []=;,) is encountered.

A special syntax implemented in the ffmpeg CLI tool allows loading option values from files. This is done by prepending a slash '/' to the option name, then the supplied value is interpreted as a path from which the actual value is loaded. E.g.

ffmpeg -i <INPUT> -vf drawtext=/text=/tmp/some_text <OUTPUT>

will load the text to be drawn from /tmp/some_text. API users wishing to implement a similar feature should use the "avfilter_graph_segment_*()" functions together with custom IO code.

The name and arguments of the filter are optionally preceded and followed by a list of link labels. A link label allows one to name a link and associate it to a filter output or input pad. The preceding labels in_link_1 ... in_link_N, are associated to the filter input pads, the following labels out_link_1 ... out_link_M, are associated to the output pads.

When two link labels with the same name are found in the filtergraph, a link between the corresponding input and output pad is created.

If an output pad is not labelled, it is linked by default to the first unlabelled input pad of the next filter in the filterchain. For example in the filterchain

nullsrc, split[L1], [L2]overlay, nullsink

the split filter instance has two output pads, and the overlay filter instance two input pads. The first output pad of split is labelled "L1", the first input pad of overlay is labelled "L2", and the second output pad of split is linked to the second input pad of overlay, which are both unlabelled.

In a filter description, if the input label of the first filter is not specified, "in" is assumed; if the output label of the last filter is not specified, "out" is assumed.

In a complete filterchain all the unlabelled filter input and output pads must be connected. A filtergraph is considered valid if all the filter input and output pads of all the filterchains are connected.

Leading and trailing whitespace (space, tabs, or line feeds) separating tokens in the filtergraph specification is ignored. This means that the filtergraph can be expressed using empty lines and spaces to improve readability.

For example, the filtergraph:

testsrc,split[L1],hflip[L2];[L1][L2] hstack

can be represented as:

testsrc,
split [L1], hflip [L2];

[L1][L2] hstack

Libavfilter will automatically insert scale filters where format conversion is required. It is possible to specify swscale flags for those automatically inserted scalers by prepending "sws_flags=flags;" to the filtergraph description.

Here is a BNF description of the filtergraph syntax:

<NAME>             ::= sequence of alphanumeric characters and '_'
<FILTER_NAME>      ::= <NAME>["@"<NAME>]
<LINKLABEL>        ::= "[" <NAME> "]"
<LINKLABELS>       ::= <LINKLABEL> [<LINKLABELS>]
<FILTER_ARGUMENTS> ::= sequence of chars (possibly quoted)
<FILTER>           ::= [<LINKLABELS>] <FILTER_NAME> ["=" <FILTER_ARGUMENTS>] [<LINKLABELS>]
<FILTERCHAIN>      ::= <FILTER> [,<FILTERCHAIN>]
<FILTERGRAPH>      ::= [sws_flags=<flags>;] <FILTERCHAIN> [;<FILTERGRAPH>]

Filtergraph description composition entails several levels of escaping. See the "Quoting and escaping" section in the ffmpeg-utils(1) manual for more information about the employed escaping procedure.

A first level escaping affects the content of each filter option value, which may contain the special character ":" used to separate values, or one of the escaping characters "\'".

A second level escaping affects the whole filter description, which may contain the escaping characters "\'" or the special characters "[],;" used by the filtergraph description.

Finally, when you specify a filtergraph on a shell commandline, you need to perform a third level escaping for the shell special characters contained within it.

For example, consider the following string to be embedded in the drawtext filter description text value:

this is a 'string': may contain one, or more, special characters

This string contains the "'" special escaping character, and the ":" special character, so it needs to be escaped in this way:

text=this is a \'string\'\: may contain one, or more, special characters

A second level of escaping is required when embedding the filter description in a filtergraph description, in order to escape all the filtergraph special characters. Thus the example above becomes:

drawtext=text=this is a \\\'string\\\'\\: may contain one\, or more\, special characters

(note that in addition to the "\'" escaping special characters, also "," needs to be escaped).

Finally an additional level of escaping is needed when writing the filtergraph description in a shell command, which depends on the escaping rules of the adopted shell. For example, assuming that "\" is special and needs to be escaped with another "\", the previous string will finally result in:

-vf "drawtext=text=this is a \\\\\\'string\\\\\\'\\\\: may contain one\\, or more\\, special characters"

In order to avoid cumbersome escaping when using a commandline tool accepting a filter specification as input, it is advisable to avoid direct inclusion of the filter or options specification in the shell.

For example, in case of the drawtext filter, you might prefer to use the textfile option in place of text to specify the text to render.

Some filters support a generic enable option. For the filters supporting timeline editing, this option can be set to an expression which is evaluated before sending a frame to the filter. If the evaluation is non-zero, the filter will be enabled, otherwise the frame will be sent unchanged to the next filter in the filtergraph.

The expression accepts the following values:

timestamp expressed in seconds, NAN if the input timestamp is unknown
sequential number of the input frame, starting from 0
the position in the file of the input frame, NAN if unknown; deprecated, do not use
width and height of the input frame if video

Additionally, these filters support an enable command that can be used to re-define the expression.

Like any other filtering option, the enable option follows the same rules.

For example, to enable a blur filter (smartblur) from 10 seconds to 3 minutes, and a curves filter starting at 3 seconds:

smartblur = enable='between(t,10,3*60)',
curves    = enable='gte(t,3)' : preset=cross_process

See "ffmpeg -filters" to view which filters have timeline support.

Some options can be changed during the operation of the filter using a command. These options are marked 'T' on the output of ffmpeg -h filter=<name of filter>. The name of the command is the name of the option and the argument is the new value.
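
For example, a hedged sketch using the asendcmd filter to change the volume filter's volume option two seconds in (assuming volume is marked 'T' for your build):

ffmpeg -i in.wav -af "asendcmd=2.0 volume volume 0.5,volume=1.0" out.wav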

Some filters with several inputs support a common set of options. These options can only be set by name, not with the short notation.

The action to take when EOF is encountered on the secondary input; it accepts one of the following values:
Repeat the last frame (the default).
End both streams.
Pass the main input through.
If set to 1, force the output to terminate when the shortest input terminates. Default value is 0.
If set to 1, force the filter to extend the last frame of secondary streams until the end of the primary stream. A value of 0 disables this behavior. Default value is 1.
How strictly to sync streams based on secondary input timestamps; it accepts one of the following values:
Frame from secondary input with the nearest lower or equal timestamp to the primary input frame.
Frame from secondary input with the absolute nearest timestamp to the primary input frame.

When you configure your FFmpeg build, you can disable any of the existing filters using "--disable-filters". The configure output will show the audio filters included in your build.

Below is a description of the currently available audio filters.

Apply Affine Projection algorithm to the first audio stream using the second audio stream.

This adaptive filter is used to estimate unknown audio based on multiple input audio samples. The affine projection algorithm can trade off computational complexity against convergence speed.

A description of the accepted options follows.

Set the filter order.
Set the projection order.
Set the filter mu.
Set the coefficient to initialize internal covariance matrix.
Set the filter output samples. It accepts the following values:
Pass the 1st input.
Pass the 2nd input.
Pass difference between desired, 2nd input and error signal estimate.
Pass difference between input, 1st input and error signal estimate.
Pass error signal estimated samples.

Default value is o.

Set which precision to use when processing samples.
Auto pick internal sample format depending on other filters.
Always use single-floating point precision sample format.
Always use double-floating point precision sample format.
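
For example, a hedged sketch feeding a main signal and a desired/reference signal to the filter (file names are placeholders):

ffmpeg -i main.wav -i desired.wav -filter_complex aap=order=16:projection=2 out.wav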

A compressor is mainly used to reduce the dynamic range of a signal. Especially modern music is mostly compressed at a high ratio to improve the overall loudness. It's done to get the highest attention of a listener, "fatten" the sound and bring more "power" to the track. If a signal is compressed too much it may sound dull or "dead" afterwards or it may start to "pump" (which could be a powerful effect but can also destroy a track completely). The right compression is the key to reach a professional sound and is the high art of mixing and mastering. Because of its complex settings it may take a long time to get the right feeling for this kind of effect.

Compression is done by detecting the volume above a chosen level "threshold" and dividing it by the factor set with "ratio". So if you set the threshold to -12dB and your signal reaches -6dB a ratio of 2:1 will result in a signal at -9dB. Because an exact manipulation of the signal would cause distortion of the waveform, the reduction can be levelled over time. This is done by setting "Attack" and "Release". "attack" determines how long the signal has to rise above the threshold before any reduction will occur and "release" sets the time the signal has to fall below the threshold to reduce the reduction again. Signals shorter than the chosen attack time will be left untouched. The overall reduction of the signal can be made up afterwards with the "makeup" setting. So compressing the peaks of a signal by about 6dB and raising the makeup to this level results in a signal twice as loud as the source. To gain a softer entry into the compression the "knee" flattens the hard edge at the threshold in the range of the chosen decibels.

The filter accepts the following options:

Set input gain. Default is 1. Range is between 0.015625 and 64.
Set mode of compressor operation. Can be "upward" or "downward". Default is "downward".
threshold
If a signal of stream rises above this level it will affect the gain reduction. By default it is 0.125. Range is between 0.00097563 and 1.
Set a ratio by which the signal is reduced. 1:2 means that if the level rose 4dB above the threshold, it will be only 2dB above after the reduction. Default is 2. Range is between 1 and 20.
Amount of milliseconds the signal has to rise above the threshold before gain reduction starts. Default is 20. Range is between 0.01 and 2000.
Amount of milliseconds the signal has to fall below the threshold before reduction is decreased again. Default is 250. Range is between 0.01 and 9000.
Set the amount by how much signal will be amplified after processing. Default is 1. Range is from 1 to 64.
Curve the sharp knee around the threshold to enter gain reduction more softly. Default is 2.82843. Range is between 1 and 8.
Choose if the "average" level between all channels of input stream or the louder ("maximum") channel of input stream affects the reduction. Default is "average".
Should the exact signal be taken in case of "peak" or an RMS one in case of "rms". Default is "rms" which is mostly smoother.
mix
How much to use compressed signal in output. Default is 1. Range is between 0 and 1.

Commands

This filter supports all the above options as commands.
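
For example, a minimal sketch compressing peaks above a linear threshold of 0.25 (about -12dB) at a 2:1 ratio with some makeup gain:

ffmpeg -i in.wav -af acompressor=threshold=0.25:ratio=2:attack=20:release=250:makeup=2 out.wav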

Simple audio dynamic range compression/expansion filter.

The filter accepts the following options:

Set contrast. Default is 33. Allowed range is between 0 and 100.
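
For example, a minimal sketch applying a stronger-than-default contrast:

ffmpeg -i in.wav -af acontrast=contrast=50 out.wav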

Copy the input audio source unchanged to the output. This is mainly useful for testing purposes.

Apply cross fade from one input audio stream to another input audio stream. The cross fade is applied for the specified duration near the end of the first stream.

The filter accepts the following options:

Specify the number of samples for which the cross fade effect has to last. At the end of the cross fade effect the first input audio will be completely silent. Default is 44100.
Specify the duration of the cross fade effect. See the Time duration section in the ffmpeg-utils(1) manual for the accepted syntax. By default the duration is determined by nb_samples. If set this option is used instead of nb_samples.
Should first stream end overlap with second stream start. Default is enabled.
Set curve for cross fade transition for first stream.
Set curve for cross fade transition for second stream.

For description of available curve types see afade filter description.

Examples

  • Cross fade from one input to another:
    ffmpeg -i first.flac -i second.flac -filter_complex acrossfade=d=10:c1=exp:c2=exp output.flac
    
  • Cross fade from one input to another but without overlapping:
    ffmpeg -i first.flac -i second.flac -filter_complex acrossfade=d=10:o=0:c1=exp:c2=exp output.flac
    

Split audio stream into several bands.

This filter splits the audio stream into two or more frequency ranges. Summing all streams back will give flat output.

The filter accepts the following options:

Set split frequencies. Those must be positive and increasing.
Set filter order for each band split. This controls filter roll-off or steepness of filter transfer function. Available values are:
2nd
12 dB per octave.
4th
24 dB per octave.
6th
36 dB per octave.
8th
48 dB per octave.
10th
60 dB per octave.
12th
72 dB per octave.
14th
84 dB per octave.
16th
96 dB per octave.
18th
108 dB per octave.
20th
120 dB per octave.

Default is 4th.

Set input gain level. Allowed range is from 0 to 1. Default value is 1.
Set output gain for each band. Default value is 1 for all bands.
Set which precision to use when processing samples.
Auto pick internal sample format depending on other filters.
Always use single-floating point precision sample format.
Always use double-floating point precision sample format.

Default value is "auto".

Examples

  • Split input audio stream into two bands (low and high) with split frequency of 1500 Hz, each band will be in separate stream:
    ffmpeg -i in.flac -filter_complex 'acrossover=split=1500[LOW][HIGH]' -map '[LOW]' low.wav -map '[HIGH]' high.wav
    
  • Same as above, but with higher filter order:
    ffmpeg -i in.flac -filter_complex 'acrossover=split=1500:order=8th[LOW][HIGH]' -map '[LOW]' low.wav -map '[HIGH]' high.wav
    
  • Same as above, but also with additional middle band (frequencies between 1500 and 8000):
    ffmpeg -i in.flac -filter_complex 'acrossover=split=1500 8000:order=8th[LOW][MID][HIGH]' -map '[LOW]' low.wav -map '[MID]' mid.wav -map '[HIGH]' high.wav
    

Reduce audio bit resolution.

This filter is a bit crusher with enhanced functionality. A bit crusher is used to audibly reduce the number of bits an audio signal is sampled with. This doesn't change the bit depth at all; it just produces the effect. Material reduced in bit depth sounds more harsh and "digital". This filter is even able to round to continuous values instead of discrete bit depths. Additionally it has a D/C offset which results in different crushing of the lower and the upper half of the signal. An anti-aliasing setting is able to produce "softer" crushing sounds.

Another feature of this filter is the logarithmic mode. This setting switches from linear distances between bits to logarithmic ones. The result is a much more "natural" sounding crusher which doesn't gate low signals for example. The human ear has a logarithmic perception, so this kind of crushing is much more pleasant. Logarithmic crushing is also able to get anti-aliased.

The filter accepts the following options:

Set level in.
Set level out.
Set bit reduction.
mix
Set mixing amount.
Can be linear: "lin" or logarithmic: "log".
Set DC.
aa
Set anti-aliasing.
Set sample reduction.
Enable LFO. By default disabled.
Set LFO range.
Set LFO rate.

Commands

This filter supports all the above options as commands.
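
For example, a hedged sketch of an 8-bit style crush in logarithmic mode with anti-aliasing:

ffmpeg -i in.wav -af acrusher=bits=8:mode=log:aa=1 out.wav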

Delay audio filtering until a given wallclock timestamp. See the cue filter.

Remove impulsive noise from input audio.

Samples detected as impulsive noise are replaced by interpolated samples using autoregressive modelling.

Set window size, in milliseconds. Allowed range is from 10 to 100. Default value is 55 milliseconds. This sets the size of the window which will be processed at once.
Set window overlap, in percentage of window size. Allowed range is from 50 to 95. Default value is 75 percent. Setting this to a very high value increases impulsive noise removal but makes the whole process much slower.
Set autoregression order, in percentage of window size. Allowed range is from 0 to 25. Default value is 2 percent. This option also controls quality of interpolated samples using neighbour good samples.
Set threshold value. Allowed range is from 1 to 100. Default value is 2. This controls the strength of impulsive noise which is going to be removed. The lower the value, the more samples will be detected as impulsive noise.
Set burst fusion, in percentage of window size. Allowed range is 0 to 10. Default value is 2. If any two samples detected as noise are spaced less than this value then any sample between those two samples will also be detected as noise.
Set overlap method.

It accepts the following values:

Select overlap-add method. Even samples that are not interpolated are slightly changed with this method.
Select overlap-save method. Samples that are not interpolated remain unchanged.

Default value is "a".

Remove clipped samples from input audio.

Samples detected as clipped are replaced by interpolated samples using autoregressive modelling.

Set window size, in milliseconds. Allowed range is from 10 to 100. Default value is 55 milliseconds. This sets the size of the window which will be processed at once.
Set window overlap, in percentage of window size. Allowed range is from 50 to 95. Default value is 75 percent.
Set autoregression order, in percentage of window size. Allowed range is from 0 to 25. Default value is 8 percent. This option also controls quality of interpolated samples using neighbour good samples.
Set threshold value. Allowed range is from 1 to 100. Default value is 10. Higher values make clip detection less aggressive.
Set size of histogram used to detect clips. Allowed range is from 100 to 9999. Default value is 1000. Higher values make clip detection less aggressive.
Set overlap method.

It accepts the following values:

Select overlap-add method. Even samples that are not interpolated are slightly changed with this method.
Select overlap-save method. Samples that are not interpolated remain unchanged.

Default value is "a".

Apply decorrelation to input audio stream.

The filter accepts the following options:

Set decorrelation stages of filtering. Allowed range is from 1 to 16. Default value is 6.
Set random seed used for setting delay in samples across channels.

Delay one or more audio channels.

Samples in delayed channels are filled with silence.

The filter accepts the following option:

Set list of delays in milliseconds for each channel separated by '|'. Unused delays will be silently ignored. If the number of given delays is smaller than the number of channels, all remaining channels will not be delayed. If you want to delay an exact number of samples, append 'S' to the number. If you want instead to delay in seconds, append 's' to the number.
Use the last set delay for all remaining channels. By default this is disabled. This option, if enabled, changes how the delays option is interpreted.

Examples

  • Delay first channel by 1.5 seconds, the third channel by 0.5 seconds and leave the second channel (and any other channels that may be present) unchanged.
    adelay=1500|0|500
    
  • Delay second channel by 500 samples, the third channel by 700 samples and leave the first channel (and any other channels that may be present) unchanged.
    adelay=0|500S|700S
    
  • Delay all channels by same number of samples:
    adelay=delays=64S:all=1
    

Remedy denormals in audio by adding extremely low-level noise.

This filter shall be placed before any filter that can produce denormals.

A description of the accepted parameters follows.

Set level of added noise in dB. Default is -351. Allowed range is from -451 to -90.
Set type of added noise.
Add DC signal.
Add AC signal.
Add square signal.
pulse
Add pulse signal.

Default is "dc".

Commands

This filter supports all the above options as commands.
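
For example, a minimal sketch placing adenorm ahead of a recursive (IIR) filter that could otherwise produce denormals:

ffmpeg -i in.wav -af adenorm,lowpass=f=100 out.wav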

Compute derivative/integral of audio stream.

Applying both filters one after another produces original audio.

Apply spectral dynamic range controller filter to input audio stream.

A description of the accepted options follows.

Set the transfer expression.

The expression can contain the following constants:

current channel number
current sample number
number of channels
timestamp expressed in seconds
sr
sample rate
current frequency power value, in dB
current frequency in Hz

Default value is "p".

Set the attack in milliseconds. Default is 50 milliseconds. Allowed range is from 1 to 1000 milliseconds.
Set the release in milliseconds. Default is 100 milliseconds. Allowed range is from 5 to 2000 milliseconds.
Set which channels to filter, by default "all" channels in audio stream are filtered.

Commands

This filter supports all the above options as commands.

Examples

  • Apply spectral compression to all frequencies with threshold of -50 dB and 1:6 ratio:
    adrc=transfer='if(gt(p,-50),-50+(p-(-50))/6,p)':attack=50:release=100
    
  • Similar to above but with 1:2 ratio and filtering only front center channel:
    adrc=transfer='if(gt(p,-50),-50+(p-(-50))/2,p)':attack=50:release=100:channels=FC
    
  • Apply spectral noise gate to all frequencies with threshold of -85 dB and with short attack time and short release time:
    adrc=transfer='if(lte(p,-85),p-800,p)':attack=1:release=5
    
  • Apply spectral expansion to all frequencies with threshold of -10 dB and 1:2 ratio:
    adrc=transfer='if(lt(p,-10),-10+(p-(-10))*2,p)':attack=50:release=100
    
  • Apply limiter to max -60 dB to all frequencies, with attack of 2 ms and release of 10 ms:
    adrc=transfer='min(p,-60)':attack=2:release=10
    

Apply dynamic equalization to input audio stream.

A description of the accepted options follows.

threshold
Set the detection threshold used to trigger equalization. Threshold detection uses the detection filter. Default value is 0. Allowed range is from 0 to 100.
Set the detection frequency in Hz for the detection filter used to trigger equalization. Default value is 1000 Hz. Allowed range is between 2 and 1000000 Hz.
Set the detection resonance factor for the detection filter used to trigger equalization. Default value is 1. Allowed range is from 0.001 to 1000.
Set the target frequency of equalization filter. Default value is 1000 Hz. Allowed range is between 2 and 1000000 Hz.
Set the target resonance factor for target equalization filter. Default value is 1. Allowed range is from 0.001 to 1000.
Set the amount of milliseconds the signal from detection has to rise above the detection threshold before equalization starts. Default is 20. Allowed range is between 1 and 2000.
Set the amount of milliseconds the signal from detection has to fall below the detection threshold before equalization ends. Default is 200. Allowed range is between 1 and 2000.
Set the ratio by which the equalization gain is raised. Default is 1. Allowed range is between 0 and 30.
Set the makeup offset by which the equalization gain is raised. Default is 0. Allowed range is between 0 and 100.
Set the max allowed cut/boost amount. Default is 50. Allowed range is from 1 to 200.
Set the mode of filter operation, can be one of the following:
Output only isolated detection signal.
Cut frequencies below detection threshold.
Cut frequencies above detection threshold.
Boost frequencies below detection threshold.
Boost frequencies above detection threshold.

Default mode is cutbelow.

Set the type of detection filter, can be one of the following:
bandpass
lowpass
highpass

Default type is bandpass.

Set the type of target filter, can be one of the following:

Default type is bell.

Automatically gather threshold from detection filter. By default this is disabled. This option is useful for detecting the threshold in a certain time frame of the input audio stream, in which case the option value is changed at runtime.

Available values are:

Disable using automatically gathered threshold value.
Stop picking threshold value.
Start picking threshold value.
Adaptively pick threshold value, by calculating sliding window entropy.
Set which precision to use when processing samples.
Auto pick internal sample format depending on other filters.
Always use single-floating point precision sample format.
Always use double-floating point precision sample format.

Commands

This filter supports all the above options as commands.
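
For example, a hedged sketch of dynamic equalization around 1 kHz with a 1:2 ratio, using the default cutbelow mode (parameter values are illustrative):

ffmpeg -i in.wav -af adynamicequalizer=threshold=10:dfrequency=1000:tfrequency=1000:ratio=2 out.wav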

Apply dynamic smoothing to input audio stream.

A description of the accepted options follows.

Set an amount of sensitivity to frequency fluctuations. Default is 2. Allowed range is from 0 to 1e+06.
Set a base frequency for smoothing. Default value is 22050. Allowed range is from 2 to 1e+06.

Commands

This filter supports all the above options as commands.

Apply echoing to the input audio.

Echoes are reflected sound and can occur naturally amongst mountains (and sometimes large buildings) when talking or shouting; digital echo effects emulate this behaviour and are often used to help fill out the sound of a single instrument or vocal. The time difference between the original signal and the reflection is the "delay", and the loudness of the reflected signal is the "decay". Multiple echoes can have different delays and decays.

A description of the accepted parameters follows.

Set input gain of reflected signal. Default is 0.6.
Set output gain of reflected signal. Default is 0.3.
Set list of time intervals in milliseconds between original signal and reflections separated by '|'. Allowed range for each "delay" is "(0 - 90000.0]". Default is 1000.
Set list of loudness of reflected signals separated by '|'. Allowed range for each "decay" is "(0 - 1.0]". Default is 0.5.

Examples

  • Make it sound as if there are twice as many instruments as are actually playing:
    aecho=0.8:0.88:60:0.4
    
  • If delay is very short, then it sounds like a (metallic) robot playing music:
    aecho=0.8:0.88:6:0.4
    
  • A longer delay will sound like an open air concert in the mountains:
    aecho=0.8:0.9:1000:0.3
    
  • Same as above but with one more mountain:
    aecho=0.8:0.9:1000|1800:0.3|0.25
    

The audio emphasis filter creates or restores material directly taken from LPs or emphasized CDs with different filter curves. E.g. to store music on vinyl the signal has to be altered by a filter first to even out the disadvantages of this recording medium. Once the material is played back the inverse filter has to be applied to restore the original frequency response.

The filter accepts the following options:

Set input gain.
Set output gain.
Set filter mode. For restoring material use "reproduction" mode, otherwise use "production" mode. Default is "reproduction" mode.
Set filter type. Selects medium. Can be one of the following:
select Columbia.
select EMI.
select BSI (78RPM).
select RIAA.
select Compact Disc (CD).
50fm
select 50µs (FM).
75fm
select 75µs (FM).
50kf
select 50µs (FM-KF).
75kf
select 75µs (FM-KF).

Commands

This filter supports all the above options as commands.
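
For example, a minimal sketch restoring RIAA-equalized material (reproduction mode is the default):

ffmpeg -i vinyl_rip.wav -af aemphasis=type=riaa out.wav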

Modify an audio signal according to the specified expressions.

This filter accepts one or more expressions (one for each channel), which are evaluated and used to modify a corresponding audio signal.

It accepts the following parameters:

Set the '|'-separated expressions list for each separate channel. If the number of input channels is greater than the number of expressions, the last specified expression is used for the remaining output channels.
Set output channel layout. If not specified, the channel layout is specified by the number of expressions. If set to same, it will use by default the same input channel layout.

Each expression in exprs can contain the following constants and functions:

channel number of the current expression
number of the evaluated sample, starting from 0
sample rate
time of the evaluated sample expressed in seconds
input and output number of channels
the value of input channel with number CH

Note: this filter is slow. For faster processing you should use a dedicated filter.

Examples

  • Half volume:
    aeval=val(ch)/2:c=same
    
  • Invert phase of the second channel:
    aeval=val(0)|-val(1)
    

An exciter is used to produce high-frequency sound that is not present in the original signal. This is done by creating harmonic distortions of the signal which are restricted in range and added to the original signal. An exciter raises the upper end of an audio signal, without simply raising the higher frequencies like an equalizer would, to create a more "crisp" or "brilliant" sound.

The filter accepts the following options:

Set input level prior to processing of signal. Allowed range is from 0 to 64. Default value is 1.
Set output level after processing of signal. Allowed range is from 0 to 64. Default value is 1.
Set the amount of harmonics added to original signal. Allowed range is from 0 to 64. Default value is 1.
Set the amount of newly created harmonics. Allowed range is from 0.1 to 10. Default value is 8.5.
blend
Set the octave of newly created harmonics. Allowed range is from -10 to 10. Default value is 0.
Set the lower frequency limit of producing harmonics in Hz. Allowed range is from 2000 to 12000 Hz. Default is 7500 Hz.
Set the upper frequency limit of producing harmonics. Allowed range is from 9999 to 20000 Hz. If value is lower than 10000 Hz no limit is applied.
Mute the original signal and output only added harmonics. By default is disabled.

Commands

This filter supports all the above options as commands.
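
For example, a hedged sketch adding a modest amount of harmonics starting at 7.5 kHz:

ffmpeg -i in.wav -af aexciter=amount=2:freq=7500 out.wav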

Apply fade-in/out effect to input audio.

A description of the accepted parameters follows.

Specify the effect type, can be either "in" for fade-in, or "out" for a fade-out effect. Default is "in".
Specify the number of the start sample for starting to apply the fade effect. Default is 0.
Specify the number of samples for which the fade effect has to last. At the end of the fade-in effect the output audio will have the same volume as the input audio, at the end of the fade-out transition the output audio will be silence. Default is 44100.
Specify the start time of the fade effect. Default is 0. The value must be specified as a time duration; see the Time duration section in the ffmpeg-utils(1) manual for the accepted syntax. If set this option is used instead of start_sample.
Specify the duration of the fade effect. See the Time duration section in the ffmpeg-utils(1) manual for the accepted syntax. At the end of the fade-in effect the output audio will have the same volume as the input audio, at the end of the fade-out transition the output audio will be silence. By default the duration is determined by nb_samples. If set this option is used instead of nb_samples.
Set curve for fade transition.

It accepts the following values:

select triangular, linear slope (default)
select quarter of sine wave
select half of sine wave
select exponential sine wave
select logarithmic
select inverted parabola
select quadratic
select cubic
select square root
select cubic root
select parabola
select exponential
select inverted quarter of sine wave
select inverted half of sine wave
select double-exponential seat
select double-exponential sigmoid
select logistic sigmoid
sinc
select sine cardinal function
select inverted sine cardinal function
select quartic
select quartic root
select squared quarter of sine wave
select squared half of sine wave
no fade applied
Set the initial gain for fade-in or final gain for fade-out. Default value is 0.0.
Set the initial gain for fade-out or final gain for fade-in. Default value is 1.0.

Commands

This filter supports all the above options as commands.

Examples

  • Fade in first 15 seconds of audio:
    afade=t=in:ss=0:d=15
    
  • Fade out last 25 seconds of a 900 seconds audio:
    afade=t=out:st=875:d=25
    

Denoise audio samples with FFT.

A description of the accepted parameters follows.

Set the noise reduction in dB, allowed range is 0.01 to 97. Default value is 12 dB.
Set the noise floor in dB, allowed range is -80 to -20. Default value is -50 dB.
Set the noise type.

It accepts the following values:

Select white noise.
Select vinyl noise.
Select shellac noise.
Select custom noise, defined in "bn" option.

Default value is white noise.

Set custom band noise profile for every one of 15 bands. Bands are separated by ' ' or '|'.
Set the residual floor in dB, allowed range is -80 to -20. Default value is -38 dB.
Enable noise floor tracking. By default is disabled. With this enabled, noise floor is automatically adjusted.
Enable residual tracking. By default is disabled.
Set the output mode.

It accepts the following values:

Pass input unchanged.
Pass noise filtered out.
Pass only noise.

Default value is output.

Set the adaptivity factor, which controls how fast gain adjustments adapt for each frequency bin. Value 0 enables instant adaptation, while higher values react much slower. Allowed range is from 0 to 1. Default value is 0.5.
Set the noise floor offset factor. This option is used to adjust offset applied to measured noise floor. It is only effective when noise floor tracking is enabled. Allowed range is from -2.0 to 2.0. Default value is 1.0.
Set the noise link used for multichannel audio.

It accepts the following values:

Use unchanged channel's noise floor.
Use measured min noise floor of all channels.
Use measured max noise floor of all channels.
Use measured average noise floor of all channels.

Default value is min.

Set the band multiplier factor, which controls how much bands are spread across frequency bins. Allowed range is from 0.2 to 5. Default value is 1.25.
Toggle capturing and measurement of noise profile from input audio.

It accepts the following values:

Start sample noise capture.
Stop sample noise capture and measure new noise band profile.

Default value is "none".

Set the gain smooth spatial radius, used to smooth the gains applied to each frequency bin. Useful to reduce random music noise artefacts. Higher values increase smoothing of gains. Allowed range is from 0 to 50. Default value is 0.

Commands

This filter supports some of the above mentioned options as commands.

Examples

  • Reduce white noise by 10dB, and use previously measured noise floor of -40dB:
    afftdn=nr=10:nf=-40
    
  • Reduce white noise by 10dB, also set initial noise floor to -80dB and enable automatic tracking of noise floor so noise floor will gradually change during processing:
    afftdn=nr=10:nf=-80:tn=1
    
  • Reduce noise by 20dB, using noise floor of -40dB and using commands to take noise profile of first 0.4 seconds of input audio:
    asendcmd=0.0 afftdn sn start,asendcmd=0.4 afftdn sn stop,afftdn=nr=20:nf=-40
    

Apply arbitrary expressions to samples in frequency domain.

Set frequency domain real expression for each separate channel separated by '|'. Default is "re". If the number of input channels is greater than the number of expressions, the last specified expression is used for the remaining output channels.
Set frequency domain imaginary expression for each separate channel separated by '|'. Default is "im".

Each expression in real and imag can contain the following constants and functions:

sr
sample rate
current frequency bin number
number of available bins
channel number of the current expression
number of channels
current frame pts
current real part of frequency bin of current channel
current imaginary part of frequency bin of current channel
Return the value of real part of frequency bin at location (bin,channel)
Return the value of imaginary part of frequency bin at location (bin,channel)
Set window size. Allowed range is from 16 to 131072. Default is 4096.
Set window function.

It accepts the following values:

Default is "hann".

Set window overlap. If set to 1, the recommended overlap for selected window function will be picked. Default is 0.75.

Examples

  • Leave almost only low frequencies in audio:
    afftfilt="'real=re * (1-clip((b/nb)*b,0,1))':imag='im * (1-clip((b/nb)*b,0,1))'"
    
  • Apply robotize effect:
    afftfilt="real='hypot(re,im)*sin(0)':imag='hypot(re,im)*cos(0)':win_size=512:overlap=0.75"
    
  • Apply whisper effect:
    afftfilt="real='hypot(re,im)*cos((random(0)*2-1)*2*3.14)':imag='hypot(re,im)*sin((random(1)*2-1)*2*3.14)':win_size=128:overlap=0.8"
    
  • Apply phase shift:
    afftfilt="real=re*cos(1)-im*sin(1):imag=re*sin(1)+im*cos(1)"
    

Apply an arbitrary Finite Impulse Response filter.

This filter is designed for applying long FIR filters, up to 60 seconds long.

It can be used as a component for digital crossover filters, room equalization, cross talk cancellation, wavefield synthesis, auralization, ambiophonics, ambisonics and spatialization.

This filter uses streams beyond the first one as FIR coefficients. If a non-first stream holds a single channel, it will be used for all input channels in the first stream; otherwise the number of channels in the non-first stream must be the same as the number of channels in the first stream.

It accepts the following parameters:

Set dry gain. This sets input gain.
Set wet gain. This sets final output gain.
Set Impulse Response filter length. Default is 1, which means the whole IR is processed.
This option is deprecated, and does nothing.
Set norm to be applied to IR coefficients before filtering. Allowed range is from -1 to 2. IR coefficients are normalized with calculated vector norm set by this option. For negative values, no norm is calculated, and IR coefficients are not modified at all. Default is 1.
For multichannel IR if this option is set to true, all IR channels will be normalized with maximal measured gain of all IR channels coefficients as set by "irnorm" option. When disabled, all IR coefficients in each IR channel will be normalized independently. Default is true.
Set gain to be applied to IR coefficients before filtering. Allowed range is 0 to 1. This gain is applied after any gain applied with irnorm option.
Set format of IR stream. Can be "mono" or "input". Default is "input".
Set max allowed Impulse Response filter duration in seconds. Default is 30 seconds. Allowed range is 0.1 to 60 seconds.
This option is deprecated, and does nothing.
This option is deprecated, and does nothing.
This option is deprecated, and does nothing.
This option is deprecated, and does nothing.
Set minimal partition size used for convolution. Default is 8192. Allowed range is from 1 to 65536. Lower values decrease latency at the cost of higher CPU usage.
Set maximal partition size used for convolution. Default is 8192. Allowed range is from 8 to 65536. Lower values may increase CPU usage.
Set number of input impulse responses streams which will be switchable at runtime. Allowed range is from 1 to 32. Default is 1.
Set IR stream which will be used for convolution, starting from 0; it must always be lower than the value supplied with the "nbirs" option. Default is 0. This option can be changed at runtime via commands.
Set which precision to use when processing samples.
Auto pick internal sample format depending on other filters.
Always use single-floating point precision sample format.
Always use double-floating point precision sample format.

Default value is auto.

Set when to load the IR stream. Can be "init" or "access". The first loads and prepares all IRs on initialization, the second loads an IR once, on its first access. Default is "init".

Examples

  • Apply reverb to stream using mono IR file as second input, complete command using ffmpeg:
    ffmpeg -i input.wav -i middle_tunnel_1way_mono.wav -lavfi afir output.wav
    
  • Apply true stereo processing given input stereo stream, and two stereo impulse responses for left and right channel, the impulse response files are files with names l_ir.wav and r_ir.wav, and setting irnorm option value:
    "pan=4C|c0=FL|c1=FL|c2=FR|c3=FR[a];amovie=l_ir.wav[LIR];amovie=r_ir.wav[RIR];[LIR][RIR]amerge[ir];[a][ir]afir=irfmt=input:irnorm=1.2,pan=stereo|FL<c0+c2|FR<c1+c3"
    
  • Similar to the above example, but with "irgain" explicitly set to an estimated value and with "irnorm" disabled:
    "pan=4C|c0=FL|c1=FL|c2=FR|c3=FR[a];amovie=l_ir.wav[LIR];amovie=r_ir.wav[RIR];[LIR][RIR]amerge[ir];[a][ir]afir=irfmt=input:irgain=-5dB:irnorm=-1,pan=stereo|FL<c0+c2|FR<c1+c3"
    

Set output format constraints for the input audio. The framework will negotiate the most appropriate format to minimize conversions.

It accepts the following parameters:

A '|'-separated list of requested sample formats.
A '|'-separated list of requested sample rates.
A '|'-separated list of requested channel layouts.

See the Channel Layout section in the ffmpeg-utils(1) manual for the required syntax.

If a parameter is omitted, all values are allowed.

Force the output to either unsigned 8-bit or signed 16-bit stereo:

aformat=sample_fmts=u8|s16:channel_layouts=stereo

Apply frequency shift to input audio samples.

The filter accepts the following options:

Specify frequency shift. Allowed range is -INT_MAX to INT_MAX. Default value is 0.0.
Set output gain applied to final output. Allowed range is from 0.0 to 1.0. Default value is 1.0.
Set filter order used for filtering. Allowed range is from 1 to 16. Default value is 8.

Commands

This filter supports all the above options as commands.
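
Examples

  • An illustrative sketch (the option name shift for the frequency-shift parameter above is assumed, and the value is arbitrary), shift all frequencies up by 200 Hz:
    afreqshift=shift=200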

Reduce broadband noise from input samples using Wavelets.

A description of the accepted options follows.

Set the noise sigma, allowed range is from 0 to 1. Default value is 0. This option controls the strength of denoising applied to the input samples. The most useful way to set this option is in decibels, e.g. -45dB.
Set the number of wavelet levels of decomposition. Allowed range is from 1 to 12. Default value is 10. Setting this too low makes denoising performance very poor.
Set wavelet type for decomposition of the input frame. They are sorted by number of coefficients, from lowest to highest. More coefficients mean slower filtering, but overall better quality. Available wavelets are:
Set percent of full denoising. Allowed range is from 0 to 100 percent. Default value is 85 percent or partial denoising.
If enabled, the first input frame will be used as a noise profile. If the first frame's samples contain non-noise content, performance will be very poor.
If enabled, input frames are analyzed for the presence of noise. If noise is detected with high probability, then the input frame's profile will be used for processing the following frames, until a new noise frame is detected.
Set size of single frame in number of samples. Allowed range is from 512 to 65536. Default frame size is 8192 samples.
Set softness applied inside thresholding function. Allowed range is from 0 to 10. Default softness is 1.

Commands

This filter supports all the above options as commands.
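
Examples

  • An illustrative sketch (the option names sigma, levels and percent for the parameters above are assumptions; values are arbitrary), denoise with a sigma of -45dB and 85% partial denoising:
    afwtdn=sigma=-45dB:levels=10:percent=85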

A gate is mainly used to reduce lower parts of a signal. This kind of signal processing reduces disturbing noise between useful signals.

Gating is done by detecting the volume below a chosen level threshold and dividing it by the factor set with ratio. The bottom of the noise floor is set via range. Because an exact manipulation of the signal would cause distortion of the waveform, the reduction can be levelled over time. This is done by setting attack and release.

attack determines how long the signal has to rise above the threshold before gain reduction stops, and release sets the time the signal has to fall below the threshold before the reduction is increased again. Signals shorter than the chosen attack time will be left untouched.

Set input level before filtering. Default is 1. Allowed range is from 0.015625 to 64.
Set the mode of operation. Can be "upward" or "downward". Default is "downward". If set to "upward" mode, higher parts of signal will be amplified, expanding dynamic range in upward direction. Otherwise, in case of "downward" lower parts of signal will be reduced.
Set the level of gain reduction when the signal is below the threshold. Default is 0.06125. Allowed range is from 0 to 1. Setting this to 0 disables reduction; the filter then behaves like an expander.
threshold
If a signal rises above this level the gain reduction is released. Default is 0.125. Allowed range is from 0 to 1.
Set a ratio by which the signal is reduced. Default is 2. Allowed range is from 1 to 9000.
Amount of milliseconds the signal has to rise above the threshold before gain reduction stops. Default is 20 milliseconds. Allowed range is from 0.01 to 9000.
Amount of milliseconds the signal has to fall below the threshold before the reduction is increased again. Default is 250 milliseconds. Allowed range is from 0.01 to 9000.
Set amount of amplification of signal after processing. Default is 1. Allowed range is from 1 to 64.
Curve the sharp knee around the threshold to enter gain reduction more softly. Default is 2.828427125. Allowed range is from 1 to 8.
Choose whether the exact signal should be taken for detection or an RMS-like one. Default is "rms". Can be "peak" or "rms".
Choose whether the average level of all channels or the loudest channel affects the reduction. Default is "average". Can be "average" or "maximum".

Commands

This filter supports all the above options as commands.
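
Examples

  • An illustrative sketch (the option names threshold, ratio, attack and release for the parameters above are assumptions; values are arbitrary), gate away low-level noise between phrases:
    agate=threshold=0.03:ratio=2:attack=20:release=250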

Apply an arbitrary Infinite Impulse Response filter.

It accepts the following parameters:

Set B/numerator/zeros/reflection coefficients.
Set A/denominator/poles/ladder coefficients.
Set channels gains.
Set input gain.
Set output gain.
Set coefficients format.
lattice-ladder function
analog transfer function
digital transfer function
Z-plane zeros/poles, cartesian (default)
Z-plane zeros/poles, polar radians
Z-plane zeros/poles, polar degrees
S-plane zeros/poles
Set type of processing.
direct processing
serial processing
parallel processing
Set filtering precision.
double-precision floating-point (default)
single-precision floating-point
32-bit integers
16-bit integers
Normalize filter coefficients; enabled by default. Enabling it will normalize the magnitude response at DC to 0dB.
mix
How much to use filtered signal in output. Default is 1. Range is between 0 and 1.
Show IR frequency response, magnitude (magenta), phase (green) and group delay (yellow) in an additional video stream. Disabled by default.
Set for which IR channel to display the frequency response. By default the first channel is displayed. This option is used only when response is enabled.
Set video stream size. This option is used only when response is enabled.

Coefficients in "tf" and "sf" format are separated by spaces and are in ascending order.

Coefficients in "zp" format are separated by spaces and order of coefficients doesn't matter. Coefficients in "zp" format are complex numbers with i imaginary unit.

Different coefficients and gains can be provided for every channel; in such a case use '|' to separate coefficients or gains. The last provided coefficients will be used for all remaining channels.

Examples

  • Apply 2 pole elliptic notch at around 5000Hz for 48000 Hz sample rate:
    aiir=k=1:z=7.957584807809675810E-1 -2.575128568908332300 3.674839853930788710 -2.57512875289799137 7.957586296317130880E-1:p=1 -2.86950072432325953 3.63022088054647218 -2.28075678147272232 6.361362326477423500E-1:f=tf:r=d
    
  • Same as above but in "zp" format:
    aiir=k=0.79575848078096756:z=0.80918701+0.58773007i 0.80918701-0.58773007i 0.80884700+0.58784055i 0.80884700-0.58784055i:p=0.63892345+0.59951235i 0.63892345-0.59951235i 0.79582691+0.44198673i 0.79582691-0.44198673i:f=zp:r=s
    
  • Apply a 3rd-order normalized analog Butterworth low-pass filter, using the analog transfer function format:
    aiir=z=1.3057 0 0 0:p=1.3057 2.3892 2.1860 1:f=sf:r=d
    

The limiter prevents an input signal from rising above a desired threshold. This limiter uses lookahead technology to prevent your signal from distorting. This means that there is a small delay after the signal is processed. Keep in mind that the delay it produces is the attack time you set.

The filter accepts the following options:

Set input gain. Default is 1.
Set output gain. Default is 1.
Don't let signals above this level pass the limiter. Default is 1.
The limiter will reach its attenuation level in this amount of time in milliseconds. Default is 5 milliseconds.
Come back from limiting to attenuation 1.0 in this amount of milliseconds. Default is 50 milliseconds.
When gain reduction is always needed ASC takes care of releasing to an average reduction level rather than reaching a reduction of 0 in the release time.
Select how much the release time is affected by ASC, 0 means nearly no changes in release time while 1 produces higher release times.
Auto level output signal. Default is enabled. This normalizes audio back to 0dB if enabled.
Compensate the delay introduced by using the lookahead buffer set with attack parameter. Also flush the valid audio data in the lookahead buffer when the stream hits EOF.

Depending on the chosen settings, it is recommended to upsample the input 2x or 4x with aresample before applying this filter.
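
Examples

  • An illustrative sketch (the option names limit, attack and release for the parameters above are assumptions; values are arbitrary), limit peaks to 0.8 with a 7 ms attack and a 100 ms release:
    alimiter=limit=0.8:attack=7:release=100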

Apply a two-pole all-pass filter with central frequency (in Hz) frequency, and filter-width width. An all-pass filter changes the audio's frequency to phase relationship without changing its frequency to amplitude relationship.

The filter accepts the following options:

Set frequency in Hz.
Set method to specify band-width of filter.
Hz
Q-Factor
octave
slope
kHz
Specify the band-width of a filter in width_type units.
How much to use filtered signal in output. Default is 1. Range is between 0 and 1.
Specify which channels to filter, by default all available are filtered.
Normalize biquad coefficients; disabled by default. Enabling it will normalize the magnitude response at DC to 0dB.
Set the filter order, can be 1 or 2. Default is 2.
Set transform type of IIR filter.
Set precision of filtering.
Pick automatic sample format depending on surround filters.
Always use signed 16-bit.
Always use signed 32-bit.
Always use float 32-bit.
Always use float 64-bit.

Commands

This filter supports the following commands:

Change allpass frequency. Syntax for the command is: "frequency"
Change allpass width_type. Syntax for the command is: "width_type"
Change allpass width. Syntax for the command is: "width"
Change allpass mix. Syntax for the command is: "mix"
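
Examples

  • An illustrative sketch (the option names f, width_type and width, and the shorthand value q for Q-Factor, are assumptions), apply an all-pass centered at 1000 Hz with a Q of 0.707:
    allpass=f=1000:width_type=q:width=0.707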

Loop audio samples.

The filter accepts the following options:

loop
Set the number of loops. Setting this value to -1 will result in infinite loops. Default is 0.
Set maximal number of samples. Default is 0.
Set first sample of loop. Default is 0.
Set the time of loop start in seconds. Only used if the start option is set to -1.
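
Examples

  • An illustrative sketch (the option names loop, size and start for the parameters above are assumptions), repeat the first 44100 samples three extra times:
    aloop=loop=3:size=44100:start=0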

Merge two or more audio streams into a single multi-channel stream.

The filter accepts the following options:

Set the number of inputs. Default is 2.

If the channel layouts of the inputs are disjoint, and therefore compatible, the channel layout of the output will be set accordingly and the channels will be reordered as necessary. If the channel layouts of the inputs are not disjoint, the output will have all the channels of the first input then all the channels of the second input, in that order, and the channel layout of the output will be the default value corresponding to the total number of channels.

For example, if the first input is in 2.1 (FL+FR+LF) and the second input is FC+BL+BR, then the output will be in 5.1, with the channels in the following order: a1, a2, b1, a3, b2, b3 (a1 is the first channel of the first input, b1 is the first channel of the second input).

On the other hand, if both inputs are in stereo, the output channels will be in the default order: a1, a2, b1, b2, and the channel layout will be arbitrarily set to 4.0, which may or may not be the expected value.

All inputs must have the same sample rate and format.

If inputs do not have the same duration, the output will stop with the shortest.

Examples

  • Merge two mono files into a stereo stream:
    amovie=left.wav [l] ; amovie=right.mp3 [r] ; [l] [r] amerge
    
  • Multiple merges assuming 1 video stream and 6 audio streams in input.mkv:
    ffmpeg -i input.mkv -filter_complex "[0:1][0:2][0:3][0:4][0:5][0:6] amerge=inputs=6" -c:a pcm_s16le output.mkv
    

Mixes multiple audio inputs into a single output.

Note that this filter only supports float samples (the amerge and pan audio filters support many formats). If the amix input has integer samples then aresample will be automatically inserted to perform the conversion to float samples.

It accepts the following parameters:

The number of inputs. If unspecified, it defaults to 2.
How to determine the end-of-stream.
The duration of the longest input. (default)
The duration of the shortest input.
The duration of the first input.
The transition time, in seconds, for volume renormalization when an input stream ends. The default value is 2 seconds.
Specify weight of each input audio stream as a sequence of numbers separated by a space. If fewer weights are specified compared to number of inputs, the last weight is assigned to the remaining inputs. Default weight for each input is 1.
normalize
Always scale inputs instead of only doing summation of samples. If this option is disabled, beware of heavy clipping unless the inputs are normalized before, or the output is normalized after, this filter. Enabled by default.

Examples

  • This will mix 3 input audio streams to a single output with the same duration as the first input and a dropout transition time of 3 seconds:
    ffmpeg -i INPUT1 -i INPUT2 -i INPUT3 -filter_complex amix=inputs=3:duration=first:dropout_transition=3 OUTPUT
    
  • This will mix one vocal and one music input audio stream to a single output with the same duration as the longest input. The music will have a quarter of the weight of the vocals, and the inputs are not normalized:
    ffmpeg -i VOCALS -i MUSIC -filter_complex amix=inputs=2:duration=longest:dropout_transition=0:weights="1 0.25":normalize=0 OUTPUT
    

Commands

This filter supports the following commands:

normalize
Syntax is the same as the option with the same name.

Multiply the first audio stream with the second audio stream and store the result in the output audio stream. Multiplication is done by multiplying each sample from the first stream with the sample at the same position from the second stream.

With this element-wise multiplication one can create amplitude fades and amplitude modulations.
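
Examples

  • Multiply two input streams sample by sample and write the product (file names are placeholders):
    ffmpeg -i main.wav -i envelope.wav -filter_complex amultiply output.wav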

High-order parametric multiband equalizer for each channel.

It accepts the following parameters:

This option string is in format: "cchn f=cf w=w g=g t=f | ..." Each equalizer band is separated by '|'.
Set channel number to which equalization will be applied. If input doesn't have that channel the entry is ignored.
Set central frequency for band. If input doesn't have that frequency the entry is ignored.
Set band width in Hertz.
Set band gain in dB.
Set filter type for band, optional, can be:
0
Butterworth, this is default.
1
Chebyshev type 1.
2
Chebyshev type 2.
curves
With this option activated, the frequency response of anequalizer is displayed in a video stream.
Set video stream size. Only useful if curves option is activated.
Set max gain that will be displayed. Only useful if curves option is activated. Setting this to a reasonable value makes it possible to display gain which is derived from neighbour bands which are too close to each other and thus produce higher gain when both are activated.
Set frequency scale used to draw frequency response in video output. Can be linear or logarithmic. Default is logarithmic.
Set color for each channel curve which is going to be displayed in video stream. This is list of color names separated by space or by '|'. Unrecognised or missing colors will be replaced by white color.

Examples

Lower the gain by 10 dB at a central frequency of 200 Hz with a width of 100 Hz for the first 2 channels, using a Chebyshev type 1 filter:
anequalizer=c0 f=200 w=100 g=-10 t=1|c1 f=200 w=100 g=-10 t=1

Commands

This filter supports the following commands:

Alter existing filter parameters. Syntax for the commands is: "fN|f=freq|w=width|g=gain"

fN is the existing filter number, starting from 0; if no such filter exists, an error is returned. freq sets the new frequency parameter. width sets the new width parameter in Hertz. gain sets the new gain parameter in dB.

Full filter invocation with asendcmd may look like this: asendcmd=c='4.0 anequalizer change 0|f=200|w=50|g=1',anequalizer=...

Reduce broadband noise in audio samples using Non-Local Means algorithm.

Each sample is adjusted by looking for other samples with similar contexts. This context similarity is defined by comparing their surrounding patches of size p. Patches are searched in an area of r around the sample.

The filter accepts the following options:

Set denoising strength. Allowed range is from 0.00001 to 10000. Default value is 0.00001.
Set patch radius duration. Allowed range is from 1 to 100 milliseconds. Default value is 2 milliseconds.
Set research radius duration. Allowed range is from 2 to 300 milliseconds. Default value is 6 milliseconds.
Set the output mode.

It accepts the following values:

Pass input unchanged.
Pass noise filtered out.
Pass only noise.

Default value is o.

Set smooth factor. Default value is 11. Allowed range is from 1 to 1000.

Commands

This filter supports all the above options as commands.
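
Examples

  • An illustrative sketch (the option names s, p and r for strength, patch radius and research radius are assumptions; values are arbitrary):
    anlmdn=s=0.0001:p=0.002:r=0.006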

Apply Normalized Least-Mean-(Squares|Fourth) algorithm to the first audio stream using the second audio stream.

This adaptive filter is used to mimic a desired filter by finding the filter coefficients that relate to producing the least mean square of the error signal (difference between the desired, 2nd input audio stream and the actual signal, the 1st input audio stream).

A description of the accepted options follows.

Set filter order.
Set filter mu.
Set the filter eps.
Set the filter leakage.
It accepts the following values:
Pass the 1st input.
Pass the 2nd input.
Pass difference between desired, 2nd input and error signal estimate.
Pass difference between input, 1st input and error signal estimate.
Pass error signal estimated samples.

Default value is o.

Set which precision to use when processing samples.
Auto pick internal sample format depending on other filters.
Always use single-floating point precision sample format.
Always use double-floating point precision sample format.

Examples

One of many uses of this filter is noise reduction: the input audio is filtered with the same samples delayed by a fixed amount. One such example for stereo audio is:
asplit[a][b],[a]adelay=32S|32S[a],[b][a]anlms=order=128:leakage=0.0005:mu=.5:out_mode=o

Commands

This filter supports the same commands as options, excluding option "order".

Pass the audio source unchanged to the output.

Pad the end of an audio stream with silence.

This can be used together with ffmpeg -shortest to extend audio streams to the same length as the video stream.

A description of the accepted options follows.

Set silence packet size. Default value is 4096.
Set the number of samples of silence to add to the end. After the value is reached, the stream is terminated. This option is mutually exclusive with whole_len.
Set the minimum total number of samples in the output audio stream. If the value is longer than the input audio length, silence is added to the end, until the value is reached. This option is mutually exclusive with pad_len.
Specify the duration of silence to add. See the Time duration section in the ffmpeg-utils(1) manual for the accepted syntax. Used only if set to a non-negative value.
Specify the minimum total duration of the output audio stream. See the Time duration section in the ffmpeg-utils(1) manual for the accepted syntax. Used only if set to a non-negative value. If the value is longer than the input audio length, silence is added to the end, until the value is reached. This option is mutually exclusive with pad_dur.

If neither the pad_len nor the whole_len nor pad_dur nor whole_dur option is set, the filter will add silence to the end of the input stream indefinitely.

Note that for ffmpeg 4.4 and earlier a zero pad_dur or whole_dur also caused the filter to add silence indefinitely.

Examples

  • Add 1024 samples of silence to the end of the input:
    apad=pad_len=1024
    
  • Make sure the audio output will contain at least 10000 samples, pad the input with silence if required:
    apad=whole_len=10000
    
  • Use ffmpeg to pad the audio input with silence, so that the video stream will always be the shortest and will be converted in full in the output file when using the shortest option:
    ffmpeg -i VIDEO -i AUDIO -filter_complex "[1:0]apad" -shortest OUTPUT
    

Add a phasing effect to the input audio.

A phaser filter creates a series of peaks and troughs in the frequency spectrum. The positions of the peaks and troughs are modulated so that they vary over time, creating a sweeping effect.

A description of the accepted parameters follows.

Set input gain. Default is 0.4.
Set output gain. Default is 0.74.
Set delay in milliseconds. Default is 3.0.
Set decay. Default is 0.4.
Set modulation speed in Hz. Default is 0.5.
Set modulation type. Default is triangular.

It accepts the following values: "triangular" (or "t") and "sinusoidal" (or "s").

Apply phase shift to input audio samples.

The filter accepts the following options:

Specify phase shift. Allowed range is from -1.0 to 1.0. Default value is 0.0.
Set output gain applied to final output. Allowed range is from 0.0 to 1.0. Default value is 1.0.
Set filter order used for filtering. Allowed range is from 1 to 16. Default value is 8.

Commands

This filter supports all the above options as commands.
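
Examples

  • An illustrative sketch (the option name shift for the phase-shift parameter above is assumed), shift the phase by half:
    aphaseshift=shift=0.5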

Measure Audio Peak Signal-to-Noise Ratio.

This filter takes two audio streams as input, and outputs the first audio stream. Results are in dB per channel at the end of either input.
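
Examples

  • Measure PSNR of a processed file against its reference and discard the audio output (file names are placeholders):
    ffmpeg -i processed.wav -i reference.wav -lavfi apsnr -f null -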

Apply a psychoacoustic clipper to the input audio stream.

The filter accepts the following options:

Set input gain. By default it is 1. Range is [0.015625 - 64].
Set output gain. By default it is 1. Range is [0.015625 - 64].
Set the clipping start value. Default value is 0dBFS or 1.
Output only difference samples, useful to hear introduced distortions. By default is disabled.
Set strength of adaptive distortion applied. Default value is 0.5. Allowed range is from 0 to 1.
Set number of iterations of psychoacoustic clipper. Allowed range is from 1 to 20. Default value is 10.
Auto level output signal. Default is disabled. This normalizes audio back to 0dBFS if enabled.

Commands

This filter supports the all above options as commands.
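
Examples

  • An illustrative sketch (the option names clip, adaptive and iterations for the parameters above are assumptions; values are arbitrary):
    apsyclip=clip=0.9:adaptive=0.5:iterations=10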

Audio pulsator is something between an autopanner and a tremolo, but it can produce funny stereo effects as well. Pulsator changes the volume of the left and right channel based on an LFO (low frequency oscillator) with different waveforms and shifted phases. This filter has the ability to define an offset between left and right channel. An offset of 0 means that both LFO shapes match each other: the left and right channel are altered equally, a conventional tremolo. An offset of 50% means that the shape of the right channel is exactly shifted in phase (or moved backwards by about half of the frequency), so the pulsator acts as an autopanner. At 1 both curves match again. Every setting in between moves the phase shift gaplessly between all stages and produces some "bypassing" sounds with sine and triangle waveforms. The closer the offset is set to 1 (starting from 0.5), the faster the signal passes from the left to the right speaker.

The filter accepts the following options:

Set input gain. By default it is 1. Range is [0.015625 - 64].
Set output gain. By default it is 1. Range is [0.015625 - 64].
Set waveform shape the LFO will use. Can be one of: sine, triangle, square, sawup or sawdown. Default is sine.
Set modulation. Define how much of the original signal is affected by the LFO.
Set left channel offset. Default is 0. Allowed range is [0 - 1].
Set right channel offset. Default is 0.5. Allowed range is [0 - 1].
Set pulse width. Default is 1. Allowed range is [0 - 2].
Set possible timing mode. Can be one of: bpm, ms or hz. Default is hz.
Set bpm. Default is 120. Allowed range is [30 - 300]. Only used if timing is set to bpm.
Set ms. Default is 500. Allowed range is [10 - 2000]. Only used if timing is set to ms.
Set frequency in Hz. Default is 2. Allowed range is [0.01 - 100]. Only used if timing is set to hz.
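
Examples

  • An illustrative sketch (the option names mode, timing, bpm and offset_r for the parameters above are assumptions), autopan in sync with 120 bpm:
    apulsator=mode=sine:timing=bpm:bpm=120:offset_r=0.5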

Resample the input audio to the specified parameters, using the libswresample library. If none are specified then the filter will automatically convert between its input and output.

This filter is also able to stretch/squeeze the audio data to make it match the timestamps or to inject silence / cut out audio to make it match the timestamps, do a combination of both or do neither.

The filter accepts the syntax [sample_rate:]resampler_options, where sample_rate expresses a sample rate and resampler_options is a list of key=value pairs, separated by ":". See the "Resampler Options" section in the ffmpeg-resampler(1) manual for the complete list of supported options.

Examples

  • Resample the input audio to 44100Hz:
    aresample=44100
    
  • Stretch/squeeze samples to the given timestamps, with a maximum of 1000 samples per second compensation:
    aresample=async=1000
    

Reverse an audio clip.

Warning: This filter requires memory to buffer the entire clip, so trimming is suggested.

Examples

Take the first 5 seconds of a clip, and reverse it.
atrim=end=5,areverse

Apply Recursive Least Squares algorithm to the first audio stream using the second audio stream.

This adaptive filter is used to mimic a desired filter by recursively finding the filter coefficients that relate to producing the minimal weighted linear least squares cost function of the error signal (difference between the desired, 2nd input audio stream and the actual signal, the 1st input audio stream).

A description of the accepted options follows.

Set the filter order.
Set the forgetting factor.
Set the coefficient to initialize internal covariance matrix.
Set the filter output samples. It accepts the following values:
Pass the 1st input.
Pass the 2nd input.
Pass difference between desired, 2nd input and error signal estimate.
Pass difference between input, 1st input and error signal estimate.
Pass error signal estimated samples.

Default value is o.

Set which precision to use when processing samples.
Auto pick internal sample format depending on other filters.
Always use single-floating point precision sample format.
Always use double-floating point precision sample format.
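
Examples

  • In the spirit of the anlms noise-reduction example above; an illustrative sketch (the option names order, lambda and out_mode are assumptions; values are arbitrary):
    asplit[a][b],[a]adelay=32S|32S[a],[b][a]arls=order=16:lambda=0.999:out_mode=o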

Reduce noise from speech using Recurrent Neural Networks.

This filter accepts the following options:

Set train model file to load. This option is always required.
mix
Set how much to mix filtered samples into final output. Allowed range is from -1 to 1. Default value is 1. Negative values are special, they set how much to keep filtered noise in the final filter output. Set this option to -1 to hear actual noise removed from input signal.

Commands

This filter supports all the above options as commands.
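
Examples

  • An illustrative sketch (the option names m and mix are assumptions; somemodel.rnnn is a placeholder for an actual trained model file):
    arnndn=m=somemodel.rnnn:mix=0.8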

Measure Audio Signal-to-Distortion Ratio.

This filter takes two audio streams as input, and outputs the first audio stream. Results are in dB per channel at the end of either input.

Set the number of samples per each output audio frame.

The last output packet may contain a different number of samples, as the filter will flush all the remaining samples when the input audio signals its end.

The filter accepts the following options:

Set the number of samples per each output audio frame. The number is intended as the number of samples per each channel. Default value is 1024.
If set to 1, the filter will pad the last audio frame with zeroes, so that the last frame will contain the same number of samples as the previous ones. Default value is 1.

For example, to set the number of per-frame samples to 1234 and disable padding for the last frame, use:

asetnsamples=n=1234:p=0

Set the sample rate without altering the PCM data. This will result in a change of speed and pitch.

The filter accepts the following options:

Set the output sample rate. Default is 44100 Hz.
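
Examples

  • A rough sketch, assuming a 44100 Hz input: raise the pitch by 25% while keeping the original duration, by retagging the rate, resampling back with aresample, and compensating the speed with atempo:
    asetrate=55125,aresample=44100,atempo=0.8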

Show a line containing various information for each input audio frame. The input audio is not modified.

The shown line contains a sequence of key/value pairs of the form key:value.

The following values are shown in the output:

The (sequential) number of the input frame, starting from 0.
The presentation timestamp of the input frame, in time base units; the time base depends on the filter input pad, and is usually 1/sample_rate.
The presentation timestamp of the input frame in seconds.
The sample format.
The channel layout.
The sample rate for the audio frame.
The number of samples (per channel) in the frame.
The Adler-32 checksum (printed in hexadecimal) of the audio data. For planar audio, the data is treated as if all the planes were concatenated.
A list of Adler-32 checksums for each data plane.

Measure Audio Scaled-Invariant Signal-to-Distortion Ratio.

This filter takes two audio streams as input, and outputs the first audio stream. Results are in dB per channel at the end of either input.

Apply audio soft clipping.

Soft clipping is a type of distortion effect where the amplitude of a signal is saturated along a smooth curve, rather than the abrupt shape of hard-clipping.

This filter accepts the following options:

Set type of soft-clipping.

It accepts the following values:

threshold
Set threshold from where to start clipping. Default value is 0dB or 1.
Set gain applied to output. Default value is 0dB or 1.
Set additional parameter which controls sigmoid function.
Set oversampling factor.

Commands

This filter supports all the above options as commands.
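
Examples

  • An illustrative sketch (the option names type, threshold and oversample, and the tanh clipping type, are assumptions; values are arbitrary):
    asoftclip=type=tanh:threshold=0.8:oversample=4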

Display frequency domain statistical information about the audio channels. Statistics are calculated and stored as metadata for each audio channel and for each audio frame.

It accepts the following option:

Set the window length in samples. Default value is 2048. Allowed range is from 32 to 65536.
Set window function.

It accepts the following values:

Default is "hann".

Set window overlap. Allowed range is from 0 to 1. Default value is 0.5.
Select the parameters which are measured. The metadata keys can be used as flags; the default is all, which measures everything. none disables all measurement.

A list of each metadata key follows:

entropy

Automatic Speech Recognition

This filter uses PocketSphinx for speech recognition. To enable compilation of this filter, you need to configure FFmpeg with "--enable-pocketsphinx".

It accepts the following options:

Set sampling rate of input audio. Default is 16000. This needs to match the speech models, otherwise one will get poor results.
Set dictionary containing acoustic model files.
Set pronunciation dictionary.
Set language model file.
Set language model set.
Set which language model to use.
Set output for log messages.

The filter exports recognized speech as the frame metadata "lavfi.asr.text".

Display time domain statistical information about the audio channels. Statistics are calculated and displayed for each audio channel and, where applicable, an overall figure is also given.

It accepts the following option:

Short window length in seconds, used for peak and trough RMS measurement. Default is 0.05 (50 milliseconds). Allowed range is "[0 - 10]".
Set metadata injection. All the metadata keys are prefixed with "lavfi.astats.X", where "X" is channel number starting from 1 or string "Overall". Default is disabled.

Available keys for each channel are: Bit_depth Crest_factor DC_offset Dynamic_range Entropy Flat_factor Max_difference Max_level Mean_difference Min_difference Min_level Noise_floor Noise_floor_count Number_of_Infs Number_of_NaNs Number_of_denormals Peak_count Abs_Peak_count Peak_level RMS_difference RMS_peak RMS_trough Zero_crossings Zero_crossings_rate

and for "Overall": Bit_depth DC_offset Entropy Flat_factor Max_difference Max_level Mean_difference Min_difference Min_level Noise_floor Noise_floor_count Number_of_Infs Number_of_NaNs Number_of_denormals Number_of_samples Peak_count Abs_Peak_count Peak_level RMS_difference RMS_level RMS_peak RMS_trough

For example, a full key looks like "lavfi.astats.1.DC_offset" or "lavfi.astats.Overall.Peak_count".

Read below for the description of the keys.

Set the number of frames over which cumulative stats are calculated before being reset. Default is disabled.
Select the parameters which are measured per channel. The metadata keys can be used as flags; the default is all, which measures everything. none disables all per-channel measurement.
Select the parameters which are measured overall. The metadata keys can be used as flags; the default is all, which measures everything. none disables all overall measurement.

A description of the measure keys follow:

no measures
all measures
overall bit depth of audio, i.e. number of bits used for each sample
standard ratio of peak to RMS level (note: not in dB)
mean amplitude displacement from zero
measured dynamic range of audio in dB
entropy measured across the whole audio; an entropy value near 1.0 is typically measured for white noise
flatness (i.e. consecutive samples with the same value) of the signal at its peak levels (i.e. either Min_level or Max_level)
maximal difference between two consecutive samples
maximal sample level
mean difference between two consecutive samples, i.e. the average of each difference between two consecutive samples
minimal difference between two consecutive samples
minimal sample level
minimum local peak measured in dBFS over a short window
number of occasions (not the number of samples) that the signal attained Noise floor
number of samples with an infinite value
number of samples with a NaN (not a number) value
number of samples with a subnormal value
number of samples
number of occasions (not the number of samples) that the signal attained either Min_level or Max_level
number of occasions that the absolute samples taken from the signal attained max absolute value of Min_level and Max_level
standard peak level measured in dBFS
Root Mean Square difference between two consecutive samples
standard RMS level measured in dBFS
peak and trough values for RMS level measured over a short window, measured in dBFS.
number of points where the waveform crosses the zero level axis
rate of Zero crossings and number of audio samples
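
Examples

  • Print the per-frame overall RMS level by injecting metadata (see the metadata and reset options above) and reading it back with the ametadata filter:
    ffmpeg -i input.wav -af astats=metadata=1:reset=1,ametadata=print:key=lavfi.astats.Overall.RMS_level -f null -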

Boost subwoofer frequencies.

The filter accepts the following options:

Set dry gain, how much of original signal is kept. Allowed range is from 0 to 1. Default value is 1.0.
Set wet gain, how much of filtered signal is kept. Allowed range is from 0 to 1. Default value is 1.0.
Set max boost factor. Allowed range is from 1 to 12. Default value is 2.
Set delay line decay gain value. Allowed range is from 0 to 1. Default value is 0.0.
feedback
Set delay line feedback gain value. Allowed range is from 0 to 1. Default value is 0.9.
Set cutoff frequency in Hertz. Allowed range is 50 to 900. Default value is 100.
Set slope amount for cutoff frequency. Allowed range is 0.0001 to 1. Default value is 0.5.
Set delay. Allowed range is from 1 to 100. Default value is 20.
Set the channels to process. Default value is all available.

Commands

This filter supports all the above options as commands.
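
Examples

  • An illustrative sketch (the option names boost, cutoff and wet for the parameters above are assumptions; values are arbitrary):
    asubboost=boost=4:cutoff=120:wet=0.8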

Cut subwoofer frequencies.

This filter allows setting a custom, steeper roll-off than the highpass filter, and is thus able to attenuate frequency content in the stop-band more strongly.

The filter accepts the following options:

Set cutoff frequency in Hertz. Allowed range is 2 to 200. Default value is 20.
Set filter order. Available values are from 3 to 20. Default value is 10.
Set input gain level. Allowed range is from 0 to 1. Default value is 1.

Commands

This filter supports all the above options as commands.
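
Examples

  • An illustrative sketch (the option names cutoff and order are assumptions), remove rumble below 20 Hz with a steep 10th-order roll-off:
    asubcut=cutoff=20:order=10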

Cut super frequencies.

The filter accepts the following options:

Set cutoff frequency in Hertz. Allowed range is 20000 to 192000. Default value is 20000.
Set filter order. Available values are from 3 to 20. Default value is 10.
Set input gain level. Allowed range is from 0 to 1. Default value is 1.

Commands

This filter supports all the above options as commands.

Apply high order Butterworth band-pass filter.

The filter accepts the following options:

Set center frequency in Hertz. Allowed range is 2 to 999999. Default value is 1000.
Set filter order. Available values are from 4 to 20. Default value is 4.
Set Q-factor. Allowed range is from 0.01 to 100. Default value is 1.
Set input gain level. Allowed range is from 0 to 2. Default value is 1.

Commands

This filter supports all the above options as commands.

Apply high order Butterworth band-stop filter.

The filter accepts the following options:

Set center frequency in Hertz. Allowed range is 2 to 999999. Default value is 1000.
Set filter order. Available values are from 4 to 20. Default value is 4.
Set Q-factor. Allowed range is from 0.01 to 100. Default value is 1.
Set input gain level. Allowed range is from 0 to 2. Default value is 1.

Commands

This filter supports all the above options as commands.

Adjust audio tempo.

The filter accepts exactly one parameter, the audio tempo. If not specified then the filter will assume nominal 1.0 tempo. Tempo must be in the [0.5, 100.0] range.

Note that tempo greater than 2 will skip some samples rather than blend them in. If for any reason this is a concern it is always possible to daisy-chain several instances of atempo to achieve the desired product tempo.

Examples

  • Slow down audio to 80% tempo:
    atempo=0.8
    
  • Speed up audio to 300% tempo:
    atempo=3
    
  • Speed up audio to 300% tempo by daisy-chaining two atempo instances:
    atempo=sqrt(3),atempo=sqrt(3)
    

Commands

This filter supports the following commands:

Change filter tempo scale factor. Syntax for the command is: "tempo"

Apply spectral tilt filter to audio stream.

This filter applies a spectral roll-off slope over a specified frequency band.

The filter accepts the following options:

Set central frequency of tilt in Hz. Default is 10000 Hz.
Set slope direction of tilt. Default is 0. Allowed range is from -1 to 1.
Set width of tilt. Default is 1000. Allowed range is from 100 to 10000.
Set order of tilt filter.
Set input volume level. Allowed range is from 0 to 4. Default is 1.

Commands

This filter supports all the above options as commands.
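
Examples

  • An illustrative sketch (the option names freq and slope for the parameters above are assumptions), tilt the spectrum gently downwards around 10 kHz:
    atilt=freq=10000:slope=-0.3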

Trim the input so that the output contains one continuous subpart of the input.

It accepts the following parameters:

Timestamp (in seconds) of the start of the section to keep. I.e. the audio sample with the timestamp start will be the first sample in the output.
Specify time of the first audio sample that will be dropped, i.e. the audio sample immediately preceding the one with the timestamp end will be the last sample in the output.
Same as start, except this option sets the start timestamp in samples instead of seconds.
Same as end, except this option sets the end timestamp in samples instead of seconds.
The maximum duration of the output in seconds.
The number of the first sample that should be output.
The number of the first sample that should be dropped.

start, end, and duration are expressed as time duration specifications; see the Time duration section in the ffmpeg-utils(1) manual.

Note that the first two sets of the start/end options and the duration option look at the frame timestamp, while the _sample options simply count the samples that pass through the filter. So start/end_pts and start/end_sample will give different results when the timestamps are wrong, inexact or do not start at zero. Also note that this filter does not modify the timestamps. If you wish to have the output timestamps start at zero, insert the asetpts filter after the atrim filter.

If multiple start or end options are set, this filter tries to be greedy and keep all samples that match at least one of the specified constraints. To keep only the part that matches all the constraints at once, chain multiple atrim filters.

The defaults are such that all the input is kept. So it is possible to set e.g. just the end values to keep everything before the specified time.

Examples:

  • Drop everything except the second minute of input:
    ffmpeg -i INPUT -af atrim=60:120
    
  • Keep only the first 1000 samples:
    ffmpeg -i INPUT -af atrim=end_sample=1000
    

Calculate normalized windowed cross-correlation between two input audio streams.

Resulting samples are always between -1 and 1 inclusive. A result of 1 means the two inputs are highly correlated in the selected segment. A result of 0 means they are not correlated at all. A result of -1 means the two inputs are out of phase, i.e. they cancel each other.

The filter accepts the following options:

Set size of segment over which cross-correlation is calculated. Default is 256. Allowed range is from 2 to 131072.
Set the algorithm for cross-correlation. Can be "slow", "fast" or "best". Default is "best". The fast algorithm assumes that mean values over any given segment are always zero and thus needs far fewer calculations. This is generally not true, but is valid for typical audio streams.

Examples

Calculate correlation between channels in stereo audio stream:
ffmpeg -i stereo.wav -af channelsplit,axcorrelate=size=1024:algo=fast correlation.wav

Apply a two-pole Butterworth band-pass filter with central frequency frequency, and (3dB-point) band-width width. The csg option selects a constant skirt gain (peak gain = Q) instead of the default: constant 0dB peak gain. The filter rolls off at 6dB per octave (20dB per decade).

The filter accepts the following options:

Set the filter's central frequency. Default is 3000.
Constant skirt gain if set to 1. Defaults to 0.
Set method to specify band-width of filter.
Hz
Q-Factor
octave
slope
kHz
Specify the band-width of a filter in width_type units.
How much to use filtered signal in output. Default is 1. Range is between 0 and 1.
Specify which channels to filter, by default all available are filtered.
Normalize biquad coefficients; disabled by default. Enabling it will normalize the magnitude response at DC to 0dB.
Set transform type of IIR filter.
Set precision of filtering.
Pick automatic sample format depending on surround filters.
Always use signed 16-bit.
Always use signed 32-bit.
Always use float 32-bit.
Always use float 64-bit.
Set block size used for reverse IIR processing. If this value is set high enough (higher than the impulse response length, truncated where it reaches near-zero values), filtering will become linear phase; otherwise, if it is not big enough, it will just produce nasty artifacts.

Note that filter delay will be exactly this many samples when set to non-zero value.

Commands

This filter supports the following commands:

Change bandpass frequency. Syntax for the command is: "frequency"
Change bandpass width_type. Syntax for the command is: "width_type"
Change bandpass width. Syntax for the command is: "width"
Change bandpass mix. Syntax for the command is: "mix"
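
Examples

  • An illustrative sketch (the option names f, width_type and width, and the shorthand value q for Q-Factor, are assumptions), pass a band around 3 kHz with a Q of 0.707:
    bandpass=f=3000:width_type=q:width=0.707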

Apply a two-pole Butterworth band-reject filter with central frequency frequency, and (3dB-point) band-width width. The filter rolls off at 6dB per octave (20dB per decade).

The filter accepts the following options:

Set the filter's central frequency. Default is 3000.
Set method to specify band-width of filter.
Hz
Q-Factor
octave
slope
kHz
Specify the band-width of a filter in width_type units.
How much to use filtered signal in output. Default is 1. Range is between 0 and 1.
Specify which channels to filter, by default all available are filtered.
Normalize biquad coefficients; disabled by default. Enabling it will normalize the magnitude response at DC to 0dB.
Set transform type of IIR filter.
Set precision of filtering.
Pick automatic sample format depending on surround filters.
Always use signed 16-bit.
Always use signed 32-bit.
Always use float 32-bit.
Always use float 64-bit.
Set block size used for reverse IIR processing. If this value is set high enough (higher than the impulse response length, truncated where it reaches near-zero values), filtering will become linear phase; otherwise, if it is not big enough, it will just produce nasty artifacts.

Note that filter delay will be exactly this many samples when set to non-zero value.

Commands

This filter supports the following commands:

Change bandreject frequency. Syntax for the command is: "frequency"
Change bandreject width_type. Syntax for the command is: "width_type"
Change bandreject width. Syntax for the command is: "width"
Change bandreject mix. Syntax for the command is: "mix"

Boost or cut the bass (lower) frequencies of the audio using a two-pole shelving filter with a response similar to that of a standard hi-fi's tone-controls. This is also known as shelving equalisation (EQ).

The filter accepts the following options:

Give the gain at 0 Hz. Its useful range is about -20 (for a large cut) to +20 (for a large boost). Beware of clipping when using a positive gain.
Set the filter's central frequency and so can be used to extend or reduce the frequency range to be boosted or cut. The default value is 100 Hz.
Set method to specify band-width of filter.
Hz
Q-Factor
octave
slope
kHz
Determine how steep the filter's shelf transition is.
Set number of poles. Default is 2.
How much to use filtered signal in output. Default is 1. Range is between 0 and 1.
Specify which channels to filter, by default all available are filtered.
Normalize biquad coefficients; disabled by default. Enabling it will normalize the magnitude response at DC to 0dB.
Set transform type of IIR filter.
Set precision of filtering.
Pick automatic sample format depending on surround filters.
Always use signed 16-bit.
Always use signed 32-bit.
Always use float 32-bit.
Always use float 64-bit.
Set block size used for reverse IIR processing. If this value is set high enough (higher than the impulse response length, truncated where it reaches near-zero values), filtering will become linear phase; otherwise, if it is not big enough, it will just produce nasty artifacts.

Note that filter delay will be exactly this many samples when set to non-zero value.

Commands

This filter supports the following commands:

Change bass frequency. Syntax for the command is: "frequency"
Change bass width_type. Syntax for the command is: "width_type"
Change bass width. Syntax for the command is: "width"
Change bass gain. Syntax for the command is: "gain"
Change bass mix. Syntax for the command is: "mix"
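
Examples

  • An illustrative sketch (the option names gain and frequency are assumptions; values are arbitrary), boost everything below roughly 100 Hz by 4 dB:
    bass=gain=4:frequency=100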

Apply a biquad IIR filter with the given coefficients, where b0, b1, b2 and a0, a1, a2 are the numerator and denominator coefficients respectively, and channels, c specifies which channels to filter (by default all available channels are filtered).

Commands

This filter supports the following commands:

Change biquad parameter. Syntax for the command is: "value"
How much to use filtered signal in output. Default is 1. Range is between 0 and 1.
Specify which channels to filter, by default all available are filtered.
Normalize biquad coefficients; disabled by default. Enabling it will normalize the magnitude response at DC to 0dB.
Set transform type of IIR filter.
Set precision of filtering.
Pick automatic sample format depending on surround filters.
Always use signed 16-bit.
Always use signed 32-bit.
Always use float 32-bit.
Always use float 64-bit.
Set block size used for reverse IIR processing. If this value is set high enough (higher than the impulse response length, truncated where it reaches near-zero values), filtering will become linear phase; otherwise, if it is not big enough, it will just produce nasty artifacts.

Note that filter delay will be exactly this many samples when set to non-zero value.

Bauer stereo to binaural transformation, which improves headphone listening of stereo audio records.

To enable compilation of this filter you need to configure FFmpeg with "--enable-libbs2b".

It accepts the following parameters:

Pre-defined crossfeed level.
Default level (fcut=700, feed=50).
Chu Moy circuit (fcut=700, feed=60).
Jan Meier circuit (fcut=650, feed=95).
Cut frequency (in Hz).
Feed level (in Hz).
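
Examples

  • An illustrative sketch (the option names fcut and feed are assumptions), apply the default-level crossfeed values quoted above:
    bs2b=fcut=700:feed=50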

Remap input channels to new locations.

It accepts the following parameters:

Map channels from input to output. The argument is a '|'-separated list of mappings, each in the "in_channel-out_channel" or "in_channel" form. in_channel can be either the name of the input channel (e.g. FL for front left) or its index in the input channel layout. out_channel is the name of the output channel or its index in the output channel layout. If out_channel is not given then it is implicitly an index, starting with zero and increasing by one for each mapping. Mixing different types of mappings is not allowed and will result in a parse error.
The channel layout of the output stream. If not specified, then filter will guess it based on the out_channel names or the number of mappings. Guessed layouts will not necessarily contain channels in the order of the mappings.

If no mapping is present, the filter will implicitly map input channels to output channels, preserving indices.

Examples

  • For example, assuming a 5.1+downmix input MOV file,
    ffmpeg -i in.mov -filter 'channelmap=map=DL-FL|DR-FR' out.wav
    

    will create an output WAV file tagged as stereo from the downmix channels of the input.

  • To fix a 5.1 WAV improperly encoded in AAC's native channel order
    ffmpeg -i in.wav -filter 'channelmap=1|2|0|5|3|4:5.1' out.wav
    

Split each channel from an input audio stream into a separate output stream.

It accepts the following parameters:

The channel layout of the input stream. The default is "stereo".
A channel layout describing the channels to be extracted as separate output streams or "all" to extract each input channel as a separate stream. The default is "all".

Choosing channels not present in the input's channel layout will result in an error.

Examples

  • For example, assuming a stereo input MP3 file,
    ffmpeg -i in.mp3 -filter_complex channelsplit out.mkv
    

    will create an output Matroska file with two audio streams, one containing only the left channel and the other the right channel.

  • Split a 5.1 WAV file into per-channel files:
    ffmpeg -i in.wav -filter_complex
    'channelsplit=channel_layout=5.1[FL][FR][FC][LFE][SL][SR]'
    -map '[FL]' front_left.wav -map '[FR]' front_right.wav -map '[FC]'
    front_center.wav -map '[LFE]' lfe.wav -map '[SL]' side_left.wav -map '[SR]'
    side_right.wav
    
  • Extract only LFE from a 5.1 WAV file:
    ffmpeg -i in.wav -filter_complex 'channelsplit=channel_layout=5.1:channels=LFE[LFE]'
    -map '[LFE]' lfe.wav
    

Add a chorus effect to the audio.

Can make a single vocal sound like a chorus, but can also be applied to instrumentation.

Chorus resembles an echo effect with a short delay, but whereas with echo the delay is constant, with chorus it is varied using sinusoidal or triangular modulation. The modulation depth defines the range the modulated delay is played before or after the delay. Hence the delayed sound will sound slower or faster, that is, the delayed sound tuned around the original one, as in a chorus where some vocals are slightly off key.

It accepts the following parameters:

Set input gain. Default is 0.4.
Set output gain. Default is 0.4.
Set delays. A typical delay is around 40ms to 60ms.
Set decays.
Set speeds.
Set depths.

Examples

  • A single delay:
    chorus=0.7:0.9:55:0.4:0.25:2
    
  • Two delays:
    chorus=0.6:0.9:50|60:0.4|0.32:0.25|0.4:2|1.3
    
  • Fuller sounding chorus with three delays:
    chorus=0.5:0.9:50|60|40:0.4|0.32|0.3:0.25|0.4|0.3:2|2.3|1.3
    

Compress or expand the audio's dynamic range.

It accepts the following parameters:

A list of times in seconds for each channel over which the instantaneous level of the input signal is averaged to determine its volume. attacks refers to increase of volume and decays refers to decrease of volume. For most situations, the attack time (response to the audio getting louder) should be shorter than the decay time, because the human ear is more sensitive to sudden loud audio than sudden soft audio. A typical value for attack is 0.3 seconds and a typical value for decay is 0.8 seconds. If specified number of attacks & decays is lower than number of channels, the last set attack/decay will be used for all remaining channels.
A list of points for the transfer function, specified in dB relative to the maximum possible signal amplitude. Each key points list must be defined using the following syntax: "x0/y0|x1/y1|x2/y2|...." or "x0/y0 x1/y1 x2/y2 ...."

The input values must be in strictly increasing order but the transfer function does not have to be monotonically rising. The point "0/0" is assumed but may be overridden (by "0/out-dBn"). Typical values for the transfer function are "-70/-70|-60/-20|1/0".

Set the curve radius in dB for all joints. It defaults to 0.01.
Set the additional gain in dB to be applied at all points on the transfer function. This allows for easy adjustment of the overall gain. It defaults to 0.
volume
Set an initial volume, in dB, to be assumed for each channel when filtering starts. This permits the user to supply a nominal level initially, so that, for example, a very large gain is not applied to initial signal levels before the companding has begun to operate. A typical value for audio which is initially quiet is -90 dB. It defaults to 0.
Set a delay, in seconds. The input audio is analyzed immediately, but audio is delayed before being fed to the volume adjuster. Specifying a delay approximately equal to the attack/decay times allows the filter to effectively operate in predictive rather than reactive mode. It defaults to 0.

Examples

  • Make music with both quiet and loud passages suitable for listening to in a noisy environment:
    compand=.3|.3:1|1:-90/-60|-60/-40|-40/-30|-20/-20:6:0:-90:0.2
    

    Another example for audio with whisper and explosion parts:

    compand=0|0:1|1:-90/-900|-70/-70|-30/-9|0/-3:6:0:0:0
    
  • A noise gate for when the noise is at a lower level than the signal:
    compand=.1|.1:.2|.2:-900/-900|-50.1/-900|-50/-50:.01:0:-90:.1
    
  • Here is another noise gate, this time for when the noise is at a higher level than the signal (making it, in some ways, similar to squelch):
    compand=.1|.1:.1|.1:-45.1/-45.1|-45/-900|0/-900:.01:45:-90:.1
    
  • 2:1 compression starting at -6dB:
    compand=points=-80/-80|-6/-6|0/-3.8|20/3.5
    
  • 2:1 compression starting at -9dB:
    compand=points=-80/-80|-9/-9|0/-5.3|20/2.9
    
  • 2:1 compression starting at -12dB:
    compand=points=-80/-80|-12/-12|0/-6.8|20/1.9
    
  • 2:1 compression starting at -18dB:
    compand=points=-80/-80|-18/-18|0/-9.8|20/0.7
    
  • 3:1 compression starting at -15dB:
    compand=points=-80/-80|-15/-15|0/-10.8|20/-5.2
    
  • Compressor/Gate:
    compand=points=-80/-105|-62/-80|-15.4/-15.4|0/-12|20/-7.6
    
  • Expander:
    compand=attacks=0:points=-80/-169|-54/-80|-49.5/-64.6|-41.1/-41.1|-25.8/-15|-10.8/-4.5|0/0|20/8.3
    
  • Hard limiter at -6dB:
    compand=attacks=0:points=-80/-80|-6/-6|20/-6
    
  • Hard limiter at -12dB:
    compand=attacks=0:points=-80/-80|-12/-12|20/-12
    
  • Hard noise gate at -35 dB:
    compand=attacks=0:points=-80/-115|-35.1/-80|-35/-35|20/20
    
  • Soft limiter:
    compand=attacks=0:points=-80/-80|-12.4/-12.4|-6/-8|0/-6.8|20/-2.8
    

Compensation Delay Line is a metric based delay to compensate differing positions of microphones or speakers.

For example, you have recorded a guitar with two microphones placed in different locations. Because the front of a sound wave has a fixed speed in normal conditions, the phasing of the microphones can vary and depends on their location and interposition. The best sound mix can be achieved when these microphones are in phase (synchronized). Note that a distance of ~30 cm between microphones makes one microphone capture the signal in antiphase to the other microphone. That makes the final mix sound moody. This filter helps to solve phasing problems by adding different delays to each microphone track, making them synchronized.

The best result can be reached when you take one track as base and synchronize other tracks one by one with it. Remember that synchronization/delay tolerance depends on sample rate, too. Higher sample rates will give more tolerance.

The filter accepts the following parameters:

Set millimeters distance. This is compensation distance for fine tuning. Default is 0.
Set cm distance. This is compensation distance for tightening distance setup. Default is 0.
Set meters distance. This is compensation distance for hard distance setup. Default is 0.
Set dry amount. Amount of unprocessed (dry) signal. Default is 0.
Set wet amount. Amount of processed (wet) signal. Default is 1.
Set temperature in degrees Celsius. This is the temperature of the environment. Default is 20.
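
As a sketch, assuming the options follow the order above with the names mm, cm, m, dry, wet and temp (as in the upstream FFmpeg filter), a track recorded 1.2 m further away could be compensated with:

ffmpeg -i mic2.wav -af compensationdelay=m=1:cm=20:temp=25 out.wav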

Commands

This filter supports all the above options as commands.

Apply headphone crossfeed filter.

Crossfeed is the process of blending the left and right channels of a stereo audio recording. It is mainly used to reduce the extreme stereo separation of low frequencies.

The intent is to produce a more speaker-like sound for the listener.

The filter accepts the following options:

Set strength of crossfeed. Default is 0.2. Allowed range is from 0 to 1. This sets the gain of the low shelf filter for the side part of the stereo image. Default is -6dB. Max allowed is -30dB when strength is set to 1.
Set soundstage wideness. Default is 0.5. Allowed range is from 0 to 1. This sets cut off frequency of low shelf filter. Default is cut off near 1550 Hz. With range set to 1 cut off frequency is set to 2100 Hz.
Set curve slope of low shelf filter. Default is 0.5. Allowed range is from 0.01 to 1.
Set input gain. Default is 0.9.
Set output gain. Default is 1.
Set block size used for reverse IIR processing. If this value is set to a high enough value (higher than the impulse response length, truncated when it reaches near-zero values), filtering will become linear phase; otherwise, if it is not big enough, it will just produce nasty artifacts.

Note that the filter delay will be exactly this many samples when set to a non-zero value.
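
A minimal sketch, assuming the first two options above are named strength and range as in the upstream FFmpeg filter:

ffmpeg -i in.flac -af crossfeed=strength=0.5:range=0.6 out.flac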

Commands

This filter supports all the above options as commands.

Simple algorithm for audio noise sharpening.

This filter linearly increases differences between each audio sample.

The filter accepts the following options:

Sets the intensity of the effect (default: 2.0). Must be in the range from -10.0 to 10.0, where 0 leaves the sound unchanged and 10.0 gives the maximum effect. Use a negative value to invert the filtering.
Enable clipping. By default is enabled.
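
For example, a sketch assuming the intensity option is named i as in the upstream FFmpeg filter:

ffmpeg -i in.wav -af crystalizer=i=4 out.wav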

Commands

This filter supports all the above options as commands.

Apply a DC shift to the audio.

This can be useful to remove a DC offset (caused perhaps by a hardware problem in the recording chain) from the audio. The effect of a DC offset is reduced headroom and hence volume. The astats filter can be used to determine if a signal has a DC offset.

Set the DC shift, allowed range is [-1, 1]. It indicates the amount to shift the audio.
Optional. It should have a value much less than 1 (e.g. 0.05 or 0.02) and is used to prevent clipping.
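
For example, a sketch assuming the two options above are named shift and limitergain as in the upstream FFmpeg filter:

ffmpeg -i in.wav -af dcshift=shift=0.1:limitergain=0.05 out.wav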

Apply de-essing to the audio samples.

Set intensity for triggering de-essing. Allowed range is from 0 to 1. Default is 0.
Set amount of ducking on treble part of sound. Allowed range is from 0 to 1. Default is 0.5.
How much of original frequency content to keep when de-essing. Allowed range is from 0 to 1. Default is 0.5.
Set the output mode.

It accepts the following values:

Pass input unchanged.
Pass ess filtered out.
Pass only ess.

Default value is o.
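
For example, a sketch assuming the intensity option is named i as in the upstream FFmpeg filter:

ffmpeg -i vocals.wav -af deesser=i=0.4 out.wav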

Enhance dialogue in stereo audio.

This filter accepts stereo input and produces surround (3.0) channels output. The newly produced front center channel has enhanced speech dialogue that was originally available in both stereo channels. This filter outputs the front left and front right channels unchanged, as available in the stereo input.

The filter accepts the following options:

Set the original center factor to keep in front center channel output. Allowed range is from 0 to 1. Default value is 1.
Set the dialogue enhance factor to put in front center channel output. Allowed range is from 0 to 3. Default value is 1.
Set the voice detection factor. Allowed range is from 2 to 32. Default value is 2.
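
A minimal sketch, assuming the options above are named original, enhance and voice as in the upstream FFmpeg filter:

ffmpeg -i stereo.wav -af dialoguenhance=enhance=2 out.wav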

Commands

This filter supports all the above options as commands.

Measure audio dynamic range.

DR values of 14 and higher are found in very dynamic material. DR of 8 to 13 is found in transition material. Anything less than 8 has very poor dynamics and is very compressed.

The filter accepts the following options:

Set window length in seconds used to split audio into segments of equal length. Default is 3 seconds.
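
Since this is a measurement-only filter, the decoded audio can be discarded; a sketch assuming the window option is named length as in the upstream FFmpeg filter:

ffmpeg -i in.flac -af drmeter=length=3 -f null -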

Dynamic Audio Normalizer.

This filter applies a certain amount of gain to the input audio in order to bring its peak magnitude to a target level (e.g. 0 dBFS). However, in contrast to more "simple" normalization algorithms, the Dynamic Audio Normalizer *dynamically* re-adjusts the gain factor to the input audio. This allows for applying extra gain to the "quiet" sections of the audio while avoiding distortions or clipping the "loud" sections. In other words: The Dynamic Audio Normalizer will "even out" the volume of quiet and loud sections, in the sense that the volume of each section is brought to the same target level. Note, however, that the Dynamic Audio Normalizer achieves this goal *without* applying "dynamic range compressing". It will retain 100% of the dynamic range *within* each section of the audio file.

Set the frame length in milliseconds. In range from 10 to 8000 milliseconds. Default is 500 milliseconds. The Dynamic Audio Normalizer processes the input audio in small chunks, referred to as frames. This is required, because a peak magnitude has no meaning for just a single sample value. Instead, we need to determine the peak magnitude for a contiguous sequence of sample values. While a "standard" normalizer would simply use the peak magnitude of the complete file, the Dynamic Audio Normalizer determines the peak magnitude individually for each frame. The length of a frame is specified in milliseconds. By default, the Dynamic Audio Normalizer uses a frame length of 500 milliseconds, which has been found to give good results with most files. Note that the exact frame length, in number of samples, will be determined automatically, based on the sampling rate of the individual input audio file.
Set the Gaussian filter window size. In range from 3 to 301, must be odd number. Default is 31. Probably the most important parameter of the Dynamic Audio Normalizer is the "window size" of the Gaussian smoothing filter. The filter's window size is specified in frames, centered around the current frame. For the sake of simplicity, this must be an odd number. Consequently, the default value of 31 takes into account the current frame, as well as the 15 preceding frames and the 15 subsequent frames. Using a larger window results in a stronger smoothing effect and thus in less gain variation, i.e. slower gain adaptation. Conversely, using a smaller window results in a weaker smoothing effect and thus in more gain variation, i.e. faster gain adaptation. In other words, the more you increase this value, the more the Dynamic Audio Normalizer will behave like a "traditional" normalization filter. On the contrary, the more you decrease this value, the more the Dynamic Audio Normalizer will behave like a dynamic range compressor.
Set the target peak value. This specifies the highest permissible magnitude level for the normalized audio input. This filter will try to approach the target peak magnitude as closely as possible, but at the same time it also makes sure that the normalized signal will never exceed the peak magnitude. A frame's maximum local gain factor is imposed directly by the target peak magnitude. The default value is 0.95 and thus leaves a headroom of 5%. It is not recommended to go above this value.
Set the maximum gain factor. In range from 1.0 to 100.0. Default is 10.0. The Dynamic Audio Normalizer determines the maximum possible (local) gain factor for each input frame, i.e. the maximum gain factor that does not result in clipping or distortion. The maximum gain factor is determined by the frame's highest magnitude sample. However, the Dynamic Audio Normalizer additionally bounds the frame's maximum gain factor by a predetermined (global) maximum gain factor. This is done in order to avoid excessive gain factors in "silent" or almost silent frames. By default, the maximum gain factor is 10.0. For most inputs the default value should be sufficient and it usually is not recommended to increase this value. Though, for input with an extremely low overall volume level, it may be necessary to allow even higher gain factors. Note, however, that the Dynamic Audio Normalizer does not simply apply a "hard" threshold (i.e. cut off values above the threshold). Instead, a "sigmoid" threshold function will be applied. This way, the gain factors will smoothly approach the threshold value, but never exceed that value.
Set the target RMS. In range from 0.0 to 1.0. Default is 0.0 - disabled. By default, the Dynamic Audio Normalizer performs "peak" normalization. This means that the maximum local gain factor for each frame is defined (only) by the frame's highest magnitude sample. This way, the samples can be amplified as much as possible without exceeding the maximum signal level, i.e. without clipping. Optionally, however, the Dynamic Audio Normalizer can also take into account the frame's root mean square, abbreviated RMS. In electrical engineering, the RMS is commonly used to determine the power of a time-varying signal. It is therefore considered that the RMS is a better approximation of the "perceived loudness" than just looking at the signal's peak magnitude. Consequently, by adjusting all frames to a constant RMS value, a uniform "perceived loudness" can be established. If a target RMS value has been specified, a frame's local gain factor is defined as the factor that would result in exactly that RMS value. Note, however, that the maximum local gain factor is still restricted by the frame's highest magnitude sample, in order to prevent clipping.
Enable channels coupling. By default is enabled. By default, the Dynamic Audio Normalizer will amplify all channels by the same amount. This means the same gain factor will be applied to all channels, i.e. the maximum possible gain factor is determined by the "loudest" channel. However, in some recordings, it may happen that the volume of the different channels is uneven, e.g. one channel may be "quieter" than the other one(s). In this case, this option can be used to disable the channel coupling. This way, the gain factor will be determined independently for each channel, depending only on the individual channel's highest magnitude sample. This allows for harmonizing the volume of the different channels.
Enable DC bias correction. By default is disabled. An audio signal (in the time domain) is a sequence of sample values. In the Dynamic Audio Normalizer these sample values are represented in the -1.0 to 1.0 range, regardless of the original input format. Normally, the audio signal, or "waveform", should be centered around the zero point. That means if we calculate the mean value of all samples in a file, or in a single frame, then the result should be 0.0 or at least very close to that value. If, however, there is a significant deviation of the mean value from 0.0, in either positive or negative direction, this is referred to as a DC bias or DC offset. Since a DC bias is clearly undesirable, the Dynamic Audio Normalizer provides optional DC bias correction. With DC bias correction enabled, the Dynamic Audio Normalizer will determine the mean value, or "DC correction" offset, of each input frame and subtract that value from all of the frame's sample values which ensures those samples are centered around 0.0 again. Also, in order to avoid "gaps" at the frame boundaries, the DC correction offset values will be interpolated smoothly between neighbouring frames.
Enable alternative boundary mode. By default is disabled. The Dynamic Audio Normalizer takes into account a certain neighbourhood around each frame. This includes the preceding frames as well as the subsequent frames. However, for the "boundary" frames, located at the very beginning and at the very end of the audio file, not all neighbouring frames are available. In particular, for the first few frames in the audio file, the preceding frames are not known. And, similarly, for the last few frames in the audio file, the subsequent frames are not known. Thus, the question arises which gain factors should be assumed for the missing frames in the "boundary" region. The Dynamic Audio Normalizer implements two modes to deal with this situation. The default boundary mode assumes a gain factor of exactly 1.0 for the missing frames, resulting in a smooth "fade in" and "fade out" at the beginning and at the end of the input, respectively.
Set the compress factor. In range from 0.0 to 30.0. Default is 0.0. By default, the Dynamic Audio Normalizer does not apply "traditional" compression. This means that signal peaks will not be pruned and thus the full dynamic range will be retained within each local neighbourhood. However, in some cases it may be desirable to combine the Dynamic Audio Normalizer's normalization algorithm with a more "traditional" compression. For this purpose, the Dynamic Audio Normalizer provides an optional compression (thresholding) function. If (and only if) the compression feature is enabled, all input frames will be processed by a soft knee thresholding function prior to the actual normalization process. Put simply, the thresholding function is going to prune all samples whose magnitude exceeds a certain threshold value. However, the Dynamic Audio Normalizer does not simply apply a fixed threshold value. Instead, the threshold value will be adjusted for each individual frame. In general, smaller parameters result in stronger compression, and vice versa. Values below 3.0 are not recommended, because audible distortion may appear.
Set the target threshold value. This specifies the lowest permissible magnitude level for the audio input which will be normalized. If the input frame volume is above this value, the frame will be normalized. Otherwise, the frame may not be normalized at all. The default value is set to 0, which means all input frames will be normalized. This option is mostly useful if you do not want digital noise to be amplified.
Specify which channels to filter, by default all available channels are filtered.
Specify overlap for frames. If set to 0 (default) no frame overlapping is done. Using values greater than 0 and less than 1 will make gain adjustments less conservative, similar to setting the framelen option to a smaller value; if the framelen option value is compensated for the non-zero overlap, then gain adjustments will be smoother across time compared to the zero-overlap case.
Specify the peak mapping curve expression which is going to be used when calculating gain applied to frames. The max output frame gain will still be limited by other options mentioned previously for this filter.

The expression can contain the following constants:

current channel number
current sample number
number of channels
timestamp expressed in seconds
sr
sample rate
current frame peak value
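
For example, a sketch assuming the frame length and Gaussian window size options are named f and g as in the upstream FFmpeg filter (g must be odd):

ffmpeg -i in.wav -af dynaudnorm=f=250:g=15 out.wav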

Commands

This filter supports all the above options as commands.

Make audio easier to listen to on headphones.

This filter adds `cues' to 44.1kHz stereo (i.e. audio CD format) audio so that when listened to on headphones the stereo image is moved from inside your head (standard for headphones) to outside and in front of the listener (standard for speakers).

Ported from SoX.
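
The filter takes no options; note that the input must be 44.1kHz stereo (the filter graph will convert automatically when needed). For example:

ffmpeg -i in.wav -af earwax out.wav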

Apply a two-pole peaking equalisation (EQ) filter. With this filter, the signal-level at and around a selected frequency can be increased or decreased, whilst (unlike bandpass and bandreject filters) that at all other frequencies is unchanged.

In order to produce complex equalisation curves, this filter can be given several times, each with a different central frequency.

The filter accepts the following options:

Set the filter's central frequency in Hz.
Set method to specify band-width of filter.
Hz
Q-Factor
octave
slope
kHz
Specify the band-width of a filter in width_type units.
Set the required gain or attenuation in dB. Beware of clipping when using a positive gain.
How much to use filtered signal in output. Default is 1. Range is between 0 and 1.
Specify which channels to filter, by default all available are filtered.
Normalize biquad coefficients, by default is disabled. Enabling it will normalize magnitude response at DC to 0dB.
Set transform type of IIR filter.
Set precision of filtering.
Pick automatic sample format depending on surround filters.
Always use signed 16-bit.
Always use signed 32-bit.
Always use float 32-bit.
Always use float 64-bit.
Set block size used for reverse IIR processing. If this value is set to a high enough value (higher than the impulse response length, truncated when it reaches near-zero values), filtering will become linear phase; otherwise, if it is not big enough, it will just produce nasty artifacts.

Note that the filter delay will be exactly this many samples when set to a non-zero value.

Examples

  • Attenuate 10 dB at 1000 Hz, with a bandwidth of 200 Hz:
    equalizer=f=1000:t=h:width=200:g=-10
    
  • Apply 2 dB gain at 1000 Hz with Q 1 and attenuate 5 dB at 100 Hz with Q 2:
    equalizer=f=1000:t=q:w=1:g=2,equalizer=f=100:t=q:w=2:g=-5
    

Commands

This filter supports the following commands:

Change equalizer frequency. Syntax for the command is : "frequency"
Change equalizer width_type. Syntax for the command is : "width_type"
Change equalizer width. Syntax for the command is : "width"
Change equalizer gain. Syntax for the command is : "gain"
Change equalizer mix. Syntax for the command is : "mix"

Linearly increases the difference between left and right channels which adds some sort of "live" effect to playback.

The filter accepts the following options:

Sets the difference coefficient (default: 2.5). 0.0 means mono sound (average of both channels); with 1.0 the sound will be unchanged; with -1.0 the left and right channels will be swapped.
Enable clipping. By default is enabled.
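
A minimal sketch, assuming the difference coefficient option is named m as in the upstream FFmpeg filter:

ffmpeg -i in.wav -af extrastereo=m=3.5 out.wav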

Commands

This filter supports all the above options as commands.

Apply FIR Equalization using arbitrary frequency response.

The filter accepts the following option:

Set gain curve equation (in dB). The expression can contain variables:
the evaluated frequency
sr
sample rate
channel number, set to 0 when multichannels evaluation is disabled
channel id, see libavutil/channel_layout.h, set to the first channel id when multichannels evaluation is disabled
number of channels
channel_layout, see libavutil/channel_layout.h

and functions:

interpolate gain on frequency f based on gain_entry
same as gain_interpolate, but smoother

This option is also available as command. Default is gain_interpolate(f).

Set gain entry for gain_interpolate function. The expression can contain functions:
store gain entry at frequency f with value g

This option is also available as command.

Set filter delay in seconds. Higher value means more accurate. Default is 0.01.
Set filter accuracy in Hz. Lower value means more accurate. Default is 5.
Set window function. Acceptable values are:
rectangular window, useful when gain curve is already smooth
hann window (default)
hamming window
blackman window
3-terms continuous 1st derivative nuttall window
minimum 3-terms discontinuous nuttall window
4-terms continuous 1st derivative nuttall window
minimum 4-terms discontinuous nuttall (blackman-nuttall) window
blackman-harris window
tukey window
If enabled, use fixed number of audio samples. This improves speed when filtering with large delay. Default is disabled.
Enable multichannels evaluation on gain. Default is disabled.
Enable zero phase mode by subtracting timestamp to compensate delay. Default is disabled.
scale
Set scale used by gain. Acceptable values are:
linear frequency, linear gain
linear frequency, logarithmic (in dB) gain (default)
logarithmic (in octave scale where 20 Hz is 0) frequency, linear gain
logarithmic frequency, logarithmic gain
Set file for dumping, suitable for gnuplot.
Set scale for dumpfile. Acceptable values are same with scale option. Default is linlog.
Enable 2-channel convolution using complex FFT. This improves speed significantly. Default is disabled.
Enable minimum phase impulse response. Default is disabled.

Examples

  • lowpass at 1000 Hz:
    firequalizer=gain='if(lt(f,1000), 0, -INF)'
    
  • lowpass at 1000 Hz with gain_entry:
    firequalizer=gain_entry='entry(1000,0); entry(1001, -INF)'
    
  • custom equalization:
    firequalizer=gain_entry='entry(100,0); entry(400, -4); entry(1000, -6); entry(2000, 0)'
    
  • higher delay with zero phase to compensate delay:
    firequalizer=delay=0.1:fixed=on:zero_phase=on
    
  • lowpass on left channel, highpass on right channel:
    firequalizer=gain='if(eq(chid,1), gain_interpolate(f), if(eq(chid,2), gain_interpolate(1e6+f), 0))'
    :gain_entry='entry(1000, 0); entry(1001,-INF); entry(1e6+1000,0)':multi=on
    

Apply a flanging effect to the audio.

The filter accepts the following options:

Set base delay in milliseconds. Range from 0 to 30. Default value is 0.
Set added sweep delay in milliseconds. Range from 0 to 10. Default value is 2.
Set percentage regeneration (delayed signal feedback). Range from -95 to 95. Default value is 0.
Set percentage of delayed signal mixed with original. Range from 0 to 100. Default value is 71.
Set sweeps per second (Hz). Range from 0.1 to 10. Default value is 0.5.
Set swept wave shape, can be triangular or sinusoidal. Default value is sinusoidal.
phase
Set swept wave percentage-shift for multi channel. Range from 0 to 100. Default value is 25.
Set delay-line interpolation, linear or quadratic. Default is linear.
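
For example, a sketch assuming the options above are named delay, depth and regen as in the upstream FFmpeg filter:

ffmpeg -i in.wav -af flanger=delay=10:depth=5:regen=10 out.wav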

Apply Haas effect to audio.

Note that this makes most sense to apply on mono signals. Applied to a mono signal, this filter gives it some directionality and stretches its stereo image.

The filter accepts the following options:

Set input level. By default is 1, or 0dB.
Set output level. By default is 1, or 0dB.
Set gain applied to side part of signal. By default is 1.
Set kind of middle source. Can be one of the following:
Pick left channel.
Pick right channel.
Pick middle part signal of stereo image.
Pick side part signal of stereo image.
Change middle phase. By default is disabled.
Set left channel delay. By default is 2.05 milliseconds.
Set left channel balance. By default is -1.
Set left channel gain. By default is 1.
Change left phase. By default is disabled.
Set right channel delay. By default is 2.12 milliseconds.
Set right channel balance. By default is 1.
Set right channel gain. By default is 1.
Change right phase. By default is enabled.
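
For example, with every option left at its default:

ffmpeg -i mono.wav -af haas out.wav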

Decodes High Definition Compatible Digital (HDCD) data. A 16-bit PCM stream with embedded HDCD codes is expanded into a 20-bit PCM stream.

The filter supports the Peak Extend and Low-level Gain Adjustment features of HDCD, and detects the Transient Filter flag.

ffmpeg -i HDCD16.flac -af hdcd OUT24.flac

When using the filter with wav, note the default encoding for wav is 16-bit, so the resulting 20-bit stream will be truncated back to 16-bit. Use something like -acodec pcm_s24le after the filter to get 24-bit PCM output.

ffmpeg -i HDCD16.wav -af hdcd OUT16.wav
ffmpeg -i HDCD16.wav -af hdcd -c:a pcm_s24le OUT24.wav

The filter accepts the following options:

Disable any automatic format conversion or resampling in the filter graph.
Process the stereo channels together. If target_gain does not match between channels, consider it invalid and use the last valid target_gain.
Set the code detect timer period in ms.
Always extend peaks above -3dBFS even if PE isn't signaled.
Replace audio with a solid tone and adjust the amplitude to signal some specific aspect of the decoding process. The output file can be loaded in an audio editor alongside the original to aid analysis.

"analyze_mode=pe:force_pe=true" can be used to see all samples above the PE level.

Modes are:

0, off
Disabled
1, lle
Gain adjustment level at each sample
2, pe
Samples where peak extend occurs
3, cdt
Samples where the code detect timer is active
4, tgm
Samples where the target gain does not match between channels

Apply head-related transfer functions (HRTFs) to create virtual loudspeakers around the user for binaural listening via headphones. The HRIRs are provided via additional streams: for each channel, one stereo input stream is needed.

The filter accepts the following options:

Set mapping of input streams for convolution. The argument is a '|'-separated list of channel names in order as they are given as additional stream inputs for the filter. This also specifies the number of input streams. The number of input streams must be not less than the number of channels in the first stream plus one.
Set gain applied to audio. Value is in dB. Default is 0.
Set processing type. Can be time or freq. time is processing audio in time domain which is slow. freq is processing audio in frequency domain which is fast. Default is freq.
Set custom gain for LFE channels. Value is in dB. Default is 0.
Set size of frame in number of samples which will be processed at once. Default value is 1024. Allowed range is from 1024 to 96000.
Set format of hrir stream. Default value is stereo. Alternative value is multich. If value is set to stereo, the number of additional streams should be greater than or equal to the number of input channels in the first input stream, and each additional stream should be stereo. If value is set to multich, the number of additional streams should be exactly one, and the number of input channels of the additional stream should be equal to or greater than twice the number of channels of the first input stream.

Examples

  • Full example using wav files as coefficients with amovie filters for 7.1 downmix, each amovie filter use stereo file with IR coefficients as input. The files give coefficients for each position of virtual loudspeaker:
    ffmpeg -i input.wav
    -filter_complex "amovie=azi_270_ele_0_DFC.wav[sr];amovie=azi_90_ele_0_DFC.wav[sl];amovie=azi_225_ele_0_DFC.wav[br];amovie=azi_135_ele_0_DFC.wav[bl];amovie=azi_0_ele_0_DFC.wav,asplit[fc][lfe];amovie=azi_35_ele_0_DFC.wav[fl];amovie=azi_325_ele_0_DFC.wav[fr];[0:a][fl][fr][fc][lfe][bl][br][sl][sr]headphone=FL|FR|FC|LFE|BL|BR|SL|SR"
    output.wav
    
  • Full example using wav files as coefficients with amovie filters for 7.1 downmix, but now in multich hrir format.
    ffmpeg -i input.wav -filter_complex "amovie=minp.wav[hrirs];[0:a][hrirs]headphone=map=FL|FR|FC|LFE|BL|BR|SL|SR:hrir=multich"
    output.wav
    

Apply a high-pass filter with 3dB point frequency. The filter can be either single-pole, or double-pole (the default). The filter rolls off at 6dB per pole per octave (20dB per pole per decade).

The filter accepts the following options:

Set frequency in Hz. Default is 3000.
Set number of poles. Default is 2.
Set method to specify band-width of filter.
Hz
Q-Factor
octave
slope
kHz
Specify the band-width of a filter in width_type units. Applies only to double-pole filter. The default is 0.707q and gives a Butterworth response.
How much to use filtered signal in output. Default is 1. Range is between 0 and 1.
Specify which channels to filter, by default all available are filtered.
Normalize biquad coefficients, by default is disabled. Enabling it will normalize magnitude response at DC to 0dB.
Set transform type of IIR filter.
Set precision of filtering.
Pick automatic sample format depending on surround filters.
Always use signed 16-bit.
Always use signed 32-bit.
Always use float 32-bit.
Always use float 64-bit.
Set block size used for reverse IIR processing. If this value is set to a high enough value (higher than the impulse response length, truncated when it reaches near-zero values), filtering will become linear phase; otherwise, if it is not big enough, it will just produce nasty artifacts.

Note that the filter delay will be exactly this many samples when set to a non-zero value.
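
For example, to attenuate rumble below 200 Hz with the default double-pole filter (f is the frequency option, as in the commands below):

ffmpeg -i in.wav -af highpass=f=200 out.wav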

Commands

This filter supports the following commands:

Change highpass frequency. Syntax for the command is : "frequency"
Change highpass width_type. Syntax for the command is : "width_type"
Change highpass width. Syntax for the command is : "width"
Change highpass mix. Syntax for the command is : "mix"

Join multiple input streams into one multi-channel stream.

It accepts the following parameters:

The number of input streams. It defaults to 2.
The desired output channel layout. It defaults to stereo.
Map channels from inputs to output. The argument is a '|'-separated list of mappings, each in the "input_idx.in_channel-out_channel" form. input_idx is the 0-based index of the input stream. in_channel can be either the name of the input channel (e.g. FL for front left) or its index in the specified input stream. out_channel is the name of the output channel.

The filter will attempt to guess the mappings when they are not specified explicitly. It does so by first trying to find an unused matching input channel and if that fails it picks the first unused input channel.

Join 3 inputs (with properly set channel layouts):

ffmpeg -i INPUT1 -i INPUT2 -i INPUT3 -filter_complex join=inputs=3 OUTPUT

Build a 5.1 output from 6 single-channel streams:

ffmpeg -i fl -i fr -i fc -i sl -i sr -i lfe -filter_complex
'join=inputs=6:channel_layout=5.1:map=0.0-FL|1.0-FR|2.0-FC|3.0-SL|4.0-SR|5.0-LFE'
out

Load a LADSPA (Linux Audio Developer's Simple Plugin API) plugin.

To enable compilation of this filter you need to configure FFmpeg with "--enable-ladspa".

Specifies the name of LADSPA plugin library to load. If the environment variable LADSPA_PATH is defined, the LADSPA plugin is searched in each one of the directories specified by the colon separated list in LADSPA_PATH, otherwise in the standard LADSPA paths, which are in this order: HOME/.ladspa/lib/, /usr/local/lib/ladspa/, /usr/lib/ladspa/.
Specifies the plugin within the library. Some libraries contain only one plugin, but others contain many of them. If this is not set, the filter will list all available plugins within the specified library.
Set the '|' separated list of controls which are zero or more floating point values that determine the behavior of the loaded plugin (for example delay, threshold or gain). Controls need to be defined using the following syntax: c0=value0|c1=value1|c2=value2|..., where valuei is the value set on the i-th control. Alternatively they can be also defined using the following syntax: value0|value1|value2|..., where valuei is the value set on the i-th control. If controls is set to "help", all available controls and their valid ranges are printed.
Specify the sample rate, defaults to 44100. Only used if the plugin has zero inputs.
Set the number of samples per channel per each output frame, default is 1024. Only used if the plugin has zero inputs.
Set the minimum duration of the sourced audio. See the Time duration section in the ffmpeg-utils(1) manual for the accepted syntax. Note that the resulting duration may be greater than the specified duration, as the generated audio is always cut at the end of a complete frame. If not specified, or the expressed duration is negative, the audio is supposed to be generated forever. Only used if the plugin has zero inputs.
Enable latency compensation, by default is disabled. Only used if the plugin has inputs.

Examples

  • List all available plugins within amp (LADSPA example plugin) library:
    ladspa=file=amp
    
  • List all available controls and their valid ranges for "vcf_notch" plugin from "VCF" library:
    ladspa=f=vcf:p=vcf_notch:c=help
    
  • Simulate low quality audio equipment using "Computer Music Toolkit" (CMT) plugin library:
    ladspa=file=cmt:plugin=lofi:controls=c0=22|c1=12|c2=12
    
  • Add reverberation to the audio using TAP-plugins (Tom's Audio Processing plugins):
    ladspa=file=tap_reverb:tap_reverb
    
  • Generate white noise, with 0.2 amplitude:
    ladspa=file=cmt:noise_source_white:c=c0=.2
    
  • Generate 20 bpm clicks using plugin "C* Click - Metronome" from the "C* Audio Plugin Suite" (CAPS) library:
    ladspa=file=caps:Click:c=c1=20
    
  • Apply "C* Eq10X2 - Stereo 10-band equaliser" effect:
    ladspa=caps:Eq10X2:c=c0=-48|c9=-24|c3=12|c4=2
    
  • Increase volume by 20dB using fast lookahead limiter from Steve Harris "SWH Plugins" collection:
    ladspa=fast_lookahead_limiter_1913:fastLookaheadLimiter:20|0|2
    
  • Attenuate low frequencies using Multiband EQ from Steve Harris "SWH Plugins" collection:
    ladspa=mbeq_1197:mbeq:-24|-24|-24|0|0|0|0|0|0|0|0|0|0|0|0
    
  • Reduce stereo image using "Narrower" from the "C* Audio Plugin Suite" (CAPS) library:
    ladspa=caps:Narrower
    
  • Another white noise, now using "C* Audio Plugin Suite" (CAPS) library:
    ladspa=caps:White:.2
    
  • Some fractal noise, using "C* Audio Plugin Suite" (CAPS) library:
    ladspa=caps:Fractal:c=c1=1
    
  • Dynamic volume normalization using "VLevel" plugin:
    ladspa=vlevel-ladspa:vlevel_mono
    

Commands

This filter supports the following commands:

Modify the N-th control value.

If the specified value is not valid, it is ignored and the prior one is kept.

EBU R128 loudness normalization. Includes both dynamic and linear normalization modes. Support for both single pass (livestreams, files) and double pass (files) modes. This algorithm can target IL, LRA, and maximum true peak. In dynamic mode, to accurately detect true peaks, the audio stream will be upsampled to 192 kHz. Use the "-ar" option or "aresample" filter to explicitly set an output sample rate.

The filter accepts the following options:

Set integrated loudness target. Range is -70.0 - -5.0. Default value is -24.0.
Set loudness range target. Range is 1.0 - 50.0. Default value is 7.0.
Set maximum true peak. Range is -9.0 - +0.0. Default value is -2.0.
Measured IL of input file. Range is -99.0 - +0.0.
Measured LRA of input file. Range is 0.0 - 99.0.
Measured true peak of input file. Range is -99.0 - +99.0.
Measured threshold of input file. Range is -99.0 - +0.0.
Set offset gain. Gain is applied before the true-peak limiter. Range is -99.0 - +99.0. Default is +0.0.
Normalize by linearly scaling the source audio. "measured_I", "measured_LRA", "measured_TP", and "measured_thresh" must all be specified. Target LRA shouldn't be lower than source LRA and the change in integrated loudness shouldn't result in a true peak which exceeds the target TP. If any of these conditions aren't met, normalization mode will revert to dynamic. Options are "true" or "false". Default is "true".
Treat mono input files as "dual-mono". If a mono file is intended for playback on a stereo system, its EBU R128 measurement will be perceptually incorrect. If set to "true", this option will compensate for this effect. Multi-channel input files are not affected by this option. Options are true or false. Default is false.
Set print format for stats. Options are summary, json, or none. Default value is none.
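
A single-pass sketch targeting -16 LUFS integrated loudness, assuming the option names I, LRA and TP from the upstream FFmpeg filter:

ffmpeg -i in.wav -af loudnorm=I=-16:LRA=11:TP=-1.5 out.wav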

Apply a low-pass filter with 3dB point frequency. The filter can be either single-pole or double-pole (the default). The filter rolls off at 6dB per pole per octave (20dB per pole per decade).

The filter accepts the following options:

Set frequency in Hz. Default is 500.
Set number of poles. Default is 2.
Set method to specify band-width of filter.
Hz
Q-Factor
octave
slope
kHz
Specify the band-width of a filter in width_type units. Applies only to double-pole filter. The default is 0.707q and gives a Butterworth response.
How much to use filtered signal in output. Default is 1. Range is between 0 and 1.
Specify which channels to filter, by default all available are filtered.
Normalize biquad coefficients, by default is disabled. Enabling it will normalize magnitude response at DC to 0dB.
Set transform type of IIR filter.
Set precision of filtering.
Pick automatic sample format depending on surround filters.
Always use signed 16-bit.
Always use signed 32-bit.
Always use float 32-bit.
Always use float 64-bit.
Set block size used for reverse IIR processing. If this value is set to a high enough value (higher than the impulse response length, truncated when it reaches near-zero values), filtering will become linear phase; otherwise, if it is not big enough, it will just produce nasty artifacts.

Note that the filter delay will be exactly this many samples when set to a non-zero value.

Examples

  • Lowpass only the LFE channel; if LFE is not present, the filter does nothing:
    lowpass=c=LFE

Commands

This filter supports the following commands:

Change lowpass frequency. Syntax for the command is : "frequency"
Change lowpass width_type. Syntax for the command is : "width_type"
Change lowpass width. Syntax for the command is : "width"
Change lowpass mix. Syntax for the command is : "mix"

Load a LV2 (LADSPA Version 2) plugin.

To enable compilation of this filter you need to configure FFmpeg with "--enable-lv2".

Specifies the plugin URI. You may need to escape ':'.
Set the '|' separated list of controls which are zero or more floating point values that determine the behavior of the loaded plugin (for example delay, threshold or gain). If controls is set to "help", all available controls and their valid ranges are printed.
Specify the sample rate, defaults to 44100. Only used if the plugin has zero inputs.
Set the number of samples per channel per each output frame, default is 1024. Only used if the plugin has zero inputs.
Set the minimum duration of the sourced audio. See the Time duration section in the ffmpeg-utils(1) manual for the accepted syntax. Note that the resulting duration may be greater than the specified duration, as the generated audio is always cut at the end of a complete frame. If not specified, or the expressed duration is negative, the audio is supposed to be generated forever. Only used if the plugin has zero inputs.

Examples

  • Apply bass enhancer plugin from Calf:
    lv2=p=http\\\\://calf.sourceforge.net/plugins/BassEnhancer:c=amount=2
    
  • Apply vinyl plugin from Calf:
    lv2=p=http\\\\://calf.sourceforge.net/plugins/Vinyl:c=drone=0.2|aging=0.5
    
  • Apply bit crusher plugin from ArtyFX:
    lv2=p=http\\\\://www.openavproductions.com/artyfx#bitta:c=crush=0.3
    

Commands

This filter supports all options that are exported by the plugin as commands.

Multiband compress or expand the audio's dynamic range.

The input audio is divided into bands using 4th order Linkwitz-Riley IIRs. This is akin to the crossover of a loudspeaker, and results in a flat frequency response in the absence of compander action.

It accepts the following parameters:

This option syntax is: attack,decay,[attack,decay..] soft-knee points crossover_frequency [delay [initial_volume [gain]]] | attack,decay ... For an explanation of each item refer to the compand filter documentation.
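
An illustrative two-band sketch following the syntax above, with a single crossover at 1000 Hz; the attack/decay times, knee and transfer points are arbitrary placeholder values, and SoX-style comma-separated transfer points are assumed:

ffmpeg -i in.wav -af "mcompand='0.3,1 6 -70/-70,-60/-20,0/-10 1000 | 0.1,0.5 6 -70/-70,-60/-25,0/-15'" out.wav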

Mix channels with specific gain levels. The filter accepts the output channel layout followed by a set of channels definitions.

This filter is also designed to efficiently remap the channels of an audio stream.

The filter accepts parameters of the form: "l|outdef|outdef|..."

output channel layout or number of channels
output channel specification, of the form: "out_name=[gain*]in_name[(+-)[gain*]in_name...]"
output channel to define, either a channel name (FL, FR, etc.) or a channel number (c0, c1, etc.)
multiplicative coefficient for the channel, 1 leaving the volume unchanged
input channel to use, see out_name for details; it is not possible to mix named and numbered input channels

If the `=' in a channel specification is replaced by `<', then the gains for that specification will be renormalized so that the total is 1, thus avoiding clipping noise.

Mixing examples

For example, if you want to down-mix from stereo to mono, but with a bigger factor for the left channel:

pan=1c|c0=0.9*c0+0.1*c1

A customized down-mix to stereo that works automatically for 3-, 4-, 5- and 7-channels surround:

pan=stereo| FL < FL + 0.5*FC + 0.6*BL + 0.6*SL | FR < FR + 0.5*FC + 0.6*BR + 0.6*SR

Note that ffmpeg integrates a default down-mix (and up-mix) system that should be preferred (see "-ac" option) unless you have very specific needs.

Remapping examples

The channel remapping will be effective if, and only if:

  • gain coefficients are zeroes or ones,
  • only one input per channel output.

If all these conditions are satisfied, the filter will notify the user ("Pure channel mapping detected"), and use an optimized and lossless method to do the remapping.

For example, if you have a 5.1 source and want a stereo audio stream by dropping the extra channels:

pan="stereo| c0=FL | c1=FR"

Given the same source, you can also switch front left and front right channels and keep the input channel layout:

pan="5.1| c0=c1 | c1=c0 | c2=c2 | c3=c3 | c4=c4 | c5=c5"

If the input is a stereo audio stream, you can mute the front left channel (and still keep the stereo channel layout) with:

pan="stereo|c1=c1"

Still with a stereo audio stream input, you can copy the right channel in both front left and right:

pan="stereo| c0=FR | c1=FR"

ReplayGain scanner filter. This filter takes an audio stream as an input and outputs it unchanged. At end of filtering it displays "track_gain" and "track_peak".

The filter accepts the following exported read-only options:

Exported track gain in dB at end of stream.
Exported track peak at end of stream.
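
Since the audio passes through unchanged, the output can be discarded when only the measurement is of interest:

ffmpeg -i in.flac -af replaygain -f null -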

Convert the audio sample format, sample rate and channel layout. It is not meant to be used directly.

Apply time-stretching and pitch-shifting with librubberband.

To enable compilation of this filter, you need to configure FFmpeg with "--enable-librubberband".

The filter accepts the following options:

Set tempo scale factor.
Set pitch scale factor.
Set transients detector. Possible values are:
Set detector. Possible values are:
phase
Set phase. Possible values are:
Set processing window size. Possible values are:
Set smoothing. Possible values are:
Enable formant preservation when pitch shifting. Possible values are:
Set pitch quality. Possible values are:
Set channels. Possible values are:
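
For example, using the tempo option named in the commands below, to play 25% faster while preserving pitch:

ffmpeg -i in.wav -af rubberband=tempo=1.25 out.wav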

Commands

This filter supports the following commands:

Change filter tempo scale factor. Syntax for the command is : "tempo"
Change filter pitch scale factor. Syntax for the command is : "pitch"

This filter acts like a normal compressor but has the ability to compress the detected signal using a second input signal. It needs two input streams and returns one output stream. The first input stream will be processed depending on the second stream's signal. The filtered signal can then be filtered with other filters in later stages of processing. See the pan and amerge filters.

The filter accepts the following options:

Set input gain. Default is 1. Range is between 0.015625 and 64.
Set mode of compressor operation. Can be "upward" or "downward". Default is "downward".
threshold
If a signal of the second stream rises above this level it will affect the gain reduction of the first stream. By default it is 0.125. Range is between 0.00097563 and 1.
Set a ratio by which the signal is reduced. 1:2 means that if the level rises 4dB above the threshold, it will be only 2dB above after the reduction. Default is 2. Range is between 1 and 20.
Amount of milliseconds the signal has to rise above the threshold before gain reduction starts. Default is 20. Range is between 0.01 and 2000.
Amount of milliseconds the signal has to fall below the threshold before reduction is decreased again. Default is 250. Range is between 0.01 and 9000.
Set the amount by which the signal will be amplified after processing. Default is 1. Range is from 1 to 64.
Curve the sharp knee around the threshold to enter gain reduction more softly. Default is 2.82843. Range is between 1 and 8.
Choose if the "average" level between all channels of side-chain stream or the louder("maximum") channel of side-chain stream affects the reduction. Default is "average".
Should the exact signal be taken in case of "peak" or an RMS one in case of "rms". Default is "rms" which is mainly smoother.
Set sidechain gain. Default is 1. Range is between 0.015625 and 64.
mix
How much to use compressed signal in output. Default is 1. Range is between 0 and 1.

Commands

This filter supports all the above options as commands.

Examples

A full ffmpeg example taking two audio inputs: the first input is compressed depending on the signal of the second input, and the compressed signal is later merged with the second input:
ffmpeg -i main.flac -i sidechain.flac -filter_complex "[1:a]asplit=2[sc][mix];[0:a][sc]sidechaincompress[compr];[compr][mix]amerge"

A sidechain gate acts like a normal (wideband) gate but has the ability to filter the detected signal before sending it to the gain reduction stage. Normally a gate uses the full range signal to detect a level above the threshold. For example: if you cut all lower frequencies from your sidechain signal, the gate will decrease the volume of your track only if not enough highs appear. With this technique you are able to reduce the resonation of a natural drum or remove "rumbling" of muted strokes from a heavily distorted guitar. It needs two input streams and returns one output stream. The first input stream will be processed depending on the second stream's signal.

The filter accepts the following options:

Set input level before filtering. Default is 1. Allowed range is from 0.015625 to 64.
Set the mode of operation. Can be "upward" or "downward". Default is "downward". If set to "upward" mode, higher parts of signal will be amplified, expanding dynamic range in upward direction. Otherwise, in case of "downward" lower parts of signal will be reduced.
Set the level of gain reduction when the signal is below the threshold. Default is 0.06125. Allowed range is from 0 to 1. Setting this to 0 disables reduction and then the filter behaves like an expander.
threshold
If a signal rises above this level the gain reduction is released. Default is 0.125. Allowed range is from 0 to 1.
Set a ratio by which the signal is reduced. Default is 2. Allowed range is from 1 to 9000.
Amount of milliseconds the signal has to rise above the threshold before gain reduction stops. Default is 20 milliseconds. Allowed range is from 0.01 to 9000.
Amount of milliseconds the signal has to fall below the threshold before the reduction is increased again. Default is 250 milliseconds. Allowed range is from 0.01 to 9000.
Set amount of amplification of signal after processing. Default is 1. Allowed range is from 1 to 64.
Curve the sharp knee around the threshold to enter gain reduction more softly. Default is 2.828427125. Allowed range is from 1 to 8.
Choose if exact signal should be taken for detection or an RMS like one. Default is rms. Can be peak or rms.
Choose if the average level between all channels or the louder channel affects the reduction. Default is average. Can be average or maximum.
Set sidechain gain. Default is 1. Range is from 0.015625 to 64.
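
A sketch mirroring the sidechaincompress example above: gate the first input using the second input as the side-chain signal, with the threshold and ratio options described above:

ffmpeg -i main.flac -i sidechain.flac -filter_complex "[0:a][1:a]sidechaingate=threshold=0.25:ratio=5" out.flac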

Commands

This filter supports all the above options as commands.

Detect silence in an audio stream.

This filter logs a message when it detects that the input audio volume is less than or equal to a noise tolerance value for a duration greater than or equal to the minimum detected noise duration.

The printed times and duration are expressed in seconds. The "lavfi.silence_start" or "lavfi.silence_start.X" metadata key is set on the first frame whose timestamp equals or exceeds the detection duration and it contains the timestamp of the first frame of the silence.

The "lavfi.silence_duration" or "lavfi.silence_duration.X" and "lavfi.silence_end" or "lavfi.silence_end.X" metadata keys are set on the first frame after the silence. If mono is enabled, and each channel is evaluated separately, the ".X" suffixed keys are used, and "X" corresponds to the channel number.

The filter accepts the following options:

Set noise tolerance. Can be specified in dB (in case "dB" is appended to the specified value) or amplitude ratio. Default is -60dB, or 0.001.
Set silence duration until notification (default is 2 seconds). See the Time duration section in the ffmpeg-utils(1) manual for the accepted syntax.
Process each channel separately, instead of combined. By default is disabled.

Examples

  • Detect 5 seconds of silence with -50dB noise tolerance:
    silencedetect=n=-50dB:d=5
    
  • Complete example with ffmpeg to detect silence with 0.0001 noise tolerance in silence.mp3:
    ffmpeg -i silence.mp3 -af silencedetect=noise=0.0001 -f null -
    

Remove silence from the beginning, middle or end of the audio.

The filter accepts the following options:

This value is used to indicate if audio should be trimmed at beginning of the audio. A value of zero indicates no silence should be trimmed from the beginning. When specifying a non-zero value, it trims audio up until it finds non-silence. Normally, when trimming silence from beginning of audio the start_periods will be 1 but it can be increased to higher values to trim all audio up to specific count of non-silence periods. Default value is 0.
Specify the amount of time that non-silence must be detected before it stops trimming audio. By increasing the duration, bursts of noises can be treated as silence and trimmed off. Default value is 0.
This indicates what sample value should be treated as silence. For digital audio, a value of 0 may be fine but for audio recorded from analog, you may wish to increase the value to account for background noise. Can be specified in dB (in case "dB" is appended to the specified value) or amplitude ratio. Default value is 0.
Specify max duration of silence at beginning that will be kept after trimming. Default is 0, which is equal to trimming all samples detected as silence.
Specify mode of detection of silence end at start of multi-channel audio. Can be any or all. Default is any. With any, any sample from any channel that is detected as non-silence will trigger end of silence trimming at start of audio stream. With all, only if every sample from every channel is detected as non-silence will trigger end of silence trimming at start of audio stream, limited usage.
Set the count for trimming silence from the end of audio. When specifying a positive value, it trims audio after it finds specified silence period. To remove silence from the middle of a file, specify a stop_periods that is negative. This value is then treated as a positive value and is used to indicate the effect should restart processing as specified by stop_periods, making it suitable for removing periods of silence in the middle of the audio. Default value is 0.
Specify a duration of silence that must exist before audio is not copied any more. By specifying a higher duration, silence that is wanted can be left in the audio. Default value is 0.
This is the same as start_threshold but for trimming silence from the end of audio. Can be specified in dB (in case "dB" is appended to the specified value) or amplitude ratio. Default value is 0.
Specify max duration of silence at end that will be kept after trimming. Default is 0, which is equal to trimming all samples detected as silence.
Specify the mode for detecting the start of silence after the start of multi-channel audio. Can be any or all. Default is all. With any, any sample from any channel that is detected as silence will trigger the start of silence trimming after the start of the audio stream; this is of limited use. With all, the start of silence trimming after the start of the audio stream is triggered only when every sample from every channel is detected as silence.
Set how silence is detected.
Mean of absolute values of samples in moving window.
Root mean square of absolute values of samples in moving window.
Maximum of absolute values of samples in moving window.
median
Median of absolute values of samples in moving window.
Absolute difference between the maximum and minimum peaks of samples in moving window.
Standard deviation of values of samples in moving window.

Default value is "rms".

Set the duration in seconds used to calculate the size, in samples, of the window for detecting silence. Using 0 will effectively disable any windowing and use only a single sample per channel for silence detection. In that case it may be necessary to also set start_silence and/or stop_silence to nonzero values, along with start_duration and/or stop_duration. Default value is 0.02. Allowed range is from 0 to 10.
Set the processing mode for every audio frame's output timestamp.
Fully rewrite timestamps, keeping only the start time of the first output frame.
copy
Non-dropped frames are left with same timestamp as input audio frame.

Default value is "write".

Examples

  • The following example shows how this filter can be used to start a recording that does not contain the delay at the start which usually occurs between pressing the record button and the start of the performance:
    silenceremove=start_periods=1:start_duration=5:start_threshold=0.02
    
  • Trim all silence encountered from beginning to end where there is more than 1 second of silence in audio:
    silenceremove=stop_periods=-1:stop_duration=1:stop_threshold=-90dB
    
  • Trim all digital silence samples, using peak detection, from beginning to end where there is more than 0 samples of digital silence in audio and digital silence is detected in all channels at same positions in stream:
    silenceremove=window=0:detection=peak:stop_mode=all:start_mode=all:stop_periods=-1:stop_threshold=0
    
  • Trim every 2nd encountered silence period from beginning to end where there is more than 1 second of silence per silence period in audio:
    silenceremove=stop_periods=-2:stop_duration=1:stop_threshold=-90dB
    
  • Similar as above, but keep maximum of 0.5 seconds of silence from each trimmed period:
    silenceremove=stop_periods=-2:stop_duration=1:stop_threshold=-90dB:stop_silence=0.5
    
  • Similar as above, but keep maximum of 1.5 seconds of silence from start of audio:
silenceremove=stop_periods=-2:stop_duration=1:stop_threshold=-90dB:stop_silence=0.5:start_periods=1:start_duration=1:start_silence=1.5:start_threshold=-90dB
    

Commands

This filter supports some of the above options as commands.

SOFAlizer uses head-related transfer functions (HRTFs) to create virtual loudspeakers around the user for binaural listening via headphones (audio formats up to 9 channels supported). The HRTFs are stored in SOFA files (see http://www.sofacoustics.org/ for a database). SOFAlizer is developed at the Acoustics Research Institute (ARI) of the Austrian Academy of Sciences.

To enable compilation of this filter you need to configure FFmpeg with "--enable-libmysofa".

The filter accepts the following options:

Set the SOFA file used for rendering.
Set gain applied to audio. Value is in dB. Default is 0.
Set rotation of virtual loudspeakers in deg. Default is 0.
Set elevation of virtual speakers in deg. Default is 0.
Set distance in meters between loudspeakers and the listener with near-field HRTFs. Default is 1.
Set the processing type. Can be time or freq. time processes audio in the time domain, which is slow. freq processes audio in the frequency domain, which is fast. Default is freq.
Set custom positions of virtual loudspeakers. Syntax for this option is: <CH> <AZIM> <ELEV>[|<CH> <AZIM> <ELEV>|...]. Each virtual loudspeaker is described with a short channel name followed by its azimuth and elevation in degrees. Each virtual loudspeaker description is separated by '|'. For example, to override the front left and front right channel positions use: 'speakers=FL 45 15|FR 345 15'. Descriptions with unrecognised channel names are ignored.
Set custom gain for LFE channels. Value is in dB. Default is 0.
Set custom frame size in number of samples. Default is 1024. Allowed range is from 1024 to 96000. Only used if option type is set to freq.
normalize
Whether all IRs should be normalized upon importing the SOFA file. Enabled by default.
Whether the nearest IRs should be interpolated with neighbor IRs if the exact position does not match. Disabled by default.
Apply minimum-phase processing to all IRs upon loading the SOFA file. Disabled by default.
Set neighbor search angle step. Only used if option interpolate is enabled.
Set neighbor search radius step. Only used if option interpolate is enabled.

Examples

  • Using ClubFritz6 sofa file:
    sofalizer=sofa=/path/to/ClubFritz6.sofa:type=freq:radius=1
    
  • Using ClubFritz12 sofa file and bigger radius with small rotation:
    sofalizer=sofa=/path/to/ClubFritz12.sofa:type=freq:radius=2:rotation=5
    
  • Similar as above but with custom speaker positions for front left, front right, back left and back right and also with custom gain:
    "sofalizer=sofa=/path/to/ClubFritz6.sofa:type=freq:radius=2:speakers=FL 45|FR 315|BL 135|BR 225:gain=28"
    

Speech Normalizer.

This filter expands or compresses each half-cycle of audio samples (a local set of samples all above or all below zero, between the two nearest zero crossings) depending on the threshold value, so that the audio reaches the target peak value under conditions controlled by the options below.

The filter accepts the following options:

Set the expansion target peak value. This specifies the highest allowed absolute amplitude level for the normalized audio input. Default value is 0.95. Allowed range is from 0.0 to 1.0.
Set the maximum expansion factor. Allowed range is from 1.0 to 50.0. Default value is 2.0. This option controls the maximum expansion of a local half-cycle of samples. The maximum expansion is such that the local peak value reaches the target peak value but never surpasses it, and the ratio between the new and previous peak values does not exceed this option value.
Set the maximum compression factor. Allowed range is from 1.0 to 50.0. Default value is 2.0. This option controls the maximum compression of a local half-cycle of samples. This option is used only if the threshold option is set to a value greater than 0.0; in such cases, when the local peak is lower than or equal to the value set by threshold, all samples belonging to that peak's half-cycle will be compressed by the current compression factor.
Set the threshold value. Default value is 0.0. Allowed range is from 0.0 to 1.0. This option specifies which half-cycles of samples will be compressed and which will be expanded. Any half-cycle of samples whose local peak value is below or equal to this option value will be compressed by the current compression factor; otherwise, if greater than the threshold value, it will be expanded by the expansion factor so that it can reach the target peak value but never surpass it.
Set the expansion raising amount per half-cycle of samples. Default value is 0.001. Allowed range is from 0.0 to 1.0. This controls how fast the expansion factor is raised for each new half-cycle until it reaches the expansion value. Setting this option too high may lead to distortion.
Set the compression raising amount per half-cycle of samples. Default value is 0.001. Allowed range is from 0.0 to 1.0. This controls how fast the compression factor is raised for each new half-cycle until it reaches the compression value.
Specify which channels to filter, by default all available channels are filtered.
Enable inverted filtering; disabled by default. This inverts the interpretation of the threshold option. When enabled, any half-cycle of samples whose local peak value is below or equal to the threshold option will be expanded; otherwise it will be compressed.
Link channels when calculating the gain applied to each filtered channel sample; disabled by default. When disabled, each filtered channel's gain calculation is independent; when enabled, the minimum of all possible gains for each filtered channel is used.
Set the expansion target RMS value. This specifies the highest allowed RMS level for the normalized audio input. Default value is 0.0, thus disabled. Allowed range is from 0.0 to 1.0.

Commands

This filter supports all of the above options as commands.

Examples

  • Weak and slow amplification:
    speechnorm=e=3:r=0.00001:l=1
    
  • Moderate and slow amplification:
    speechnorm=e=6.25:r=0.00001:l=1
    
  • Strong and fast amplification:
    speechnorm=e=12.5:r=0.0001:l=1
    
  • Very strong and fast amplification:
    speechnorm=e=25:r=0.0001:l=1
    
  • Extreme and fast amplification:
    speechnorm=e=50:r=0.0001:l=1
    

This filter provides some handy utilities for managing stereo signals, such as converting M/S stereo recordings to an L/R signal while having control over the parameters, or spreading the stereo image of a master track.

The filter accepts the following options:

Set input level before filtering for both channels. Default is 1. Allowed range is from 0.015625 to 64.
Set output level after filtering for both channels. Default is 1. Allowed range is from 0.015625 to 64.
Set input balance between both channels. Default is 0. Allowed range is from -1 to 1.
Set output balance between both channels. Default is 0. Allowed range is from -1 to 1.
Enable softclipping. Results in analog distortion instead of harsh digital 0dB clipping. Disabled by default.
Mute the left channel. Disabled by default.
Mute the right channel. Disabled by default.
Change the phase of the left channel. Disabled by default.
Change the phase of the right channel. Disabled by default.
Set stereo mode. Available values are:
Left/Right to Left/Right, this is default.
Left/Right to Mid/Side.
Mid/Side to Left/Right.
Left/Right to Left/Left.
Left/Right to Right/Right.
Left/Right to Left + Right.
Left/Right to Right/Left.
Mid/Side to Left/Left.
Mid/Side to Right/Right.
Mid/Side to Right/Left.
Left/Right to Left - Right.
Set level of side signal. Default is 1. Allowed range is from 0.015625 to 64.
Set balance of side signal. Default is 0. Allowed range is from -1 to 1.
Set level of the middle signal. Default is 1. Allowed range is from 0.015625 to 64.
Set middle signal pan. Default is 0. Allowed range is from -1 to 1.
Set stereo base between mono and inverted channels. Default is 0. Allowed range is from -1 to 1.
Set the delay in milliseconds by which the left channel is delayed relative to the right channel, and vice versa. Default is 0. Allowed range is from -20 to 20.
Set S/C level. Default is 1. Allowed range is from 1 to 100.
phase
Set the stereo phase in degrees. Default is 0. Allowed range is from 0 to 360.
Set balance mode for balance_in/balance_out option.

Can be one of the following:

Classic balance mode. Attenuate one channel at a time. Gain is raised up to 1.
Similar to the classic mode above, but gain is raised up to 2.
Equal power distribution, from -6dB to +6dB range.

Commands

This filter supports all of the above options as commands.

Examples

  • Apply karaoke like effect:
    stereotools=mlev=0.015625
    
  • Convert M/S signal to L/R:
    "stereotools=mode=ms>lr"
    

This filter enhances the stereo effect by suppressing the signal common to both channels and by delaying the left signal into the right channel and vice versa, thereby widening the stereo image.

The filter accepts the following options:

Time in milliseconds of the delay of left signal into right and vice versa. Default is 20 milliseconds.
feedback
Amount of gain of the delayed signal fed into the right channel and vice versa. This gives a delay effect of the left signal in the right output and vice versa, which produces the widening effect. Default is 0.3.
crossfeed
Cross feed of the left signal into the right channel with inverted phase. This helps suppress the mono component. If the value is 1, it will cancel all the signal common to both channels. Default is 0.3.
Set the level of the original input signal. Default is 0.8.

Commands

This filter supports all of the above options except "delay" as commands.
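
Examples

  • An illustrative sketch (the stereowiden filter name and option values are assumptions based on the options described above), slightly widening the stereo image:
    stereowiden=delay=25:feedback=0.4:crossfeed=0.4:drymix=0.7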

Apply 18 band equalizer.

The filter accepts the following options:

1b
Set 65Hz band gain.
2b
Set 92Hz band gain.
3b
Set 131Hz band gain.
4b
Set 185Hz band gain.
5b
Set 262Hz band gain.
6b
Set 370Hz band gain.
7b
Set 523Hz band gain.
8b
Set 740Hz band gain.
9b
Set 1047Hz band gain.
10b
Set 1480Hz band gain.
11b
Set 2093Hz band gain.
12b
Set 2960Hz band gain.
13b
Set 4186Hz band gain.
14b
Set 5920Hz band gain.
15b
Set 8372Hz band gain.
16b
Set 11840Hz band gain.
17b
Set 16744Hz band gain.
18b
Set 20000Hz band gain.
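
Examples

  • An illustrative sketch (assuming this is the superequalizer filter; the gain values are arbitrary), raising the 370 Hz and 740 Hz bands:
    superequalizer=6b=4:8b=2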

Apply audio surround upmix filter.

This filter allows producing multichannel output from an audio stream.

The filter accepts the following options:

Set output channel layout. By default, this is 5.1.

See the Channel Layout section in the ffmpeg-utils(1) manual for the required syntax.

Set input channel layout. By default, this is stereo.

See the Channel Layout section in the ffmpeg-utils(1) manual for the required syntax.

Set input volume level. By default, this is 1.
Set output volume level. By default, this is 1.
Enable LFE channel output if output channel layout has it. By default, this is enabled.
Set LFE low cut off frequency. By default, this is 128 Hz.
Set LFE high cut off frequency. By default, this is 256 Hz.
Set LFE mode, can be add or sub. Default is add. In add mode, the LFE channel is created from the input audio and added to the output. In sub mode, the LFE channel is created from the input audio and added to the output, but the output LFE channel is also subtracted from all non-LFE output channels.
Set temporal smoothness strength, used to gradually change factors when transforming stereo sound in time. Allowed range is from 0.0 to 1.0. Useful to improve output quality with focus option values greater than 0.0. Default is 0.0. Only values strictly inside this range (excluding the endpoints) are effective.
Set the angle of the stereo surround transform. Allowed range is from 0 to 360. Default is 90.
Set the focus of the stereo surround transform. Allowed range is from -1 to 1. Default is 0.
Set front center input volume. By default, this is 1.
Set front center output volume. By default, this is 1.
Set front left input volume. By default, this is 1.
Set front left output volume. By default, this is 1.
Set front right input volume. By default, this is 1.
Set front right output volume. By default, this is 1.
Set side left input volume. By default, this is 1.
Set side left output volume. By default, this is 1.
Set side right input volume. By default, this is 1.
Set side right output volume. By default, this is 1.
Set back left input volume. By default, this is 1.
Set back left output volume. By default, this is 1.
Set back right input volume. By default, this is 1.
Set back right output volume. By default, this is 1.
Set back center input volume. By default, this is 1.
Set back center output volume. By default, this is 1.
Set LFE input volume. By default, this is 1.
Set LFE output volume. By default, this is 1.
Set spread usage of stereo image across X axis for all channels. Allowed range is from -1 to 15. Default is -1, and thus unused.
Set spread usage of stereo image across Y axis for all channels. Allowed range is from -1 to 15. Default is -1, and thus unused.
Set spread usage of stereo image across X axis for each channel. Allowed range is from 0.06 to 15. By default this value is 0.5.
Set spread usage of stereo image across Y axis for each channel. Allowed range is from 0.06 to 15. By default this value is 0.5.
Set window size. Allowed range is from 1024 to 65536. Default size is 4096.
Set window function.

It accepts the following values:

Default is "hann".

Set window overlap. If set to 1, the recommended overlap for selected window function will be picked. Default is 0.5.
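
Examples

  • An illustrative sketch (assuming this is the surround filter; the layouts shown are the documented defaults, spelled out explicitly), upmixing stereo to 5.1:
    surround=chl_out=5.1:chl_in=stereo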

Boost or cut the lower frequencies and cut or boost higher frequencies of the audio using a two-pole shelving filter with a response similar to that of a standard hi-fi's tone-controls. This is also known as shelving equalisation (EQ).

The filter accepts the following options:

Give the gain at 0 Hz. Its useful range is about -20 (for a large cut) to +20 (for a large boost). Beware of clipping when using a positive gain.
Set the filter's central frequency, which can be used to extend or reduce the frequency range to be boosted or cut. The default value is 3000 Hz.
Set method to specify band-width of filter.
Hz
Q-Factor
octave
slope
kHz
Determine how steep the filter's shelf transition is.
Set number of poles. Default is 2.
How much to use filtered signal in output. Default is 1. Range is between 0 and 1.
Specify which channels to filter, by default all available are filtered.
Normalize biquad coefficients, by default is disabled. Enabling it will normalize magnitude response at DC to 0dB.
Set transform type of IIR filter.
Set precision of filtering.
Pick automatic sample format depending on surround filters.
Always use signed 16-bit.
Always use signed 32-bit.
Always use float 32-bit.
Always use float 64-bit.
Set the block size used for reverse IIR processing. If this value is set high enough (higher than the impulse response length, truncated where it reaches near-zero values), filtering will become linear-phase; otherwise, if it is not big enough, it will produce artifacts.

Note that the filter delay will be exactly this many samples when set to a non-zero value.

Commands

This filter supports some options as commands.
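
Examples

  • An illustrative sketch (assuming this is the tiltshelf filter; the values are arbitrary), tilting the spectrum by 6 dB around a 1 kHz central frequency:
    tiltshelf=gain=-6:frequency=1000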

Boost or cut treble (upper) frequencies of the audio using a two-pole shelving filter with a response similar to that of a standard hi-fi's tone-controls. This is also known as shelving equalisation (EQ).

The filter accepts the following options:

Give the gain at whichever is the lower of ~22 kHz and the Nyquist frequency. Its useful range is about -20 (for a large cut) to +20 (for a large boost). Beware of clipping when using a positive gain.
Set the filter's central frequency, which can be used to extend or reduce the frequency range to be boosted or cut. The default value is 3000 Hz.
Set method to specify band-width of filter.
Hz
Q-Factor
octave
slope
kHz
Determine how steep the filter's shelf transition is.
Set number of poles. Default is 2.
How much to use filtered signal in output. Default is 1. Range is between 0 and 1.
Specify which channels to filter, by default all available are filtered.
Normalize biquad coefficients, by default is disabled. Enabling it will normalize magnitude response at DC to 0dB.
Set transform type of IIR filter.
Set precision of filtering.
Pick automatic sample format depending on surround filters.
Always use signed 16-bit.
Always use signed 32-bit.
Always use float 32-bit.
Always use float 64-bit.
Set the block size used for reverse IIR processing. If this value is set high enough (higher than the impulse response length, truncated where it reaches near-zero values), filtering will become linear-phase; otherwise, if it is not big enough, it will produce artifacts.

Note that the filter delay will be exactly this many samples when set to a non-zero value.

Commands

This filter supports the following commands:

Change treble frequency. Syntax for the command is: "frequency"
Change treble width_type. Syntax for the command is: "width_type"
Change treble width. Syntax for the command is: "width"
Change treble gain. Syntax for the command is: "gain"
Change treble mix. Syntax for the command is: "mix"
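
Examples

  • An illustrative sketch (the values are arbitrary), boosting frequencies around and above 4 kHz by 3 dB:
    treble=gain=3:frequency=4000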

Sinusoidal amplitude modulation.

The filter accepts the following options:

Modulation frequency in Hertz. Modulation frequencies in the subharmonic range (20 Hz or lower) will result in a tremolo effect. This filter may also be used as a ring modulator by specifying a modulation frequency higher than 20 Hz. Range is 0.1 - 20000.0. Default value is 5.0 Hz.
Depth of modulation as a percentage. Range is 0.0 - 1.0. Default value is 0.5.
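
Examples

  • An illustrative sketch (the short option names f and d are assumptions; the values are arbitrary), applying a pronounced 8 Hz tremolo:
    tremolo=f=8:d=0.7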

Sinusoidal phase modulation.

The filter accepts the following options:

Modulation frequency in Hertz. Range is 0.1 - 20000.0. Default value is 5.0 Hz.
Depth of modulation as a percentage. Range is 0.0 - 1.0. Default value is 0.5.
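
Examples

  • An illustrative sketch (the short option names f and d are assumptions; the values are arbitrary), applying a moderate 7 Hz vibrato:
    vibrato=f=7:d=0.5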

Apply audio Virtual Bass filter.

This filter accepts stereo input and produces stereo with LFE (2.1) channel output. The newly produced LFE channel has enhanced virtual bass derived from both stereo channels. This filter outputs the front left and front right channels unchanged, as available in the stereo input.

The filter accepts the following options:

Set the virtual bass cutoff frequency. Default value is 250 Hz. Allowed range is from 100 to 500 Hz.
Set the virtual bass strength. Allowed range is from 0.5 to 3. Default value is 3.
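
Examples

  • An illustrative sketch (the values are arbitrary), synthesizing virtual bass below 200 Hz at moderate strength:
    virtualbass=cutoff=200:strength=2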

Adjust the input audio volume.

It accepts the following parameters:

volume
Set audio volume expression.

Output values are clipped to the maximum value.

The output audio volume is given by the relation:

<output_volume> = <volume> * <input_volume>

The default value for volume is "1.0".

This parameter represents the mathematical precision.

It determines which input sample formats will be allowed, which affects the precision of the volume scaling.

8-bit fixed-point; this limits input sample format to U8, S16, and S32.
32-bit floating-point; this limits input sample format to FLT. (default)
64-bit floating-point; this limits input sample format to DBL.
replaygain
Choose the behaviour on encountering ReplayGain side data in input frames.
Remove ReplayGain side data, ignoring its contents (the default).
Ignore ReplayGain side data, but leave it in the frame.
Prefer the track gain, if present.
Prefer the album gain, if present.
Pre-amplification gain in dB to apply to the selected replaygain gain.

Default value for replaygain_preamp is 0.0.

Prevent clipping by limiting the gain applied.

Default value for replaygain_noclip is 1.

Set when the volume expression is evaluated.

It accepts the following values:

only evaluate expression once during the filter initialization, or when the volume command is sent
evaluate expression for each incoming frame

Default value is once.

The volume expression can contain the following parameters.

frame number (starting at zero)
number of channels
number of samples consumed by the filter
number of samples in the current frame
original frame position in the file; deprecated, do not use
frame PTS
sample rate
PTS at start of stream
time at start of stream
frame time
timestamp timebase
volume
last set volume value

Note that when eval is set to once only the sample_rate and tb variables are available, all other variables will evaluate to NAN.

Commands

This filter supports the following commands:

volume
Modify the volume expression. The command accepts the same syntax of the corresponding option.

If the specified expression is not valid, it is kept at its current value.

Examples

  • Halve the input audio volume:
    volume=volume=0.5
    volume=volume=1/2
    volume=volume=-6.0206dB
    

In all the above examples the named key for volume can be omitted, as in:

    volume=0.5
    
  • Increase input audio power by 6 decibels using fixed-point precision:
    volume=volume=6dB:precision=fixed
    
  • Fade volume after time 10 with an annihilation period of 5 seconds:
    volume='if(lt(t,10),1,max(1-(t-10)/5,0))':eval=frame
    

Detect the volume of the input audio.

The filter has no parameters. It supports only 16-bit signed integer samples, so the input will be converted when needed. Statistics about the volume will be printed in the log when the input stream end is reached.

In particular it will show the mean volume (root mean square), maximum volume (on a per-sample basis), and the beginning of a histogram of the registered volume values (from the maximum value to a cumulated 1/1000 of the samples).

All volumes are in decibels relative to the maximum PCM value.

Examples

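A typical invocation is a sketch like the following (input.wav is a placeholder file name); the decoded output is discarded:

ffmpeg -i input.wav -af volumedetect -f null -
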
Here is an excerpt of the output:

[Parsed_volumedetect_0  0xa23120] mean_volume: -27 dB
[Parsed_volumedetect_0  0xa23120] max_volume: -4 dB
[Parsed_volumedetect_0  0xa23120] histogram_4db: 6
[Parsed_volumedetect_0  0xa23120] histogram_5db: 62
[Parsed_volumedetect_0  0xa23120] histogram_6db: 286
[Parsed_volumedetect_0  0xa23120] histogram_7db: 1042
[Parsed_volumedetect_0  0xa23120] histogram_8db: 2551
[Parsed_volumedetect_0  0xa23120] histogram_9db: 4609
[Parsed_volumedetect_0  0xa23120] histogram_10db: 8409

It means that:

  • The mean square energy is approximately -27 dB, or 10^-2.7.
  • The largest sample is at -4 dB, or more precisely between -4 dB and -5 dB.
  • There are 6 samples at -4 dB, 62 at -5 dB, 286 at -6 dB, etc.

In other words, raising the volume by +4 dB does not cause any clipping, raising it by +5 dB causes clipping for 6 samples, etc.

Below is a description of the currently available audio sources.

Buffer audio frames, and make them available to the filter chain.

This source is mainly intended for a programmatic use, in particular through the interface defined in libavfilter/buffersrc.h.

It accepts the following parameters:

The timebase which will be used for timestamps of submitted frames. It must be either a floating-point number or in numerator/denominator form.
The sample rate of the incoming audio buffers.
The sample format of the incoming audio buffers. Either a sample format name or its corresponding integer representation from the enum AVSampleFormat in libavutil/samplefmt.h
The channel layout of the incoming audio buffers. Either a channel layout name from channel_layout_map in libavutil/channel_layout.c or its corresponding integer representation from the AV_CH_LAYOUT_* macros in libavutil/channel_layout.h
The number of channels of the incoming audio buffers. If both channels and channel_layout are specified, then they must be consistent.

Examples

abuffer=sample_rate=44100:sample_fmt=s16p:channel_layout=stereo

will instruct the source to accept planar 16-bit signed stereo at 44100 Hz. Since the sample format with name "s16p" corresponds to the number 6 and the "stereo" channel layout corresponds to the value 0x3, this is equivalent to:

abuffer=sample_rate=44100:sample_fmt=6:channel_layout=0x3

Generate an audio signal specified by an expression.

This source accepts in input one or more expressions (one for each channel), which are evaluated and used to generate a corresponding audio signal.

This source accepts the following options:

Set the '|'-separated expressions list for each separate channel. In case the channel_layout option is not specified, the selected channel layout depends on the number of provided expressions. Otherwise the last specified expression is applied to the remaining output channels.
Set the channel layout. The number of channels in the specified layout must be equal to the number of specified expressions.
Set the minimum duration of the sourced audio. See the Time duration section in the ffmpeg-utils(1) manual for the accepted syntax. Note that the resulting duration may be greater than the specified duration, as the generated audio is always cut at the end of a complete frame.

If not specified, or the expressed duration is negative, the audio is supposed to be generated forever.

Set the number of samples per channel per each output frame; the default is 1024.
Specify the sample rate; the default is 44100.

Each expression in exprs can contain the following constants:

number of the evaluated sample, starting from 0
time of the evaluated sample expressed in seconds, starting from 0
sample rate

Examples

  • Generate silence:
    aevalsrc=0
    
Generate a sine signal with a frequency of 440 Hz and a sample rate of 8000 Hz:
    aevalsrc="sin(440*2*PI*t):s=8000"
    
Generate a two-channel signal, and specify the channel layout (Front Center + Back Center) explicitly:
    aevalsrc="sin(420*2*PI*t)|cos(430*2*PI*t):c=FC|BC"
    
  • Generate white noise:
    aevalsrc="-2+random(0)"
    
  • Generate an amplitude modulated signal:
    aevalsrc="sin(10*2*PI*t)*sin(880*2*PI*t)"
    
  • Generate 2.5 Hz binaural beats on a 360 Hz carrier:
    aevalsrc="0.1*sin(2*PI*(360-2.5/2)*t) | 0.1*sin(2*PI*(360+2.5/2)*t)"
    

Generate fractional delay FIR coefficients.

The resulting stream can be used with afir filter for filtering the audio signal.

The filter accepts the following options:

Set the fractional delay. Default is 0.
Set the sample rate, default is 44100.
Set the number of samples per each frame. Default is 1024.
Set the number of filter coefficients in output audio stream. Default value is 0.
Specifies the channel layout, and can be a string representing a channel layout. The default value of channel_layout is "stereo".
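
Examples

  • An illustrative complex-filtergraph sketch (assuming this is the afdelaysrc source; the values are arbitrary), delaying the first input's audio by half a sample via afir:
    afdelaysrc=delay=0.5:taps=101[ir];[0:a][ir]afir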

Generate FIR equalizer coefficients.

The resulting stream can be used with afir filter for filtering the audio signal.

The filter accepts the following options:

Set equalizer preset. Default preset is "flat".

Available presets are:

Set custom gains for each band. Only used if the preset option is set to "custom". Gains are separated by white spaces and each gain is set in dBFS. Default is "0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0".
Set the custom bands at which the custom equalizer gains are set. These must be in strictly increasing order. Only used if the preset option is set to "custom". Bands are separated by white spaces and each band represents a frequency in Hz. Default is "25 40 63 100 160 250 400 630 1000 1600 2500 4000 6300 10000 16000 24000".
Set number of filter coefficients in output audio stream. Default value is 4096.
Set sample rate of output audio stream, default is 44100.
Set number of samples per each frame in output audio stream. Default is 1024.
Set interpolation method for FIR equalizer coefficients. Can be "linear" or "cubic".
Set the phase type of the FIR filter. Can be "linear" (linear-phase) or "min" (minimum-phase). Default is the minimum-phase filter.
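
Examples

  • An illustrative complex-filtergraph sketch (assuming this is the afireqsrc source), applying the default "flat" preset to the first input's audio via afir:
    afireqsrc=preset=flat[ir];[0:a][ir]afir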

Generate FIR coefficients using the frequency sampling method.

The resulting stream can be used with afir filter for filtering the audio signal.

The filter accepts the following options:

Set number of filter coefficients in output audio stream. Default value is 1025.
Set the frequency points from which magnitude and phase are set. These must be in non-decreasing order; the first element must be 0 and the last element must be 1. Elements are separated by white spaces.
Set the magnitude value for every frequency point set by frequency. The number of values must be the same as the number of frequency points. Values are separated by white spaces.
Set the phase value for every frequency point set by frequency. The number of values must be the same as the number of frequency points. Values are separated by white spaces.
Set sample rate, default is 44100.
Set number of samples per each frame. Default is 1024.
Set window function. Default is blackman.
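
Examples

  • An illustrative complex-filtergraph sketch (assuming this is the afirsrc source; the frequency points and values are arbitrary), building a roughly low-pass response and applying it via afir:
    "afirsrc=taps=1025:frequency=0 0.25 0.5 1:magnitude=1 1 0 0:phase=0 0 0 0[ir];[0:a][ir]afir"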

The null audio source returns unprocessed audio frames. It is mainly useful as a template and to be employed in analysis / debugging tools, or as the source for filters which ignore the input data (for example the sox synth filter).

This source accepts the following options:

Specifies the channel layout, and can be either an integer or a string representing a channel layout. The default value of channel_layout is "stereo".

Check the channel_layout_map definition in libavutil/channel_layout.c for the mapping between strings and channel layout values.

Specifies the sample rate, and defaults to 44100.
Set the number of samples per requested frame.
Set the duration of the sourced audio. See the Time duration section in the ffmpeg-utils(1) manual for the accepted syntax.

If not specified, or the expressed duration is negative, the audio is supposed to be generated forever.

Examples

  • Set the sample rate to 48000 Hz and the channel layout to AV_CH_LAYOUT_MONO.
    anullsrc=r=48000:cl=4
    
  • Do the same operation with a more obvious syntax:
    anullsrc=r=48000:cl=mono
    

All the parameters need to be explicitly defined.

Synthesize a voice utterance using the libflite library.

To enable compilation of this filter you need to configure FFmpeg with "--enable-libflite".

Note that versions of the flite library prior to 2.0 are not thread-safe.

The filter accepts the following options:

If set to 1, list the names of the available voices and exit immediately. Default value is 0.
Set the maximum number of samples per frame. Default value is 512.
Set the filename containing the text to speak.
Set the text to speak.
Set the voice to use for the speech synthesis. Default value is "kal". See also the list_voices option.

Examples

  • Read from file speech.txt, and synthesize the text using the standard flite voice:
    flite=textfile=speech.txt
    
  • Read the specified text selecting the "slt" voice:
    flite=text='So fare thee well, poor devil of a Sub-Sub, whose commentator I am':voice=slt
    
  • Input text to ffmpeg:
    ffmpeg -f lavfi -i flite=text='So fare thee well, poor devil of a Sub-Sub, whose commentator I am':voice=slt
    
  • Make ffplay speak the specified text, using "flite" and the "lavfi" device:
ffplay -f lavfi flite=text='No more be grieved for that which thou hast done.'
    

For more information about libflite, check: http://www.festvox.org/flite/

Generate a noise audio signal.

The filter accepts the following options:

Specify the sample rate. Default value is 48000 Hz.
Specify the amplitude (0.0 - 1.0) of the generated audio stream. Default value is 1.0.
Specify the duration of the generated audio stream. Not specifying this option results in noise with an infinite length.
Specify the color of noise. Available noise colors are white, pink, brown, blue, violet and velvet. Default color is white.
Specify a value used to seed the PRNG.
Set the number of samples per each output frame, default is 1024.
Set the density (0.0 - 1.0) for the velvet noise generator, default is 0.05.

Examples

Generate 60 seconds of pink noise, with a 44.1 kHz sampling rate and an amplitude of 0.5:
anoisesrc=d=60:c=pink:r=44100:a=0.5

Generate odd-tap Hilbert transform FIR coefficients.

The resulting stream can be used with afir filter for phase-shifting the signal by 90 degrees.

This is used in many matrix coding schemes and for analytic signal generation. The process is often written as a multiplication by i (or j), the imaginary unit.

The filter accepts the following options:

Set sample rate, default is 44100.
Set length of FIR filter, default is 22051.
Set number of samples per each frame.
Set window function to be used when generating FIR coefficients.
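
Examples

  • An illustrative complex-filtergraph sketch (assuming this is the hilbert source; the tap count is arbitrary but must be odd), phase-shifting the first input's audio by 90 degrees via afir:
    hilbert=taps=255[ir];[0:a][ir]afir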

Generate sinc Kaiser-windowed low-pass, high-pass, band-pass, or band-reject FIR coefficients.

The resulting stream can be used with afir filter for filtering the audio signal.

The filter accepts the following options:

Set sample rate, default is 44100.
Set number of samples per each frame. Default is 1024.
Set high-pass frequency. Default is 0.
Set low-pass frequency. Default is 0. If the high-pass frequency is lower than the low-pass frequency and the low-pass frequency is higher than 0, then the filter will create band-pass filter coefficients, otherwise band-reject filter coefficients.
phase
Set filter phase response. Default is 50. Allowed range is from 0 to 100.
Set Kaiser window beta.
Set stop-band attenuation. Default is 120dB, allowed range is from 40 to 180 dB.
Enable rounding; disabled by default.
Set number of taps for high-pass filter.
Set number of taps for low-pass filter.
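
Examples

  • An illustrative complex-filtergraph sketch (assuming this is the sinc source; the frequencies are arbitrary), generating band-pass coefficients from 300 Hz to 4 kHz and applying them via afir:
    sinc=hp=300:lp=4000[ir];[0:a][ir]afir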

Generate an audio signal made of a sine wave with amplitude 1/8.

The audio signal is bit-exact.

The filter accepts the following options:

Set the carrier frequency. Default is 440 Hz.
Enable a periodic beep every second with frequency beep_factor times the carrier frequency. Default is 0, meaning the beep is disabled.
Specify the sample rate, default is 44100.
Specify the duration of the generated audio stream.
Set the number of samples per output frame.

The expression can contain the following constants:

The (sequential) number of the output audio frame, starting from 0.
The PTS (Presentation TimeStamp) of the output audio frame, expressed in TB units.
The PTS of the output audio frame, expressed in seconds.
The timebase of the output audio frames.

Default is 1024.

Examples

  • Generate a simple 440 Hz sine wave:
    sine
    
  • Generate a 220 Hz sine wave with a 880 Hz beep each second, for 5 seconds:
    sine=220:4:d=5
    sine=f=220:b=4:d=5
    sine=frequency=220:beep_factor=4:duration=5
    
  • Generate a 1 kHz sine wave following "1602,1601,1602,1601,1602" NTSC pattern:
    sine=1000:samples_per_frame='st(0,mod(n,5)); 1602-not(not(eq(ld(0),1)+eq(ld(0),3)))'
    

Below is a description of the currently available audio sinks.

Buffer audio frames, and make them available to the end of the filter chain.

This sink is mainly intended for programmatic use, in particular through the interface defined in libavfilter/buffersink.h or the options system.

It accepts a pointer to an AVABufferSinkContext structure, which defines the incoming buffers' formats, to be passed as the opaque parameter to "avfilter_init_filter" for initialization.

Null audio sink; do absolutely nothing with the input audio. It is mainly useful as a template and for use in analysis / debugging tools.

When you configure your FFmpeg build, you can disable any of the existing filters using "--disable-filters". The configure output will show the video filters included in your build.

Below is a description of the currently available video filters.

Mark a region of interest in a video frame.

The frame data is passed through unchanged, but metadata is attached to the frame indicating regions of interest which can affect the behaviour of later encoding. Multiple regions can be marked by applying the filter multiple times.

Region distance in pixels from the left edge of the frame.
Region distance in pixels from the top edge of the frame.
Region width in pixels.
Region height in pixels.

The parameters x, y, w and h are expressions, and may contain the following variables:

Width of the input frame.
Height of the input frame.
Quantisation offset to apply within the region.

This must be a real value in the range -1 to +1. A value of zero indicates no quality change. A negative value asks for better quality (less quantisation), while a positive value asks for worse quality (greater quantisation).

The range is calibrated so that the extreme values indicate the largest possible offset - if the rest of the frame is encoded with the worst possible quality, an offset of -1 indicates that this region should be encoded with the best possible quality anyway. Intermediate values are then interpolated in some codec-dependent way.

For example, in 10-bit H.264 the quantisation parameter varies between -12 and 51. A typical qoffset value of -1/10 therefore indicates that this region should be encoded with a QP around one-tenth of the full range better than the rest of the frame. So, if most of the frame were to be encoded with a QP of around 30, this region would get a QP of around 24 (an offset of approximately -1/10 * (51 - -12) = -6.3). An extreme value of -1 would indicate that this region should be encoded with the best possible quality regardless of the treatment of the rest of the frame - that is, should be encoded at a QP of -12.

If set to true, remove any existing regions of interest marked on the frame before adding the new one.

Examples

  • Mark the centre quarter of the frame as interesting.
    addroi=iw/4:ih/4:iw/2:ih/2:-1/10
    
  • Mark the 100-pixel-wide region on the left edge of the frame as very uninteresting (to be encoded at much lower quality than the rest of the frame).
    addroi=0:0:100:ih:+1/5
    

Extract the alpha component from the input as a grayscale video. This is especially useful with the alphamerge filter.

Add or replace the alpha component of the primary input with the grayscale value of a second input. This is intended for use with alphaextract to allow the transmission or storage of frame sequences that have alpha in a format that doesn't support an alpha channel.

For example, to reconstruct full frames from a normal YUV-encoded video and a separate video created with alphaextract, you might use:

movie=in_alpha.mkv [alpha]; [in][alpha] alphamerge [out]

Amplify differences between the current pixel and the pixels of adjacent frames at the same pixel location.

This filter accepts the following options:

Set frame radius. Default is 2. Allowed range is from 1 to 63. For example, a radius of 3 will instruct the filter to calculate the average of 7 frames.
Set factor to amplify difference. Default is 2. Allowed range is from 0 to 65535.
threshold
Set threshold for difference amplification. Any difference greater than or equal to this value will not alter the source pixel. Default is 10. Allowed range is from 0 to 65535.
Set tolerance for difference amplification. Any difference lower than this value will not alter the source pixel. Default is 0. Allowed range is from 0 to 65535.
Set the lower limit for changing the source pixel. Default is 65535. Allowed range is from 0 to 65535. This option controls the maximum possible amount by which the source pixel value may be decreased.
Set the upper limit for changing the source pixel. Default is 65535. Allowed range is from 0 to 65535. This option controls the maximum possible amount by which the source pixel value may be increased.
Set which planes to filter. Default is all. Allowed range is from 0 to 15.

Commands

This filter supports the following commands, which correspond to the options of the same name:

threshold
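
Examples

  • An illustrative sketch (the values are arbitrary), amplifying temporal differences over a 2-frame radius by a factor of 10:
    amplify=radius=2:factor=10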

Same as the subtitles filter, except that it doesn't require libavcodec and libavformat to work. On the other hand, it is limited to ASS (Advanced SubStation Alpha) subtitle files.

This filter accepts the following option in addition to the common options from the subtitles filter:

Set the shaping engine.

Available values are:

The default libass shaping engine, which is the best available.
Fast, font-agnostic shaper that can only do substitutions.
Slower shaper using OpenType for substitutions and positioning.

The default is "auto".

Apply an Adaptive Temporal Averaging Denoiser to the video input.

The filter accepts the following options:

0a
Set threshold A for 1st plane. Default is 0.02. Valid range is 0 to 0.3.
0b
Set threshold B for 1st plane. Default is 0.04. Valid range is 0 to 5.
1a
Set threshold A for 2nd plane. Default is 0.02. Valid range is 0 to 0.3.
1b
Set threshold B for 2nd plane. Default is 0.04. Valid range is 0 to 5.
2a
Set threshold A for 3rd plane. Default is 0.02. Valid range is 0 to 0.3.
2b
Set threshold B for 3rd plane. Default is 0.04. Valid range is 0 to 5.

Threshold A is designed to react on abrupt changes in the input signal and threshold B is designed to react on continuous changes in the input signal.

Set the number of frames the filter will use for averaging. Default is 9. Must be an odd number in the range [5, 129].
Set which planes of the frame the filter will use for averaging. Default is all.
Set which variant of the algorithm the filter will use for averaging. Default is "p" (parallel). Alternatively it can be set to "s" (serial).

Parallel can be faster than serial, while the other way around is never true. Parallel will abort early on the first change greater than the thresholds, while serial will continue processing the other side of the frames if they are equal to or below the thresholds.

0s
1s
2s
Set sigma for the 1st plane, 2nd plane or 3rd plane. Default is 32767. Valid range is from 0 to 32767. This option controls the weight for each pixel in the radius defined by size. The default value means every pixel has the same weight. Setting this option to 0 effectively disables filtering.

Commands

This filter supports the same commands as options, except option "s". The command accepts the same syntax as the corresponding option.
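
Examples

  • An illustrative sketch (the threshold values are arbitrary), slightly raising the thresholds for the first plane and using an 11-frame window:
    atadenoise=0a=0.04:0b=0.12:s=11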

Apply average blur filter.

The filter accepts the following options:

Set horizontal radius size.
Set which planes to filter. By default all planes are filtered.
Set vertical radius size; if zero, it will be the same as "sizeX". Default is 0.

Commands

This filter supports the same commands as options. The command accepts the same syntax as the corresponding option.

If the specified expression is not valid, it is kept at its current value.
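
Examples

  • An illustrative sketch (the sizes are arbitrary), blurring all planes with a 10x10 box:
    avgblur=sizeX=10:sizeY=10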

Turns a static background into transparency.

The filter accepts the following option:

threshold
Threshold for scene change detection.
Similarity percentage with the background.
blend
Set the blend amount for pixels that are not similar.

Commands

This filter supports all of the above options as commands.
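
Examples

  • An illustrative sketch (the values are arbitrary):
    backgroundkey=threshold=0.08:similarity=0.1:blend=0.05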

Compute the bounding box for the non-black pixels in the input frame luma plane.

This filter computes the bounding box containing all the pixels with a luma value greater than the minimum allowed value. The parameters describing the bounding box are printed on the filter log.

The filter accepts the following option:

Set the minimal luma value. Default is 16.

Commands

This filter supports all of the above options as commands.
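
Examples

  • An illustrative sketch using a raised minimal luma value:
    bbox=min_val=32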

Apply bilateral filter, spatial smoothing while preserving edges.

The filter accepts the following options:

Set sigma of gaussian function to calculate spatial weight. Allowed range is 0 to 512. Default is 0.1.
Set sigma of gaussian function to calculate range weight. Allowed range is 0 to 1. Default is 0.1.
Set planes to filter. Default is first only.

Commands

This filter supports all of the above options as commands.
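
Examples

  • An illustrative sketch (the sigma values are arbitrary), smoothing the first plane while preserving edges:
    bilateral=sigmaS=3:sigmaR=0.05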

CUDA accelerated bilateral filter, an edge preserving filter. This filter is mathematically accurate thanks to the use of GPU acceleration. For best output quality, use one to one chroma subsampling, i.e. yuv444p format.

The filter accepts the following options:

Set sigma of gaussian function to calculate spatial weight, also called sigma space. Allowed range is 0.1 to 512. Default is 0.1.
Set sigma of gaussian function to calculate color range weight, also called sigma color. Allowed range is 0.1 to 512. Default is 0.1.
Set window size of the bilateral function to determine the number of neighbours to loop on. If the number entered is even, one will be added automatically. Allowed range is 1 to 255. Default is 1.

Examples

Apply the bilateral filter on a video.
./ffmpeg -v verbose \
-hwaccel cuda -hwaccel_output_format cuda -i input.mp4  \
-init_hw_device cuda \
-filter_complex \
" \
[0:v]scale_cuda=format=yuv444p[scaled_video];
[scaled_video]bilateral_cuda=window_size=9:sigmaS=3.0:sigmaR=50.0" \
-an -sn -c:v h264_nvenc -cq 20 out.mp4

Show and measure bit plane noise.

The filter accepts the following options:

Set which plane to analyze. Default is 1.
Filter out noisy pixels from "bitplane" set above. Default is disabled.
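
Examples

  • An illustrative sketch (the values are arbitrary), measuring noise on the third bit plane and filtering out noisy pixels:
    bitplanenoise=bitplane=3:filter=1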

Detect video intervals that are (almost) completely black. Can be useful to detect chapter transitions, commercials, or invalid recordings.

The filter outputs its detection analysis to both the log and frame metadata. If a black segment of at least the specified minimum duration is found, a line with the start and end timestamps as well as the duration is printed to the log with level "info". In addition, a log line with level "debug" is printed per frame showing the black amount detected for that frame.

The filter also attaches metadata to the first frame of a black segment with key "lavfi.black_start" and to the first frame after the black segment ends with key "lavfi.black_end". The value is the frame's timestamp. This metadata is added regardless of the minimum duration specified.

The filter accepts the following options:

Set the minimum detected black duration expressed in seconds. It must be a non-negative floating point number.

Default value is 2.0.

Set the threshold for considering a picture "black". Express the minimum value for the ratio:
<nb_black_pixels> / <nb_pixels>

for which a picture is considered black. Default value is 0.98.

Set the threshold for considering a pixel "black".

The threshold expresses the maximum pixel luma value for which a pixel is considered "black". The provided value is scaled according to the following equation:

<absolute_threshold> = <luma_minimum_value> + <pixel_black_th> * <luma_range_size>

luma_range_size and luma_minimum_value depend on the input video format, the range is [0-255] for YUV full-range formats and [16-235] for YUV non full-range formats.

Default value is 0.10.

The following example sets the maximum pixel threshold to the minimum value, and detects only black intervals of 2 or more seconds:

blackdetect=d=2:pix_th=0.00

Detect frames that are (almost) completely black. Can be useful to detect chapter transitions or commercials. Output lines consist of the frame number of the detected frame, the percentage of blackness, the position in the file if known or -1 and the timestamp in seconds.

In order to display the output lines, you need to set the loglevel at least to the AV_LOG_INFO value.

This filter exports frame metadata "lavfi.blackframe.pblack". The value represents the percentage of pixels in the picture that are below the threshold value.

It accepts the following parameters:

The percentage of the pixels that have to be below the threshold; it defaults to 98.
The threshold below which a pixel value is considered black; it defaults to 32.
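
Examples

  • An illustrative sketch (the values are arbitrary), flagging frames where at least 95% of the pixels are below a threshold of 28:
    blackframe=amount=95:threshold=28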

Blend two video frames into each other.

The "blend" filter takes two input streams and outputs one stream, the first input is the "top" layer and second input is "bottom" layer. By default, the output terminates when the longest input terminates.

The "tblend" (time blend) filter takes two consecutive frames from one single stream, and outputs the result obtained by blending the new frame on top of the old frame.

A description of the accepted options follows.

Set blend mode for specific pixel component or all pixel components in case of all_mode. Default value is "normal".

Available values for component modes are:

Set blend opacity for specific pixel component or all pixel components in case of all_opacity. Only used in combination with pixel component blend modes.
Set blend expression for specific pixel component or all pixel components in case of all_expr. Note that related mode options will be ignored if those are set.

The expressions can use the following variables:

The sequential number of the filtered frame, starting from 0.
the coordinates of the current sample
the width and height of currently filtered plane
Width and height scale for the plane being filtered. It is the ratio between the dimensions of the current plane to the luma plane, e.g. for a "yuv420p" frame, the values are "1,1" for the luma plane and "0.5,0.5" for the chroma planes.
Time of the current frame, expressed in seconds.
Value of pixel component at current location for first video frame (top layer).
Value of pixel component at current location for second video frame (bottom layer).

The "blend" filter also supports the framesync options.

Examples

  • Apply transition from bottom layer to top layer in first 10 seconds:
    blend=all_expr='A*(if(gte(T,10),1,T/10))+B*(1-(if(gte(T,10),1,T/10)))'
    
  • Apply linear horizontal transition from top layer to bottom layer:
    blend=all_expr='A*(X/W)+B*(1-X/W)'
    
  • Apply 1x1 checkerboard effect:
    blend=all_expr='if(eq(mod(X,2),mod(Y,2)),A,B)'
    
  • Apply uncover left effect:
    blend=all_expr='if(gte(N*SW+X,W),A,B)'
    
  • Apply uncover down effect:
    blend=all_expr='if(gte(Y-N*SH,0),A,B)'
    
  • Apply uncover up-left effect:
    blend=all_expr='if(gte(T*SH*40+Y,H)*gte((T*40*SW+X)*W/H,W),A,B)'
    
Split the video diagonally and show the top and bottom layers on each side:
    blend=all_expr='if(gt(X,Y*(W/H)),A,B)'
    
  • Display differences between the current and the previous frame:
    tblend=all_mode=grainextract
    

Commands

This filter supports the same commands as options.

Determines blockiness of frames without altering the input frames.

Based on Remco Muijs and Ihor Kirenko: "A no-reference blocking artifact measure for adaptive video processing." 2005 13th European signal processing conference.

The filter accepts the following options:

Set minimum and maximum values for determining pixel grids (periods). Default values are [3,24].
Set planes to filter. Default is first only.

Examples

Determine blockiness for the first plane and search for periods within [8,32]:
blockdetect=period_min=8:period_max=32:planes=1

Determines blurriness of frames without altering the input frames.

Based on Marziliano, Pina, et al. "A no-reference perceptual blur metric." Allows for a block-based abbreviation.

The filter accepts the following options:

Set low and high threshold values used by the Canny thresholding algorithm.

The high threshold selects the "strong" edge pixels, which are then connected through 8-connectivity with the "weak" edge pixels selected by the low threshold.

low and high threshold values must be chosen in the range [0,1], and low should be less than or equal to high.

Default value for low is "20/255", and default value for high is "50/255".

Define the radius to search around an edge pixel for local maxima.
Determine blurriness only for the most significant blocks, given in percentage.
Determine blurriness for blocks of width block_width. If set to any value smaller than 1, no blocks are used and the whole image is processed as one, regardless of block_height.
Determine blurriness for blocks of height block_height. If set to any value smaller than 1, no blocks are used and the whole image is processed as one, regardless of block_width.
Set planes to filter. Default is first only.

Examples

Determine blur for 80% of most significant 32x32 blocks:
blurdetect=block_width=32:block_height=32:block_pct=80

Denoise frames using Block-Matching 3D algorithm.

The filter accepts the following options.

Set denoising strength. Default value is 1. Allowed range is from 0 to 999.9. The denoising algorithm is very sensitive to sigma, so adjust it according to the source.
Set local patch size. This sets dimensions in 2D.
Set sliding step for processing blocks. Default value is 4. Allowed range is from 1 to 64. Smaller values allow processing more reference blocks and are slower.
Set the maximal number of similar blocks for the 3rd dimension. Default value is 1. When set to 1, no block matching is done. Larger values allow more blocks in a single group. Allowed range is from 1 to 256.
Set radius for search block matching. Default is 9. Allowed range is from 1 to INT32_MAX.
Set step between two search locations for block matching. Default is 1. Allowed range is from 1 to 64. Smaller is slower.
Set threshold of mean square error for block matching. Valid range is 0 to INT32_MAX.
Set the thresholding parameter for hard thresholding in the 3D transformed domain. Larger values result in stronger hard-thresholding filtering in the frequency domain.
Set filtering estimation mode. Can be "basic" or "final". Default is "basic".
If enabled, the filter will use the 2nd stream for block matching. Default is disabled for the "basic" value of the estim option, and it is always enabled if the value of estim is "final".
Set planes to filter. Default is all available except alpha.

Examples

  • Basic filtering with bm3d:
    bm3d=sigma=3:block=4:bstep=2:group=1:estim=basic
    
  • Same as above, but filtering only luma:
    bm3d=sigma=3:block=4:bstep=2:group=1:estim=basic:planes=1
    
  • Same as above, but with both estimation modes:
    split[a][b],[a]bm3d=sigma=3:block=4:bstep=2:group=1:estim=basic[a],[b][a]bm3d=sigma=3:block=4:bstep=2:group=16:estim=final:ref=1
    
  • Same as above, but prefilter with nlmeans filter instead:
    split[a][b],[a]nlmeans=s=3:r=7:p=3[a],[b][a]bm3d=sigma=3:block=4:bstep=2:group=16:estim=final:ref=1
    

Apply a boxblur algorithm to the input video.

It accepts the following parameters:

A description of the accepted options follows.

Set an expression for the box radius in pixels used for blurring the corresponding input plane.

The radius value must be a non-negative number, and must not be greater than the value of the expression "min(w,h)/2" for the luma and alpha planes, and of "min(cw,ch)/2" for the chroma planes.

Default value for luma_radius is "2". If not specified, chroma_radius and alpha_radius default to the corresponding value set for luma_radius.

The expressions can contain the following constants:

The input width and height in pixels.
The input chroma image width and height in pixels.
The horizontal and vertical chroma subsample values. For example, for the pixel format "yuv422p", hsub is 2 and vsub is 1.
Specify how many times the boxblur filter is applied to the corresponding plane.

Default value for luma_power is 2. If not specified, chroma_power and alpha_power default to the corresponding value set for luma_power.

A value of 0 will disable the effect.

Examples

  • Apply a boxblur filter with the luma, chroma, and alpha radii set to 2:
    boxblur=luma_radius=2:luma_power=1
    boxblur=2:1
    
  • Set the luma radius to 2, and alpha and chroma radius to 0:
    boxblur=2:1:cr=0:ar=0
    
  • Set the luma and chroma radii to a fraction of the video dimension:
    boxblur=luma_radius=min(h\,w)/10:luma_power=1:chroma_radius=min(cw\,ch)/10:chroma_power=1
    

Deinterlace the input video ("bwdif" stands for "Bob Weaver Deinterlacing Filter").

Motion adaptive deinterlacing based on yadif with the use of w3fdif and cubic interpolation algorithms. It accepts the following parameters:

The interlacing mode to adopt. It accepts one of the following values:
0, send_frame
Output one frame for each frame.
1, send_field
Output one frame for each field.

The default value is "send_field".

The picture field parity assumed for the input interlaced video. It accepts one of the following values:
0, tff
Assume the top field is first.
1, bff
Assume the bottom field is first.
-1, auto
Enable automatic detection of field parity.

The default value is "auto". If the interlacing is unknown or the decoder does not export this information, top field first will be assumed.

Specify which frames to deinterlace. Accepts one of the following values:
0, all
Deinterlace all frames.
1, interlaced
Only deinterlace frames marked as interlaced.

The default value is "all".

Deinterlace the input video using the bwdif algorithm, but implemented in CUDA so that it can work as part of a GPU accelerated pipeline with nvdec and/or nvenc.

It accepts the following parameters:

The interlacing mode to adopt. It accepts one of the following values:
0, send_frame
Output one frame for each frame.
1, send_field
Output one frame for each field.

The default value is "send_field".

The picture field parity assumed for the input interlaced video. It accepts one of the following values:
0, tff
Assume the top field is first.
1, bff
Assume the bottom field is first.
-1, auto
Enable automatic detection of field parity.

The default value is "auto". If the interlacing is unknown or the decoder does not export this information, top field first will be assumed.

Specify which frames to deinterlace. Accepts one of the following values:
0, all
Deinterlace all frames.
1, interlaced
Only deinterlace frames marked as interlaced.

The default value is "all".

Repack CEA-708 closed captioning side data.

This filter fixes various issues seen with commercial encoders related to upstream malformed CEA-708 payloads, specifically an incorrect number of tuples (wrong cc_count for the target FPS), and incorrect ordering of tuples (i.e. the CEA-608 tuples are not the first entries in the payload).

Apply Contrast Adaptive Sharpen filter to video stream.

The filter accepts the following options:

Set the sharpening strength. Default value is 0.
Set planes to filter. Default value is to filter all planes except alpha plane.

Commands

This filter supports the same commands as options.
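
For example, a moderate amount of adaptive sharpening might be applied with (the strength value is illustrative):

cas=strength=0.5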

Remove all color information for all colors except for certain one.

The filter accepts the following options:

The color which will not be replaced with neutral chroma.
Similarity percentage with the above color. 0.01 matches only the exact key color, while 1.0 matches everything.
blend
Blend percentage. 0.0 makes pixels either fully gray, or not gray at all. Higher values result in more preserved color.
Signals that the color passed is already in YUV instead of RGB.

Literal colors like "green" or "red" don't make sense with this enabled anymore. This can be used to pass exact YUV values as hexadecimal numbers.

Commands

This filter supports the same commands as options. The command accepts the same syntax as the corresponding option.

If the specified expression is not valid, it is kept at its current value.
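
For example, to desaturate everything except colors close to green (the similarity and blend values are illustrative):

chromahold=color=green:similarity=0.03:blend=0.1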

YUV colorspace color/chroma keying.

The filter accepts the following options:

The color which will be replaced with transparency.
Similarity percentage with the key color.

0.01 matches only the exact key color, while 1.0 matches everything.

blend
Blend percentage.

0.0 makes pixels either fully transparent, or not transparent at all.

Higher values result in semi-transparent pixels, with a higher transparency the more similar the pixels color is to the key color.

Signals that the color passed is already in YUV instead of RGB.

Literal colors like "green" or "red" don't make sense with this enabled anymore. This can be used to pass exact YUV values as hexadecimal numbers.

Commands

This filter supports the same commands as options. The command accepts the same syntax as the corresponding option.

If the specified expression is not valid, it is kept at its current value.

Examples

  • Make every green pixel in the input image transparent:
    ffmpeg -i input.png -vf chromakey=green out.png
    
  • Overlay a greenscreen-video on top of a static black background.
    ffmpeg -f lavfi -i color=c=black:s=1280x720 -i video.mp4 -shortest -filter_complex "[1:v]chromakey=0x70de77:0.1:0.2[ckout];[0:v][ckout]overlay[out]" -map "[out]" output.mkv
    

CUDA accelerated YUV colorspace color/chroma keying.

This filter works like the normal chromakey filter but operates on CUDA frames. For more details and parameters see chromakey.

Examples

  • Make all the green pixels in the input video transparent and use it as an overlay for another video:
    ./ffmpeg \
        -hwaccel cuda -hwaccel_output_format cuda -i input_green.mp4  \
        -hwaccel cuda -hwaccel_output_format cuda -i base_video.mp4 \
        -init_hw_device cuda \
        -filter_complex \
        " \
            [0:v]chromakey_cuda=0x25302D:0.1:0.12:1[overlay_video]; \
            [1:v]scale_cuda=format=yuv420p[base]; \
            [base][overlay_video]overlay_cuda" \
        -an -sn -c:v h264_nvenc -cq 20 output.mp4
    
  • Process two software sources, explicitly uploading the frames:
    ./ffmpeg -init_hw_device cuda=cuda -filter_hw_device cuda \
        -f lavfi -i color=size=800x600:color=white,format=yuv420p \
        -f lavfi -i yuvtestsrc=size=200x200,format=yuv420p \
        -filter_complex \
        " \
            [0]hwupload[under]; \
            [1]hwupload,chromakey_cuda=green:0.1:0.12[over]; \
            [under][over]overlay_cuda" \
        -c:v hevc_nvenc -cq 18 -preset slow output.mp4
    

Reduce chrominance noise.

The filter accepts the following options:

Set threshold for averaging chrominance values. Neighbour pixels whose sum of absolute differences of the Y, U and V components with the current pixel is lower than this threshold will be used in averaging. The luma component is left unchanged and is copied to the output. Default value is 30. Allowed range is from 1 to 200.
Set horizontal radius of rectangle used for averaging. Allowed range is from 1 to 100. Default value is 5.
Set vertical radius of rectangle used for averaging. Allowed range is from 1 to 100. Default value is 5.
Set horizontal step when averaging. Default value is 1. Allowed range is from 1 to 50. Mostly useful to speed-up filtering.
Set vertical step when averaging. Default value is 1. Allowed range is from 1 to 50. Mostly useful to speed-up filtering.
Set Y threshold for averaging chrominance values. Set finer control for max allowed difference between Y components of current pixel and neighbour pixels. Default value is 200. Allowed range is from 1 to 200.
Set U threshold for averaging chrominance values. Set finer control for max allowed difference between U components of current pixel and neighbour pixels. Default value is 200. Allowed range is from 1 to 200.
Set V threshold for averaging chrominance values. Set finer control for max allowed difference between V components of current pixel and neighbour pixels. Default value is 200. Allowed range is from 1 to 200.
Set distance type used in calculations.
Absolute difference.
Difference squared.

Default distance type is manhattan.

Commands

This filter supports the same commands as options. The command accepts the same syntax as the corresponding option.
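
For example, a stronger chroma denoise using a larger averaging rectangle might look like this (a sketch assuming the usual thres, sizew and sizeh option names; values are illustrative):

chromanr=thres=50:sizew=10:sizeh=10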

Shift chroma pixels horizontally and/or vertically.

The filter accepts the following options:

Set amount to shift chroma-blue horizontally.
Set amount to shift chroma-blue vertically.
Set amount to shift chroma-red horizontally.
Set amount to shift chroma-red vertically.
Set edge mode, can be smear, default, or warp.

Commands

This filter supports all the above options as commands.
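
For example, to shift the chroma-blue plane horizontally by 8 pixels and the chroma-red plane vertically by -8 pixels (a sketch assuming the usual cbh and crv option names; values are illustrative):

chromashift=cbh=8:crv=-8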

Display CIE color diagram with pixels overlaid onto it.

The filter accepts the following options:

Set color system.
Set CIE system.
Set what gamuts to draw.

See "system" option for available values.

Set ciescope size, by default set to 512.
Set intensity used to map input pixel values to CIE diagram.
Set contrast used to draw tongue colors that are out of active color system gamut.
Correct gamma displayed on scope, by default enabled.
Show white point on CIE diagram, by default disabled.
Set input gamma. Used only with XYZ input color space.
Fill with CIE colors. By default this is enabled.
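
For example, to view a larger diagram with the white point marked (a sketch assuming the usual size and showwhite option names):

ffplay input.mp4 -vf ciescope=size=1024:showwhite=1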

Visualize information exported by some codecs.

Some codecs can export information through frames using side-data or other means. For example, some MPEG based codecs export motion vectors through the export_mvs flag in the codec flags2 option.

The filter accepts the following option:

Display block partition structure using the luma plane.
Set motion vectors to visualize.

Available flags for mv are:

forward predicted MVs of P-frames
forward predicted MVs of B-frames
backward predicted MVs of B-frames
qp
Display quantization parameters using the chroma planes.
Set motion vector type to visualize. Includes MVs from all frames unless specified by frame_type option.

Available flags for mv_type are:

forward predicted MVs
backward predicted MVs
Set frame type to visualize motion vectors of.

Available flags for frame_type are:

intra-coded frames (I-frames)
predicted frames (P-frames)
bi-directionally predicted frames (B-frames)

Examples

  • Visualize forward predicted MVs of all frames using ffplay:
    ffplay -flags2 +export_mvs input.mp4 -vf codecview=mv_type=fp
    
Visualize multi-directional MVs of P and B-frames using ffplay:
    ffplay -flags2 +export_mvs input.mp4 -vf codecview=mv=pf+bf+bb
    

Modify intensity of primary colors (red, green and blue) of input frames.

The filter allows an input frame to be adjusted in the shadows, midtones or highlights regions for the red-cyan, green-magenta or blue-yellow balance.

A positive adjustment value shifts the balance towards the primary color, a negative value towards the complementary color.

The filter accepts the following options:

Adjust red, green and blue shadows (darkest pixels).
Adjust red, green and blue midtones (medium pixels).
Adjust red, green and blue highlights (brightest pixels).

Allowed ranges for options are "[-1.0, 1.0]". Defaults are 0.

Preserve lightness when changing color balance. Default is disabled.

Examples

Add red color cast to shadows:
colorbalance=rs=.3

Commands

This filter supports all the above options as commands.

Adjust color contrast between RGB components.

The filter accepts the following options:

Set the red-cyan contrast. Default is 0.0. Allowed range is from -1.0 to 1.0.
Set the green-magenta contrast. Default is 0.0. Allowed range is from -1.0 to 1.0.
Set the blue-yellow contrast. Default is 0.0. Allowed range is from -1.0 to 1.0.
Set the weight of each "rc", "gm", "by" option value. Default value is 0.0. Allowed range is from 0.0 to 1.0. If all weights are 0.0 filtering is disabled.
Set the amount of preserving lightness. Default value is 0.0. Allowed range is from 0.0 to 1.0.

Commands

This filter supports all the above options as commands.
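
For example, to increase the red-cyan contrast at full weight (a sketch assuming the usual rc and rcw option names; values are illustrative):

colorcontrast=rc=0.3:rcw=1.0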

Adjust color white balance selectively for blacks and whites. This filter operates in YUV colorspace.

The filter accepts the following options:

Set the red shadow spot. Allowed range is from -1.0 to 1.0. Default value is 0.
Set the blue shadow spot. Allowed range is from -1.0 to 1.0. Default value is 0.
Set the red highlight spot. Allowed range is from -1.0 to 1.0. Default value is 0.
Set the blue highlight spot. Allowed range is from -1.0 to 1.0. Default value is 0.
Set the amount of saturation. Allowed range is from -3.0 to 3.0. Default value is 1.
If set to anything other than "manual" it will analyze every frame and use derived parameters for filtering the output frame.

Possible values are:

Default value is "manual".

Commands

This filter supports all the above options as commands.
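
For example, to adjust the shadow and highlight spots manually while boosting saturation (a sketch assuming the usual rl, bh and saturation option names; values are illustrative):

colorcorrect=rl=0.2:bh=-0.2:saturation=1.5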

Adjust video input frames by re-mixing color channels.

This filter modifies a color channel by adding the values associated with the other channels of the same pixels. For example, if the value to modify is red, the output value will be:

<red>=<red>*<rr> + <blue>*<rb> + <green>*<rg> + <alpha>*<ra>

The filter accepts the following options:

Adjust contribution of input red, green, blue and alpha channels for output red channel. Default is 1 for rr, and 0 for rg, rb and ra.
Adjust contribution of input red, green, blue and alpha channels for output green channel. Default is 1 for gg, and 0 for gr, gb and ga.
Adjust contribution of input red, green, blue and alpha channels for output blue channel. Default is 1 for bb, and 0 for br, bg and ba.
aa
Adjust contribution of input red, green, blue and alpha channels for output alpha channel. Default is 1 for aa, and 0 for ar, ag and ab.

Allowed ranges for options are "[-2.0, 2.0]".

Set preserve color mode. The accepted values are:
Disable color preserving, this is default.
Preserve luminance.
Preserve max value of RGB triplet.
Preserve average value of RGB triplet.
Preserve sum value of RGB triplet.
Preserve normalized value of RGB triplet.
Preserve power value of RGB triplet.
Set the preserve color amount when changing colors. Allowed range is "[0.0, 1.0]". Default is 0.0, thus disabled.

Examples

  • Convert source to grayscale:
    colorchannelmixer=.3:.4:.3:0:.3:.4:.3:0:.3:.4:.3
    
  • Simulate sepia tones:
    colorchannelmixer=.393:.769:.189:0:.349:.686:.168:0:.272:.534:.131
    

Commands

This filter supports all the above options as commands.

Overlay a solid color on the video stream.

The filter accepts the following options:

hue
Set the color hue. Allowed range is from 0 to 360. Default value is 0.
Set the color saturation. Allowed range is from 0 to 1. Default value is 0.5.
Set the color lightness. Allowed range is from 0 to 1. Default value is 0.5.
mix
Set the mix of source lightness. By default it is set to 1.0. Allowed range is from 0.0 to 1.0.

Commands

This filter supports all the above options as commands.
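
For example, to overlay a green tint while retaining most of the source lightness (values are illustrative):

colorize=hue=120:saturation=0.6:lightness=0.5:mix=0.8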

RGB colorspace color keying. This filter operates on 8-bit RGB format frames by setting the alpha component of each pixel which falls within the similarity radius of the key color to 0. The alpha value for pixels outside the similarity radius depends on the value of the blend option.

The filter accepts the following options:

Set the color for which alpha will be set to 0 (full transparency). See "Color" section in the ffmpeg-utils manual. Default is "black".
Set the radius from the key color within which other colors also have full transparency. The computed distance is related to the unit fractional distance in 3D space between the RGB values of the key color and the pixel's color. Range is 0.01 to 1.0. 0.01 matches within a very small radius around the exact key color, while 1.0 matches everything. Default is 0.01.
blend
Set how the alpha value for pixels that fall outside the similarity radius is computed. 0.0 makes pixels either fully transparent or fully opaque. Higher values result in semi-transparent pixels, with greater transparency the more similar the pixel color is to the key color. Range is 0.0 to 1.0. Default is 0.0.

Examples

  • Make every green pixel in the input image transparent:
    ffmpeg -i input.png -vf colorkey=green out.png
    
  • Overlay a greenscreen-video on top of a static background image.
    ffmpeg -i background.png -i video.mp4 -filter_complex "[1:v]colorkey=0x3BBD1E:0.3:0.2[ckout];[0:v][ckout]overlay[out]" -map "[out]" output.flv
    

Commands

This filter supports the same commands as options. The command accepts the same syntax as the corresponding option.

If the specified expression is not valid, it is kept at its current value.

Remove all color information for all RGB colors except for certain one.

The filter accepts the following options:

The color which will not be replaced with neutral gray.
Similarity percentage with the above color. 0.01 matches only the exact key color, while 1.0 matches everything.
blend
Blend percentage. 0.0 makes pixels fully gray. Higher values result in more preserved color.

Commands

This filter supports the same commands as options. The command accepts the same syntax as the corresponding option.

If the specified expression is not valid, it is kept at its current value.
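
For example, to keep only colors close to red and turn everything else gray (the similarity and blend values are illustrative):

ffmpeg -i input.png -vf colorhold=red:0.1:0.2 out.png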

Adjust video input frames using levels.

The filter accepts the following options:

Adjust red, green, blue and alpha input black point. Allowed ranges for options are "[-1.0, 1.0]". Defaults are 0.
Adjust red, green, blue and alpha input white point. Allowed ranges for options are "[-1.0, 1.0]". Defaults are 1.

Input levels are used to lighten highlights (bright tones), darken shadows (dark tones), change the balance of bright and dark tones.

Adjust red, green, blue and alpha output black point. Allowed ranges for options are "[0, 1.0]". Defaults are 0.
Adjust red, green, blue and alpha output white point. Allowed ranges for options are "[0, 1.0]". Defaults are 1.

Output levels allow manual selection of a constrained output level range.

Set preserve color mode. The accepted values are:
Disable color preserving, this is default.
Preserve luminance.
Preserve max value of RGB triplet.
Preserve average value of RGB triplet.
Preserve sum value of RGB triplet.
Preserve normalized value of RGB triplet.
Preserve power value of RGB triplet.

Examples

  • Make video output darker:
    colorlevels=rimin=0.058:gimin=0.058:bimin=0.058
    
  • Increase contrast:
    colorlevels=rimin=0.039:gimin=0.039:bimin=0.039:rimax=0.96:gimax=0.96:bimax=0.96
    
  • Make video output lighter:
    colorlevels=rimax=0.902:gimax=0.902:bimax=0.902
    
  • Increase brightness:
    colorlevels=romin=0.5:gomin=0.5:bomin=0.5
    

Commands

This filter supports all the above options as commands.

Apply custom color maps to video stream.

This filter needs three input video streams. The first stream is the video to be filtered. The second and third video streams specify color patches for source color to target color mapping.

The filter accepts the following options:

Set the source and target video stream patch size in pixels.
Set the max number of used patches from the source and target video streams. Default value is the number of patches available in the additional video streams. The max allowed number of patches is 64.
Set the adjustments used for target colors. Can be "relative" or "absolute". Default is "absolute".
Set the kernel used to measure color differences between mapped colors.

The accepted values are:

Default is "euclidean".

Convert color matrix.

The filter accepts the following options:

Specify the source and destination color matrix. Both values must be specified.

The accepted values are:

BT.709
FCC
BT.601
BT.470
BT.470BG
SMPTE-170M
SMPTE-240M
BT.2020

For example to convert from BT.601 to SMPTE-240M, use the command:

colormatrix=bt601:smpte240m

Convert colorspace, transfer characteristics or color primaries. Input video needs to have an even size.

The filter accepts the following options:

Specify all color properties at once.

The accepted values are:

BT.470M
BT.470BG
BT.601-6 525
BT.601-6 625
BT.709
SMPTE-170M
SMPTE-240M
BT.2020
Specify output colorspace.

The accepted values are:

BT.709
FCC
BT.470BG or BT.601-6 625
SMPTE-170M or BT.601-6 525
SMPTE-240M
YCgCo
BT.2020 with non-constant luminance
Specify output transfer characteristics.

The accepted values are:

BT.709
BT.470M
BT.470BG
Constant gamma of 2.2
Constant gamma of 2.8
SMPTE-170M, BT.601-6 625 or BT.601-6 525
SMPTE-240M
SRGB
iec61966-2-1
iec61966-2-4
xvycc
BT.2020 for 10-bits content
BT.2020 for 12-bits content
Specify output color primaries.

The accepted values are:

BT.709
BT.470M
BT.470BG or BT.601-6 625
SMPTE-170M or BT.601-6 525
SMPTE-240M
film
SMPTE-431
SMPTE-432
BT.2020
JEDEC P22 phosphors
Specify output color range.

The accepted values are:

TV (restricted) range
MPEG (restricted) range
PC (full) range
JPEG (full) range
format
Specify output color format.

The accepted values are:

YUV 4:2:0 planar 8-bits
YUV 4:2:0 planar 10-bits
YUV 4:2:0 planar 12-bits
YUV 4:2:2 planar 8-bits
YUV 4:2:2 planar 10-bits
YUV 4:2:2 planar 12-bits
YUV 4:4:4 planar 8-bits
YUV 4:4:4 planar 10-bits
YUV 4:4:4 planar 12-bits
Do a fast conversion, which skips gamma/primary correction. This will take significantly less CPU, but will be mathematically incorrect. To get output compatible with that produced by the colormatrix filter, use fast=1.
Specify dithering mode.

The accepted values are:

No dithering
Floyd-Steinberg dithering
Whitepoint adaptation mode.

The accepted values are:

Bradford whitepoint adaptation
von Kries whitepoint adaptation
identity
identity whitepoint adaptation (i.e. no whitepoint adaptation)
Override all input properties at once. Same accepted values as all.
Override input colorspace. Same accepted values as space.
Override input color primaries. Same accepted values as primaries.
Override input transfer characteristics. Same accepted values as trc.
Override input color range. Same accepted values as range.

The filter converts the transfer characteristics, color space and color primaries to the specified user values. The output value, if not specified, is set to a default value based on the "all" property. If that property is also not specified, the filter will log an error. The output color range and format default to the same value as the input color range and format. The input transfer characteristics, color space, color primaries and color range should be set on the input data. If any of these are missing, the filter will log an error and no conversion will take place.

For example to convert the input to SMPTE-240M, use the command:

colorspace=smpte240m

CUDA accelerated implementation of the colorspace filter.

It is by no means feature complete compared to the software colorspace filter, and at the current time only supports color range conversion between jpeg/full and mpeg/limited range.

The filter accepts the following options:

Specify output color range.

The accepted values are:

TV (restricted) range
MPEG (restricted) range
PC (full) range
JPEG (full) range
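
For example, to convert a decoded stream to full range on the GPU (a sketch assuming the option is named range as in the software colorspace filter; file names are illustrative):

ffmpeg -hwaccel cuda -hwaccel_output_format cuda -i input.mp4 -vf colorspace_cuda=range=pc -c:v h264_nvenc output.mp4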

Adjust color temperature in video to simulate variations in ambient color temperature.

The filter accepts the following options:

Set the temperature in Kelvin. Allowed range is from 1000 to 40000. Default value is 6500 K.
mix
Set mixing with filtered output. Allowed range is from 0 to 1. Default value is 1.
Set the amount of preserving lightness. Allowed range is from 0 to 1. Default value is 0.

Commands

This filter supports the same commands as options.
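
For example, to simulate warmer ambient light (the temperature and mix values are illustrative):

colortemperature=temperature=3200:mix=0.9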

Apply convolution of 3x3, 5x5, 7x7 or horizontal/vertical up to 49 elements.

The filter accepts the following options:

0m
1m
2m
3m
Set matrix for each plane. The matrix is a sequence of 9, 25 or 49 signed integers in square mode, and an odd number of signed integers from 1 to 49 in row mode.
0rdiv
1rdiv
2rdiv
3rdiv
Set multiplier for calculated value for each plane. If unset or 0, it will be 1/sum of all matrix elements.
0bias
1bias
2bias
3bias
Set bias for each plane. This value is added to the result of the multiplication. Useful for making the overall image brighter or darker. Default is 0.0.
0mode
1mode
2mode
3mode
Set matrix mode for each plane. Can be square, row or column. Default is square.

Commands

This filter supports all the above options as commands.

Examples

  • Apply sharpen:
    convolution="0 -1 0 -1 5 -1 0 -1 0:0 -1 0 -1 5 -1 0 -1 0:0 -1 0 -1 5 -1 0 -1 0:0 -1 0 -1 5 -1 0 -1 0"
    
  • Apply blur:
    convolution="1 1 1 1 1 1 1 1 1:1 1 1 1 1 1 1 1 1:1 1 1 1 1 1 1 1 1:1 1 1 1 1 1 1 1 1:1/9:1/9:1/9:1/9"
    
  • Apply edge enhance:
    convolution="0 0 0 -1 1 0 0 0 0:0 0 0 -1 1 0 0 0 0:0 0 0 -1 1 0 0 0 0:0 0 0 -1 1 0 0 0 0:5:1:1:1:0:128:128:128"
    
  • Apply edge detect:
    convolution="0 1 0 1 -4 1 0 1 0:0 1 0 1 -4 1 0 1 0:0 1 0 1 -4 1 0 1 0:0 1 0 1 -4 1 0 1 0:5:5:5:1:0:128:128:128"
    
  • Apply laplacian edge detector which includes diagonals:
    convolution="1 1 1 1 -8 1 1 1 1:1 1 1 1 -8 1 1 1 1:1 1 1 1 -8 1 1 1 1:1 1 1 1 -8 1 1 1 1:5:5:5:1:0:128:128:0"
    
  • Apply emboss:
    convolution="-2 -1 0 -1 1 1 0 1 2:-2 -1 0 -1 1 1 0 1 2:-2 -1 0 -1 1 1 0 1 2:-2 -1 0 -1 1 1 0 1 2"
    

Apply 2D convolution of video stream in frequency domain using second stream as impulse.

The filter accepts the following options:

Set which planes to process.
Set which impulse video frames will be processed, can be first or all. Default is all.

The "convolve" filter also supports the framesync options.

Copy the input video source unchanged to the output. This is mainly useful for testing purposes.

Video filtering on GPU using Apple's CoreImage API on OSX.

Hardware acceleration is based on an OpenGL context. Usually, this means it is processed by video hardware. However, software-based OpenGL implementations exist which means there is no guarantee for hardware processing. It depends on the respective OSX version.

There are many filters and image generators provided by Apple that come with a large variety of options. The filter has to be referenced by its name along with its options.

The coreimage filter accepts the following options:

List all available filters and generators along with all their respective options as well as possible minimum and maximum values along with the default values.
list_filters=true
Specify all filters by their respective name and options. Use list_filters to determine all valid filter names and options. Numerical options are specified by a float value and are automatically clamped to their respective value range. Vector and color options have to be specified by a list of space separated float values. Character escaping has to be done. A special option name "default" is available to use default options for a filter.

It is required to specify either "default" or at least one of the filter options. All omitted options are used with their default values. The syntax of the filter string is as follows:

filter=<NAME>@<OPTION>=<VALUE>[@<OPTION>=<VALUE>][@...][#<NAME>@<OPTION>=<VALUE>[@<OPTION>=<VALUE>][@...]][#...]
Specify a rectangle where the output of the filter chain is copied into the input image. It is given by a list of space separated float values:
output_rect=x\ y\ width\ height

If not given, the output rectangle equals the dimensions of the input image. The output rectangle is automatically cropped at the borders of the input image. Negative values are valid for each component.

output_rect=25\ 25\ 100\ 100

Several filters can be chained for successive processing without GPU-HOST transfers allowing for fast processing of complex filter chains. Currently, only filters with zero (generators) or exactly one (filters) input image and one output image are supported. Also, transition filters are not yet usable as intended.

Some filters generate output images with additional padding depending on the respective filter kernel. The padding is automatically removed to ensure the filter output has the same size as the input image.

For image generators, the size of the output image is determined by the previous output image of the filter chain or the input image of the whole filterchain, respectively. The generators do not use the pixel information of this image to generate their output. However, the generated output is blended onto this image, resulting in partial or complete coverage of the output image.

The coreimagesrc video source can be used for generating input images which are directly fed into the filter chain. By using it, providing input images by another video source or an input video is not required.

Examples

  • List all filters available:
    coreimage=list_filters=true
    
  • Use the CIBoxBlur filter with default options to blur an image:
    coreimage=filter=CIBoxBlur@default
    
  • Use a filter chain with CISepiaTone at default values and CIVignetteEffect with its center at 100x100 and a radius of 50 pixels:
    coreimage=filter=CIBoxBlur@default#CIVignetteEffect@inputCenter=100\ 100@inputRadius=50
    
  • Use nullsrc and CIQRCodeGenerator to create a QR code for the FFmpeg homepage, given as complete and escaped command-line for Apple's standard bash shell:
    ffmpeg -f lavfi -i nullsrc=s=100x100,coreimage=filter=CIQRCodeGenerator@inputMessage=https\\\\\://FFmpeg.org/@inputCorrectionLevel=H -frames:v 1 QRCode.png
    

Obtain the correlation between two input videos.

This filter takes two input videos.

Both input videos must have the same resolution and pixel format for this filter to work correctly. Also it assumes that both inputs have the same number of frames, which are compared one by one.

The obtained per-component, average, min and max correlation values are printed through the logging system.

The filter stores the calculated correlation of each frame in frame metadata.

This filter also supports the framesync options.

In the example below, the input file main.mpg being processed is compared with the reference file ref.mpg.

ffmpeg -i main.mpg -i ref.mpg -lavfi corr -f null -

Cover a rectangular object.

It accepts the following options:

Filepath of the optional cover image; it needs to be in yuv420 format.
Set covering mode.

It accepts the following values:

cover it by the supplied image
cover it by interpolating the surrounding pixels

Default value is blur.

Examples

Cover a rectangular object by the supplied image of a given video using ffmpeg:
ffmpeg -i file.ts -vf find_rect=newref.pgm,cover_rect=cover.jpg:mode=cover new.mkv

Crop the input video to given dimensions.

It accepts the following parameters:

The width of the output video. It defaults to "iw". This expression is evaluated only once during the filter configuration, or when the w or out_w command is sent.
The height of the output video. It defaults to "ih". This expression is evaluated only once during the filter configuration, or when the h or out_h command is sent.
The horizontal position, in the input video, of the left edge of the output video. It defaults to "(in_w-out_w)/2". This expression is evaluated per-frame.
The vertical position, in the input video, of the top edge of the output video. It defaults to "(in_h-out_h)/2". This expression is evaluated per-frame.
If set to 1 will force the output display aspect ratio to be the same as the input, by changing the output sample aspect ratio. It defaults to 0.
Enable exact cropping. If enabled, subsampled videos will be cropped at exact width/height/x/y as specified and will not be rounded to nearest smaller value. It defaults to 0.

The out_w, out_h, x, y parameters are expressions containing the following constants:

The computed values for x and y. They are evaluated for each new frame.
The input width and height.
These are the same as in_w and in_h.
The output (cropped) width and height.
These are the same as out_w and out_h.
same as iw / ih
input sample aspect ratio
input display aspect ratio, it is the same as (iw / ih) * sar
horizontal and vertical chroma subsample values. For example for the pixel format "yuv422p" hsub is 2 and vsub is 1.
The number of the input frame, starting from 0.
the position in the file of the input frame, NAN if unknown; deprecated, do not use
The timestamp expressed in seconds. It's NAN if the input timestamp is unknown.

The expression for out_w may depend on the value of out_h, and the expression for out_h may depend on out_w, but they cannot depend on x and y, as x and y are evaluated after out_w and out_h.

The x and y parameters specify the expressions for the position of the top-left corner of the output (non-cropped) area. They are evaluated for each frame. If the evaluated value is not valid, it is approximated to the nearest valid value.

The expression for x may depend on y, and the expression for y may depend on x.

Examples

  • Crop area with size 100x100 at position (12,34).
    crop=100:100:12:34
    

    Using named options, the example above becomes:

    crop=w=100:h=100:x=12:y=34
    
  • Crop the central input area with size 100x100:
    crop=100:100
    
  • Crop the central input area with size 2/3 of the input video:
    crop=2/3*in_w:2/3*in_h
    
  • Crop the input video central square:
    crop=out_w=in_h
    crop=in_h
    
  • Delimit the rectangle with the top-left corner placed at position 100:100 and the right-bottom corner corresponding to the right-bottom corner of the input image.
    crop=in_w-100:in_h-100:100:100
    
  • Crop 10 pixels from the left and right borders, and 20 pixels from the top and bottom borders
    crop=in_w-2*10:in_h-2*20
    
  • Keep only the bottom right quarter of the input image:
    crop=in_w/2:in_h/2:in_w/2:in_h/2
    
  • Crop height for getting Greek harmony:
    crop=in_w:1/PHI*in_w
    
  • Apply trembling effect:
crop=in_w/2:in_h/2:(in_w-out_w)/2+((in_w-out_w)/2)*sin(n/10):(in_h-out_h)/2+((in_h-out_h)/2)*sin(n/7)
    
  • Apply erratic camera effect depending on timestamp:
crop=in_w/2:in_h/2:(in_w-out_w)/2+((in_w-out_w)/2)*sin(t*10):(in_h-out_h)/2+((in_h-out_h)/2)*sin(t*13)
    
  • Set x depending on the value of y:
    crop=in_w/2:in_h/2:y:10+10*sin(n/10)
    

Commands

This filter supports the following commands:

Set width/height of the output video and the horizontal/vertical position in the input video. The command accepts the same syntax as the corresponding option.

If the specified expression is not valid, it is kept at its current value.

Auto-detect the crop size.

It calculates the necessary cropping parameters and prints the recommended parameters via the logging system. The detected dimensions correspond to the non-black or video area of the input video according to mode.

It accepts the following parameters:

Depending on mode crop detection is based on either the mere black value of surrounding pixels or a combination of motion vectors and edge pixels.
Detect black pixels surrounding the playing video. For fine control use option limit.
Detect the playing video by the motion vectors inside the video and scanning for edge pixels typically forming the border of a playing video.
Set higher black value threshold, which can be optionally specified from nothing (0) to everything (255 for 8-bit based formats). An intensity value greater than the set value is considered non-black. It defaults to 24. You can also specify a value between 0.0 and 1.0 which will be scaled depending on the bitdepth of the pixel format.
The value which the width/height should be divisible by. It defaults to 16. The offset is automatically adjusted to center the video. Use 2 to get only even dimensions (needed for 4:2:2 video). 16 is best when encoding to most video codecs.
Set the number of initial frames for which evaluation is skipped. Default is 2. Range is 0 to INT_MAX.
Set the counter that determines after how many frames cropdetect will reset the previously detected largest video area and start over to detect the current optimal crop area. Default value is 0.

This can be useful when channel logos distort the video area. 0 indicates 'never reset', and returns the largest area encountered during playback.

Set motion in pixel units as threshold for motion detection. It defaults to 8.
Set low and high threshold values used by the Canny thresholding algorithm.

The high threshold selects the "strong" edge pixels, which are then connected through 8-connectivity with the "weak" edge pixels selected by the low threshold.

low and high threshold values must be chosen in the range [0,1], and low should be less than or equal to high.

Default value for low is "5/255", and default value for high is "15/255".

Examples

  • Find video area surrounded by black borders:
    ffmpeg -i file.mp4 -vf cropdetect,metadata=mode=print -f null -
    
  • Find an embedded video area, generate motion vectors beforehand:
    ffmpeg -i file.mp4 -vf mestimate,cropdetect=mode=mvedges,metadata=mode=print -f null -
    
  • Find an embedded video area, use motion vectors from decoder:
    ffmpeg -flags2 +export_mvs -i file.mp4 -vf cropdetect=mode=mvedges,metadata=mode=print -f null -
    

Commands

This filter supports the following commands:

The command accepts the same syntax as the corresponding option. If the specified expression is not valid, it is kept at its current value.

Delay video filtering until a given wallclock timestamp. The filter first passes on preroll amount of frames, then it buffers at most buffer amount of frames and waits for the cue. After reaching the cue it forwards the buffered frames and also any subsequent frames coming in its input.

The filter can be used to synchronize the output of multiple ffmpeg processes for realtime output devices like decklink. By putting the delay in the filtering chain and pre-buffering frames the process can pass on data to output almost immediately after the target wallclock timestamp is reached.

Perfect frame accuracy cannot be guaranteed, but the result is good enough for some use cases.

cue
The cue timestamp expressed as a UNIX timestamp in microseconds. Default is 0.
The duration of content to pass on as preroll expressed in seconds. Default is 0.
buffer
The maximum duration of content to buffer before waiting for the cue expressed in seconds. Default is 0.
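
For example, to hold back output until a given wallclock time, passing on 2 seconds of preroll and buffering up to 5 seconds of content (the timestamp is illustrative):

cue=cue=1700000000000000:preroll=2:buffer=5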

Apply color adjustments using curves.

This filter is similar to the Adobe Photoshop and GIMP curves tools. Each component (red, green and blue) has its values defined by N key points joined to each other using a smooth curve. The x-axis represents the pixel values from the input frame, and the y-axis the new pixel values to be set for the output frame.

By default, a component curve is defined by the two points (0;0) and (1;1). This creates a straight line where each original pixel value is "adjusted" to its own value, which means no change to the image.

The filter allows you to redefine these two points and add some more. A new curve will be defined to pass smoothly through all these new coordinates. The newly defined points need to be strictly increasing over the x-axis, and their x and y values must be in the [0;1] interval. The curve is formed by using a natural or monotonic cubic spline interpolation, depending on the interp option (default: "natural"). The "natural" spline produces a smoother curve in general while the monotonic ("pchip") spline guarantees the transitions between the specified points to be monotonic. If the computed curves happen to go outside the vector spaces, the values will be clipped accordingly.

The filter accepts the following options:

Select one of the available color presets. This option can be used in addition to the r, g, b parameters; in this case, the latter options take priority over the preset values. Available presets are:

Default is "none".

Set the master key points. These points will define a second pass mapping. It is sometimes called a "luminance" or "value" mapping. It can be used with r, g, b or all since it acts like a post-processing LUT.
Set the key points for the red component.
Set the key points for the green component.
Set the key points for the blue component.
Set the key points for all components (not including master). Can be used in addition to the other key points component options. In this case, the unset component(s) will fallback on this all setting.
Specify a Photoshop curves file (".acv") to import the settings from.
Save Gnuplot script of the curves in specified file.
Specify the kind of interpolation. Available algorithms are:
Natural cubic spline using a piece-wise cubic polynomial that is twice continuously differentiable.
Monotonic cubic spline using a piecewise cubic Hermite interpolating polynomial (PCHIP).

To avoid some filtergraph syntax conflicts, each key points list needs to be defined using the following syntax: "x0/y0 x1/y1 x2/y2 ...".

Commands

This filter supports the same commands as options.

Examples

  • Increase slightly the middle level of blue:
    curves=blue='0/0 0.5/0.58 1/1'
    
  • Vintage effect:
    curves=r='0/0.11 .42/.51 1/0.95':g='0/0 0.50/0.48 1/1':b='0/0.22 .49/.44 1/0.8'
    

Here we obtain the following coordinates for each component:

"(0;0.11) (0.42;0.51) (1;0.95)"
"(0;0) (0.50;0.48) (1;1)"
"(0;0.22) (0.49;0.44) (1;0.80)"
  • The previous example can also be achieved with the associated built-in preset:
    curves=preset=vintage
    
  • Or simply:
    curves=vintage
    
  • Use a Photoshop preset and redefine the points of the green component:
    curves=psfile='MyCurvesPresets/purple.acv':green='0/0 0.45/0.53 1/1'
    
  • Check out the curves of the "cross_process" profile using ffmpeg and gnuplot:
    ffmpeg -f lavfi -i color -vf curves=cross_process:plot=/tmp/curves.plt -frames:v 1 -f null -
    gnuplot -p /tmp/curves.plt
    

Video data analysis filter.

This filter shows hexadecimal pixel values of part of video.

The filter accepts the following options:

Set output video size.
Set x offset from where to pick pixels.
Set y offset from where to pick pixels.
Set scope mode, can be one of the following:
Draw hexadecimal pixel values with white color on black background.
Draw hexadecimal pixel values with input video pixel color on black background.
Draw hexadecimal pixel values on color background picked from input video; the text color is picked in such a way that it is always visible.
Draw rows and columns numbers on left and top of video.
Set background opacity.
format
Set display number format. Can be "hex", or "dec". Default is "hex".
Set pixel components to display. By default all pixel components are displayed.

Commands

This filter supports the same commands as options, excluding the "size" option.
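
For example, to display hexadecimal pixel values in a 640x480 output picked from offset (10,20), with row and column numbers drawn (a sketch assuming the usual s, x, y, mode and axis option names):

datascope=s=640x480:x=10:y=20:mode=color2:axis=1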

Apply Directional blur filter.

The filter accepts the following options:

Set angle of directional blur. Default is 45.
Set radius of directional blur. Default is 5.
Set which planes to filter. By default all planes are filtered.

Commands

This filter supports the same commands as options. The command accepts the same syntax as the corresponding option.

If the specified expression is not valid, it is kept at its current value.
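
For example, to apply a vertical directional blur (the angle and radius values are illustrative):

dblur=angle=90:radius=20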

Denoise frames using 2D DCT (frequency domain filtering).

This filter is not designed for real time.

The filter accepts the following options:

Set the noise sigma constant.

This sigma defines a hard threshold of "3 * sigma"; every DCT coefficient (absolute value) below this threshold will be dropped.

If you need more advanced filtering, see expr.

Default is 0.

Set the number of overlapping pixels for each block. Since the filter can be slow, you may want to reduce this value, at the cost of a less effective filter and the risk of various artefacts.

If the overlapping value doesn't permit processing the whole input width or height, a warning will be displayed and the corresponding borders won't be denoised.

Default value is blocksize-1, which is the best possible setting.

Set the coefficient factor expression.

For each coefficient of a DCT block, this expression will be evaluated as a multiplier value for the coefficient.

If this option is set, the sigma option will be ignored.

The absolute value of the coefficient can be accessed through the c variable.

Set the blocksize using the number of bits. "1<<n" defines the blocksize, which is the width and height of the processed blocks.

The default value is 3 (8x8) and can be raised to 4 for a blocksize of 16x16. Note that changing this setting has huge consequences for processing speed. Also, a larger block size does not necessarily mean better de-noising.

Examples

Apply a denoise with a sigma of 4.5:

dctdnoiz=4.5

The same operation can be achieved using the expression system:

dctdnoiz=e='gte(c, 4.5*3)'

Violent denoise using a block size of "16x16":

dctdnoiz=15:n=4

Remove banding artifacts from input video. It works by replacing banded pixels with average value of referenced pixels.

The filter accepts the following options:

1thr
2thr
3thr
4thr
Set banding detection threshold for each plane. Default is 0.02. Valid range is 0.00003 to 0.5. If difference between current pixel and reference pixel is less than threshold, it will be considered as banded.
Banding detection range in pixels. Default is 16. If positive, a random number in range 0 to the set value will be used. If negative, the exact absolute value will be used. The range defines a square of four pixels around the current pixel.
Set direction in radians from which four pixels will be compared. If positive, a random direction from 0 to the set direction will be picked. If negative, the exact absolute value will be picked. For example direction 0, -PI or -2*PI radians will pick only pixels on the same row, and -PI/2 will pick only pixels on the same column.
If enabled, the current pixel is compared with the average value of all four surrounding pixels. The default is enabled. If disabled, the current pixel is compared with all four surrounding pixels individually. The pixel is considered banded only if all four differences with surrounding pixels are less than the threshold.
If enabled, the current pixel is changed if and only if all pixel components are banded, e.g. the banding detection threshold is triggered for all color components. The default is disabled.

Commands

This filter supports all the above options as commands.
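
For example, to deband all planes with a slightly raised detection threshold and an exact 8 pixel range (a sketch assuming the usual range option name; values are illustrative):

deband=1thr=0.04:2thr=0.04:3thr=0.04:range=-8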

Remove blocking artifacts from input video.

The filter accepts the following options:

Set filter type, can be weak or strong. Default is strong. This controls what kind of deblocking is applied.
Set size of block, allowed range is from 4 to 512. Default is 8.
Set blocking detection thresholds. Allowed range is 0 to 1. Defaults are: 0.098 for alpha and 0.05 for the rest. Using a higher threshold gives more deblocking strength. Setting alpha controls threshold detection at the exact edge of the block. The remaining options control threshold detection near the edge, each one for below/above or left/right. Setting any of those to 0 disables deblocking.
Set planes to filter. Default is to filter all available planes.

Examples

  • Deblock using weak filter and block size of 4 pixels.
    deblock=filter=weak:block=4
    
  • Deblock using strong filter, block size of 4 pixels and custom thresholds for deblocking more edges.
    deblock=filter=strong:block=4:alpha=0.12:beta=0.07:gamma=0.06:delta=0.05
    
  • Similar as above, but filter only first plane.
    deblock=filter=strong:block=4:alpha=0.12:beta=0.07:gamma=0.06:delta=0.05:planes=1
    
  • Similar as above, but filter only second and third plane.
    deblock=filter=strong:block=4:alpha=0.12:beta=0.07:gamma=0.06:delta=0.05:planes=6
    

Commands

This filter supports all the above options as commands.

Drop duplicated frames at regular intervals.

The filter accepts the following options:

Set the number of frames from which one will be dropped. Setting this to N means one frame in every batch of N frames will be dropped. Default is 5.
Set the threshold for duplicate detection. If the difference metric for a frame is less than or equal to this value, then it is declared as duplicate. Default is 1.1.
Set scene change threshold. Default is 15.
Set the size of the x and y-axis blocks used during metric calculations. Larger blocks give better noise suppression, but also give worse detection of small movements. Must be a power of two. Default is 32.
Mark main input as a pre-processed input and activate clean source input stream. This allows the input to be pre-processed with various filters to help the metrics calculation while keeping the frame selection lossless. When set to 1, the first stream is for the pre-processed input, and the second stream is the clean source from where the kept frames are chosen. Default is 0.
Set whether or not chroma is considered in the metric calculations. Default is 1.
Set whether or not the input only partially contains content to be decimated. Default is "false". If enabled, the video output stream will have a variable frame rate.
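
For example, a common inverse-telecine chain matches fields first and then drops the duplicated frame from each cycle of 5 (a sketch; pairing with the fieldmatch filter is a typical usage, not a requirement):

ffmpeg -i telecined.mp4 -vf fieldmatch,decimate=cycle=5 output.mkv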

Apply 2D deconvolution of video stream in frequency domain using second stream as impulse.

The filter accepts the following options:

Set which planes to process.
Set which impulse video frames will be processed, can be first or all. Default is all.
noise
Set noise when doing divisions. Default is 0.0000001. Useful when width and height are not the same and not a power of 2, or if the stream prior to convolving had noise.

The "deconvolve" filter also supports the framesync options.

Reduce cross-luminance (dot-crawl) and cross-color (rainbows) from video.

It accepts the following options:

Set mode of operation. Can be a combination of dotcrawl for cross-luminance reduction and/or rainbows for cross-color reduction.
Set spatial luma threshold. Lower values increase reduction of cross-luminance.
Set tolerance for temporal luma. Higher values increase reduction of cross-luminance.
Set tolerance for chroma temporal variation. Higher values increase reduction of cross-color.
Set temporal chroma threshold. Lower values increase reduction of cross-color.
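
For example, to reduce both cross-luminance and cross-color artifacts (a sketch assuming the mode option is named m):

dedot=m=dotcrawl+rainbows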

Apply deflate effect to the video.

This filter replaces the pixel by the local (3x3) average, taking into account only values lower than the pixel.

It accepts the following options:

Limit the maximum change for each plane, default is 65535. If 0, plane will remain unchanged.

Commands

This filter supports all the above options as commands.

Remove temporal frame luminance variations.

It accepts the following options:

Set moving-average filter size in frames. Default is 5. Allowed range is 2 - 129.
Set averaging mode to smooth temporal luminance variations.

Available values are:

Arithmetic mean
Geometric mean
Harmonic mean
Quadratic mean
Cubic mean
Power mean
median
Median
Do not actually modify frame. Useful when one only wants metadata.
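
For example, to smooth luminance over a window of 10 frames using the median (a sketch assuming the usual size and mode option names):

deflicker=size=10:mode=median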

Remove judder produced by partially interlaced telecined content.

Judder can be introduced, for instance, by the pullup filter. If the original source was partially telecined content then the output of "pullup,dejudder" will have a variable frame rate. This may change the recorded frame rate of the container. Aside from that change, this filter will not affect constant frame rate video.

The option available in this filter is:

Specify the length of the window over which the judder repeats.

Accepts any integer greater than 1. Useful values are:

4
If the original was telecined from 24 to 30 fps (Film to NTSC).
5
If the original was telecined from 25 to 30 fps (PAL to NTSC).
20
If a mixture of the two.

The default is 4.
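
For example, to remove judder from partially telecined 24-to-30 fps content (a sketch using the default cycle of 4):

ffmpeg -i input.mp4 -vf pullup,dejudder output.mkv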

Suppress a TV station logo by a simple interpolation of the surrounding pixels. Just set a rectangle covering the logo and watch it disappear (and sometimes something even uglier appear - your mileage may vary).

It accepts the following parameters:

Specify the top left corner coordinates of the logo. They must be specified.
Specify the width and height of the logo to clear. They must be specified.
When set to 1, a green rectangle is drawn on the screen to simplify finding the right x, y, w, and h parameters. The default value is 0.

The rectangle is drawn on the outermost pixels which will be (partly) replaced with interpolated values. The values of the next pixels immediately outside this rectangle in each direction will be used to compute the interpolated pixel values inside the rectangle.

Examples

Set a rectangle covering the area with top left corner coordinates 0,0 and size 100x77:
delogo=x=0:y=0:w=100:h=77

Remove the rain in the input image/video by applying the derain methods based on convolutional neural networks. Supported models:

Recurrent Squeeze-and-Excitation Context Aggregation Net (RESCAN). See http://openaccess.thecvf.com/content_ECCV_2018/papers/Xia_Li_Recurrent_Squeeze-and-Excitation_Context_ECCV_2018_paper.pdf.

Training as well as model generation scripts are provided in the repository at https://github.com/XueweiMeng/derain_filter.git.

The filter accepts the following options:

Specify which filter to use. This option accepts the following values:
derain
Derain filter. To use the derain filter, you need a derain model.
Dehaze filter. To use the dehaze filter, you need a dehaze model.

Default value is derain.

Specify which DNN backend to use for model loading and execution. This option accepts the following values:
TensorFlow backend. To enable this backend you need to install the TensorFlow for C library (see https://www.tensorflow.org/install/lang_c) and configure FFmpeg with "--enable-libtensorflow"
Set path to model file specifying network architecture and its parameters. Note that different backends use different file formats. TensorFlow can only load files in its own format.

To get full functionality (such as async execution), please use the dnn_processing filter.
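
For example, a sketch invoking the TensorFlow backend with a RESCAN-style model generated by the scripts from the repository above (the model file name is hypothetical):

derain=filter_type=derain:dnn_backend=tensorflow:model=rescan.pb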

Attempt to fix small changes in horizontal and/or vertical shift. This filter helps remove camera shake from hand-holding a camera, bumping a tripod, moving on a vehicle, etc.

The filter accepts the following options:

Specify a rectangular area in which to limit the search for motion vectors. If desired, the search for motion vectors can be limited to a rectangular area of the frame defined by its top left corner, width and height. These parameters have the same meaning as the drawbox filter which can be used to visualise the position of the bounding box.

This is useful when simultaneous movement of subjects within the frame might be confused for camera motion by the motion vector search.

If any or all of x, y, w and h are set to -1 then the full frame is used. This allows later options to be set without specifying the bounding box for the motion vector search.

Default - search the whole frame.

Specify the maximum extent of movement in x and y directions in the range 0-64 pixels. Default 16.
Specify how to generate pixels to fill blanks at the edge of the frame. Available values are:
Fill zeroes at blank locations
Original image at blank locations
Extruded edge value at blank locations
Mirrored edge at blank locations

Default value is mirror.

Specify the blocksize to use for motion search. Range 4-128 pixels, default 8.
Specify the contrast threshold for blocks. Only blocks with more than the specified contrast (difference between darkest and lightest pixels) will be considered. Range 1-255, default 125.
Specify the search strategy. Available values are:
Set exhaustive search
Set less exhaustive search.

Default value is exhaustive.

If set then a detailed log of the motion search is written to the specified file.
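
For example, to stabilize using the full frame with a larger search extent and mirrored edges (a sketch assuming the usual rx, ry and edge option names; values are illustrative):

deshake=rx=32:ry=32:edge=mirror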

Remove unwanted contamination of foreground colors, caused by reflected color of greenscreen or bluescreen.

This filter accepts the following options:

Set what type of despill to use.
mix
Set how spillmap will be generated.
Set how much of the remaining spill to remove.
Controls amount of red in spill area.
Controls amount of green in spill area. Should be -1 for greenscreen.
Controls amount of blue in spill area. Should be -1 for bluescreen.
Controls brightness of spill area, preserving colors.
Modify alpha from generated spillmap.

Commands

This filter supports all the above options as commands.
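
For example, to reduce green spill from a greenscreen shot (a sketch assuming the usual type, mix and expand option names; values are illustrative):

despill=type=green:mix=0.6:expand=0.3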

Apply an exact inverse of the telecine operation. It requires a predefined pattern specified using the pattern option which must be the same as that passed to the telecine filter.

This filter accepts the following options:

top field first
bottom field first

The default value is "top".
A string of numbers representing the pulldown pattern you wish to apply. The default value is 23.
A number representing position of the first frame with respect to the telecine pattern. This is to be used if the stream is cut. The default value is 0.
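
For example, to undo a standard 2:3 pulldown applied earlier with "telecine=pattern=23":

detelecine=pattern=23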

Apply dilation effect to the video.

This filter replaces the pixel by the local (3x3) maximum.

It accepts the following options:

Limit the maximum change for each plane, default is 65535. If 0, plane will remain unchanged.
Flag which specifies the pixel to refer to. Default is 255 i.e. all eight pixels are used.

The flags map to the local 3x3 coordinates as follows:

1 2 3
4   5
6 7 8

Commands

This filter supports all the above options as commands.
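
A minimal sketch, assuming the flag and per-plane limit options described above are named coordinates and threshold0:

ffmpeg -i INPUT -vf dilation=coordinates=255:threshold0=30 OUTPUT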

Displace pixels as indicated by second and third input stream.

It takes three input streams and outputs one stream: the first input is the source, and the second and third inputs are displacement maps.

The second input specifies how much to displace pixels along the x-axis, while the third input specifies how much to displace pixels along the y-axis. If one of the displacement map streams terminates, the last frame from that map will be used.

Note that once generated, displacement maps can be reused over and over again.

A description of the accepted options follows.

Set displace behavior for pixels that are out of range.

Available values are:

Missing pixels are replaced by black pixels.
Adjacent pixels will spread out to replace missing pixels.
Out of range pixels are wrapped so they point to pixels of other side.
Out of range pixels will be replaced with mirrored pixels.

Default is smear.

Examples

  • Add ripple effect to rgb input of video size hd720:
    ffmpeg -i INPUT -f lavfi -i nullsrc=s=hd720,lutrgb=128:128:128 -f lavfi -i nullsrc=s=hd720,geq='r=128+30*sin(2*PI*X/400+T):g=128+30*sin(2*PI*X/400+T):b=128+30*sin(2*PI*X/400+T)' -lavfi '[0][1][2]displace' OUTPUT
    
  • Add wave effect to rgb input of video size hd720:
    ffmpeg -i INPUT -f lavfi -i nullsrc=hd720,geq='r=128+80*(sin(sqrt((X-W/2)*(X-W/2)+(Y-H/2)*(Y-H/2))/220*2*PI+T)):g=128+80*(sin(sqrt((X-W/2)*(X-W/2)+(Y-H/2)*(Y-H/2))/220*2*PI+T)):b=128+80*(sin(sqrt((X-W/2)*(X-W/2)+(Y-H/2)*(Y-H/2))/220*2*PI+T))' -lavfi '[1]split[x][y],[0][x][y]displace' OUTPUT
    

Do classification with deep neural networks based on bounding boxes.

The filter accepts the following options:

Specify which DNN backend to use for model loading and execution. Currently this option accepts only openvino; tensorflow backend support will be added.
Set path to model file specifying network architecture and its parameters. Note that different backends use different file formats.
Set the input name of the dnn network.
Set the output name of the dnn network.
Set the confidence threshold (default: 0.5).
Set path to label file specifying the mapping between label id and name. Each label name is written on one line; trailing spaces and empty lines are skipped. The first line is the name of label id 0, the second line is the name of label id 1, and so on. If the label file is not provided, the label id is used as the name.
Set the configs to be passed into the backend.

For the tensorflow backend, you can set its configs with the sess_config option; use tools/python/tf_sess_config.py to get the configs for your system.
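
A hedged sketch of an invocation; model.xml and labels.txt are hypothetical file names for an OpenVINO classification model and its label list, and the option names follow the descriptions above:

ffmpeg -i INPUT -vf dnn_classify=dnn_backend=openvino:model=model.xml:input=data:output=prob:confidence=0.6:labels=labels.txt -f null -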

Do object detection with deep neural networks.

The filter accepts the following options:

Specify which DNN backend to use for model loading and execution. Currently this option accepts only openvino; tensorflow backend support will be added.
Set path to model file specifying network architecture and its parameters. Note that different backends use different file formats.
Set the input name of the dnn network.
Set the output name of the dnn network.
Set the confidence threshold (default: 0.5).
Set path to label file specifying the mapping between label id and name. Each label name is written on one line; trailing spaces and empty lines are skipped. The first line is the name of label id 0 (usually it is 'background'), the second line is the name of label id 1, and so on. If the label file is not provided, the label id is used as the name.
Set the configs to be passed into the backend. To use async execution, set async (enabled by default). The filter falls back to sync execution if the backend does not support async.
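
A sketch combining detection with the drawbox filter's box_source option to visualise the resulting bounding boxes; detect.xml and labels.txt are hypothetical model and label files:

ffmpeg -i INPUT -vf dnn_detect=dnn_backend=openvino:model=detect.xml:input=data:output=detection_out:confidence=0.6:labels=labels.txt,drawbox=box_source=side_data_detection_bboxes OUTPUT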

Do image processing with deep neural networks. It works together with another filter which converts the pixel format of the frame to the one the dnn network requires.

The filter accepts the following options:

Specify which DNN backend to use for model loading and execution. This option accepts the following values:
TensorFlow backend. To enable this backend you need to install the TensorFlow for C library (see https://www.tensorflow.org/install/lang_c) and configure FFmpeg with "--enable-libtensorflow"
OpenVINO backend. To enable this backend you need to build and install the OpenVINO for C library (see https://github.com/openvinotoolkit/openvino/blob/master/build-instruction.md) and configure FFmpeg with "--enable-libopenvino" (--extra-cflags=-I... --extra-ldflags=-L... might be needed if the header files and libraries are not installed into system path)
Libtorch backend. To enable this backend you need to build and install the LibTorch C++ library. Please download the cxx11 ABI version (see https://pytorch.org/get-started/locally) and configure FFmpeg with "--enable-libtorch --extra-cflags=-I/libtorch_root/libtorch/include --extra-cflags=-I/libtorch_root/libtorch/include/torch/csrc/api/include --extra-ldflags=-L/libtorch_root/libtorch/lib/"
Set path to model file specifying network architecture and its parameters. Note that different backends use different file formats; the TensorFlow, OpenVINO and Libtorch backends can each load only files in their own format.
Set the input name of the dnn network.
Set the output name of the dnn network.
Set the configs to be passed into the backend. To use async execution, set async (enabled by default). The filter falls back to sync execution if the backend does not support async.

For the tensorflow backend, you can set its configs with the sess_config option; use tools/python/tf_sess_config.py to get the configs of the TensorFlow backend for your system.

Examples

  • Remove rain in rgb24 frame with can.pb (see derain filter):
    ./ffmpeg -i rain.jpg -vf format=rgb24,dnn_processing=dnn_backend=tensorflow:model=can.pb:input=x:output=y derain.jpg
    
  • Handle the Y channel with srcnn.pb (see sr filter) for frame with yuv420p (planar YUV formats supported):
    ./ffmpeg -i 480p.jpg -vf format=yuv420p,scale=w=iw*2:h=ih*2,dnn_processing=dnn_backend=tensorflow:model=srcnn.pb:input=x:output=y -y srcnn.jpg
    
  • Handle the Y channel with espcn.pb (see sr filter), which changes frame size, for format yuv420p (planar YUV formats supported). Use tools/python/tf_sess_config.py to get the configs of the TensorFlow backend for your system.
    ./ffmpeg -i 480p.jpg -vf format=yuv420p,dnn_processing=dnn_backend=tensorflow:model=espcn.pb:input=x:output=y:backend_configs=sess_config=0x10022805320e09cdccccccccccec3f20012a01303801 -y tmp.espcn.jpg
    

Draw a colored box on the input image.

It accepts the following parameters:

The expressions which specify the top left corner coordinates of the box. It defaults to 0.
The expressions which specify the width and height of the box; if 0 they are interpreted as the input width and height. It defaults to 0.
Specify the color of the box to write. For the general syntax of this option, check the "Color" section in the ffmpeg-utils manual. If the special value "invert" is used, the box edge color is the same as the video with inverted luma.
The expression which sets the thickness of the box edge. A value of "fill" will create a filled box. Default value is 3.

See below for the list of accepted constants.

Applicable if the input has alpha. With value 1, the pixels of the painted box will overwrite the video's color and alpha pixels. Default is 0, which composites the box onto the input, leaving the video's alpha intact.

The parameters for x, y, w and h and t are expressions containing the following constants:

The input display aspect ratio, it is the same as (w / h) * sar.
horizontal and vertical chroma subsample values. For example for the pixel format "yuv422p" hsub is 2 and vsub is 1.
The input width and height.
The input sample aspect ratio.
The x and y offset coordinates where the box is drawn.
The width and height of the drawn box.
Set box_source to side_data_detection_bboxes if you want to use the box data from the detection bounding boxes in side data.

If box_source is set, the x, y, width and height options are ignored and the box data from the detection bounding boxes in side data is used instead. Do not use this parameter unless you are sure about the box source.

The thickness of the drawn box.

These constants allow the x, y, w, h and t expressions to refer to each other, so you may for example specify "y=x/dar" or "h=w/dar".

Examples

  • Draw a black box around the edge of the input image:
    drawbox
    
  • Draw a box with color red and an opacity of 50%:
    drawbox=10:20:200:60:red@0.5
    

    The previous example can be specified as:

    drawbox=x=10:y=20:w=200:h=60:color=red@0.5
    
  • Fill the box with pink color:
    drawbox=x=10:y=10:w=100:h=100:color=pink@0.5:t=fill
    
  • Draw a 2-pixel red 2.40:1 mask:
    drawbox=x=-t:y=0.5*(ih-iw/2.4)-t:w=iw+t*2:h=iw/2.4+t*2:t=2:c=red
    

Commands

This filter supports the same commands as options. The command accepts the same syntax as the corresponding option.

If the specified expression is not valid, it is kept at its current value.

Draw a graph using input video metadata.

It accepts the following parameters:

Set 1st frame metadata key from which metadata values will be used to draw a graph.
Set 1st foreground color expression.
Set 2nd frame metadata key from which metadata values will be used to draw a graph.
Set 2nd foreground color expression.
Set 3rd frame metadata key from which metadata values will be used to draw a graph.
Set 3rd foreground color expression.
Set 4th frame metadata key from which metadata values will be used to draw a graph.
Set 4th foreground color expression.
Set minimal value of metadata value.
Set maximal value of metadata value.
Set graph background color. Default is white.
Set graph mode.

Available values for mode are:

Default is "line".

Set slide mode.

Available values for slide are:

Draw new frame when right border is reached.
Replace old columns with new ones.
scroll
Scroll from right to left.
Scroll from left to right.
Draw single picture.

Default is "frame".

Set size of graph video. For the syntax of this option, check the "Video size" section in the ffmpeg-utils manual. The default value is "900x256".
Set the output frame rate. Default value is 25.

The foreground color expressions can use the following variables:

Minimal value of metadata value.
Maximal value of metadata value.
Current metadata key value.

The color is defined as 0xAABBGGRR.

Example using metadata from signalstats filter:

signalstats,drawgraph=lavfi.signalstats.YAVG:min=0:max=255

Example using metadata from ebur128 filter:

ebur128=metadata=1,adrawgraph=lavfi.r128.M:min=-120:max=5

Draw a grid on the input image.

It accepts the following parameters:

The expressions which specify the coordinates of some point of grid intersection (meant to configure offset). Both default to 0.
The expressions which specify the width and height of the grid cell; if 0 they are interpreted as the input width and height, respectively, minus "thickness", so the image gets framed. Both default to 0.
Specify the color of the grid. For the general syntax of this option, check the "Color" section in the ffmpeg-utils manual. If the special value "invert" is used, the grid color is the same as the video with inverted luma.
The expression which sets the thickness of the grid line. Default value is 1.

See below for the list of accepted constants.

Applicable if the input has alpha. With 1 the pixels of the painted grid will overwrite the video's color and alpha pixels. Default is 0, which composites the grid onto the input, leaving the video's alpha intact.

The parameters for x, y, w and h and t are expressions containing the following constants:

The input display aspect ratio, it is the same as (w / h) * sar.
horizontal and vertical chroma subsample values. For example for the pixel format "yuv422p" hsub is 2 and vsub is 1.
The input grid cell width and height.
The input sample aspect ratio.
The x and y coordinates of some point of grid intersection (meant to configure offset).
The width and height of the drawn cell.
The thickness of the drawn cell.

These constants allow the x, y, w, h and t expressions to refer to each other, so you may for example specify "y=x/dar" or "h=w/dar".

Examples

  • Draw a grid with cell 100x100 pixels, thickness 2 pixels, with color red and an opacity of 50%:
    drawgrid=width=100:height=100:thickness=2:color=red@0.5
    
  • Draw a white 3x3 grid with an opacity of 50%:
    drawgrid=w=iw/3:h=ih/3:t=2:c=white@0.5
    

Commands

This filter supports the same commands as options. The command accepts the same syntax as the corresponding option.

If the specified expression is not valid, it is kept at its current value.

Draw a text string or text from a specified file on top of a video, using the libfreetype library.

To enable compilation of this filter, you need to configure FFmpeg with "--enable-libfreetype" and "--enable-libharfbuzz". To enable default font fallback and the font option you need to configure FFmpeg with "--enable-libfontconfig". To enable the text_shaping option, you need to configure FFmpeg with "--enable-libfribidi".

Syntax

It accepts the following parameters:

Used to draw a box around text using the background color. The value must be either 1 (enable) or 0 (disable). The default value of box is 0.
Set the width of the border to be drawn around the box using boxcolor. The value must be specified using one of the following formats:
*<"boxborderw=10" set the width of all the borders to 10>
*<"boxborderw=10|20" set the width of the top and bottom borders to 10>
and the width of the left and right borders to 20
*<"boxborderw=10|20|30" set the width of the top border to 10, the width>
of the bottom border to 30 and the width of the left and right borders to 20
*<"boxborderw=10|20|30|40" set the borders width to 10 (top), 20 (right),>
30 (bottom), 40 (left)

The default value of boxborderw is "0".

The color to be used for drawing box around text. For the syntax of this option, check the "Color" section in the ffmpeg-utils manual.

The default value of boxcolor is "white".

Set the line spacing in pixels. The default value of line_spacing is 0.
Set the vertical and horizontal alignment of the text with respect to the box boundaries. The value is combination of flags, one for the vertical alignment (T=top, M=middle, B=bottom) and one for the horizontal alignment (L=left, C=center, R=right). Please note that tab characters are only supported with the left horizontal alignment.
Specify what the y value is referred to. Possible values are:
*<"text" the top of the highest glyph of the first text line is placed at y>
*<"baseline" the baseline of the first text line is placed at y>
*<"font" the baseline of the first text line is placed at y plus the>
ascent (in pixels) defined in the font metrics

The default value of y_align is "text" for backward compatibility.

Set the width of the border to be drawn around the text using bordercolor. The default value of borderw is 0.
Set the color to be used for drawing border around text. For the syntax of this option, check the "Color" section in the ffmpeg-utils manual.

The default value of bordercolor is "black".

Select how the text is expanded. Can be either "none", "strftime" (deprecated) or "normal" (default). See the drawtext_expansion, Text expansion section below for details.
Set a start time for the count. Value is in microseconds. Only applied in the deprecated "strftime" expansion mode. To emulate in normal expansion mode use the "pts" function, supplying the start time (in seconds) as the second argument.
If true, check and fix text coordinates to avoid clipping.
The color to be used for drawing fonts. For the syntax of this option, check the "Color" section in the ffmpeg-utils manual.

The default value of fontcolor is "black".

String which is expanded the same way as text to obtain a dynamic fontcolor value. By default this option has an empty value and is not processed. When this option is set, it overrides the fontcolor option.
The font family to be used for drawing text. By default Sans.
The font file to be used for drawing text. The path must be included. This parameter is mandatory if the fontconfig support is disabled.
Draw the text applying alpha blending. The value can be a number between 0.0 and 1.0. The expression accepts the same variables x, y as well. The default value is 1. Please see fontcolor_expr.
The font size to be used for drawing text. The default value of fontsize is 16.
If set to 1, attempt to shape the text (for example, reverse the order of right-to-left text and join Arabic characters) before drawing it. Otherwise, just draw the text exactly as given. By default 1 (if supported).
The flags to be used for loading the fonts.

The flags map the corresponding flags supported by libfreetype, and are a combination of the following values:

Default value is "default".

For more information consult the documentation for the FT_LOAD_* libfreetype flags.

The color to be used for drawing a shadow behind the drawn text. For the syntax of this option, check the "Color" section in the ffmpeg-utils manual.

The default value of shadowcolor is "black".

Set the width of the box to be drawn around text. The default value of boxw is computed automatically to match the text width.
Set the height of the box to be drawn around text. The default value of boxh is computed automatically to match the text height.
The x and y offsets for the text shadow position with respect to the position of the text. They can be either positive or negative values. The default value for both is "0".
The starting frame number for the n/frame_num variable. The default value is "0".
The size in number of spaces to use for rendering the tab. Default value is 4.
Set the initial timecode representation in "hh:mm:ss[:;.]ff" format. It can be used with or without text parameter. timecode_rate option must be specified.
Set the timecode frame rate (timecode only). Value will be rounded to nearest integer. Minimum value is "1". Drop-frame timecode is supported for frame rates 30 & 60.
If set to 1, the output of the timecode option will wrap around at 24 hours. Default is 0 (disabled).
The text string to be drawn. The text must be a sequence of UTF-8 encoded characters. This parameter is mandatory if no file is specified with the parameter textfile.
A text file containing text to be drawn. The text must be a sequence of UTF-8 encoded characters.

This parameter is mandatory if no text string is specified with the parameter text.

If both text and textfile are specified, an error is thrown.

Set text_source to side_data_detection_bboxes if you want to use the text data from the detection bounding boxes in side data.

If text source is set, text and textfile are ignored and the text data from the detection bounding boxes in side data is used instead. Do not use this parameter unless you are sure about the text source.

The textfile will be reloaded at specified frame interval. Be sure to update textfile atomically, or it may be read partially, or even fail. Range is 0 to INT_MAX. Default is 0.
The expressions which specify the offsets where text will be drawn within the video frame. They are relative to the top/left border of the output image.

The default value of x and y is "0".

See below for the list of accepted constants and functions.

The parameters for x and y are expressions containing the following constants and functions:

input display aspect ratio, it is the same as (w / h) * sar
horizontal and vertical chroma subsample values. For example for the pixel format "yuv422p" hsub is 2 and vsub is 1.
the height of each text line
the input height
the input width
the maximum distance from the baseline to the highest/upper grid coordinate used to place a glyph outline point, for all the rendered glyphs. It is a positive value, due to the grid's orientation with the Y axis upwards.
the maximum distance from the baseline to the lowest grid coordinate used to place a glyph outline point, for all the rendered glyphs. This is a negative value, due to the grid's orientation, with the Y axis upwards.
maximum glyph height, that is the maximum height for all the glyphs contained in the rendered text, it is equivalent to ascent - descent.
maximum glyph width, that is the maximum width for all the glyphs contained in the rendered text
the ascent size defined in the font metrics
the descent size defined in the font metrics
the maximum ascender of the glyphs of the first text line
the maximum descender of the glyphs of the last text line
the number of the input frame, starting from 0
return a random number included between min and max
The input sample aspect ratio.
timestamp expressed in seconds, NAN if the input timestamp is unknown
the height of the rendered text
the width of the rendered text
the x and y offset coordinates where the text is drawn.

These parameters allow the x and y expressions to refer to each other, so you can for example specify "y=x/dar".

A one character description of the current frame's picture type.
The current packet's position in the input file or stream (in bytes, from the start of the input). A value of -1 indicates this info is not available.
The current packet's duration, in seconds.
The current packet's size (in bytes).

Text expansion

If expansion is set to "strftime", the filter recognizes sequences accepted by the "strftime" C function in the provided text and expands them accordingly. Check the documentation of "strftime". This feature is deprecated in favor of "normal" expansion with the "gmtime" or "localtime" expansion functions.

If expansion is set to "none", the text is printed verbatim.

If expansion is set to "normal" (which is the default), the following expansion mechanism is used.

The backslash character \, followed by any character, always expands to the second character.

Sequences of the form "%{...}" are expanded. The text between the braces is a function name, possibly followed by arguments separated by ':'. If the arguments contain special characters or delimiters (':' or '}'), they should be escaped.

Note that they probably must also be escaped as the value for the text option in the filter argument string and as the filter argument in the filtergraph description, and possibly also for the shell, that makes up to four levels of escaping; using a text file with the textfile option avoids these problems.

The following functions are available:

The expression evaluation result.

It must take one argument specifying the expression to be evaluated, which accepts the same constants and functions as the x and y values. Note that not all constants should be used, for example the text size is not known when evaluating the expression, so the constants text_w and text_h will have an undefined value.

Evaluate the expression's value and output it as a formatted integer.

The first argument is the expression to be evaluated, just as for the expr function. The second argument specifies the output format. Allowed values are x, X, d and u. They are treated exactly as in the "printf" function. The third parameter is optional and sets the number of positions taken by the output. It can be used to add padding with zeros from the left.

The time at which the filter is running, expressed in UTC. It can accept an argument: a "strftime" C function format string. The format string is extended to support the variable %[1-6]N which prints fractions of the second with optionally specified number of digits.
The time at which the filter is running, expressed in the local time zone. It can accept an argument: a "strftime" C function format string. The format string is extended to support the variable %[1-6]N which prints fractions of the second with optionally specified number of digits.
Frame metadata. Takes one or two arguments.

The first argument is mandatory and specifies the metadata key.

The second argument is optional and specifies a default value, used when the metadata key is not found or empty.

Available metadata can be identified by inspecting entries starting with TAG included within each frame section printed by running "ffprobe -show_frames".

String metadata generated in filters leading to the drawtext filter are also available.

The frame number, starting from 0.
A one character description of the current picture type.
The timestamp of the current frame. It can take up to three arguments.

The first argument is the format of the timestamp; it defaults to "flt" for seconds as a decimal number with microsecond accuracy; "hms" stands for a formatted [-]HH:MM:SS.mmm timestamp with millisecond accuracy. "gmtime" stands for the timestamp of the frame formatted as UTC time; "localtime" stands for the timestamp of the frame formatted as local time zone time.

The second argument is an offset added to the timestamp.

If the format is set to "hms", a third argument "24HH" may be supplied to present the hour part of the formatted timestamp in 24h format (00-23).

If the format is set to "localtime" or "gmtime", a third argument may be supplied: a "strftime" C function format string. By default, YYYY-MM-DD HH:MM:SS format will be used.

Commands

This filter supports altering parameters via commands:

Alter existing filter parameters.

Syntax for the argument is the same as for filter invocation, e.g.

fontsize=56:fontcolor=green:text='Hello World'

Full filter invocation with sendcmd would look like this:

sendcmd=c='56.0 drawtext reinit fontsize=56\:fontcolor=green\:text=Hello\\ World'

If the entire argument can't be parsed or applied as valid values then the filter will continue with its existing parameters.

The following options are also supported as commands:

*<x>
*<y>
*<alpha>
*<fontsize>
*<fontcolor>
*<boxcolor>
*<bordercolor>
*<shadowcolor>
*<box>
*<boxw>
*<boxh>
*<boxborderw>
*<line_spacing>
*<text_align>
*<shadowx>
*<shadowy>
*<borderw>

Examples

  • Draw "Test Text" with font FreeSerif, using the default values for the optional parameters.
    drawtext="fontfile=/usr/share/fonts/truetype/freefont/FreeSerif.ttf: text='Test Text'"
    
  • Draw 'Test Text' with font FreeSerif of size 24 at position x=100 and y=50 (counting from the top-left corner of the screen), text is yellow with a red box around it. Both the text and the box have an opacity of 20%.
    drawtext="fontfile=/usr/share/fonts/truetype/freefont/FreeSerif.ttf: text='Test Text':\
              x=100: y=50: fontsize=24: fontcolor=yellow@0.2: box=1: boxcolor=red@0.2"
    

    Note that the double quotes are not necessary if spaces are not used within the parameter list.

  • Show the text at the center of the video frame:
    drawtext="fontsize=30:fontfile=FreeSerif.ttf:text='hello world':x=(w-text_w)/2:y=(h-text_h)/2"
    
  • Show the text at a random position, switching to a new position every 30 seconds:
    drawtext="fontsize=30:fontfile=FreeSerif.ttf:text='hello world':x=if(eq(mod(t\,30)\,0)\,rand(0\,(w-text_w))\,x):y=if(eq(mod(t\,30)\,0)\,rand(0\,(h-text_h))\,y)"
    
  • Show a text line sliding from right to left in the last row of the video frame. The file LONG_LINE is assumed to contain a single line with no newlines.
    drawtext="fontsize=15:fontfile=FreeSerif.ttf:text=LONG_LINE:y=h-line_h:x=-50*t"
    
  • Show the content of file CREDITS off the bottom of the frame and scroll up.
    drawtext="fontsize=20:fontfile=FreeSerif.ttf:textfile=CREDITS:y=h-20*t"
    
  • Draw a single green letter "g", at the center of the input video. The glyph baseline is placed at half screen height.
    drawtext="fontsize=60:fontfile=FreeSerif.ttf:fontcolor=green:text=g:x=(w-max_glyph_w)/2:y=h/2-ascent"
    
  • Show text for 1 second every 3 seconds:
    drawtext="fontfile=FreeSerif.ttf:fontcolor=white:x=100:y=x/dar:enable=lt(mod(t\,3)\,1):text='blink'"
    
  • Use fontconfig to set the font. Note that the colons need to be escaped.
    drawtext='fontfile=Linux Libertine O-40\\:style=Semibold:text=FFmpeg'
    
  • Draw "Test Text" with font size dependent on height of the video.
    drawtext="text='Test Text': fontsize=h/30: x=(w-text_w)/2: y=(h-text_h*2)"
    
  • Print the date of a real-time encoding (see documentation for the "strftime" C function):
    drawtext='fontfile=FreeSans.ttf:text=%{localtime\:%a %b %d %Y}'
    
  • Show text fading in and out (appearing/disappearing):
    #!/bin/sh
    DS=1.0 # display start
    DE=10.0 # display end
    FID=1.5 # fade in duration
    FOD=5 # fade out duration
    ffplay -f lavfi "color,drawtext=text=TEST:fontsize=50:fontfile=FreeSerif.ttf:fontcolor_expr=ff0000%{eif\\\\: clip(255*(1*between(t\\, $DS + $FID\\, $DE - $FOD) + ((t - $DS)/$FID)*between(t\\, $DS\\, $DS + $FID) + (-(t - $DE)/$FOD)*between(t\\, $DE - $FOD\\, $DE) )\\, 0\\, 255) \\\\: x\\\\: 2 }"
    
  • Horizontally align multiple separate texts. Note that max_glyph_a and the fontsize value are included in the y offset.
    drawtext=fontfile=FreeSans.ttf:text=DOG:fontsize=24:x=10:y=20+24-max_glyph_a,
    drawtext=fontfile=FreeSans.ttf:text=cow:fontsize=24:x=80:y=20+24-max_glyph_a
    
  • Plot special lavf.image2dec.source_basename metadata onto each frame if such metadata exists. Otherwise, plot the string "NA". Note that image2 demuxer must have option -export_path_metadata 1 for the special metadata fields to be available for filters.
    drawtext="fontsize=20:fontcolor=white:fontfile=FreeSans.ttf:text='%{metadata\:lavf.image2dec.source_basename\:NA}':x=10:y=10"
    

For more information about libfreetype, check: http://www.freetype.org/.

For more information about fontconfig, check: http://freedesktop.org/software/fontconfig/fontconfig-user.html.

For more information about libfribidi, check: http://fribidi.org/.

For more information about libharfbuzz, check: https://github.com/harfbuzz/harfbuzz.

Detect and draw edges. The filter uses the Canny Edge Detection algorithm.

The filter accepts the following options:

Set low and high threshold values used by the Canny thresholding algorithm.

The high threshold selects the "strong" edge pixels, which are then connected through 8-connectivity with the "weak" edge pixels selected by the low threshold.

low and high threshold values must be chosen in the range [0,1], and low should be less than or equal to high.

Default value for low is "20/255", and default value for high is "50/255".

Define the drawing mode.
Draw white/gray wires on black background.
Mix the colors to create a paint/cartoon effect.
Apply Canny edge detector on all selected planes.

Default value is wires.

Select planes for filtering. By default all available planes are filtered.

Examples

  • Standard edge detection with custom values for the hysteresis thresholding:
    edgedetect=low=0.1:high=0.4
    
  • Painting effect without thresholding:
    edgedetect=mode=colormix:high=0
    

Apply a posterize effect using the ELBG (Enhanced LBG) algorithm.

For each input image, the filter will compute the optimal mapping from the input to the output given the codebook length, that is the number of distinct output colors.

This filter accepts the following options.

Set codebook length. The value must be a positive integer, and represents the number of distinct output colors. Default value is 256.
Set the maximum number of iterations to apply for computing the optimal mapping. The higher the value the better the result and the higher the computation time. Default value is 1.
Set a random seed, must be an integer included between 0 and UINT32_MAX. If not specified, or if explicitly set to -1, the filter will try to use a good random seed on a best effort basis.
Set pal8 output pixel format. This option does not work with codebook length greater than 256. Default is disabled.
Include alpha values in the quantization calculation. This allows creating palettized output images (e.g. PNG8) with multiple smoothly blended alpha levels.
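
For example, a sketch posterizing to 16 distinct colors with a few extra refinement iterations; the option names codebook_length and nb_steps are assumed from the descriptions above:

ffmpeg -i INPUT -vf elbg=codebook_length=16:nb_steps=4 OUTPUT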

Measure graylevel entropy in histogram of color channels of video frames.

It accepts the following parameters:

Can be either normal or diff. Default is normal.

diff mode measures entropy of histogram delta values, absolute differences between neighbour histogram values.
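
Assuming the measured values are exported as frame metadata (as with the signalstats filter), a sketch printing them with the metadata filter while discarding the video output:

ffmpeg -i INPUT -vf entropy=mode=diff,metadata=mode=print -f null -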

Apply the EPX magnification filter which is designed for pixel art.

It accepts the following option:

Set the scaling dimension: 2 for "2xEPX", 3 for "3xEPX". Default is 3.
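
For instance, a sketch upscaling pixel art by 2x; sprite.png and out.png are placeholder file names and the option name n is assumed from the description above:

ffmpeg -i sprite.png -vf epx=n=2 out.png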

Set brightness, contrast, saturation and approximate gamma adjustment.

The filter accepts the following options:

Set the contrast expression. The value must be a float value in range -1000.0 to 1000.0. The default value is "1".
Set the brightness expression. The value must be a float value in range -1.0 to 1.0. The default value is "0".
Set the saturation expression. The value must be a float in range 0.0 to 3.0. The default value is "1".
Set the gamma expression. The value must be a float in range 0.1 to 10.0. The default value is "1".
Set the gamma expression for red. The value must be a float in range 0.1 to 10.0. The default value is "1".
Set the gamma expression for green. The value must be a float in range 0.1 to 10.0. The default value is "1".
Set the gamma expression for blue. The value must be a float in range 0.1 to 10.0. The default value is "1".
Set the gamma weight expression. It can be used to reduce the effect of a high gamma value on bright image areas, e.g. keep them from getting overamplified and just plain white. The value must be a float in range 0.0 to 1.0. A value of 0.0 turns the gamma correction all the way down while 1.0 leaves it at its full strength. Default is "1".
Set when the expressions for brightness, contrast, saturation and gamma are evaluated.

It accepts the following values:

only evaluate expressions once during the filter initialization or when a command is processed
evaluate expressions for each incoming frame

Default value is init.

The expressions accept the following parameters:

frame count of the input frame starting from 0
byte position of the corresponding packet in the input file, NAN if unspecified; deprecated, do not use
frame rate of the input video, NAN if the input frame rate is unknown
timestamp expressed in seconds, NAN if the input timestamp is unknown
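
Putting the options together, a sketch that mildly boosts contrast and saturation and re-evaluates the expressions per frame; the option names match the descriptions above:

ffmpeg -i INPUT -vf eq=contrast=1.2:saturation=1.3:gamma=0.95:eval=frame OUTPUT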

Commands

The filter supports the following commands:

Set the contrast expression.
Set the brightness expression.
Set the saturation expression.
Set the gamma expression.
Set the gamma_r expression.
Set gamma_g expression.
Set gamma_b expression.
Set gamma_weight expression.

The command accepts the same syntax of the corresponding option.

If the specified expression is not valid, it is kept at its current value.

Apply erosion effect to the video.

This filter replaces the pixel by the local(3x3) minimum.

It accepts the following options:

Limit the maximum change for each plane, default is 65535. If 0, plane will remain unchanged.
Flag which specifies the pixel to refer to. Default is 255 i.e. all eight pixels are used.

The flags map to the local 3x3 coordinates as follows:

1 2 3
4   5
6 7 8

Commands

This filter supports all the above options as commands.

Deinterlace the input video ("estdif" stands for "Edge Slope Tracing Deinterlacing Filter").

A spatial-only filter that uses an edge slope tracing algorithm to interpolate missing lines. It accepts the following parameters:

The interlacing mode to adopt. It accepts one of the following values:
Output one frame for each frame.
field
Output one frame for each field.

The default value is "field".

The picture field parity assumed for the input interlaced video. It accepts one of the following values:
Assume the top field is first.
Assume the bottom field is first.
Enable automatic detection of field parity.

The default value is "auto". If the interlacing is unknown or the decoder does not export this information, top field first will be assumed.

Specify which frames to deinterlace. Accepts one of the following values:
Deinterlace all frames.
Only deinterlace frames marked as interlaced.

The default value is "all".

Specify the search radius for edge slope tracing. Default value is 1. Allowed range is from 1 to 15.
Specify the search radius for best edge matching. Default value is 2. Allowed range is from 0 to 15.
Specify the edge cost for edge matching. Default value is 2. Allowed range is from 0 to 50.
Specify the middle cost for edge matching. Default value is 1. Allowed range is from 0 to 50.
Specify the distance cost for edge matching. Default value is 1. Allowed range is from 0 to 50.
Specify the interpolation used. Default is 4-point interpolation. It accepts one of the following values:
2p
Two-point interpolation.
4p
Four-point interpolation.
6p
Six-point interpolation.
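
A minimal deinterlacing sketch; the option names mode, parity and interp are assumed from the descriptions above:

ffmpeg -i INPUT -vf estdif=mode=field:parity=auto:interp=6p OUTPUT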

Commands

This filter supports the same commands as options.

Adjust exposure of the video stream.

The filter accepts the following options:

exposure
Set the exposure correction in EV. Allowed range is from -3.0 to 3.0 EV. Default value is 0 EV.
Set the black level correction. Allowed range is from -1.0 to 1.0. Default value is 0.
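
For example, a sketch brightening by three quarters of a stop with a small black level lift, assuming the option names exposure and black:

ffmpeg -i INPUT -vf exposure=exposure=0.75:black=0.05 OUTPUT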

Commands

This filter supports the same commands as options.

Extract color channel components from input video stream into separate grayscale video streams.

The filter accepts the following option:

Set plane(s) to extract.

Available values for planes are:

Choosing planes not available in the input will result in an error. That means you cannot select the "r", "g", "b" planes together with the "y", "u", "v" planes at the same time.

Examples

Extract the luma, u and v color channel components from the input video frame into 3 grayscale outputs:
ffmpeg -i video.avi -filter_complex 'extractplanes=y+u+v[y][u][v]' -map '[y]' y.avi -map '[u]' u.avi -map '[v]' v.avi

Apply a fade-in/out effect to the input video.

It accepts the following parameters:

The effect type can be either "in" for a fade-in, or "out" for a fade-out effect. Default is "in".
Specify the number of the frame to start applying the fade effect at. Default is 0.
The number of frames that the fade effect lasts. At the end of the fade-in effect, the output video will have the same intensity as the input video. At the end of the fade-out transition, the output video will be filled with the selected color. Default is 25.
If set to 1, fade only alpha channel, if one exists on the input. Default value is 0.
Specify the timestamp (in seconds) of the frame to start to apply the fade effect. If both start_frame and start_time are specified, the fade will start at whichever comes last. Default is 0.
The number of seconds for which the fade effect has to last. At the end of the fade-in effect the output video will have the same intensity as the input video, at the end of the fade-out transition the output video will be filled with the selected color. If both duration and nb_frames are specified, duration is used. Default is 0 (nb_frames is used by default).
Specify the color of the fade. Default is "black".

Examples

  • Fade in the first 30 frames of video:
    fade=in:0:30
    

    The command above is equivalent to:

    fade=t=in:s=0:n=30
    
  • Fade out the last 45 frames of a 200-frame video:
    fade=out:155:45
    fade=type=out:start_frame=155:nb_frames=45
    
  • Fade in the first 25 frames and fade out the last 25 frames of a 1000-frame video:
    fade=in:0:25, fade=out:975:25
    
  • Make the first 5 frames yellow, then fade in from frame 5-24:
    fade=in:5:20:color=yellow
    
  • Fade in alpha over first 25 frames of video:
    fade=in:0:25:alpha=1
    
  • Make the first 5.5 seconds black, then fade in for 0.5 seconds:
    fade=t=in:st=5.5:d=0.5
    

Apply feedback video filter.

This filter passes cropped input frames to the 2nd output. From there they can be filtered with other video filters. After the filter receives a frame from the 2nd input, that frame is combined on top of the original frame from the 1st input and passed to the 1st output.

The typical usage is to filter only part of a frame.

The filter accepts the following options:

Set the top left crop position.
Set the crop size.

Examples

  • Blur only the top-left rectangular part of the video frame of size 100x100 with the gblur filter.
    [in][blurin]feedback=x=0:y=0:w=100:h=100[out][blurout];[blurout]gblur=8[blurin]
    
  • Draw a black box on the top-left part of the video frame of size 100x100 with the drawbox filter.
    [in][blurin]feedback=x=0:y=0:w=100:h=100[out][blurout];[blurout]drawbox=x=0:y=0:w=100:h=100:t=100[blurin]
    
  • Pixelize a rectangular part of the video frame of size 100x100 with the pixelize filter.
    [in][blurin]feedback=x=320:y=240:w=100:h=100[out][blurout];[blurout]pixelize[blurin]
    

Denoise frames using 3D FFT (frequency domain filtering).

The filter accepts the following options:

Set the noise sigma constant. This sets denoising strength. Default value is 1. Allowed range is from 0 to 30. Using very high sigma with low overlap may give blocking artifacts.
Set amount of denoising. By default all detected noise is reduced. Default value is 1. Allowed range is from 0 to 1.
Set size of block in pixels. Default is 32, can be 8 to 256.
Set block overlap. Default is 0.5. Allowed range is from 0.2 to 0.8.
Set denoising method. Default is "wiener", can also be "hard".
Set number of previous frames to use for denoising. By default it is set to 0.
Set number of next frames to use for denoising. By default it is set to 0.
Set which planes will be filtered. By default all available planes except alpha are filtered.
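
A sketch of moderate denoising with temporal support from one previous and one next frame; the option names sigma, prev and next are assumed from the descriptions above:

ffmpeg -i INPUT -vf fftdnoiz=sigma=5:prev=1:next=1 OUTPUT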

Apply arbitrary expressions to samples in the frequency domain.

Adjust the dc value (gain) of the luma plane of the image. The filter accepts an integer value in range 0 to 1000. The default value is set to 0.
Adjust the dc value (gain) of the 1st chroma plane of the image. The filter accepts an integer value in range 0 to 1000. The default value is set to 0.
Adjust the dc value (gain) of the 2nd chroma plane of the image. The filter accepts an integer value in range 0 to 1000. The default value is set to 0.
Set the frequency domain weight expression for the luma plane.
Set the frequency domain weight expression for the 1st chroma plane.
Set the frequency domain weight expression for the 2nd chroma plane.
Set when the expressions are evaluated.

It accepts the following values:

Only evaluate expressions once during the filter initialization.
Evaluate expressions for each incoming frame.

Default value is init.

The filter accepts the following variables:

The coordinates of the current sample.
The width and height of the image.
The number of the input frame, starting from 0.
The size of FFT array for horizontal and vertical processing.

Examples

  • High-pass:
    fftfilt=dc_Y=128:weight_Y='squish(1-(Y+X)/100)'
    
  • Low-pass:
    fftfilt=dc_Y=0:weight_Y='squish((Y+X)/100-1)'
    
  • Sharpen:
    fftfilt=dc_Y=0:weight_Y='1+squish(1-(Y+X)/100)'
    
  • Blur:
    fftfilt=dc_Y=0:weight_Y='exp(-4 * ((Y+X)/(W+H)))'
    

Extract a single field from an interlaced image using stride arithmetic to avoid wasting CPU time. The output frames are marked as non-interlaced.

The filter accepts the following options:

Specify whether to extract the top (if the value is 0 or "top") or the bottom field (if the value is 1 or "bottom").
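
For example, a minimal sketch extracting only the top field, assuming the option is named type:

ffmpeg -i INPUT -vf field=type=top OUTPUT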

Create new frames by copying the top and bottom fields from surrounding frames supplied as numbers by the hint file.

Set file containing hints: absolute/relative frame numbers.

There must be one line for each frame in a clip. Each line must contain two numbers separated by a comma, optionally followed by "-" or "+". The numbers supplied on each line of the file cannot be outside [N-1, N+1], where N is the current frame number, for "absolute" mode, or outside the [-1, 1] range for "relative" mode. The first number tells which frame to pick the top field from and the second number tells which frame to pick the bottom field from.

If optionally followed by "+" the output frame will be marked as interlaced; if followed by "-" the output frame will be marked as progressive; otherwise it will be marked the same as the input frame. If optionally followed by "t" the output frame will use only the top field, or in case of "b" only the bottom field. Lines starting with "#" or ";" are skipped.

Can be "absolute", "relative" or "pattern". Default is "absolute". The "pattern" mode is the same as "relative" mode, except that when there are more frames to process than entries in the "hint" file, the file is seeked back to the start.

Example of first several lines of "hint" file for "relative" mode:

0,0 - # first frame
1,0 - # second frame, use third's frame top field and second's frame bottom field
1,0 - # third frame, use fourth's frame top field and third's frame bottom field
1,0 -
0,0 -
0,0 -
1,0 -
1,0 -
1,0 -
0,0 -
0,0 -
1,0 -
1,0 -
1,0 -
0,0 -
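
A sketch reading hints such as the ones above from a file; hints.txt is a placeholder path and the option names hint and mode are assumed from the descriptions above:

ffmpeg -i INPUT -vf fieldhint=hint=hints.txt:mode=relative OUTPUT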

Field matching filter for inverse telecine. It is meant to reconstruct the progressive frames from a telecined stream. The filter does not drop duplicated frames, so to achieve a complete inverse telecine "fieldmatch" needs to be followed by a decimation filter such as decimate in the filtergraph.

The separation of the field matching and the decimation is notably motivated by the possibility of inserting a de-interlacing filter fallback between the two. If the source has mixed telecined and real interlaced content, "fieldmatch" will not be able to match fields for the interlaced parts. But these remaining combed frames will be marked as interlaced, and thus can be de-interlaced by a later filter such as yadif before decimation.

In addition to the various configuration options, "fieldmatch" can take an optional second stream, activated through the ppsrc option. If enabled, the frames reconstruction will be based on the fields and frames from this second stream. This allows the first input to be pre-processed in order to help the various algorithms of the filter, while keeping the output lossless (assuming the fields are matched properly). Typically, a field-aware denoiser, or brightness/contrast adjustments can help.

Note that this filter uses the same algorithms as TIVTC/TFM (AviSynth project) and VIVTC/VFM (VapourSynth project). The latter is a lightweight clone of TFM, on which "fieldmatch" is based. While the semantics and usage are very close, some behaviour and option names can differ.

The decimate filter currently only works for constant frame rate input. If your input has mixed telecined (30fps) and progressive content with a lower framerate like 24fps, use the following filterchain to produce the necessary cfr stream: "dejudder,fps=30000/1001,fieldmatch,decimate".

The filter accepts the following options:

Specify the assumed field order of the input stream. Available values are:
Auto detect parity (use FFmpeg's internal parity value).
Assume bottom field first.
Assume top field first.

Note that it is sometimes recommended not to trust the parity announced by the stream.

Default value is auto.

Set the matching mode or strategy to use. pc mode is the safest in the sense that it won't risk creating jerkiness due to duplicate frames when possible, but if there are bad edits or blended fields it will end up outputting combed frames when a good match might actually exist. On the other hand, pcn_ub mode is the most risky in terms of creating jerkiness, but will almost always find a good frame if there is one. The other values are all somewhere in between pc and pcn_ub in terms of risking jerkiness and creating duplicate frames versus finding good matches in sections with bad edits, orphaned fields, blended fields, etc.

More details about p/c/n/u/b are available in p/c/n/u/b meaning section.

Available values are:

2-way matching (p/c)
2-way matching, and trying 3rd match if still combed (p/c + n)
2-way matching, and trying 3rd match (same order) if still combed (p/c + u)
2-way matching, trying 3rd match if still combed, and trying 4th/5th matches if still combed (p/c + n + u/b)
3-way matching (p/c/n)
3-way matching, and trying 4th/5th matches if all 3 of the original matches are detected as combed (p/c/n + u/b)

The parentheses at the end indicate the matches that would be used for that mode assuming order=tff (and field on auto or top).

In terms of speed pc mode is by far the fastest and pcn_ub is the slowest.

Default value is pc_n.

Mark the main input stream as a pre-processed input, and enable the secondary input stream as the clean source to pick the fields from. See the filter introduction for more details. It is similar to the clip2 feature from VFM/TFM.

Default value is 0 (disabled).

field
Set the field to match from. It is recommended to set this to the same value as order unless you experience matching failures with that setting. In certain circumstances changing the field that is used to match from can have a large impact on matching performance. Available values are:
Automatic (same value as order).
Match from the bottom field.
Match from the top field.

Default value is auto.

Set whether or not chroma is included during the match comparisons. In most cases it is recommended to leave this enabled. You should set this to 0 only if your clip has bad chroma problems such as heavy rainbowing or other artifacts. Setting this to 0 could also be used to speed things up at the cost of some accuracy.

Default value is 1.

These define an exclusion band which excludes the lines between y0 and y1 from being included in the field matching decision. An exclusion band can be used to ignore subtitles, a logo, or other things that may interfere with the matching. y0 sets the starting scan line and y1 sets the ending line; all lines in between y0 and y1 (including y0 and y1) will be ignored. Setting y0 and y1 to the same value will disable the feature. y0 and y1 default to 0.
Set the scene change detection threshold as a percentage of maximum change on the luma plane. Good values are in the "[8.0, 14.0]" range. Scene change detection is only relevant in case combmatch=sc. The range for scthresh is "[0.0, 100.0]".

Default value is 12.0.

When combmatch is not none, "fieldmatch" will take into account the combed scores of matches when deciding what match to use as the final match. Available values are:
No final matching based on combed scores.
Combed scores are only used when a scene change is detected.
Use combed scores all the time.

Default is sc.

Force "fieldmatch" to calculate the combed metrics for certain matches and print them. This setting is known as micout in TFM/VFM vocabulary. Available values are:
No forced calculation.
Force p/c/n calculations.
Force p/c/n/u/b calculations.

Default value is none.

This is the area combing threshold used for combed frame detection. This essentially controls how "strong" or "visible" combing must be to be detected. Larger values mean combing must be more visible and smaller values mean combing can be less visible or strong and still be detected. Valid settings are from -1 (every pixel will be detected as combed) to 255 (no pixel will be detected as combed). This is basically a pixel difference value. A good range is "[8, 12]".

Default value is 9.

Sets whether or not chroma is considered in the combed frame decision. Only disable this if your source has chroma problems (rainbowing, etc.) that are causing problems for the combed frame detection with chroma enabled. Actually, using chroma=0 is usually more reliable, except for the case where there is chroma-only combing in the source.

Default value is 0.

Respectively set the x-axis and y-axis size of the window used during combed frame detection. This has to do with the size of the area in which combpel pixels are required to be detected as combed for a frame to be declared combed. See the combpel parameter description for more info. Possible values are any number that is a power of 2 starting at 4 and going up to 512.

Default value is 16.

The number of combed pixels inside any of the blocky by blockx size blocks on the frame for the frame to be detected as combed. While cthresh controls how "visible" the combing must be, this setting controls "how much" combing there must be in any localized area (a window defined by the blockx and blocky settings) on the frame. Minimum value is 0 and maximum is "blocky x blockx" (at which point no frames will ever be detected as combed). This setting is known as MI in TFM/VFM vocabulary.

Default value is 80.

p/c/n/u/b meaning

p/c/n

We assume the following telecined stream:

Top fields:     1 2 2 3 4
Bottom fields:  1 2 3 4 4

The numbers correspond to the progressive frame the fields relate to. Here, the first two frames are progressive, the 3rd and 4th are combed, and so on.

When "fieldmatch" is configured to run a matching from bottom (field=bottom) this is how this input stream get transformed:

Input stream:
                T     1 2 2 3 4
                B     1 2 3 4 4   <-- matching reference

Matches:              c c n n c

Output stream:
                T     1 2 3 4 4
                B     1 2 3 4 4

As a result of the field matching, we can see that some frames get duplicated. To perform a complete inverse telecine, you need to rely on a decimation filter after this operation. See for instance the decimate filter.

The same operation now matching from top fields (field=top) looks like this:

Input stream:
                T     1 2 2 3 4   <-- matching reference
                B     1 2 3 4 4

Matches:              c c p p c

Output stream:
                T     1 2 2 3 4
                B     1 2 2 3 4

In these examples, we can see what p, c and n mean; basically, they refer to the frame and field of the opposite parity:

*<p matches the field of the opposite parity in the previous frame>
*<c matches the field of the opposite parity in the current frame>
*<n matches the field of the opposite parity in the next frame>

u/b

The u and b matching are a bit special in the sense that they match from the opposite parity flag. In the following examples, we assume that we are currently matching the 2nd frame (Top:2, Bottom:2). According to the match, an 'x' is placed above and below each matched field.

With bottom matching (field=bottom):

Match:           c         p           n          b          u

                 x       x               x        x          x
  Top          1 2 2     1 2 2       1 2 2      1 2 2      1 2 2
  Bottom       1 2 3     1 2 3       1 2 3      1 2 3      1 2 3
                 x         x           x        x              x

Output frames:
                 2          1          2          2          2
                 2          2          2          1          3

With top matching (field=top):

Match:           c         p           n          b          u

                 x         x           x        x              x
  Top          1 2 2     1 2 2       1 2 2      1 2 2      1 2 2
  Bottom       1 2 3     1 2 3       1 2 3      1 2 3      1 2 3
                 x       x               x        x          x

Output frames:
                 2          2          2          1          2
                 2          1          3          2          2

Examples

Simple IVTC of a top field first telecined stream:

fieldmatch=order=tff:combmatch=none, decimate

Advanced IVTC, with fallback on yadif for still combed frames:

fieldmatch=order=tff:combmatch=full, yadif=deint=interlaced, decimate

Transform the field order of the input video.

It accepts the following parameters:

The output field order. Valid values are tff for top field first or bff for bottom field first.

The default value is tff.

The transformation is done by shifting the picture content up or down by one line, and filling the remaining line with appropriate picture content. This method is consistent with most broadcast field order converters.

If the input video is not flagged as being interlaced, or it is already flagged as being of the required output field order, then this filter does not alter the incoming video.

It is very useful when converting to or from PAL DV material, which is bottom field first.

For example:

ffmpeg -i in.vob -vf "fieldorder=bff" out.dv

Fill borders of the input video, without changing video stream dimensions. Sometimes a video can have garbage at its four edges and you may not want to crop the video input to keep the size a multiple of some number.

This filter accepts the following options:

Number of pixels to fill from left border.
Number of pixels to fill from right border.
Number of pixels to fill from top border.
Number of pixels to fill from bottom border.
Set fill mode.

It accepts the following values:

fill pixels using outermost pixels
fill pixels using mirroring (half sample symmetric)
fill pixels with constant value
fill pixels using reflecting (whole sample symmetric)
fill pixels using wrapping
fade
fade pixels to constant value
fill pixels at top and bottom with weighted averages of the pixels near the borders

Default is smear.

Set color for pixels in fixed or fade mode. Default is black.
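
For example, a sketch mirroring 12 pixels on the left and right and 8 on the top and bottom; the option names left, right, top, bottom and mode are assumed from the descriptions above:

ffmpeg -i INPUT -vf fillborders=left=12:right=12:top=8:bottom=8:mode=mirror OUTPUT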

Commands

This filter supports the same commands as options. A command accepts the same syntax as the corresponding option.

If the specified expression is not valid, it is kept at its current value.

Find a rectangular object in the input video.

The object to search for must be specified as a gray8 image with the object option.

For each possible match, a score is computed. If the score reaches the specified threshold, the object is considered found.

If the input video contains multiple instances of the object, the filter will find only one of them.

When an object is found, the following metadata entries are set in the matching frame:

width of object
height of object
x position of object
y position of object
match score of the found object

It accepts the following options:

Filepath of the object image, needs to be in gray8.
threshold
Detection threshold, expressed as a decimal number in the range 0-1.

A threshold value of 0.01 means only exact matches, a threshold of 0.99 means almost everything matches.

Default value is 0.5.

Number of mipmaps, default is 3.
Specifies the rectangle in which to search.
Discard frames where object is not detected. Default is disabled.

Examples

  • Cover a rectangular object by the supplied image of a given video using ffmpeg:
    ffmpeg -i file.ts -vf find_rect=newref.pgm,cover_rect=cover.jpg:mode=cover new.mkv
    
  • Find the position of an object in each frame using ffprobe and write it to a log file:
    ffprobe -f lavfi movie=test.mp4,find_rect=object=object.pgm:threshold=0.3 \
      -show_entries frame=pkt_pts_time:frame_tags=lavfi.rect.x,lavfi.rect.y \
      -of csv -o find_rect.csv
    

Flood an area that has the same pixel component values with new values.

It accepts the following options:

Set pixel x coordinate.
Set pixel y coordinate.
Set source #0 component value.
Set source #1 component value.
Set source #2 component value.
Set source #3 component value.
Set destination #0 component value.
Set destination #1 component value.
Set destination #2 component value.
Set destination #3 component value.
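
For example, assuming RGB input (the component meaning depends on the pixel format), a sketch that repaints the solid-black region connected to the top-left pixel as white (file names and values are hypothetical):

ffmpeg -i input.png -vf "floodfill=x=0:y=0:s0=0:s1=0:s2=0:d0=255:d1=255:d2=255" output.png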

Convert the input video to one of the specified pixel formats. Libavfilter will try to pick one that is suitable as input to the next filter.

It accepts the following parameters:

A '|'-separated list of pixel format names, such as "pix_fmts=yuv420p|monow|rgb24".
A '|'-separated list of color space names, such as "color_spaces=bt709|bt470bg|bt2020nc".
A '|'-separated list of color range names, such as "color_ranges=tv|pc".

Examples

Convert the input video to the yuv420p format
format=pix_fmts=yuv420p

Convert the input video to any of the formats in the list

format=pix_fmts=yuv420p|yuv444p|yuv410p

Convert the video to specified constant frame rate by duplicating or dropping frames as necessary.

It accepts the following parameters:

fps
The desired output frame rate. It accepts expressions containing the following constants:
The input's frame rate
NTSC frame rate of "30000/1001"
PAL frame rate of 25.0
Film frame rate of 24.0
NTSC-film frame rate of "24000/1001"

The default is 25.

Assume the first PTS should be the given value, in seconds. This allows for padding/trimming at the start of stream. By default, no assumption is made about the first frame's expected PTS, so no padding or trimming is done. For example, this could be set to 0 to pad the beginning with duplicates of the first frame if a video stream starts after the audio stream or to trim any frames with a negative PTS.
Timestamp (PTS) rounding method.

Possible values are:

round towards 0
round away from 0
round towards -infinity
round towards +infinity
round to nearest

The default is "near".

Action performed when reading the last frame.

Possible values are:

Use same timestamp rounding method as used for other frames.
Pass through last frame if input duration has not been reached yet.

The default is "round".

Alternatively, the options can be specified as a flat string: fps[:start_time[:round]].

See also the setpts filter.

Examples

  • A typical usage in order to set the fps to 25:
    fps=fps=25
    
  • Sets the fps to 24, using abbreviation and rounding method to round to nearest:
    fps=fps=film:round=near
    

Pack two different video streams into a stereoscopic video, setting proper metadata on supported codecs. The two views should have the same size and framerate and processing will stop when the shorter video ends. Please note that you may conveniently adjust view properties with the scale and fps filters.

It accepts the following parameters:

format
The desired packing format. Supported values are:
The views are next to each other (default).
The views are on top of each other.
The views are packed by line.
The views are packed by column.
The views are temporally interleaved.

Some examples:

# Convert left and right views into a frame-sequential video
ffmpeg -i LEFT -i RIGHT -filter_complex framepack=frameseq OUTPUT

# Convert views into a side-by-side video with the same output resolution as the input
ffmpeg -i LEFT -i RIGHT -filter_complex [0:v]scale=w=iw/2[left],[1:v]scale=w=iw/2[right],[left][right]framepack=sbs OUTPUT

Change the frame rate by interpolating new video output frames from the source frames.

This filter is not designed to function correctly with interlaced media. If you wish to change the frame rate of interlaced media then you are required to deinterlace before this filter and re-interlace after this filter.

A description of the accepted options follows.

fps
Specify the output frames per second. This option can also be specified as a value alone. The default is 50.
Specify the start of a range where the output frame will be created as a linear interpolation of two frames. The range is [0-255], the default is 15.
Specify the end of a range where the output frame will be created as a linear interpolation of two frames. The range is [0-255], the default is 240.
Specify the level at which a scene change is detected as a value between 0 and 100 to indicate a new scene; a low value reflects a low probability for the current frame to introduce a new scene, while a higher value means the current frame is more likely to be one. The default is 8.2.
Specify flags influencing the filter process.

Available value for flags is:

Enable scene change detection using the value of the option scene. This flag is enabled by default.
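
For example, to interpolate the input up to 60 frames per second (a minimal sketch; the file names are hypothetical):

ffmpeg -i input.mp4 -vf framerate=fps=60 output.mp4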

Select one frame every N-th frame.

This filter accepts the following option:

Select one frame after every "step" frames. Allowed values are positive integers. Default value is 1.
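
For example, to keep only every 5th frame (a sketch with hypothetical file names):

ffmpeg -i input.mp4 -vf framestep=step=5 output.mp4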

Detect frozen video.

This filter logs a message and sets frame metadata when it detects that the input video has no significant change in content during a specified duration. Video freeze detection calculates the mean absolute difference of all the components of video frames and compares it to a noise floor.

The printed times and duration are expressed in seconds. The "lavfi.freezedetect.freeze_start" metadata key is set on the first frame whose timestamp equals or exceeds the detection duration and it contains the timestamp of the first frame of the freeze. The "lavfi.freezedetect.freeze_duration" and "lavfi.freezedetect.freeze_end" metadata keys are set on the first frame after the freeze.

The filter accepts the following options:

Set noise tolerance. Can be specified in dB (in case "dB" is appended to the specified value) or as a difference ratio between 0 and 1. Default is -60dB, or 0.001.
Set freeze duration until notification (default is 2 seconds).
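
For example, to report freezes of at least 2 seconds at the default noise tolerance while discarding the output (a sketch; the input file name is hypothetical):

ffmpeg -i input.mp4 -vf "freezedetect=n=-60dB:d=2" -f null -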

Freeze video frames.

This filter freezes video frames using a frame from the 2nd input.

The filter accepts the following options:

Set number of first frame from which to start freeze.
Set number of last frame from which to end freeze.
Set number of frame from 2nd input which will be used instead of replaced frames.
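
For example, to replace frames 100 through 150 of the first input with frame 0 of the second input (a sketch; file names and frame numbers are hypothetical):

ffmpeg -i main.mp4 -i still.mp4 -filter_complex "[0:v][1:v]freezeframes=first=100:last=150:replace=0" output.mp4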

Apply a frei0r effect to the input video.

To enable the compilation of this filter, you need to install the frei0r header and configure FFmpeg with "--enable-frei0r".

It accepts the following parameters:

The name of the frei0r effect to load. If the environment variable FREI0R_PATH is defined, the frei0r effect is searched for in each of the directories specified by the colon-separated list in FREI0R_PATH. Otherwise, the standard frei0r paths are searched, in this order: HOME/.frei0r-1/lib/, /usr/local/lib/frei0r-1/, /usr/lib/frei0r-1/.
A '|'-separated list of parameters to pass to the frei0r effect.

A frei0r effect parameter can be a boolean (its value is either "y" or "n"), a double, a color (specified as R/G/B, where R, G, and B are floating point numbers between 0.0 and 1.0, inclusive, or as a color description as specified in the "Color" section in the ffmpeg-utils manual), a position (specified as X/Y, where X and Y are floating point numbers) and/or a string.

The number and types of parameters depend on the loaded effect. If an effect parameter is not specified, the default value is set.

Examples

  • Apply the distort0r effect, setting the first two double parameters:
    frei0r=filter_name=distort0r:filter_params=0.5|0.01
    
  • Apply the colordistance effect, taking a color as the first parameter:
    frei0r=colordistance:0.2/0.3/0.4
    frei0r=colordistance:violet
    frei0r=colordistance:0x112233
    
  • Apply the perspective effect, specifying the top left and top right image positions:
    frei0r=perspective:0.2/0.2|0.8/0.2
    

For more information, see http://frei0r.dyne.org

Commands

This filter supports the filter_params option as commands.

Apply fast and simple postprocessing. It is a faster version of spp.

It splits (I)DCT into horizontal/vertical passes. Unlike the simple postprocessing filter, one of them is performed once per block, not per pixel. This allows for much higher speed.

The filter accepts the following options:

Set quality. This option defines the number of levels for averaging. It accepts an integer in the range 4-5. Default value is 4.
qp
Force a constant quantization parameter. It accepts an integer in range 0-63. If not set, the filter will use the QP from the video stream (if available).
Set filter strength. It accepts an integer in range -15 to 32. Lower values mean more details but also more artifacts, while higher values make the image smoother but also blurrier. Default value is 0 (PSNR optimal).
Enable the use of the QP from the B-Frames if set to 1. Using this option may cause flicker, since the B-Frames often have larger QP. Default is 0 (not enabled).
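
For example, to apply stronger filtering at the highest quality level (a sketch; file names and values are hypothetical):

ffmpeg -i input.mp4 -vf fspp=quality=5:strength=5 output.mp4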

Synchronize video frames with an external mapping from a file.

For each input PTS given in the map file it either drops or creates as many frames as necessary to recreate the sequence of output frames given in the map file.

This filter is useful to recreate the output frames of a framerate conversion by the fps filter, recorded into a map file using the ffmpeg option "-stats_mux_pre", and to do further processing on the corresponding frames, e.g. quality comparison.

Each line of the map file must contain three items per input frame: the input PTS (decimal), the output PTS (decimal) and the output TIMEBASE (decimal/decimal), separated by a space. This file format corresponds to the output of -stats_mux_pre_fmt "{ptsi} {pts} {tb}".

The filter assumes the map file is sorted by increasing input PTS.

The filter accepts the following options:

The filename of the map file to be used.

Example:

# Convert a video to 25 fps and record a MAP_FILE file with the default format of this filter
ffmpeg -i INPUT -vf fps=fps=25 -stats_mux_pre MAP_FILE -stats_mux_pre_fmt "{ptsi} {pts} {tb}" OUTPUT

# Sort MAP_FILE by increasing input PTS
sort -n MAP_FILE

# Use INPUT, OUTPUT and the MAP_FILE from above to compare the corresponding frames in INPUT and OUTPUT via SSIM
ffmpeg -i INPUT -i OUTPUT -filter_complex '[0:v]fsync=file=MAP_FILE[ref];[1:v][ref]ssim' -f null -

Apply Gaussian blur filter.

The filter accepts the following options:

Set horizontal sigma, standard deviation of Gaussian blur. Default is 0.5.
Set number of steps for Gaussian approximation. Default is 1.
Set which planes to filter. By default all planes are filtered.
Set vertical sigma, if negative it will be same as "sigma". Default is -1.

Commands

This filter supports the same commands as options. A command accepts the same syntax as the corresponding option.

If the specified expression is not valid, it is kept at its current value.
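
For example, to apply a fairly strong blur to all planes (a sketch; file names and values are hypothetical):

ffmpeg -i input.mp4 -vf "gblur=sigma=3:steps=2" output.mp4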

Apply generic equation to each pixel.

The filter accepts the following options:

Set the luma expression.
Set the chrominance blue expression.
Set the chrominance red expression.
Set the alpha expression.
Set the red expression.
Set the green expression.
Set the blue expression.

The colorspace is selected according to the specified options. If one of the lum_expr, cb_expr, or cr_expr options is specified, the filter will automatically select a YCbCr colorspace. If one of the red_expr, green_expr, or blue_expr options is specified, it will select an RGB colorspace.

If one of the chrominance expressions is not defined, it falls back on the other one. If no alpha expression is specified, it will evaluate to the opaque value. If neither of the chrominance expressions is specified, they will evaluate to the luma expression.

The expressions can use the following variables and functions:

The sequential number of the filtered frame, starting from 0.
The coordinates of the current sample.
The width and height of the image.
Width and height scale depending on the currently filtered plane. It is the ratio between the number of pixels in the corresponding luma plane and the number in the current plane. E.g. for YUV4:2:0 the values are "1,1" for the luma plane, and "0.5,0.5" for the chroma planes.
Time of the current frame, expressed in seconds.
Return the value of the pixel at location (x,y) of the current plane.
Return the value of the pixel at location (x,y) of the luma plane.
Return the value of the pixel at location (x,y) of the blue-difference chroma plane. Return 0 if there is no such plane.
Return the value of the pixel at location (x,y) of the red-difference chroma plane. Return 0 if there is no such plane.
Return the value of the pixel at location (x,y) of the red/green/blue component. Return 0 if there is no such component.
Return the value of the pixel at location (x,y) of the alpha plane. Return 0 if there is no such plane.
Sum of sample values in the rectangle from (0,0) to (x,y), this allows obtaining sums of samples within a rectangle. See the functions without the sum postfix.
Set one of interpolation methods:
nearest, n
Nearest neighbour interpolation.
bilinear, b
Bilinear interpolation.

Default is bilinear.

For functions, if x and y are outside the area, the value will be automatically clipped to the closer edge.

Please note that this filter can use multiple threads in which case each slice will have its own expression state. If you want to use only a single expression state because your expressions depend on previous state then you should limit the number of filter threads to 1.

Examples

  • Flip the image horizontally:
    geq=p(W-X\,Y)
    
  • Generate a bidimensional sine wave, with angle "PI/3" and a wavelength of 100 pixels:
    geq=128 + 100*sin(2*(PI/100)*(cos(PI/3)*(X-50*T) + sin(PI/3)*Y)):128:128
    
  • Generate a fancy enigmatic moving light:
    nullsrc=s=256x256,geq=random(1)/hypot(X-cos(N*0.07)*W/2-W/2\,Y-sin(N*0.09)*H/2-H/2)^2*1000000*sin(N*0.02):128:128
    
  • Generate a quick emboss effect:
    format=gray,geq=lum_expr='(p(X,Y)+(256-p(X-4,Y-4)))/2'
    
  • Modify RGB components depending on pixel position:
    geq=r='X/W*r(X,Y)':g='(1-X/W)*g(X,Y)':b='(H-Y)/H*b(X,Y)'
    
  • Create a radial gradient that is the same size as the input (also see the vignette filter):
    geq=lum=255*gauss((X/W-0.5)*3)*gauss((Y/H-0.5)*3)/gauss(0)/gauss(0),format=gray
    

Fix the banding artifacts that are sometimes introduced into nearly flat regions by truncation to 8-bit color depth. Interpolate the gradients that should go where the bands are, and dither them.

It is designed for playback only. Do not use it prior to lossy compression, because compression tends to lose the dither and bring back the bands.

It accepts the following parameters:

The maximum amount by which the filter will change any one pixel. This is also the threshold for detecting nearly flat regions. Acceptable values range from .51 to 64; the default value is 1.2. Out-of-range values will be clipped to the valid range.
The neighborhood to fit the gradient to. A larger radius makes for smoother gradients, but also prevents the filter from modifying the pixels near detailed regions. Acceptable values are 8-32; the default value is 16. Out-of-range values will be clipped to the valid range.

Alternatively, the options can be specified as a flat string: strength[:radius]

Examples

  • Apply the filter with a 3.5 strength and radius of 8:
    gradfun=3.5:8
    
  • Specify radius, omitting the strength (which will fall back to the default value):
    gradfun=radius=8
    

Show various filtergraph stats.

With this filter one can debug a complete filtergraph, especially issues with links filling up with queued frames.

The filter accepts the following options:

Set video output size. Default is hd720.
Set video opacity. Default is 0.9. Allowed range is from 0 to 1.
Set output mode flags.

Available values for flags are:

No filtering. Default.
Show only filters with queued frames.
Show only filters with non-zero stats.
Show only filters with non-eof stat.
Show only filters that are enabled in timeline.
Set flags controlling which stats are shown in the video.

Available values for flags are:

All flags turned off.
All flags turned on.
Display number of queued frames in each link.
Display number of frames taken from filter.
Display number of frames given out from filter.
Display delta number of frames between above two values.
Display current filtered frame pts.
Display pts delta between current and previous frame.
Display current filtered frame time.
Display time delta between current and previous frame.
Display time base for filter link.
format
Display used format for filter link.
Display video size or number of audio channels in case of audio used by filter link.
Display video frame rate or sample rate in case of audio used by filter link.
Display link output status.
Display number of samples taken from filter.
Display number of samples given out from filter.
Display delta number of samples between above two values.
Show the timeline filter status.
Set the upper limit for the video rate of the output stream. Default value is 25. This guarantees that the output video frame rate will not be higher than this value.
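
For example, to watch queued-frame counts and timestamps while a graph runs (a sketch; the input name is hypothetical and the flag combination is one plausible choice):

ffplay -f lavfi "movie=input.mp4,scale=1280:720,graphmonitor=flags=queue+pts"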

A color constancy filter that applies color correction based on the grayworld assumption.

See: https://www.researchgate.net/publication/275213614_A_New_Color_Correction_Method_for_Underwater_Imaging

The algorithm uses linear light, so input data should be linearized beforehand (and possibly correctly tagged).

ffmpeg -i INPUT -vf zscale=transfer=linear,grayworld,zscale=transfer=bt709,format=yuv420p OUTPUT

A color constancy variation filter which estimates scene illumination via grey edge algorithm and corrects the scene colors accordingly.

See: https://staff.science.uva.nl/th.gevers/pub/GeversTIP07.pdf

The filter accepts the following options:

The order of differentiation to be applied on the scene. Must be in the range [0,2]; the default value is 1.
The Minkowski parameter to be used for calculating the Minkowski distance. Must be in the range [0,20]; the default value is 1. Set to 0 to use the maximum value instead of calculating the Minkowski distance.
The standard deviation of the Gaussian blur to be applied on the scene. Must be in the range [0,1024.0]; the default value is 1. floor( sigma * break_off_sigma(3) ) can't be equal to 0 if difford is greater than 0.

Examples

  • Grey Edge:
    greyedge=difford=1:minknorm=5:sigma=2
    
  • Max Edge:
    greyedge=difford=1:minknorm=0:sigma=2
    

Apply guided filter for edge-preserving smoothing, dehazing and so on.

The filter accepts the following options:

Set the box radius in pixels. Allowed range is 1 to 20. Default is 3.
Set regularization parameter (with square). Allowed range is 0 to 1. Default is 0.01.
Set filter mode. Can be "basic" or "fast". Default is "basic".
Set subsampling ratio for "fast" mode. Range is 2 to 64. Default is 4. No subsampling occurs in "basic" mode.
Set guidance mode. Can be "off" or "on". Default is "off". If "off", single input is required. If "on", two inputs of the same resolution and pixel format are required. The second input serves as the guidance.
Set planes to filter. Default is first only.

Commands

This filter supports all the above options as commands.

Examples

  • Edge-preserving smoothing with guided filter:
    ffmpeg -i in.png -vf guided out.png
    
  • Dehazing, structure-transferring filtering, detail enhancement with guided filter. For the generation of guidance image, refer to paper "Guided Image Filtering". See: http://kaiminghe.com/publications/pami12guidedfilter.pdf.
    ffmpeg -i in.png -i guidance.png -filter_complex guided=guidance=on out.png
    

Apply a Hald CLUT to a video stream.

First input is the video stream to process, and second one is the Hald CLUT. The Hald CLUT input can be a simple picture or a complete video stream.

The filter accepts the following options:

Set which CLUT video frames will be processed from second input stream, can be first or all. Default is all.
Force termination when the shortest input terminates. Default is 0.
Continue applying the last CLUT after the end of the stream. A value of 0 disables the filter after the last frame of the CLUT is reached. Default is 1.

"haldclut" also has the same interpolation options as lut3d (both filters share the same internals).

This filter also supports the framesync options.

More information about the Hald CLUT can be found on Eskil Steenberg's website (Hald CLUT author) at http://www.quelsolaar.com/technology/clut.html.

Commands

This filter supports the "interp" option as commands.

Workflow examples

Hald CLUT video stream

Generate an identity Hald CLUT stream altered with various effects:

ffmpeg -f lavfi -i haldclutsrc=8 -vf "hue=H=2*PI*t:s=sin(2*PI*t)+1, curves=cross_process" -t 10 -c:v ffv1 clut.nut

Note: make sure you use a lossless codec.

Then use it with "haldclut" to apply it on some random stream:

ffmpeg -f lavfi -i mandelbrot -i clut.nut -filter_complex '[0][1] haldclut' -t 20 mandelclut.mkv

The Hald CLUT will be applied to the first 10 seconds (duration of clut.nut), then the latest picture of that CLUT stream will be applied to the remaining frames of the "mandelbrot" stream.

Hald CLUT with preview

A Hald CLUT is supposed to be a square image of "Level*Level*Level" by "Level*Level*Level" pixels. For a given Hald CLUT, FFmpeg will select the biggest possible square starting at the top left of the picture. The remaining padding pixels (bottom or right) will be ignored. This area can be used to add a preview of the Hald CLUT.

Typically, the following generated Hald CLUT will be supported by the "haldclut" filter:

ffmpeg -f lavfi -i haldclutsrc=8 -vf "
   pad=iw+320 [padded_clut];
   smptebars=s=320x256, split [a][b];
   [padded_clut][a] overlay=W-320:h, curves=color_negative [main];
   [main][b] overlay=W-320" -frames:v 1 clut.png

It contains the original and a preview of the effect of the CLUT: SMPTE color bars are displayed on the top right, and below them the same color bars processed by the color changes.

Then, the effect of this Hald CLUT can be visualized with:

ffplay input.mkv -vf "movie=clut.png, [in] haldclut"

Flip the input video horizontally.

For example, to horizontally flip the input video with ffmpeg:

ffmpeg -i in.avi -vf "hflip" out.avi

This filter applies a global color histogram equalization on a per-frame basis.

It can be used to correct video that has a compressed range of pixel intensities. The filter redistributes the pixel intensities to equalize their distribution across the intensity range. It may be viewed as an "automatically adjusting contrast filter". This filter is useful only for correcting degraded or poorly captured source video.

The filter accepts the following options:

Determine the amount of equalization to be applied. As the strength is reduced, the distribution of pixel intensities more and more approaches that of the input frame. The value must be a float number in the range [0,1] and defaults to 0.200.
Set the maximum intensity that can be generated and scale the output values appropriately. The strength should be set as desired and then the intensity can be limited if needed to avoid washing-out. The value must be a float number in the range [0,1] and defaults to 0.210.
Set the antibanding level. If enabled the filter will randomly vary the luminance of output pixels by a small amount to avoid banding of the histogram. Possible values are "none", "weak" or "strong". It defaults to "none".
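
For example, to apply stronger equalization with weak antibanding (a sketch; file names and values are hypothetical):

ffmpeg -i input.mp4 -vf histeq=strength=0.3:antibanding=weak output.mp4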

Compute and draw a color distribution histogram for the input video.

The computed histogram is a representation of the color component distribution in an image.

Standard histogram displays the color components distribution in an image. Displays color graph for each color component. Shows distribution of the Y, U, V, A or R, G, B components, depending on input format, in the current frame. Below each graph a color component scale meter is shown.

The filter accepts the following options:

Set height of level. Default value is 200. Allowed range is [50, 2048].
Set height of color scale. Default value is 12. Allowed range is [0, 40].
Set display mode. It accepts the following values:
Per color component graphs are placed below each other.
Per color component graphs are placed side by side.
overlay
Presents information identical to that in the "parade", except that the graphs representing color components are superimposed directly over one another.

Default is "stack".

Set mode. Can be either "linear", or "logarithmic". Default is "linear".
Set what color components to display. Default is 7.
Set foreground opacity. Default is 0.7.
Set background opacity. Default is 0.5.
Set colors mode. It accepts the following values:

Default is "whiteonblack".

Examples

Calculate and draw histogram:
ffplay -i input -vf histogram

This is a high precision/quality 3d denoise filter. It aims to reduce image noise, producing smooth images and making still images really still. It should enhance compressibility.

It accepts the following optional parameters:

A non-negative floating point number which specifies spatial luma strength. It defaults to 4.0.
A non-negative floating point number which specifies spatial chroma strength. It defaults to 3.0*luma_spatial/4.0.
A floating point number which specifies luma temporal strength. It defaults to 6.0*luma_spatial/4.0.
A floating point number which specifies chroma temporal strength. It defaults to luma_tmp*chroma_spatial/luma_spatial.

Commands

This filter supports the same commands as options. A command accepts the same syntax as the corresponding option.

If the specified expression is not valid, it is kept at its current value.
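
For example, to denoise more aggressively than the defaults by raising the spatial luma strength (the remaining strengths are then derived as described above; the file names are hypothetical):

ffmpeg -i noisy.mp4 -vf hqdn3d=luma_spatial=8 output.mp4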

Download hardware frames to system memory.

The input must be in hardware frames, and the output a non-hardware format. Not all formats will be supported on the output - it may be necessary to insert an additional format filter immediately following in the graph to get the output in a supported format.
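
For example, assuming a CUDA-capable build, a sketch that decodes on the GPU, downloads the frames and converts them to a common software format before encoding:

ffmpeg -hwaccel cuda -hwaccel_output_format cuda -i input.mp4 -vf "hwdownload,format=nv12" -c:v libx264 output.mp4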

Map hardware frames to system memory or to another device.

This filter has several different modes of operation; which one is used depends on the input and output formats:

  • Hardware frame input, normal frame output

    Map the input frames to system memory and pass them to the output. If the original hardware frame is later required (for example, after overlaying something else on part of it), the hwmap filter can be used again in the next mode to retrieve it.

  • Normal frame input, hardware frame output

    If the input is actually a software-mapped hardware frame, then unmap it - that is, return the original hardware frame.

    Otherwise, a device must be provided. Create new hardware surfaces on that device for the output, then map them back to the software format at the input and give those frames to the preceding filter. This will then act like the hwupload filter, but may be able to avoid an additional copy when the input is already in a compatible format.

  • Hardware frame input and output

    A device must be supplied for the output, either directly or with the derive_device option. The input and output devices must be of different types and compatible - the exact meaning of this is system-dependent, but typically it means that they must refer to the same underlying hardware context (for example, refer to the same graphics card).

    If the input frames were originally created on the output device, then unmap to retrieve the original frames.

    Otherwise, map the frames to the output device - create new hardware frames on the output corresponding to the frames on the input.

The following additional parameters are accepted:

Set the frame mapping mode. Some combination of:
read
The mapped frame should be readable.
write
The mapped frame should be writeable.
overwrite
The mapping will always overwrite the entire frame.

This may improve performance in some cases, as the original contents of the frame need not be loaded.

direct
The mapping must not involve any copying.

Indirect mappings to copies of frames are created in some cases where either direct mapping is not possible or it would have unexpected properties. Setting this flag ensures that the mapping is direct and will fail if that is not possible.

Defaults to read+write if not specified.

Rather than using the device supplied at initialisation, instead derive a new device of type type from the device the input frames exist on.
reverse
In a hardware to hardware mapping, map in reverse - create frames in the sink and map them back to the source. This may be necessary in some cases where a mapping in one direction is required but only the opposite direction is supported by the devices being used.

This option is dangerous - it may break the preceding filter in undefined ways if there are any additional constraints on that filter's output. Do not use it without fully understanding the implications of its use.
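
For example, assuming a build with VAAPI and OpenCL support, a sketch that decodes with VAAPI, maps the frames to a derived OpenCL device for filtering, and maps them back for VAAPI encoding (the device path and the filter choice are illustrative):

ffmpeg -init_hw_device vaapi=va:/dev/dri/renderD128 -hwaccel vaapi -hwaccel_output_format vaapi -i input.mp4 -vf "hwmap=derive_device=opencl,unsharp_opencl,hwmap=derive_device=vaapi:reverse=1" -c:v h264_vaapi output.mp4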

Upload system memory frames to hardware surfaces.

The device to upload to must be supplied when the filter is initialised. If using ffmpeg, select the appropriate device with the -filter_hw_device option or with the derive_device option. The input and output devices must be of different types and compatible - the exact meaning of this is system-dependent, but typically it means that they must refer to the same underlying hardware context (for example, refer to the same graphics card).

The following additional parameters are accepted:

Rather than using the device supplied at initialisation, instead derive a new device of type type from the device the input frames exist on.

Upload system memory frames to a CUDA device.

It accepts the following optional parameters:

The number of the CUDA device to use
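
For example, assuming a CUDA-enabled build with the scale_cuda filter, a sketch that uploads, scales on the GPU and downloads again (file names are hypothetical):

ffmpeg -i input.mp4 -vf "hwupload_cuda,scale_cuda=1280:720,hwdownload,format=yuv420p" output.mp4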

Apply a high-quality magnification filter designed for pixel art. This filter was originally created by Maxim Stepin.

It accepts the following option:

Set the scaling dimension: 2 for "hq2x", 3 for "hq3x" and 4 for "hq4x". Default is 3.
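
For example, to magnify a pixel-art image by a factor of 4 (the file names are hypothetical):

ffmpeg -i pixel_art.png -vf hqx=n=4 out.png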

Stack input videos horizontally.

All streams must be of same pixel format and of same height.

Note that this filter is faster than using the overlay and pad filters to create the same output.

The filter accepts the following options:

Set number of input streams. Default is 2.
If set to 1, force the output to terminate when the shortest input terminates. Default value is 0.
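
For example, to place two videos side by side, stopping with the shorter one (a sketch; the file names are hypothetical):

ffmpeg -i left.mp4 -i right.mp4 -filter_complex "[0:v][1:v]hstack=inputs=2:shortest=1" output.mp4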

Turns a certain HSV range into gray values.

This filter measures color difference between set HSV color in options and ones measured in video stream. Depending on options, output colors can be changed to be gray or not.

The filter accepts the following options:

hue
Set the hue value which will be used in color difference calculation. Allowed range is from -360 to 360. Default value is 0.
Set the saturation value which will be used in color difference calculation. Allowed range is from -1 to 1. Default value is 0.
Set the value which will be used in color difference calculation. Allowed range is from -1 to 1. Default value is 0.
Set similarity percentage with the key color. Allowed range is from 0 to 1. Default value is 0.01.

0.00001 matches only the exact key color, while 1.0 matches everything.

blend
Blend percentage. Allowed range is from 0 to 1. Default value is 0.

0.0 makes pixels either fully gray, or not gray at all.

Higher values result in more gray pixels; the more similar a pixel's color is to the key color, the grayer the pixel becomes.
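
For example, to turn pixels close to a saturated green gray, with a soft edge (a sketch; file names and values are hypothetical):

ffmpeg -i input.mp4 -vf "hsvhold=hue=120:sat=0.8:val=0.5:similarity=0.2:blend=0.1" output.mp4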

Turns a certain HSV range into transparency.

This filter measures color difference between set HSV color in options and ones measured in video stream. Depending on options, output colors can be changed to transparent by adding alpha channel.

The filter accepts the following options:

hue
Set the hue value which will be used in color difference calculation. Allowed range is from -360 to 360. Default value is 0.
Set the saturation value which will be used in color difference calculation. Allowed range is from -1 to 1. Default value is 0.
Set the value which will be used in color difference calculation. Allowed range is from -1 to 1. Default value is 0.
Set similarity percentage with the key color. Allowed range is from 0 to 1. Default value is 0.01.

0.00001 matches only the exact key color, while 1.0 matches everything.

blend
Blend percentage. Allowed range is from 0 to 1. Default value is 0.

0.0 makes pixels either fully transparent, or not transparent at all.

Higher values result in semi-transparent pixels; the more similar a pixel's color is to the key color, the more transparent the pixel becomes.
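
For example, a green-screen-style sketch that keys out a saturated green range and overlays the result on a background (file names and values are hypothetical):

ffmpeg -i fg.mp4 -i bg.mp4 -filter_complex "[0:v]hsvkey=hue=120:sat=0.8:val=0.5:similarity=0.2:blend=0.1[keyed];[1:v][keyed]overlay" output.mp4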

Modify the hue and/or the saturation of the input.

It accepts the following parameters:

Specify the hue angle as a number of degrees. It accepts an expression, and defaults to "0".
Specify the saturation in the [-10,10] range. It accepts an expression and defaults to "1".
Specify the hue angle as a number of radians. It accepts an expression, and defaults to "0".
Specify the brightness in the [-10,10] range. It accepts an expression and defaults to "0".

h and H are mutually exclusive, and can't be specified at the same time.

The b, h, H and s option values are expressions containing the following constants:

frame count of the input frame starting from 0
presentation timestamp of the input frame expressed in time base units
frame rate of the input video, NAN if the input frame rate is unknown
timestamp expressed in seconds, NAN if the input timestamp is unknown
time base of the input video

Examples

  • Set the hue to 90 degrees and the saturation to 1.0:
    hue=h=90:s=1
    
  • Same command but expressing the hue in radians:
    hue=H=PI/2:s=1
    
  • Rotate hue and make the saturation swing between 0 and 2 over a period of 1 second:
    hue="H=2*PI*t: s=sin(2*PI*t)+1"
    
  • Apply a 3 seconds saturation fade-in effect starting at 0:
    hue="s=min(t/3\,1)"
    

    The general fade-in expression can be written as:

    hue="s=min(0\, max((t-START)/DURATION\, 1))"
    
  • Apply a 3 seconds saturation fade-out effect starting at 5 seconds:
    hue="s=max(0\, min(1\, (8-t)/3))"
    

    The general fade-out expression can be written as:

    hue="s=max(0\, min(1\, (START+DURATION-t)/DURATION))"
    

Commands

This filter supports the following commands:

Modify the hue and/or the saturation and/or brightness of the input video. The command accepts the same syntax of the corresponding option.

If the specified expression is not valid, it is kept at its current value.

Apply hue-saturation-intensity adjustments to input video stream.

This filter operates in RGB colorspace.

This filter accepts the following options:

hue
Set the hue shift in degrees to apply. Default is 0. Allowed range is from -180 to 180.
Set the saturation shift. Default is 0. Allowed range is from -1 to 1.
Set the intensity shift. Default is 0. Allowed range is from -1 to 1.
Set which primary and complementary colors are going to be adjusted. This option is set by providing one or multiple values. This can select multiple colors at once. By default all colors are selected.
Adjust reds.
Adjust yellows.
Adjust greens.
Adjust cyans.
Adjust blues.
Adjust magentas.
Adjust all colors.
Set strength of filtering. Allowed range is from 0 to 100. Default value is 1.
Set the weight for each RGB component. Allowed range is from 0 to 1. Defaults are 0.333, 0.334, 0.333. These options are used in saturation and lightness processing.
Enable preserving lightness; disabled by default. Adjusting hues can change the lightness of the original RGB triplet; with this option enabled, lightness is kept at the same value.
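
For example, to warm up only the reds and yellows slightly (a sketch; the values are hypothetical, and multiple colors are combined with '+'):

ffmpeg -i input.mp4 -vf "huesaturation=hue=-10:saturation=0.2:colors=r+y" output.mp4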

Grow first stream into second stream by connecting components. This makes it possible to build more robust edge masks.

This filter accepts the following options:

Set which planes will be processed as bitmap, unprocessed planes will be copied from first stream. By default value 0xf, all planes will be processed.
threshold
Set the threshold which is used in filtering. If a pixel component value is higher than this value, the filter algorithm for connecting components is activated. Default value is 0.

The "hysteresis" filter also supports the framesync options.

Detect the colorspace from an embedded ICC profile (if present), and update the frame's tags accordingly.

This filter accepts the following options:

If true, the frame's existing colorspace tags will always be overridden by values detected from an ICC profile. Otherwise, they will only be assigned if they contain "unknown". Enabled by default.

Generate ICC profiles and attach them to frames.

This filter accepts the following options:

Configure the colorspace that the ICC profile will be generated for. The default value of "auto" infers the value from the input frame's metadata, defaulting to BT.709/sRGB as appropriate.

See the setparams filter for a list of possible values, but note that "unknown" is not a valid value for this filter.

If true, an ICC profile will be generated even if it would overwrite an already existing ICC profile. Disabled by default.

Obtain the identity score between two input videos.

This filter takes two input videos.

Both input videos must have the same resolution and pixel format for this filter to work correctly. Also it assumes that both inputs have the same number of frames, which are compared one by one.

The obtained per component, average, min and max identity score is printed through the logging system.

The filter stores the calculated identity scores of each frame in frame metadata.

This filter also supports the framesync options.

In the below example the input file main.mpg being processed is compared with the reference file ref.mpg.

ffmpeg -i main.mpg -i ref.mpg -lavfi identity -f null -

Detect video interlacing type.

This filter tries to detect if the input frames are interlaced, progressive, top or bottom field first. It will also try to detect fields that are repeated between adjacent frames (a sign of telecine).

Single frame detection considers only immediately adjacent frames when classifying each frame. Multiple frame detection incorporates the classification history of previous frames.

The filter will log these metadata values:

Detected type of current frame using single-frame detection. One of: ``tff'' (top field first), ``bff'' (bottom field first), ``progressive'', or ``undetermined''
Cumulative number of frames detected as top field first using single-frame detection.
Cumulative number of frames detected as top field first using multiple-frame detection.
Cumulative number of frames detected as bottom field first using single-frame detection.
Detected type of current frame using multiple-frame detection. One of: ``tff'' (top field first), ``bff'' (bottom field first), ``progressive'', or ``undetermined''
Cumulative number of frames detected as bottom field first using multiple-frame detection.
Cumulative number of frames detected as progressive using single-frame detection.
Cumulative number of frames detected as progressive using multiple-frame detection.
Cumulative number of frames that could not be classified using single-frame detection.
Cumulative number of frames that could not be classified using multiple-frame detection.
Which field in the current frame is repeated from the last. One of ``neither'', ``top'', or ``bottom''.
Cumulative number of frames with no repeated field.
Cumulative number of frames with the top field repeated from the previous frame's top field.
Cumulative number of frames with the bottom field repeated from the previous frame's bottom field.

The filter accepts the following options:

Set interlacing threshold.
Set progressive threshold.
Threshold for repeated field detection.
Number of frames after which a given frame's contribution to the statistics is halved (i.e., it contributes only 0.5 to its classification). The default of 0 means that all frames seen are given full weight of 1.0 forever.
When this is not 0, idet will use the specified number of frames to determine if the interlaced flag is accurate; it will not count undetermined frames. If the flag is found to be accurate, it will be used without any further computations; if it is found to be inaccurate, it will be cleared without any further computations. This allows inserting the idet filter as a low-computation method to clean up the interlaced flag.

Examples

Inspect the field order of the first 360 frames in a video, in verbose detail:

ffmpeg -i INPUT -filter:v idet,metadata=mode=print -frames:v 360 -an -f null -

The idet filter will add analysis metadata to each frame, which will then be discarded. At the end, the filter will also print a final report with statistics.

Deinterleave or interleave fields.

This filter allows one to process interlaced image fields without deinterlacing them. Deinterleaving splits the input frame into 2 fields (so-called half pictures). Odd lines are moved to the top half of the output image, even lines to the bottom half. You can process (filter) them independently and then re-interleave them.

The filter accepts the following options:

Available values for luma_mode, chroma_mode and alpha_mode are:
Do nothing.
Deinterleave fields, placing one above the other.
Interleave fields. Reverse the effect of deinterleaving.

Default value is "none".

Swap luma/chroma/alpha fields. Exchange even & odd lines. Default value is 0.

Commands

This filter supports all the above options as commands.
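
For example, to filter the fields of interlaced material independently, a sketch that deinterleaves, blurs, and re-interleaves (the file names are hypothetical):

ffmpeg -i interlaced.mp4 -vf "il=l=d:c=d,gblur=sigma=1,il=l=i:c=i" output.mp4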

Apply inflate effect to the video.

This filter replaces each pixel with the local (3x3) average, taking into account only values higher than the pixel.

It accepts the following options:

Limit the maximum change for each plane, default is 65535. If 0, plane will remain unchanged.

Commands

This filter supports all the above options as commands.

Simple interlacing filter from progressive contents. This interleaves upper (or lower) lines from odd frames with lower (or upper) lines from even frames, halving the frame rate and preserving image height.

   Original        Original             New Frame
   Frame 'j'      Frame 'j+1'             (tff)
  ==========      ===========       ==================
    Line 0  -------------------->    Frame 'j' Line 0
    Line 1          Line 1  ---->   Frame 'j+1' Line 1
    Line 2 --------------------->    Frame 'j' Line 2
    Line 3          Line 3  ---->   Frame 'j+1' Line 3
     ...             ...                   ...
New Frame + 1 will be generated by Frame 'j+2' and Frame 'j+3' and so on

It accepts the following optional parameters:

This determines whether the interlaced frame is taken from the even (tff - default) or odd (bff) lines of the progressive frame.
lowpass
Vertical lowpass filter to avoid twitter interlacing and reduce moire patterns.
0, off
Disable vertical lowpass filter
1, linear
Enable linear filter (default)
2, complex
Enable complex filter. This will reduce twitter and moire slightly less, but better retain detail and the subjective sharpness impression.
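
For example, to interlace progressive content top field first with the complex lowpass filter (a sketch; the file names are hypothetical):

ffmpeg -i progressive.mp4 -vf interlace=scan=tff:lowpass=complex output.mp4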

Deinterlace input video by applying Donald Graft's adaptive kernel deinterlacing. It works on interlaced parts of a video to produce progressive frames.

The description of the accepted parameters follows.

Set the threshold which affects the filter's tolerance when determining if a pixel line must be processed. It must be an integer in the range [0,255] and defaults to 10. A value of 0 will result in applying the process on every pixel.
Paint pixels exceeding the threshold value to white if set to 1. Default is 0.
Set the fields order. Swap fields if set to 1, leave fields alone if 0. Default is 0.
Enable additional sharpening if set to 1. Default is 0.
Enable twoway sharpening if set to 1. Default is 0.

Examples

  • Apply default values:
    kerndeint=thresh=10:map=0:order=0:sharp=0:twoway=0
    
  • Enable additional sharpening:
    kerndeint=sharp=1
    
  • Paint processed pixels in white:
    kerndeint=map=1
    

Apply kirsch operator to input video stream.

The filter accepts the following options:

Set which planes will be processed, unprocessed planes will be copied. By default value 0xf, all planes will be processed.
scale
Set value which will be multiplied with filtered result.
Set value which will be added to filtered result.

Commands

This filter supports all the above options as commands.

Slowly update darker pixels.

This filter makes short flashes of light appear longer. It accepts the following options:

Set factor for decaying. Default is .95. Allowed range is from 0 to 1.
Set which planes to filter. Default is all. Allowed range is from 0 to 15.

Commands

This filter supports all the above options as commands.
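
For example, to make bright flashes linger with a slow decay (a sketch; the value and file names are hypothetical):

ffmpeg -i input.mp4 -vf lagfun=decay=0.99 output.mp4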

Correct radial lens distortion.

This filter can be used to correct for radial distortion as can result from the use of wide angle lenses, and thereby re-rectify the image. To find the right parameters one can use tools available for example as part of opencv or simply trial-and-error. To use opencv use the calibration sample (under samples/cpp) from the opencv sources and extract the k1 and k2 coefficients from the resulting matrix.

Note that effectively the same filter is available in the open-source tools Krita and Digikam from the KDE project.

In contrast to the vignette filter, which can also be used to compensate lens errors, this filter corrects the distortion of the image, whereas vignette corrects the brightness distribution, so you may want to use both filters together in certain cases, though you will have to take care of ordering, i.e. whether vignetting should be applied before or after lens correction.

Options

The filter accepts the following options:

Relative x-coordinate of the focal point of the image, and thereby the center of the distortion. This value has a range [0,1] and is expressed as fractions of the image width. Default is 0.5.
Relative y-coordinate of the focal point of the image, and thereby the center of the distortion. This value has a range [0,1] and is expressed as fractions of the image height. Default is 0.5.
Coefficient of the quadratic correction term. This value has a range [-1,1]. 0 means no correction. Default is 0.
Coefficient of the double quadratic correction term. This value has a range [-1,1]. 0 means no correction. Default is 0.
Set interpolation type. Can be "nearest" or "bilinear". Default is "nearest".
Specify the color of the unmapped pixels. For the syntax of this option, check the "Color" section in the ffmpeg-utils manual. Default color is "black@0".

The formula that generates the correction is:

r_src = r_tgt * (1 + k1 * (r_tgt / r_0)^2 + k2 * (r_tgt / r_0)^4)

where r_0 is half of the image diagonal and r_src and r_tgt are the distances from the focal point in the source and target images, respectively.

Commands

This filter supports all the above options as commands.
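
For example, to correct barrel distortion with coefficients obtained from a calibration tool, using bilinear interpolation (a sketch; the coefficients and file names are hypothetical):

ffmpeg -i input.mp4 -vf "lenscorrection=cx=0.5:cy=0.5:k1=-0.227:k2=-0.022:i=bilinear" output.mp4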

Apply lens correction via the lensfun library (http://lensfun.sourceforge.net/).

The "lensfun" filter requires the camera make, camera model, and lens model to apply the lens correction. The filter will load the lensfun database and query it to find the corresponding camera and lens entries in the database. As long as these entries can be found with the given options, the filter can perform corrections on frames. Note that incomplete strings will result in the filter choosing the best match with the given options, and the filter will output the chosen camera and lens models (logged with level "info"). You must provide the make, camera model, and lens model as they are required.

To obtain a list of available makes and models, leave out one or both of "make" and "model" options. The filter will send the full list to the log with level "INFO". The first column is the make and the second column is the model. To obtain a list of available lenses, set any values for make and model and leave out the "lens_model" option. The filter will send the full list of lenses in the log with level "INFO". The ffmpeg tool will exit after the list is printed.

The filter accepts the following options:

The make of the camera (for example, "Canon"). This option is required.
The model of the camera (for example, "Canon EOS 100D"). This option is required.
The model of the lens (for example, "Canon EF-S 18-55mm f/3.5-5.6 IS STM"). This option is required.
The full path to the lens database folder. If not set, the filter will attempt to load the database from the install path when the library was built. Default is unset.
The type of correction to apply. The following values are valid options:
Enables fixing lens vignetting.
Enables fixing lens geometry. This is the default.
Enables fixing chromatic aberrations.
Enables fixing lens vignetting and lens geometry.
Enables fixing lens vignetting and chromatic aberrations.
Enables fixing both lens geometry and chromatic aberrations.
Enables all possible corrections.
The focal length of the image/video (zoom; expected constant for video). For example, an 18-55mm lens has a focal length range of [18,55], so a value in that range should be chosen when using that lens. Default 18.
The aperture of the image/video (expected constant for video). Note that aperture is only used for vignetting correction. Default 3.5.
The focus distance of the image/video (expected constant for video). Note that focus distance is only used for vignetting and only slightly affects the vignetting correction process. If unknown, leave it at the default value (which is 1000).
scale
The scale factor which is applied after transformation. After correction the video is no longer necessarily rectangular. This parameter controls how much of the resulting image is visible. The value 0 means that a value will be chosen automatically such that there is little or no unmapped area in the output image. 1.0 means that no additional scaling is done. Lower values may result in more of the corrected image being visible, while higher values may avoid unmapped areas in the output.
The target geometry of the output image/video. The following values are valid options:
reverse
Apply the reverse of image correction (instead of correcting distortion, apply it).
The type of interpolation used when correcting distortion. The following values are valid options:

Examples

  • Apply lens correction with make "Canon", camera model "Canon EOS 100D", and lens model "Canon EF-S 18-55mm f/3.5-5.6 IS STM" with focal length of "18" and aperture of "8.0".
    ffmpeg -i input.mov -vf lensfun=make=Canon:model="Canon EOS 100D":lens_model="Canon EF-S 18-55mm f/3.5-5.6 IS STM":focal_length=18:aperture=8 -c:v h264 -b:v 8000k output.mov
    
  • Apply the same as before, but only for the first 5 seconds of video.
    ffmpeg -i input.mov -vf lensfun=make=Canon:model="Canon EOS 100D":lens_model="Canon EF-S 18-55mm f/3.5-5.6 IS STM":focal_length=18:aperture=8:enable='lte(t\,5)' -c:v h264 -b:v 8000k output.mov
    

Flexible GPU-accelerated processing filter based on libplacebo (https://code.videolan.org/videolan/libplacebo).

Options

The options for this filter are divided into the following sections:

Output mode

These options control the overall output mode. By default, libplacebo will try to preserve the source colorimetry and size as best as it can, but it will apply any embedded film grain, Dolby Vision metadata or anamorphic SAR present in source frames.

Set the number of inputs. This can be used, alongside the "idx" variable, to allow placing/blending multiple inputs inside the output frame. This effectively enables functionality similar to hstack, overlay, etc.
Set the output video dimension expression. Default values are "iw" and "ih".

Allows for the same expressions as the scale filter.

Set the input crop x/y expressions, default values are "(iw-cw)/2" and "(ih-ch)/2".
Set the input crop width/height expressions, default values are "iw" and "ih".
Set the output placement x/y expressions, default values are "(ow-pw)/2" and "(oh-ph)/2".
Set the output placement width/height expressions, default values are "ow" and "oh".
fps
Set the output frame rate. This can be rational, e.g. "60000/1001". If set to the special string "none" (the default), input timestamps will instead be passed through to the output unmodified. Otherwise, the input video frames will be interpolated as necessary to rescale the video to the specified target framerate, in a manner as determined by the frame_mixer option.
format
Set the output format override. If unset (the default), frames will be output in the same format as the respective input frames. Otherwise, format conversion will be performed.
Work the same as the identical scale filter options.
If enabled, output frames will always have a pixel aspect ratio of 1:1. This will introduce additional padding/cropping as necessary. If disabled (the default), any aspect ratio mismatches, including those from e.g. anamorphic video sources, are forwarded to the output pixel aspect ratio.
Specifies a ratio (between 0.0 and 1.0) between padding and cropping when the input aspect ratio does not match the output aspect ratio and normalize_sar is in effect. The default of 0.0 always pads the content with black borders, while a value of 1.0 always crops off parts of the content. Intermediate values are possible, leading to a mix of the two approaches.
Set the color used to fill the output area not covered by the output image, for example as a result of normalize_sar. For the general syntax of this option, check the "Color" section in the ffmpeg-utils manual. Defaults to "black".
Render frames with rounded corners. The value, given as a float ranging from 0.0 to 1.0, indicates the relative degree of rounding, from fully square to fully circular. In other words, it gives the radius divided by half the smaller side length. Defaults to 0.0.
Pass extra libplacebo internal configuration options. These can be specified as a list of key=value pairs separated by ':'. The following example shows how to configure a custom filter kernel ("EWA LanczosSharp") and use it to double the input image resolution:
-vf "libplacebo=w=iw*2:h=ih*2:extra_opts='upscaler=custom\:upscaler_preset=ewa_lanczos\:upscaler_blur=0.9812505644269356'"
colorspace
Configure the colorspace that output frames will be delivered in. The default value of "auto" outputs frames in the same format as the input frames, leading to no change. For any other value, conversion will be performed.

See the setparams filter for a list of possible values.

Apply film grain (e.g. AV1 or H.274) if present in source frames, and strip it from the output. Enabled by default.
Apply Dolby Vision RPU metadata if present in source frames, and strip it from the output. Enabled by default. Note that Dolby Vision will always output BT.2020+PQ, overriding the usual input frame metadata. These will also be picked as the values of "auto" for the respective frame output options.

In addition to the expression constants documented for the scale filter, the crop_w, crop_h, crop_x, crop_y, pos_w, pos_h, pos_x and pos_y options can also contain the following constants:

The (0-based) numeric index of the currently active input stream.
The computed values of crop_w and crop_h.
The computed values of pos_w and pos_h.
The input frame timestamp, in seconds. NAN if input timestamp is unknown.
The output frame timestamp, in seconds. NAN if output timestamp is unknown.
The input frame number, starting with 0.

Scaling

The options in this section control how libplacebo performs upscaling and (if necessary) downscaling. Note that libplacebo will always internally operate on 4:4:4 content, so any sub-sampled chroma formats such as "yuv420p" will necessarily be upsampled and downsampled as part of the rendering process. That means scaling might be in effect even if the source and destination resolution are the same.

Configure the filter kernel used for upscaling and downscaling. The respective defaults are "spline36" and "mitchell". For a full list of possible values, pass "help" to these options. The most important values are:
Forces the use of built-in GPU texture sampling (typically bilinear). Extremely fast but poor quality, especially when downscaling.
Bilinear interpolation. Can generally be done for free on GPUs, except when doing so would lead to aliasing. Fast and low quality.
Nearest-neighbour interpolation. Sharp, but highly prone to aliasing.
Algorithm that looks visually similar to nearest-neighbour interpolation but tries to preserve pixel aspect ratio. Good for pixel art, since it results in minimal distortion of the artistic appearance.
Standard sinc-sinc interpolation kernel.
Cubic spline approximation of lanczos. No difference in performance, but has very slightly less ringing.
Elliptically weighted average version of lanczos, based on a jinc-sinc kernel. This is also popularly referred to as just "Jinc scaling". Slow but very high quality.
Gaussian kernel. Has certain ideal mathematical properties, but subjectively very blurry.
Cubic BC spline with parameters recommended by Mitchell and Netravali. Very little ringing.
Controls the kernel used for mixing frames temporally. The default value is "none", which disables frame mixing. For a full list of possible values, pass "help" to this option. The most important values are:
Disables frame mixing, giving a result equivalent to "nearest neighbour" semantics.
Oversamples the input video to create a "Smooth Motion"-type effect: if an output frame would exactly fall on the transition between two video frames, it is blended according to the relative overlap. This is the recommended option whenever preserving the original subjective appearance is desired.
Larger filter kernel that smoothly interpolates multiple frames in a manner designed to eliminate ringing and other artefacts as much as possible. This is the recommended option wherever maximum visual smoothness is desired.
Linear blend/fade between frames. Especially useful for constructing e.g. slideshows.
Configures the size of scaler LUTs, ranging from 1 to 256. The default of 0 will pick libplacebo's internal default, typically 64.
Enables anti-ringing (for non-EWA filters). The value (between 0.0 and 1.0) configures the strength of the anti-ringing algorithm. May increase aliasing if set too high. Disabled by default.
Enable sigmoidal compression during upscaling. Reduces ringing slightly. Enabled by default.
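
For example, a sketch (file names are placeholders) that upscales to 4K with the EWA Lanczos kernel and mild anti-ringing:

ffmpeg -init_hw_device vulkan -i input.mp4 -vf "libplacebo=w=3840:h=2160:upscaler=ewa_lanczos:antiringing=0.5" output.mp4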

Debanding

Libplacebo comes with a built-in debanding filter that is good at counteracting many common sources of banding and blocking. Turning this on is highly recommended whenever quality is desired.

deband
Enable (fast) debanding algorithm. Disabled by default.
Number of deband iterations of the debanding algorithm. Each iteration is performed with progressively increased radius (and diminished threshold). Recommended values are in the range 1 to 4. Defaults to 1.
Debanding filter strength. Higher numbers lead to more aggressive debanding. Defaults to 4.0.
Debanding filter radius. A higher radius is better for slow gradients, while a lower radius is better for steep gradients. Defaults to 16.0.
Amount of extra output grain to add. Helps hide imperfections. Defaults to 6.0.

Color adjustment

A collection of subjective color controls. Not very rigorous, so the exact effect will vary somewhat depending on the input primaries and colorspace.

Brightness boost, between -1.0 and 1.0. Defaults to 0.0.
Contrast gain, between 0.0 and 16.0. Defaults to 1.0.
Saturation gain, between 0.0 and 16.0. Defaults to 1.0.
hue
Hue shift in radians, between -3.14 and 3.14. Defaults to 0.0. This will rotate the UV subvector, defaulting to BT.709 coefficients for RGB inputs.
Gamma adjustment, between 0.0 and 16.0. Defaults to 1.0.
Cone model to use for color blindness simulation. Accepts any combination of "l", "m" and "s". Here are some examples:
Deuteranomaly / deuteranopia (affecting 3%-4% of the population)
Protanomaly / protanopia (affecting 1%-2% of the population)
Monochromacy (very rare)
Achromatopsy (complete loss of daytime vision, extremely rare)
Gain factor for the cones specified by "cones", between 0.0 and 10.0. A value of 1.0 results in no change to color vision. A value of 0.0 (the default) simulates complete loss of those cones. Values above 1.0 result in exaggerating the differences between cones, which may help compensate for reduced color vision.
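
For example, a sketch (placeholder file names) applying a mild subjective color boost:

ffmpeg -init_hw_device vulkan -i input.mp4 -vf "libplacebo=saturation=1.2:contrast=1.05:brightness=0.02" output.mp4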

Peak detection

To help deal with sources that only have static HDR10 metadata (or no tagging whatsoever), libplacebo uses its own internal frame analysis compute shader to analyze source frames and adapt the tone mapping function in realtime. If this is too slow, or if exactly reproducible frame-perfect results are needed, it's recommended to turn this feature off.

Enable HDR peak detection. Ignores static MaxCLL/MaxFALL values in favor of dynamic detection from the input. Note that the detected values do not get written back to the output frames, they merely guide the internal tone mapping process. Enabled by default.
Peak detection smoothing period, between 0.0 and 1000.0. Higher values result in peak detection becoming less responsive to changes in the input. Defaults to 100.0.
Lower bound on the detected peak (relative to SDR white), between 0.0 and 100.0. Defaults to 1.0.
Lower and upper thresholds for scene change detection. Expressed in a logarithmic scale between 0.0 and 100.0. The defaults are 5.5 and 10.0, respectively. Setting either to a negative value disables this functionality.
Which percentile of the frame brightness histogram to use as the source peak for tone-mapping. Defaults to 99.995, a fairly conservative value. Setting this to 100.0 disables frame histogram measurement and instead uses the true peak brightness for tone-mapping.
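
For example, a sketch (placeholder file names) that keeps peak detection enabled but makes it less reactive, using the smoothing period and percentile options described above:

ffmpeg -init_hw_device vulkan -i hdr.mkv -vf "libplacebo=peak_detect=true:smoothing_period=500:percentile=99.5" out.mkv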

Tone mapping

The options in this section control how libplacebo performs tone-mapping and gamut-mapping when dealing with mismatches between wide-gamut or HDR content. In general, libplacebo relies on accurate source tagging and mastering display gamut information to produce the best results.

How to handle out-of-gamut colors that can occur as a result of colorimetric gamut mapping.
Do nothing, simply clip out-of-range colors to the RGB volume. Low quality but extremely fast.
Perceptually soft-clip colors to the gamut volume. This is the default.
Relative colorimetric hard-clip. Similar to "perceptual" but without the soft knee.
Saturation mapping, maps primaries directly to primaries in RGB space. Not recommended except for artificial computer graphics for which a bright, saturated display is desired.
Absolute colorimetric hard-clip. Performs no adjustment of the white point.
Hard-desaturates out-of-gamut colors towards white, while preserving the luminance. Has a tendency to distort the visual appearance of bright objects.
Linearly reduces content brightness to preserve saturated details, followed by clipping the remaining out-of-gamut colors.
Highlight out-of-gamut pixels (by inverting/marking them).
Linearly reduces chromaticity of the entire image to make it fit within the target color volume. Be careful when using this on BT.2020 sources without proper mastering metadata, as doing so will lead to excessive desaturation.
Tone-mapping algorithm to use. Available values are:
Automatic selection based on internal heuristics. This is the default.
Performs no tone-mapping, just clips out-of-range colors. Retains perfect color accuracy for in-range colors but completely destroys out-of-range information. Does not perform any black point adaptation. Not configurable.
EETF from SMPTE ST 2094-40 Annex B, which applies the Bezier curves from HDR10+ dynamic metadata to perform tone-mapping. The OOTF used is adjusted based on the ratio between the targeted and actual display peak luminances.
EETF from SMPTE ST 2094-10 Annex B.2, which takes into account the input signal average luminance in addition to the maximum/minimum. The configurable contrast parameter influences the slope of the linear output segment, defaulting to 1.0 for no increase/decrease in contrast. Note that this does not currently include the subjective gain/offset/gamma controls defined in Annex B.3.
EETF from the ITU-R Report BT.2390, a hermite spline roll-off with linear segment. The knee point offset is configurable. Note that this parameter defaults to 1.0, rather than the value of 0.5 from the ITU-R spec.
EETF from ITU-R Report BT.2446, method A. Designed for well-mastered HDR sources. Can be used for both forward and inverse tone mapping. Not configurable.
Simple spline consisting of two polynomials, joined by a single pivot point. The parameter gives the pivot point (in PQ space), defaulting to 0.30. Can be used for both forward and inverse tone mapping.
Simple non-linear, global tone mapping algorithm. The parameter specifies the local contrast coefficient at the display peak. Essentially, a parameter of 0.5 implies that the reference white will be about half as bright as when clipping. Defaults to 0.5, which results in the simplest formulation of this function.
Generalization of the reinhard tone mapping algorithm to support an additional linear slope near black. The tone mapping parameter indicates the trade-off between the linear section and the non-linear section. Essentially, for a given parameter x, every color value below x will be mapped linearly, while higher values get non-linearly tone-mapped. Values near 1.0 make this curve behave like "clip", while values near 0.0 make this curve behave like "reinhard". The default value is 0.3, which provides a good balance between colorimetric accuracy and preserving out-of-gamut details.
Piece-wise, filmic tone-mapping algorithm developed by John Hable for use in Uncharted 2, inspired by a similar tone-mapping algorithm used by Kodak. Popularized by its use in video games with HDR rendering. Preserves both dark and bright details very well, but comes with the drawback of changing the average brightness quite significantly. This is sort of similar to "reinhard" with parameter 0.24.
Fits a gamma (power) function to transfer between the source and target color spaces, effectively resulting in a perceptual hard-knee joining two roughly linear sections. This preserves details at all scales fairly accurately, but can result in an image with a muted or dull appearance. The parameter is used as the cutoff point, defaulting to 0.5.
Linearly stretches the input range to the output range, in PQ space. This will preserve all details accurately, but results in a significantly different average brightness. Can be used for inverse tone-mapping in addition to regular tone-mapping. The parameter can be used as an additional linear gain coefficient (defaulting to 1.0).
For tunable tone mapping functions, this parameter can be used to fine-tune the curve behavior. Refer to the documentation of "tonemapping". The default value of 0.0 is replaced by the curve's preferred default setting.
If enabled, this filter will also attempt stretching SDR signals to fill HDR output color volumes. Disabled by default.
Size of the tone-mapping LUT, between 2 and 1024. Defaults to 256. Note that this figure is squared when combined with "peak_detect".
Contrast recovery strength. If set to a value above 0.0, the source image will be divided into high-frequency and low-frequency components, and a portion of the high-frequency image is added back onto the tone-mapped output. May cause excessive ringing artifacts for some HDR sources, but can improve the subjective sharpness and detail left over in the image after tone-mapping. Defaults to 0.30.
Contrast recovery lowpass kernel size. Defaults to 3.5. Increasing or decreasing this will affect the visual appearance substantially. Has no effect when "contrast_recovery" is disabled.
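
For example, a sketch (placeholder file names) of an HDR-to-SDR conversion combining a BT.2390 tone-mapping curve with the output colorspace options described earlier:

ffmpeg -init_hw_device vulkan -i hdr10.mkv -vf "libplacebo=tonemapping=bt.2390:colorspace=bt709:color_primaries=bt709:color_trc=bt709:range=tv" sdr.mkv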

Dithering

By default, libplacebo will dither whenever necessary, which includes rendering to any integer format below 16-bit precision. It's recommended to always leave this on, since not doing so may result in visible banding in the output, even if the "debanding" filter is enabled. If maximum performance is needed, use "ordered_fixed" instead of disabling dithering.

Dithering method to use. Accepts the following values:
Disables dithering completely. May result in visible banding.
Dither with pseudo-blue noise. This is the default.
Tunable ordered dither pattern.
Faster ordered dither with a fixed size of 6. Texture-less.
Dither with white noise. Texture-less.
Dither LUT size, as log base 2, between 1 and 8. Defaults to 6, corresponding to a LUT size of "64x64".
Enables temporal dithering. Disabled by default.
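
For example, a sketch trading a little quality for speed while still avoiding banding, using the texture-less fixed ordered dither:

libplacebo=dithering=ordered_fixed:dither_temporal=true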

Custom shaders

libplacebo supports a number of custom shaders based on the mpv .hook GLSL syntax. A collection of such shaders can be found here: https://github.com/mpv-player/mpv/wiki/User-Scripts#user-shaders

A full description of the mpv shader format is beyond the scope of this section, but a summary can be found here: https://mpv.io/manual/master/#options-glsl-shader

Specifies a path to a custom shader file to load at runtime.
Specifies a complete custom shader as a raw string.
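
For example, a sketch (the shader file name is a placeholder) loading a shader from the collection linked above:

ffmpeg -init_hw_device vulkan -i input.mp4 -vf "libplacebo=custom_shader_path=shader.glsl" output.mp4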

Debugging / performance

All of the options in this section default off. They may be of assistance when attempting to squeeze the maximum performance at the cost of quality.

Disable anti-aliasing when downscaling.
Truncate polar (EWA) scaler kernels below this absolute magnitude, between 0.0 and 1.0.
Disable linear light scaling.
Disable built-in GPU sampling (forces LUT).
Forcibly disable FBOs, resulting in loss of almost all functionality, but offering the maximum possible speed.

Commands

This filter supports almost all of the above options as commands.

Examples

  • Tone-map input to standard gamut BT.709 output:
    libplacebo=colorspace=bt709:color_primaries=bt709:color_trc=bt709:range=tv
    
  • Rescale input to fit into standard 1080p, with high quality scaling:
    libplacebo=w=1920:h=1080:force_original_aspect_ratio=decrease:normalize_sar=true:upscaler=ewa_lanczos:downscaler=ewa_lanczos
    
  • Interpolate low FPS / VFR input to smoothed constant 60 fps output:
    libplacebo=fps=60:frame_mixer=mitchell_clamp
    
  • Convert input to standard sRGB JPEG:
    libplacebo=format=yuv420p:colorspace=bt470bg:color_primaries=bt709:color_trc=iec61966-2-1:range=pc
    
  • Use higher quality debanding settings:
    libplacebo=deband=true:deband_iterations=3:deband_radius=8:deband_threshold=6
    
  • Run this filter on the CPU, on systems with Mesa installed (and with the most expensive options disabled):
    ffmpeg ... -init_hw_device vulkan:llvmpipe ... -vf libplacebo=upscaler=none:downscaler=none:peak_detect=false
    
  • Suppress CPU-based AV1/H.274 film grain application in the decoder, in favor of doing it with this filter. Note that this is only a gain if the frames are either already on the GPU, or if you're using libplacebo for other purposes, since otherwise the VRAM roundtrip will more than offset any expected speedup.
    ffmpeg -export_side_data +film_grain ... -vf libplacebo=apply_filmgrain=true
    
  • Interop with VAAPI hwdec to avoid round-tripping through RAM:
    ffmpeg -init_hw_device vulkan -hwaccel vaapi -hwaccel_output_format vaapi ... -vf libplacebo
    

Calculate the VMAF (Video Multi-Method Assessment Fusion) score for a reference/distorted pair of input videos.

The first input is the distorted video, and the second input is the reference video.

The obtained VMAF score is printed through the logging system.

It requires Netflix's vmaf library (libvmaf) as a prerequisite. After installing the library it can be enabled using: "./configure --enable-libvmaf".

The filter has the following options:

A `|` delimited list of vmaf models. Each model can be configured with a number of parameters. Default value: "version=vmaf_v0.6.1"
A `|` delimited list of features. Each feature can be configured with a number of parameters.
Set the file path to be used to store log files.
Set the format of the log file (xml, json, csv, or sub).
Set the pool method to be used for computing vmaf. Options are "min", "harmonic_mean" or "mean" (default).
Set number of threads to be used when initializing libvmaf. Default value: 0, no threads.
Set frame subsampling interval to be used.

This filter also supports the framesync options.

Examples

  • In the examples below, a distorted video distorted.mpg is compared with a reference file reference.mpg.
  • Basic usage:
    ffmpeg -i distorted.mpg -i reference.mpg -lavfi libvmaf=log_path=output.xml -f null -
    
  • Example with multiple models:
    ffmpeg -i distorted.mpg -i reference.mpg -lavfi libvmaf='model=version=vmaf_v0.6.1\\:name=vmaf|version=vmaf_v0.6.1neg\\:name=vmaf_neg' -f null -
    
  • Example with multiple additional features:
    ffmpeg -i distorted.mpg -i reference.mpg -lavfi libvmaf='feature=name=psnr|name=ciede' -f null -
    
  • Example with options and different containers:
    ffmpeg -i distorted.mpg -i reference.mkv -lavfi "[0:v]settb=AVTB,setpts=PTS-STARTPTS[main];[1:v]settb=AVTB,setpts=PTS-STARTPTS[ref];[main][ref]libvmaf=log_fmt=json:log_path=output.json" -f null -
    

This is the CUDA variant of the libvmaf filter. It only accepts CUDA frames.

It requires Netflix's vmaf library (libvmaf) as a prerequisite. After installing the library it can be enabled using: "./configure --enable-nonfree --enable-ffnvcodec --enable-libvmaf".

Examples

Basic usage showing CUVID hardware decoding and CUDA scaling with scale_cuda:
ffmpeg \
    -hwaccel cuda -hwaccel_output_format cuda -codec:v av1_cuvid -i dis.obu \
    -hwaccel cuda -hwaccel_output_format cuda -codec:v av1_cuvid -i ref.obu \
    -filter_complex "
        [0:v]scale_cuda=format=yuv420p[dis]; \
        [1:v]scale_cuda=format=yuv420p[ref]; \
        [dis][ref]libvmaf_cuda=log_fmt=json:log_path=output.json
    " \
    -f null -

Apply a limited difference filter using the second and optionally third video stream.

The filter accepts the following options:

threshold
Set the threshold to use when allowing certain differences between video streams. Any absolute difference value lower than or equal to this threshold will pick pixel components from the first video stream.
Set the elasticity of soft thresholding when processing video streams. This value multiplied by the first one sets the second threshold. Any absolute difference value greater than or equal to the second threshold will pick pixel components from the second video stream. For values between the two thresholds, linear interpolation between the first and second video streams is used.
Enable processing of the reference (third) video stream. Disabled by default. If set, this video stream will be used for calculating the absolute difference with the first video stream.
Specify which planes will be processed. Defaults to all available.
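
For example, a sketch (placeholder file names) limiting the differences a denoiser introduced relative to the original:

ffmpeg -i denoised.mp4 -i original.mp4 -lavfi "limitdiff=threshold=0.01:elasticity=4" limited.mp4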

Commands

This filter supports all the above options as commands, except the reference option.

Limits the pixel component values to the specified range [min, max].

The filter accepts the following options:

Lower bound. Defaults to the lowest allowed value for the input.
Upper bound. Defaults to the highest allowed value for the input.
Specify which planes will be processed. Defaults to all available.
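
For example, a sketch clamping an 8-bit stream to broadcast (limited) range, assuming the first plane is luma:

limiter=min=16:max=235:planes=1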

Commands

This filter supports all the above options as commands.

Loop video frames.

The filter accepts the following options:

loop
Set the number of loops. Setting this value to -1 will result in infinite loops. Default is 0.
Set maximal size in number of frames. Default is 0.
Set first frame of loop. Default is 0.
Set the time of the loop start in seconds. Only used if the start option is set to -1.

Examples

  • Loop single first frame infinitely:
    loop=loop=-1:size=1:start=0
    
  • Loop single first frame 10 times:
    loop=loop=10:size=1:start=0
    
  • Loop 10 first frames 5 times:
    loop=loop=5:size=10:start=0
    

Apply a 1D LUT to an input video.

The filter accepts the following options:

file
Set the 1D LUT file name.

Currently supported formats:

Iridas
cineSpace
Select interpolation mode.

Available values are:

Use values from the nearest defined point.
Interpolate values using linear interpolation.
Interpolate values using cosine interpolation.
Interpolate values using cubic interpolation.
Interpolate values using spline interpolation.
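
For example, a sketch (the LUT file name is a placeholder) applying a 1D LUT with linear interpolation:

lut1d=file=grade.cube:interp=linear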

Commands

This filter supports all the above options as commands.

Apply a 3D LUT to an input video.

The filter accepts the following options:

file
Set the 3D LUT file name.

Currently supported formats:

3dl
AfterEffects
Iridas
DaVinci
Pandora
cineSpace
Select interpolation mode.

Available values are:

Use values from the nearest defined point.
Interpolate values using the 8 points defining a cube.
Interpolate values using a tetrahedron.
Interpolate values using a pyramid.
Interpolate values using a prism.
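
For example, a sketch (the LUT file name is a placeholder) applying a 3D LUT with tetrahedral interpolation:

lut3d=file=look.cube:interp=tetrahedral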

Commands

This filter supports the "interp" option as commands.

Turn certain luma values into transparency.

The filter accepts the following options:

threshold
Set the luma value which will be used as the base for transparency. Default value is 0.
Set the range of luma values to be keyed out. Default value is 0.01.
Set the range of softness. Default value is 0. Use this to control gradual transition from zero to full transparency.
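
For example, a sketch turning near-black pixels transparent with a soft edge (converting to an alpha-capable format first):

format=yuva420p,lumakey=threshold=0:tolerance=0.05:softness=0.05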

Commands

This filter supports the same commands as options. The command accepts the same syntax as the corresponding option.

If the specified expression is not valid, it is kept at its current value.

Compute a look-up table for binding each pixel component input value to an output value, and apply it to the input video.

lutyuv applies a lookup table to a YUV input video, lutrgb to an RGB input video.

These filters accept the following parameters:

set first pixel component expression
set second pixel component expression
set third pixel component expression
set fourth pixel component expression, corresponds to the alpha component
set red component expression
set green component expression
set blue component expression
alpha component expression
set Y/luma component expression
set U/Cb component expression
set V/Cr component expression

Each of them specifies the expression to use for computing the lookup table for the corresponding pixel component values.

The exact component associated with each of the c* options depends on the input format.

The lut filter requires either YUV or RGB pixel formats in input, lutrgb requires RGB pixel formats in input, and lutyuv requires YUV.

The expressions can contain the following constants and functions:

The input width and height.
The input value for the pixel component.
The input value, clipped to the minval-maxval range.
The maximum value for the pixel component.
The minimum value for the pixel component.
The negated value for the pixel component value, clipped to the minval-maxval range; it corresponds to the expression "maxval-clipval+minval".
The computed value in val, clipped to the minval-maxval range.
The computed gamma correction value of the pixel component value, clipped to the minval-maxval range. It corresponds to the expression "pow((clipval-minval)/(maxval-minval)\,gamma)*(maxval-minval)+minval"

All expressions default to "clipval".

Commands

This filter supports the same commands as options.

Examples

  • Negate input video:
    lutrgb="r=maxval+minval-val:g=maxval+minval-val:b=maxval+minval-val"
    lutyuv="y=maxval+minval-val:u=maxval+minval-val:v=maxval+minval-val"
    

    The above is the same as:

    lutrgb="r=negval:g=negval:b=negval"
    lutyuv="y=negval:u=negval:v=negval"
    
  • Negate luma:
    lutyuv=y=negval
    
  • Remove chroma components, turning the video into a graytone image:
    lutyuv="u=128:v=128"
    
  • Apply a luma burning effect:
    lutyuv="y=2*val"
    
  • Remove green and blue components:
    lutrgb="g=0:b=0"
    
  • Set a constant alpha channel value on input:
    format=rgba,lutrgb=a="maxval-minval/2"
    
  • Correct luma gamma by a factor of 0.5:
    lutyuv=y=gammaval(0.5)
    
  • Discard least significant bits of luma:
    lutyuv=y='bitand(val, 128+64+32)'
    
  • Technicolor like effect:
    lutyuv=u='(val-maxval/2)*2+maxval/2':v='(val-maxval/2)*2+maxval/2'
    

The "lut2" filter takes two input streams and outputs one stream.

The "tlut2" (time lut2) filter takes two consecutive frames from one single stream.

This filter accepts the following parameters:

set first pixel component expression
set second pixel component expression
set third pixel component expression
set fourth pixel component expression, corresponds to the alpha component
set output bit depth, only available for the "lut2" filter. Default is 0, which means the bit depth is automatically picked from the first input format.

The "lut2" filter also supports the framesync options.

Each of them specifies the expression to use for computing the lookup table for the corresponding pixel component values.

The exact component associated with each of the c* options depends on the input formats.

The expressions can contain the following constants:

The input width and height.
The first input value for the pixel component.
The second input value for the pixel component.
The first input video bit depth.
The second input video bit depth.

All expressions default to "x".

Commands

This filter supports all the above options as commands, except the "d" option.

Examples

  • Highlight differences between two RGB video streams:
    lut2='ifnot(x-y,0,pow(2,bdx)-1):ifnot(x-y,0,pow(2,bdx)-1):ifnot(x-y,0,pow(2,bdx)-1)'
    
  • Highlight differences between two YUV video streams:
    lut2='ifnot(x-y,0,pow(2,bdx)-1):ifnot(x-y,pow(2,bdx-1),pow(2,bdx)-1):ifnot(x-y,pow(2,bdx-1),pow(2,bdx)-1)'
    
  • Show max difference between two video streams:
    lut2='if(lt(x,y),0,if(gt(x,y),pow(2,bdx)-1,pow(2,bdx-1))):if(lt(x,y),0,if(gt(x,y),pow(2,bdx)-1,pow(2,bdx-1))):if(lt(x,y),0,if(gt(x,y),pow(2,bdx)-1,pow(2,bdx-1)))'
    

Clamp the first input stream using the second and third input streams.

Returns the value of the first stream, clamped between the second input stream minus undershoot and the third input stream plus overshoot.

This filter accepts the following options:

Default value is 0.
Default value is 0.
Set which planes will be processed as a bitmap; unprocessed planes will be copied from the first stream. The default value is 0xf, meaning all planes will be processed.
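
For example, a sketch (placeholder file names) clamping a processed stream between per-pixel bounds taken from two other streams:

ffmpeg -i processed.mp4 -i lower.mp4 -i upper.mp4 -lavfi "[0:v][1:v][2:v]maskedclamp" clamped.mp4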

Commands

This filter supports all the above options as commands.

Merge the second and third input streams into the output stream using the absolute difference between the second input stream and the first input stream, and the absolute difference between the third input stream and the first input stream. The value is picked from the second input stream if the second absolute difference is greater than the first one, and from the third input stream otherwise.

This filter accepts the following options:

Set which planes will be processed as a bitmap; unprocessed planes will be copied from the first stream. The default value is 0xf, meaning all planes will be processed.

Commands

This filter supports all the above options as commands.

Merge the first input stream with the second input stream using per pixel weights in the third input stream.

A value of 0 in the third stream pixel component means that the pixel component from the first stream is returned unchanged, while the maximum value (e.g. 255 for 8-bit videos) means that the pixel component from the second stream is returned unchanged. Intermediate values define the amount of merging between both input streams' pixel components.

This filter accepts the following options:

Set which planes will be processed as a bitmap; unprocessed planes will be copied from the first stream. The default value is 0xf, meaning all planes will be processed.
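
For example, a sketch (placeholder file names) blending a blurred variant into the original only where a mask stream is bright:

ffmpeg -i original.mp4 -i blurred.mp4 -i mask.mp4 -lavfi "[0:v][1:v][2:v]maskedmerge" merged.mp4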

Commands

This filter supports all the above options as commands.

Merge the second and third input streams into the output stream using the absolute difference between the second input stream and the first input stream, and the absolute difference between the third input stream and the first input stream. The value is picked from the second input stream if the second absolute difference is less than the first one, and from the third input stream otherwise.

This filter accepts the following options:

Set which planes will be processed as a bitmap; unprocessed planes will be copied from the first stream. The default value is 0xf, meaning all planes will be processed.

Commands

This filter supports all the above options as commands.

Pick pixels comparing absolute difference of two video streams with fixed threshold.

If the absolute difference between a pixel component of the first and second video streams is lower than or equal to the user-supplied threshold, the pixel component from the first video stream is picked; otherwise the pixel component from the second video stream is picked.

This filter accepts the following options:

threshold
Set threshold used when picking pixels from absolute difference from two input video streams.
Set which planes will be processed as a bitmap; unprocessed planes will be copied from the second stream. The default value is 0xf, meaning all planes will be processed.
Set mode of filter operation. Can be "abs" or "diff". Default is "abs".
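
For example, a sketch (placeholder file names) picking pixels from the first stream wherever it differs from the second by at most 16:

ffmpeg -i a.mp4 -i b.mp4 -lavfi "maskedthreshold=threshold=16" out.mp4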

Commands

This filter supports all the above options as commands.

Create a mask from the input video.

For example, it is useful for creating motion masks after the "tblend" filter.

This filter accepts the following options:

Set the low threshold. Any pixel component lower than or equal to this value will be set to 0.
Set the high threshold. Any pixel component higher than this value will be set to the maximum value allowed for the current pixel format.
Set planes to filter, by default all available planes are filtered.
Fill all frame pixels with this value.
Set the max average pixel value for the frame. If the sum of all pixel components is higher than this average, the output frame will be completely filled with the value set by the fill option. Typically useful for scene changes when used in combination with the "tblend" filter.
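
For example, a sketch building a motion mask from frame-to-frame differences, as suggested above:

tblend=all_mode=difference,maskfun=low=10:high=20:fill=0:sum=256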

Commands

This filter supports all the above options as commands.

Apply motion-compensation deinterlacing.

It needs one field per frame as input and must thus be used together with yadif=1/3 or equivalent.

This filter accepts the following options:

Set the deinterlacing mode.

It accepts one of the following values:

use iterative motion estimation
like slow, but use multiple reference frames.

Default value is fast.

Set the picture field parity assumed for the input video. It must be one of the following values:
0, tff
assume top field first
1, bff
assume bottom field first

Default value is bff.

qp
Set per-block quantization parameter (QP) used by the internal encoder.

Higher values should result in a smoother motion vector field but less optimal individual vectors. Default value is 1.
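
For example, a sketch feeding mcdeint one field per frame via yadif, as required above:

yadif=mode=send_field,mcdeint=mode=medium:parity=tff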

Pick the median pixel from a rectangle defined by radius.

This filter accepts the following options:

Set horizontal radius size. Default value is 1. Allowed range is integer from 1 to 127.
Set which planes to process. Default is 15, which is all available planes.
Set vertical radius size. Default value is 0. Allowed range is integer from 0 to 127. If it is 0, value will be picked from horizontal "radius" option.
Set the median percentile. Default value is 0.5. The default value of 0.5 will always pick the median value, while 0 will pick minimum values, and 1 maximum values.
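
For example, a sketch applying a 5x5 median (radius 2) to all planes:

median=radius=2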

Commands

This filter supports the same commands as options. The command accepts the same syntax as the corresponding option.

If the specified expression is not valid, it is kept at its current value.

Merge color channel components from several video streams.

The filter accepts up to 4 input streams, and merges the selected input planes into the output video.

This filter accepts the following options:

Set input to output plane mapping. Default is 0.

The mapping is specified as a bitmap. It should be specified as a hexadecimal number in the form 0xAa[Bb[Cc[Dd]]]. 'Aa' describes the mapping for the first plane of the output stream. 'A' sets the number of the input stream to use (from 0 to 3), and 'a' the plane number of the corresponding input to use (from 0 to 3). The rest of the mappings are similar: 'Bb' describes the mapping for the output stream second plane, 'Cc' describes the mapping for the output stream third plane and 'Dd' describes the mapping for the output stream fourth plane.

format
Set output pixel format. Default is "yuva444p".
Set input to output stream mapping for output Nth plane. Default is 0.
Set input to output plane mapping for output Nth plane. Default is 0.

Examples

  • Merge three gray video streams of same width and height into single video stream:
    [a0][a1][a2]mergeplanes=0x001020:yuv444p
    
  • Merge 1st yuv444p stream and 2nd gray video stream into yuva444p video stream:
    [a0][a1]mergeplanes=0x00010210:yuva444p
    
  • Swap Y and A plane in yuva444p stream:
    format=yuva444p,mergeplanes=0x03010200:yuva444p
    
  • Swap U and V plane in yuv420p stream:
    format=yuv420p,mergeplanes=0x000201:yuv420p
    
  • Cast a rgb24 clip to yuv444p:
    format=rgb24,mergeplanes=0x000102:yuv444p
    

Estimate and export motion vectors using block matching algorithms. Motion vectors are stored in frame side data to be used by other filters.

This filter accepts the following options:

Specify the motion estimation method. Accepts one of the following values:
Exhaustive search algorithm.
Three step search algorithm.
Two dimensional logarithmic search algorithm.
New three step search algorithm.
Four step search algorithm.
Diamond search algorithm.
Hexagon-based search algorithm.
Enhanced predictive zonal search algorithm.
Uneven multi-hexagon search algorithm.

Default value is esa.

Macroblock size. Default 16.
Search parameter. Default 7.
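
For example, a sketch exporting motion vectors using the faster hexagon-based search:

mestimate=method=hexbs:mb_size=16:search_param=24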

Apply Midway Image Equalization effect using two video streams.

Midway Image Equalization adjusts a pair of images to have the same histogram, while maintaining their dynamics as much as possible. It's useful for e.g. matching exposures from a pair of stereo cameras.

This filter has two inputs and one output, which must be of the same pixel format, but may be of different sizes. The output of the filter is the first input adjusted with the midway histogram of both inputs.

This filter accepts the following option:

Set which planes to process. Default is 15, which is all available planes.
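
For example, a sketch (placeholder file names) adjusting the first input towards the midway histogram of both inputs:

ffmpeg -i left.mp4 -i right.mp4 -lavfi "[0:v][1:v]midequalizer" matched.mp4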

Convert the video to specified frame rate using motion interpolation.

This filter accepts the following options:

fps
Specify the output frame rate. This can be rational, e.g. "60000/1001". Frames are dropped if fps is lower than the source fps. Default is 60.
Motion interpolation mode. Following values are accepted:
Duplicate previous or next frame for interpolating new ones.
blend
Blend source frames. Interpolated frame is mean of previous and next frames.
Motion compensated interpolation. Following options are effective when this mode is selected:
Motion compensation mode. Following values are accepted:
Overlapped block motion compensation.
Adaptive overlapped block motion compensation. Window weighting coefficients are controlled adaptively according to the reliabilities of the neighboring motion vectors to reduce oversmoothing.

Default mode is obmc.

Motion estimation mode. Following values are accepted:
Bidirectional motion estimation. Motion vectors are estimated for each source frame in both forward and backward directions.
Bilateral motion estimation. Motion vectors are estimated directly for interpolated frame.

Default mode is bilat.

The algorithm to be used for motion estimation. Following values are accepted:
Exhaustive search algorithm.
Three step search algorithm.
Two dimensional logarithmic search algorithm.
New three step search algorithm.
Four step search algorithm.
Diamond search algorithm.
Hexagon-based search algorithm.
Enhanced predictive zonal search algorithm.
Uneven multi-hexagon search algorithm.

Default algorithm is epzs.

Macroblock size. Default 16.
Motion estimation search parameter. Default 32.
Enable variable-size block motion compensation. Motion estimation is applied with smaller block sizes at object boundaries in order to make them less blurry. Default is 0 (disabled).
Scene change detection method. A scene change causes motion vectors to point in random directions, so scene change detection replaces interpolated frames with duplicate ones. May not be needed for other modes. Following values are accepted:
Disable scene change detection.
Frame difference. Corresponding pixel values are compared and if the difference satisfies scd_threshold, a scene change is detected.

Default method is fdiff.

Scene change detection threshold. Default is 10.
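
For example, a sketch converting to 60 fps with motion-compensated interpolation and variable-size blocks:

minterpolate=fps=60:mi_mode=mci:mc_mode=aobmc:me_mode=bidir:vsbmc=1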

Mix several video input streams into one video stream.

A description of the accepted options follows.

The number of inputs. If unspecified, it defaults to 2.
Specify the weight of each input video stream as a sequence. Each weight is separated by a space. If the number of weights is smaller than the number of inputs, the last specified weight will be used for all remaining unset weights.
scale
Specify the scale. If set, it will be multiplied by the sum of each weight multiplied by the pixel values to give the final destination pixel value. By default, scale is auto-scaled to the sum of the weights.
Set which planes to filter. Default is all. Allowed range is from 0 to 15.
Specify how end of stream is determined.
The duration of the longest input. (default)
The duration of the shortest input.
The duration of the first input.
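
For example, a sketch (placeholder file names) averaging three inputs with the middle one weighted double:

ffmpeg -i a.mp4 -i b.mp4 -i c.mp4 -lavfi "mix=inputs=3:weights='1 2 1'" mixed.mp4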

Commands

This filter supports the following commands:

scale
The syntax is the same as the option with the same name.

Convert the video to gray using a custom color filter.

A description of the accepted options follows.

Set the chroma blue spot. Allowed range is from -1 to 1. Default value is 0.
Set the chroma red spot. Allowed range is from -1 to 1. Default value is 0.
Set the color filter size. Allowed range is from 0.1 to 10. Default value is 1.
Set the highlights strength. Allowed range is from 0 to 1. Default value is 0.
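
For example, a sketch shifting both chroma spots to emulate a colored lens filter:

monochrome=cb=-0.4:cr=0.4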

Commands

This filter supports all the above options as commands.

This filter allows applying the main morphological grayscale transforms, erode and dilate, with arbitrary structures set in the second input stream.

Unlike the erosion and dilation filters, which use a naive and much slower implementation, the "morpho" filter should be used when speed is critical.

A description of the accepted options follows.

Set the morphological transform to apply; can be one of erode, dilate, open, close, gradient, tophat or blackhat.

Default is "erode".

Set planes to filter, by default all planes except alpha are filtered.
Set which structure video frames will be processed from the second input stream; can be first or all. Default is all.
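
For example, a sketch (placeholder file names) dilating the first input with a structuring element taken from a second input:

ffmpeg -i input.mp4 -i structure.png -lavfi "[0:v][1:v]morpho=mode=dilate" dilated.mp4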

The "morpho" filter also supports the framesync options.

Commands

This filter supports the same commands as options.

Drop frames that do not differ greatly from the previous frame in order to reduce frame rate.

The main use of this filter is for very-low-bitrate encoding (e.g. streaming over dialup modem), but it could in theory be used for fixing movies that were inverse-telecined incorrectly.

A description of the accepted options follows.

Set the maximum number of consecutive frames which can be dropped (if positive), or the minimum interval between dropped frames (if negative). If the value is 0, the frame is dropped disregarding the number of previous sequentially dropped frames.

Default value is 0.

Set the maximum number of consecutive similar frames to ignore before starting to drop them. If the value is 0, the frame is dropped disregarding the number of previous sequentially similar frames.

Default value is 0.

Set the dropping threshold values.

Values for hi and lo are for 8x8 pixel blocks and represent actual pixel value differences, so a threshold of 64 corresponds to 1 unit of difference for each pixel, or the same spread out differently over the block.

A frame is a candidate for dropping if no 8x8 blocks differ by more than a threshold of hi, and if no more than frac blocks (1 meaning the whole image) differ by more than a threshold of lo.

Default value for hi is 64*12, default value for lo is 64*5, and default value for frac is 0.33.
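
For example, a sketch dropping near-duplicate frames and regenerating timestamps afterwards:

mpdecimate,setpts=N/FRAME_RATE/TB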

Obtain the MSAD (Mean Sum of Absolute Differences) between two input videos.

This filter takes two input videos.

Both input videos must have the same resolution and pixel format for this filter to work correctly. It also assumes that both inputs have the same number of frames, which are compared one by one.

The obtained per component, average, min and max MSAD is printed through the logging system.

The filter stores the calculated MSAD of each frame in frame metadata.

This filter also supports the framesync options.

In the example below, the processed input file main.mpg is compared with the reference file ref.mpg.

ffmpeg -i main.mpg -i ref.mpg -lavfi msad -f null -

Multiply the pixel values of the first video stream by the pixel values of the second video stream.

The filter accepts the following options:

scale
Set the scale applied to the second video stream. Default is 1. Allowed range is from 0 to 9.
Set the offset applied to the second video stream. Default is 0.5. Allowed range is from -1 to 1.
Specify planes from input video stream that will be processed. By default all planes are processed.
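
For example, a sketch (placeholder file names) multiplying two streams with the default scale and offset:

ffmpeg -i a.mp4 -i b.mp4 -lavfi "multiply" out.mp4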

Commands

This filter supports the same commands as options.

Negate (invert) the input video.

It accepts the following option:

Set components to negate.

Available values for components are:

With value 1, it negates the alpha component, if present. Default value is 0.

Commands

This filter supports the same commands as options.

Denoise frames using Non-Local Means algorithm.

Each pixel is adjusted by looking for other pixels with similar contexts. This context similarity is defined by comparing their surrounding patches of size pxp. Patches are searched in an area of rxr around the pixel.

Note that the research area defines centers for patches, which means some patches will be made of pixels outside that research area.

The filter accepts the following options.

Set denoising strength. Default is 1.0. Must be in range [1.0, 30.0].
Set patch size. Default is 7. Must be odd number in range [0, 99].
Same as p but for chroma planes.

The default value is 0 and means automatic.

Set research size. Default is 15. Must be odd number in range [0, 99].
Same as r but for chroma planes.

The default value is 0 and means automatic.
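
For example, a sketch of moderate denoising with a larger research window:

nlmeans=s=3:p=7:r=21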

Deinterlace video using neural network edge directed interpolation.

This filter accepts the following options:

Mandatory option; the filter cannot work without the binary file. The file can currently be found here: https://github.com/dubhater/vapoursynth-nnedi3/blob/master/src/nnedi3_weights.bin
Set which frames to deinterlace, by default it is "all". Can be "all" or "interlaced".
field
Set mode of operation.

Can be one of the following:

Use frame flags, both fields.
Use frame flags, single field.
Use top field only.
Use bottom field only.
Use both fields, top first.
Use both fields, bottom first.
Set which planes to process; by default all planes are processed.
Set size of local neighborhood around each pixel, used by the predictor neural network.

Can be one of the following:

Set the number of neurons in predictor neural network. Can be one of the following:
Controls the number of different neural network predictions that are blended together to compute the final output value. Can be "fast", default or "slow".
Set which set of weights to use in the predictor. Can be one of the following:
weights trained to minimize absolute error
weights trained to minimize squared error
Controls whether or not the prescreener neural network is used to decide which pixels should be processed by the predictor neural network and which can be handled by simple cubic interpolation. The prescreener is trained to know whether cubic interpolation will be sufficient for a pixel or whether it should be predicted by the predictor nn. The computational complexity of the prescreener nn is much less than that of the predictor nn. Since most pixels can be handled by cubic interpolation, using the prescreener generally results in much faster processing. The prescreener is pretty accurate, so the difference between using it and not using it is almost always unnoticeable.

Can be one of the following:

Default is "new".

Commands

This filter supports the same commands as options, excluding the weights option.

Force libavfilter not to use any of the specified pixel formats for the input to the next filter.

It accepts the following parameters:

A '|'-separated list of pixel format names, such as "pix_fmts=yuv420p|monow|rgb24".

Examples

  • Force libavfilter to use a format different from yuv420p for the input to the vflip filter:
    noformat=pix_fmts=yuv420p,vflip
    
  • Convert the input video to any of the formats not contained in the list:
    noformat=yuv420p|yuv444p|yuv410p
    

Add noise to the input video frames.

The filter accepts the following options:

Set the noise seed for a specific pixel component or all pixel components in case of all_seed. Default value is 123457.
Set the noise strength for a specific pixel component or all pixel components in case of all_strength. Default value is 0. Allowed range is [0, 100].
Set pixel component flags or set flags for all components in case of all_flags. Available values for component flags are:
averaged temporal noise (smoother)
mix random noise with a (semi)regular pattern
temporal noise (noise pattern changes between frames)
uniform noise (gaussian otherwise)

Examples

Add temporal and uniform noise to input video:

noise=alls=20:allf=t+u

Normalize RGB video (aka histogram stretching, contrast stretching). See: https://en.wikipedia.org/wiki/Normalization_(image_processing)

For each channel of each frame, the filter computes the input range and maps it linearly to the user-specified output range. The output range defaults to the full dynamic range from pure black to pure white.

Temporal smoothing can be used on the input range to reduce flickering (rapid changes in brightness) caused when small dark or bright objects enter or leave the scene. This is similar to the auto-exposure (automatic gain control) on a video camera, and, like a video camera, it may cause a period of over- or under-exposure of the video.

The R,G,B channels can be normalized independently, which may cause some color shifting, or linked together as a single channel, which prevents color shifting. Linked normalization preserves hue. Independent normalization does not, so it can be used to remove some color casts. Independent and linked normalization can be combined in any ratio.

The normalize filter accepts the following options:

Colors which define the output range. The minimum input value is mapped to the blackpt. The maximum input value is mapped to the whitept. The defaults are black and white respectively. Specifying white for blackpt and black for whitept will give color-inverted, normalized video. Shades of grey can be used to reduce the dynamic range (contrast). Specifying saturated colors here can create some interesting effects.
The number of previous frames to use for temporal smoothing. The input range of each channel is smoothed using a rolling average over the current frame and the previous smoothing frames. The default is 0 (no temporal smoothing).
Controls the ratio of independent (color shifting) channel normalization to linked (color preserving) normalization. 0.0 is fully linked, 1.0 is fully independent. Defaults to 1.0 (fully independent).
Overall strength of the filter. 1.0 is full strength. 0.0 is a rather expensive no-op. Defaults to 1.0 (full strength).

Commands

This filter supports the same commands as options, excluding the smoothing option. The command accepts the same syntax as the corresponding option.

If the specified expression is not valid, it is kept at its current value.

Examples

Stretch video contrast to use the full dynamic range, with no temporal smoothing; may flicker depending on the source content:

normalize=blackpt=black:whitept=white:smoothing=0

As above, but with 50 frames of temporal smoothing; flicker should be reduced, depending on the source content:

normalize=blackpt=black:whitept=white:smoothing=50

As above, but with hue-preserving linked channel normalization:

normalize=blackpt=black:whitept=white:smoothing=50:independence=0

As above, but with half strength:

normalize=blackpt=black:whitept=white:smoothing=50:independence=0:strength=0.5

Map the darkest input color to red, the brightest input color to cyan:

normalize=blackpt=red:whitept=cyan

Pass the video source unchanged to the output.

Optical Character Recognition

This filter uses Tesseract for optical character recognition. To enable compilation of this filter, you need to configure FFmpeg with "--enable-libtesseract".

It accepts the following options:

Set datapath to tesseract data. Default is to use whatever was set at installation.
Set language, default is "eng".
Set character whitelist.
Set character blacklist.

The filter exports recognized text as the frame metadata "lavfi.ocr.text". The filter exports confidence of recognized words as the frame metadata "lavfi.ocr.confidence".
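
For example, a sketch (placeholder file name) printing the recognized text per frame via the metadata filter:

ffmpeg -i input.mp4 -vf "ocr,metadata=mode=print:key=lavfi.ocr.text" -f null -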

Apply a video transform using libopencv.

To enable this filter, install the libopencv library and headers and configure FFmpeg with "--enable-libopencv".

It accepts the following parameters:

The name of the libopencv filter to apply.
The parameters to pass to the libopencv filter. If not specified, the default values are assumed.

Refer to the official libopencv documentation for more precise information: http://docs.opencv.org/master/modules/imgproc/doc/filtering.html

Several libopencv filters are supported; see the following subsections.

dilate

Dilate an image by using a specific structuring element. It corresponds to the libopencv function "cvDilate".

It accepts the parameters: struct_el|nb_iterations.

struct_el represents a structuring element, and has the syntax: colsxrows+anchor_xxanchor_y/shape

cols and rows represent the number of columns and rows of the structuring element, anchor_x and anchor_y the anchor point, and shape the shape for the structuring element. shape must be "rect", "cross", "ellipse", or "custom".

If the value for shape is "custom", it must be followed by a string of the form "=filename". The file with name filename is assumed to represent a binary image, with each printable character corresponding to a bright pixel. When a custom shape is used, cols and rows are ignored; the number of columns and rows of the read file are assumed instead.

The default value for struct_el is "3x3+0x0/rect".

nb_iterations specifies the number of times the transform is applied to the image, and defaults to 1.

Some examples:

# Use the default values
ocv=dilate

# Dilate using a structuring element with a 5x5 cross, iterating two times
ocv=filter_name=dilate:filter_params=5x5+2x2/cross|2

# Read the shape from the file diamond.shape, iterating two times.
# The file diamond.shape may contain a pattern of characters like this
#   *
#  ***
# *****
#  ***
#   *
# The specified columns and rows are ignored
# but the anchor point coordinates are not
ocv=dilate:0x0+2x2/custom=diamond.shape|2

erode

Erode an image by using a specific structuring element. It corresponds to the libopencv function "cvErode".

It accepts the parameters: struct_el:nb_iterations, with the same syntax and semantics as the dilate filter.

smooth

Smooth the input video.

The filter takes the following parameters: type|param1|param2|param3|param4.

type is the type of smooth filter to apply, and must be one of the following values: "blur", "blur_no_scale", "median", "gaussian", or "bilateral". The default value is "gaussian".

The meaning of param1, param2, param3, and param4 depends on the smooth type. param1 and param2 accept positive integer values or 0. param3 and param4 accept floating point values.

The default value for param1 is 3. The default value for the other parameters is 0.

These parameters correspond to the parameters assigned to the libopencv function "cvSmooth".
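
For example, a sketch applying a 7x7 Gaussian smooth using the parameter syntax above:

ocv=filter_name=smooth:filter_params=gaussian|7|7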

2D Video Oscilloscope.

Useful to measure spatial impulse, step responses, chroma delays, etc.

It accepts the following parameters:

Set scope center x position.
Set scope center y position.
Set scope size, relative to frame diagonal.
Set scope tilt/rotation.
Set trace opacity.
Set trace center x position.
Set trace center y position.
Set trace width, relative to width of frame.
Set trace height, relative to height of frame.
Set which components to trace. By default it traces first three components.
Draw trace grid. By default is enabled.
Draw some statistics. By default is enabled.
Draw scope. By default is enabled.

Commands

This filter supports the same commands as options. The command accepts the same syntax as the corresponding option.

If the specified expression is not valid, it is kept at its current value.

Examples

  • Inspect full first row of video frame.
    oscilloscope=x=0.5:y=0:s=1
    
  • Inspect full last row of video frame.
    oscilloscope=x=0.5:y=1:s=1
    
  • Inspect full 5th line of video frame of height 1080.
    oscilloscope=x=0.5:y=5/1080:s=1
    
  • Inspect full last column of video frame.
    oscilloscope=x=1:y=0.5:s=1:t=1
    

Overlay one video on top of another.

It takes two inputs and has one output. The first input is the "main" video on which the second input is overlaid.

It accepts the following parameters:

A description of the accepted options follows.

Set the expression for the x and y coordinates of the overlaid video on the main video. Default value is "0" for both expressions. In case the expression is invalid, it is set to a huge value (meaning that the overlay will not be displayed within the output visible area).
See framesync.
Set when the expressions for x and y are evaluated.

It accepts the following values:

only evaluate expressions once during the filter initialization or when a command is processed
evaluate expressions for each incoming frame

Default value is frame.

See framesync.
format
Set the format for the output video.

It accepts the following values:

force YUV 4:2:0 8-bit planar output
force YUV 4:2:0 10-bit planar output
force YUV 4:2:2 8-bit planar output
force YUV 4:2:2 10-bit planar output
force YUV 4:4:4 8-bit planar output
force YUV 4:4:4 10-bit planar output
force RGB 8-bit packed output
force RGB 8-bit planar output
automatically pick format

Default value is yuv420.

See framesync.
Set format of alpha of the overlaid video, it can be straight or premultiplied. Default is straight.

The x and y expressions can contain the following parameters.

The main input width and height.
The overlay input width and height.
The computed values for x and y. They are evaluated for each new frame.
horizontal and vertical chroma subsample values of the output format. For example for the pixel format "yuv422p" hsub is 2 and vsub is 1.
the number of the input frame, starting from 0
the position in the file of the input frame, NAN if unknown; deprecated, do not use
The timestamp, expressed in seconds. It's NAN if the input timestamp is unknown.

This filter also supports the framesync options.

Note that the n, t variables are available only when evaluation is done per frame, and will evaluate to NAN when eval is set to init.

Be aware that frames are taken from each input video in timestamp order. Hence, if their initial timestamps differ, it is a good idea to pass the two inputs through a setpts=PTS-STARTPTS filter to have them begin at the same zero timestamp, as the example for the movie filter does.

You can chain together more overlays, but you should test the efficiency of such an approach.

Commands

This filter supports the following commands:

Modify the x and y of the overlay input. The command accepts the same syntax as the corresponding option.

If the specified expression is not valid, it is kept at its current value.

Examples

  • Draw the overlay at 10 pixels from the bottom right corner of the main video:
    overlay=main_w-overlay_w-10:main_h-overlay_h-10
    

    Using named options the example above becomes:

    overlay=x=main_w-overlay_w-10:y=main_h-overlay_h-10
    
  • Insert a transparent PNG logo in the bottom left corner of the input, using the ffmpeg tool with the "-filter_complex" option:
    ffmpeg -i input -i logo -filter_complex 'overlay=10:main_h-overlay_h-10' output
    
  • Insert 2 different transparent PNG logos (second logo on bottom right corner) using the ffmpeg tool:
    ffmpeg -i input -i logo1 -i logo2 -filter_complex 'overlay=x=10:y=H-h-10,overlay=x=W-w-10:y=H-h-10' output
    
  • Add a transparent color layer on top of the main video; "WxH" must specify the size of the main input to the overlay filter:
    color=color=red@.3:size=WxH [over]; [in][over] overlay [out]
    
  • Play an original video and a filtered version (here with the deshake filter) side by side using the ffplay tool:
    ffplay input.avi -vf 'split[a][b]; [a]pad=iw*2:ih[src]; [b]deshake[filt]; [src][filt]overlay=w'
    

    The above command is the same as:

    ffplay input.avi -vf 'split[b], pad=iw*2[src], [b]deshake, [src]overlay=w'
    
  • Make a sliding overlay appearing from the left to the right top part of the screen starting since time 2:
    overlay=x='if(gte(t,2), -w+(t-2)*20, NAN)':y=0
    
  • Compose output by putting two input videos side to side:
    ffmpeg -i left.avi -i right.avi -filter_complex "
    nullsrc=size=200x100 [background];
    [0:v] setpts=PTS-STARTPTS, scale=100x100 [left];
    [1:v] setpts=PTS-STARTPTS, scale=100x100 [right];
    [background][left]       overlay=shortest=1       [background+left];
    [background+left][right] overlay=shortest=1:x=100 [left+right]
    "
    
  • Mask 10-20 seconds of a video by applying the delogo filter to a section
    ffmpeg -i test.avi -codec:v:0 wmv2 -ar 11025 -b:v 9000k
    -vf '[in]split[split_main][split_delogo];[split_delogo]trim=start=360:end=371,delogo=0:0:640:480[delogoed];[split_main][delogoed]overlay=eof_action=pass[out]'
    masked.avi
    
  • Chain several overlays in cascade:
    nullsrc=s=200x200 [bg];
    testsrc=s=100x100, split=4 [in0][in1][in2][in3];
    [in0] lutrgb=r=0, [bg]   overlay=0:0     [mid0];
    [in1] lutrgb=g=0, [mid0] overlay=100:0   [mid1];
    [in2] lutrgb=b=0, [mid1] overlay=0:100   [mid2];
    [in3] null,       [mid2] overlay=100:100 [out0]
    

Overlay one video on top of another.

This is the CUDA variant of the overlay filter. It only accepts CUDA frames. The underlying input pixel formats have to match.

It takes two inputs and has one output. The first input is the "main" video on which the second input is overlaid.

It accepts the following parameters:

Set expressions for the x and y coordinates of the overlaid video on the main video.

They can contain the following parameters:

The main input width and height.
The overlay input width and height.
The computed values for x and y. They are evaluated for each new frame.
The ordinal index of the main input frame, starting from 0.
The byte offset position in the file of the main input frame, NAN if unknown. Deprecated, do not use.
The timestamp of the main input frame, expressed in seconds, NAN if unknown.

Default value is "0" for both expressions.

Set when the expressions for x and y are evaluated.

It accepts the following values:

Evaluate expressions once during filter initialization or when a command is processed.
Evaluate expressions for each incoming frame.

Default value is frame.

See framesync.
See framesync.
See framesync.

This filter also supports the framesync options.
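
A hedged end-to-end sketch of a full-GPU pipeline follows; the file names, the nv12 format choice and the h264_nvenc encoder are placeholders. The second input is converted to the same pixel format as the CUDA-decoded main video and uploaded with hwupload_cuda, satisfying the matching-format requirement above:
ffmpeg -hwaccel cuda -hwaccel_output_format cuda -i main.mp4 -i logo.png -filter_complex '[1:v]format=nv12,hwupload_cuda[ovr];[0:v][ovr]overlay_cuda=x=10:y=10' -c:v h264_nvenc output.mp4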

Apply Overcomplete Wavelet denoiser.

The filter accepts the following options:

Set depth.

Larger depth values will denoise lower frequency components more, but slow down filtering.

Must be an int in the range 8-16, default is 8.

Set luma strength.

Must be a double value in the range 0-1000, default is 1.0.

Set chroma strength.

Must be a double value in the range 0-1000, default is 1.0.
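
For example, a hedged sketch denoising chroma more strongly than luma (file names are placeholders; ls and cs are assumed to be the short names of the luma and chroma strength options):
ffmpeg -i input.mp4 -vf owdenoise=depth=9:ls=4:cs=8 output.mp4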

Add paddings to the input image, and place the original input at the provided x, y coordinates.

It accepts the following parameters:

Specify an expression for the size of the output image with the paddings added. If the value for width or height is 0, the corresponding input size is used for the output.

The width expression can reference the value set by the height expression, and vice versa.

The default value of width and height is 0.

Specify the offsets to place the input image at within the padded area, with respect to the top/left border of the output image.

The x expression can reference the value set by the y expression, and vice versa.

The default value of x and y is 0.

If x or y evaluate to a negative number, they'll be changed so the input image is centered on the padded area.

Specify the color of the padded area. For the syntax of this option, check the "Color" section in the ffmpeg-utils manual.

The default value of color is "black".

Specify when to evaluate width, height, x and y expression.

It accepts the following values:

Only evaluate expressions once during the filter initialization or when a command is processed.
Evaluate expressions for each incoming frame.

Default value is init.

Pad to an aspect ratio instead of a resolution.

The value for the width, height, x, and y options are expressions containing the following constants:

The input video width and height.
These are the same as in_w and in_h.
The output width and height (the size of the padded area), as specified by the width and height expressions.
These are the same as out_w and out_h.
The x and y offsets as specified by the x and y expressions, or NAN if not yet specified.
same as iw / ih
input sample aspect ratio
input display aspect ratio, it is the same as (iw / ih) * sar
The horizontal and vertical chroma subsample values. For example for the pixel format "yuv422p" hsub is 2 and vsub is 1.

Examples

  • Add paddings with the color "violet" to the input video. The output video size is 640x480, and the top-left corner of the input video is placed at column 0, row 40
    pad=640:480:0:40:violet
    

    The example above is equivalent to the following command:

    pad=width=640:height=480:x=0:y=40:color=violet
    
  • Pad the input to get an output with dimensions increased by 3/2, and put the input video at the center of the padded area:
    pad="3/2*iw:3/2*ih:(ow-iw)/2:(oh-ih)/2"
    
  • Pad the input to get a squared output with size equal to the maximum value between the input width and height, and put the input video at the center of the padded area:
    pad="max(iw\,ih):ow:(ow-iw)/2:(oh-ih)/2"
    
  • Pad the input to get a final w/h ratio of 16:9:
    pad="ih*16/9:ih:(ow-iw)/2:(oh-ih)/2"
    
  • In case of anamorphic video, in order to set the output display aspect correctly, it is necessary to use sar in the expression, according to the relation:
    (ih * X / ih) * sar = output_dar
    X = output_dar / sar
    

    Thus the previous example needs to be modified to:

    pad="ih*16/9/sar:ih:(ow-iw)/2:(oh-ih)/2"
    
  • Double the output size and put the input video in the bottom-right corner of the output padded area:
    pad="2*iw:2*ih:ow-iw:oh-ih"
    

Generate one palette for a whole video stream.

It accepts the following options:

Set the maximum number of colors to quantize in the palette. Note: the palette will still contain 256 colors; the unused palette entries will be black.
Create a palette of 255 colors maximum and reserve the last one for transparency. Reserving the transparency color is useful for GIF optimization. If not set, the maximum number of colors in the palette will be 256. You probably want to disable this option for a standalone image. Set by default.
Set the color that will be used as background for transparency.
Set statistics mode.

It accepts the following values:

Compute full frame histograms.
Compute histograms only for the part that differs from the previous frame. This might be relevant to give more importance to the moving part of your input if the background is static.
Compute new histogram for each frame.

Default value is full.

The filter also exports the frame metadata "lavfi.color_quant_ratio" ("nb_color_in / nb_color_out") which you can use to evaluate the degree of color quantization of the palette. This information is also visible at info logging level.

Examples

Generate a representative palette of a given video using ffmpeg:
ffmpeg -i input.mkv -vf palettegen palette.png

Use a palette to downsample an input video stream.

The filter takes two inputs: one video stream and a palette. The palette must be a 256-pixel image.

It accepts the following options:

Select dithering mode. Available algorithms are:
Ordered 8x8 bayer dithering (deterministic)
Dithering as defined by Paul Heckbert in 1982 (simple error diffusion). Note: this dithering is sometimes considered "wrong" and is included as a reference.
Floyd-Steinberg dithering (error diffusion)
Frankie Sierra dithering v2 (error diffusion)
Frankie Sierra dithering v2 "Lite" (error diffusion)
Frankie Sierra dithering v3 (error diffusion)
Burkes dithering (error diffusion)
Atkinson dithering by Bill Atkinson at Apple Computer (error diffusion)
Disable dithering.

Default is sierra2_4a.

When bayer dithering is selected, this option defines the scale of the pattern (how much the crosshatch pattern is visible). A low value means a more visible pattern for less banding, and a higher value means a less visible pattern at the cost of more banding.

The option must be an integer value in the range [0,5]. Default is 2.

If set, define the zone to process.
Only the changing rectangle will be reprocessed. This is similar to the GIF cropping/offsetting compression mechanism. This option can be useful for speed if only a part of the image is changing, and has use cases such as limiting the scope of the error diffusion dither to the rectangle that bounds the moving scene (it leads to more deterministic output if the scene doesn't change much, and as a result less moving noise and better GIF compression).

Default is none.

Take new palette for each output frame.
Set the alpha threshold for transparency. Alpha values above this threshold will be treated as completely opaque, and values below this threshold will be treated as completely transparent.

The option must be an integer value in the range [0,255]. Default is 128.

Examples

Use a palette (generated for example with palettegen) to encode a GIF using ffmpeg:
ffmpeg -i input.mkv -i palette.png -lavfi paletteuse output.gif
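
The two steps can also be combined into a single hedged command that feeds the same input both to palettegen and to paletteuse within one filtergraph (file names are placeholders):
ffmpeg -i input.mkv -filter_complex '[0:v]palettegen[pal];[0:v][pal]paletteuse' output.gif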

Correct perspective of video not recorded perpendicular to the screen.

A description of the accepted parameters follows.

Set coordinates expression for top left, top right, bottom left and bottom right corners. Default values are "0:0:W:0:0:H:W:H" with which perspective will remain unchanged. If the "sense" option is set to "source", then the specified points will be sent to the corners of the destination. If the "sense" option is set to "destination", then the corners of the source will be sent to the specified coordinates.

The expressions can use the following variables:

the width and height of the video frame.
Input frame count.
Output frame count.
Set interpolation for perspective correction.

It accepts the following values:

Default value is linear.

Set interpretation of coordinate options.

It accepts the following values:

0, source
Send point in the source specified by the given coordinates to the corners of the destination.
1, destination
Send the corners of the source to the point in the destination specified by the given coordinates.

Default value is source.

Set when the expressions for coordinates x0,y0,...x3,y3 are evaluated.

It accepts the following values:

only evaluate expressions once during the filter initialization or when a command is processed
evaluate expressions for each incoming frame

Default value is init.
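
For example, a hedged sketch that stretches a keystoned source, whose top edge is inset by 100 pixels on each side, back to the full frame (the inset amount is illustrative):
perspective=x0=100:y0=0:x1=W-100:y1=0:x2=0:y2=H:x3=W:y3=H:sense=source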

Delay interlaced video by one field time so that the field order changes.

The intended use is to fix PAL movies that have been captured with the opposite field order to the film-to-video transfer.

A description of the accepted parameters follows.

Set phase mode.

It accepts the following values:

Capture field order top-first, transfer bottom-first. Filter will delay the bottom field.
Capture field order bottom-first, transfer top-first. Filter will delay the top field.
Capture and transfer with the same field order. This mode only exists for the documentation of the other options to refer to, but if you actually select it, the filter will faithfully do nothing.
Capture field order determined automatically by field flags, transfer opposite. Filter selects among t and b modes on a frame by frame basis using field flags. If no field information is available, then this works just like u.
Capture unknown or varying, transfer opposite. Filter selects among t and b on a frame by frame basis by analyzing the images and selecting the alternative that produces best match between the fields.
Capture top-first, transfer unknown or varying. Filter selects among t and p using image analysis.
Capture bottom-first, transfer unknown or varying. Filter selects among b and p using image analysis.
Capture determined by field flags, transfer unknown or varying. Filter selects among t, b and p using field flags and image analysis. If no field information is available, then this works just like U. This is the default mode.
Both capture and transfer unknown or varying. Filter selects among t, b and p using image analysis only.

Commands

This filter supports all the above options as commands.
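
For example, a hedged sketch fixing material that was captured top-first but transferred bottom-first (file names are placeholders):
ffmpeg -i input.vob -vf phase=t output.mkv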

Reduce various flashes in video, so as to help users with epilepsy.

It accepts the following options:

Set how many frames to use when filtering. Default is 30.
Set detection threshold factor. Default is 1. Lower is stricter.
Set how many pixels to skip when sampling frames. Default is 1. Allowed range is from 1 to 1024.
Leave frames unchanged. Default is disabled.
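
For example, a hedged sketch using a 30-frame window and a stricter-than-default threshold (the values are illustrative):
photosensitivity=frames=30:threshold=0.5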

Pixel format descriptor test filter, mainly useful for internal testing. The output video should be equal to the input video.

For example:

format=monow, pixdesctest

can be used to test the monowhite pixel format descriptor definition.

Apply pixelization to video stream.

The filter accepts the following options:

Set block dimensions that will be used for pixelization. Default value is 16.
Set the mode of pixelization used.

Possible values are:

Default value is "avg".

Set what planes to filter. Default is to filter all planes.

Commands

This filter supports all options as commands.
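
For example, a hedged sketch pixelizing with 32x32 blocks (the block size is illustrative):
pixelize=width=32:height=32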

Display sample values of color channels. Mainly useful for checking color and levels. Minimum supported resolution is 640x480.

The filter accepts the following options:

Set scope X position, relative offset on X axis.
Set scope Y position, relative offset on Y axis.
Set scope width.
Set scope height.
Set window opacity. This window also holds statistics about pixel area.
Set window X position, relative offset on X axis.
Set window Y position, relative offset on Y axis.

Commands

This filter supports the same commands as options.

Enable the specified chain of postprocessing subfilters using libpostproc. This library should be automatically selected with a GPL build ("--enable-gpl"). Subfilters must be separated by '/' and can be disabled by prepending a '-'. Each subfilter and some options have a short and a long name that can be used interchangeably, i.e. dr/dering are the same.

The filter accepts the following options:

Set postprocessing subfilters string.

All subfilters share common options to determine their scope:

Honor the quality commands for this subfilter.
Do chrominance filtering, too (default).
Do luma filtering only (no chrominance).
Do chrominance filtering only (no luma).

These options can be appended after the subfilter name, separated by a '|'.

Available subfilters are:

Horizontal deblocking filter
Difference factor where higher values mean more deblocking (default: 32).
Flatness threshold where lower values mean more deblocking (default: 39).
Vertical deblocking filter
Difference factor where higher values mean more deblocking (default: 32).
Flatness threshold where lower values mean more deblocking (default: 39).
Accurate horizontal deblocking filter
Difference factor where higher values mean more deblocking (default: 32).
Flatness threshold where lower values mean more deblocking (default: 39).
Accurate vertical deblocking filter
Difference factor where higher values mean more deblocking (default: 32).
Flatness threshold where lower values mean more deblocking (default: 39).

The horizontal and vertical deblocking filters share the difference and flatness values so you cannot set different horizontal and vertical thresholds.

Experimental horizontal deblocking filter
Experimental vertical deblocking filter
Deringing filter
larger -> stronger filtering
larger -> stronger filtering
larger -> stronger filtering
Stretch luma to "0-255".
Linear blend deinterlacing filter that deinterlaces the given block by filtering all lines with a "(1 2 1)" filter.
Linear interpolating deinterlacing filter that deinterlaces the given block by linearly interpolating every second line.
Cubic interpolating deinterlacing filter deinterlaces the given block by cubically interpolating every second line.
Median deinterlacing filter that deinterlaces the given block by applying a median filter to every second line.
FFmpeg deinterlacing filter that deinterlaces the given block by filtering every second line with a "(-1 4 2 4 -1)" filter.
Vertically applied FIR lowpass deinterlacing filter that deinterlaces the given block by filtering all lines with a "(-1 2 6 2 -1)" filter.
Overrides the quantizer table from the input with the constant quantizer you specify.
Quantizer to use
Default pp filter combination ("hb|a,vb|a,dr|a")
Fast pp filter combination ("h1|a,v1|a,dr|a")
High quality pp filter combination ("ha|a|128|7,va|a,dr|a")

Examples

  • Apply horizontal and vertical deblocking, deringing and automatic brightness/contrast:
    pp=hb/vb/dr/al
    
  • Apply default filters without brightness/contrast correction:
    pp=de/-al
    
  • Apply default filters and temporal denoiser:
    pp=default/tmpnoise|1|2|3
    
  • Apply deblocking on luma only, and switch vertical deblocking on or off automatically depending on available CPU time:
    pp=hb|y/vb|a
    

Apply Postprocessing filter 7. It is a variant of the spp filter, similar to spp=6 with a 7-point DCT, where only the center sample is used after IDCT.

The filter accepts the following options:

qp
Force a constant quantization parameter. It accepts an integer in the range 0 to 63. If not set, the filter will use the QP from the video stream (if available).
Set thresholding mode. Available modes are:
Set hard thresholding.
Set soft thresholding (better de-ringing effect, but likely blurrier).
Set medium thresholding (good results, default).
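
A minimal hedged sketch forcing a constant quantizer with the default medium thresholding (the qp value and file names are placeholders):
ffmpeg -i input.mp4 -vf pp7=qp=8:mode=medium output.mp4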

Apply alpha premultiply effect to input video stream using first plane of second stream as alpha.

Both streams must have same dimensions and same pixel format.

The filter accepts the following option:

Set which planes will be processed; unprocessed planes will be copied. The default value is 0xf, meaning all planes will be processed.
Do not require a 2nd input for processing; instead use the alpha plane from the input stream.
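
A hedged usage sketch taking the alpha from a second input, assuming both placeholder inputs share dimensions and pixel format as required above:
ffmpeg -i color.mp4 -i alpha.mp4 -filter_complex '[0:v][1:v]premultiply' output.mp4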

Apply prewitt operator to input video stream.

The filter accepts the following option:

Set which planes will be processed; unprocessed planes will be copied. The default value is 0xf, meaning all planes will be processed.
scale
Set value which will be multiplied with filtered result.
Set value which will be added to filtered result.

Commands

This filter supports all the above options as commands.
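
For example, a hedged sketch applying the operator to the first plane only and amplifying the result (the values are illustrative):
prewitt=planes=1:scale=1.5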

Alter frame colors in video with pseudocolors.

This filter accepts the following options:

set pixel first component expression
set pixel second component expression
set pixel third component expression
set pixel fourth component expression, corresponds to the alpha component
set component to use as base for altering colors
Pick one of built-in LUTs. By default is set to none.

Available LUTs:

Set opacity of output colors. Allowed range is from 0 to 1. Default value is set to 1.

Each of the expression options specifies the expression to use for computing the lookup table for the corresponding pixel component values.

The expressions can contain the following constants and functions:

The input width and height.
The input value for the pixel component.
The minimum allowed component value.
The maximum allowed component value.

All expressions default to "val".

Commands

This filter supports all the above options as commands.

Examples

Change too high luma values to gradient:
pseudocolor="'if(between(val,ymax,amax),lerp(ymin,ymax,(val-ymax)/(amax-ymax)),-1):if(between(val,ymax,amax),lerp(umax,umin,(val-ymax)/(amax-ymax)),-1):if(between(val,ymax,amax),lerp(vmin,vmax,(val-ymax)/(amax-ymax)),-1):-1'"

Obtain the average, maximum and minimum PSNR (Peak Signal to Noise Ratio) between two input videos.

This filter takes two input videos; the first input is considered the "main" source and is passed unchanged to the output. The second input is used as a "reference" video for computing the PSNR.

Both video inputs must have the same resolution and pixel format for this filter to work correctly. It also assumes that both inputs have the same number of frames, which are compared one by one.

The obtained average PSNR is printed through the logging system.

The filter stores the accumulated MSE (mean squared error) of each frame, and at the end of the processing it is averaged across all frames equally, and the following formula is applied to obtain the PSNR:

PSNR = 10*log10(MAX^2/MSE)

Where MAX is the average of the maximum values of each component of the image.

The description of the accepted parameters follows.

If specified, the filter will use the named file to save the PSNR of each individual frame. When filename equals "-" the data is sent to standard output.
Specifies which version of the stats file format to use. Details of each format are written below. Default value is 1.
Determines whether the max value is output to the stats log. Default value is 0. Requires stats_version >= 2. If this is set and stats_version < 2, the filter will return an error.

This filter also supports the framesync options.

The file printed if stats_file is selected contains a sequence of key/value pairs of the form key:value for each pair of compared frames.

If a stats_version greater than 1 is specified, a header line precedes the list of per-frame-pair stats, with key value pairs following the frame format with the following parameters:

The version of the log file format. Will match stats_version.
A comma separated list of the per-frame-pair parameters included in the log.

A description of each shown per-frame-pair parameter follows:

sequential number of the input frame, starting from 1
Mean Square Error pixel-by-pixel average difference of the compared frames, averaged over all the image components.
Mean Square Error pixel-by-pixel average difference of the compared frames for the component specified by the suffix.
Peak Signal to Noise ratio of the compared frames for the component specified by the suffix.
Maximum allowed value for each channel, and average over all channels.

Examples

  • For example:
    movie=ref_movie.mpg, setpts=PTS-STARTPTS [main];
    [main][ref] psnr="stats_file=stats.log" [out]
    

    In this example, the input file being processed is compared with the reference file ref_movie.mpg. The PSNR of each individual frame is stored in stats.log.

  • Another example with different containers:
    ffmpeg -i main.mpg -i ref.mkv -lavfi  "[0:v]settb=AVTB,setpts=PTS-STARTPTS[main];[1:v]settb=AVTB,setpts=PTS-STARTPTS[ref];[main][ref]psnr" -f null -
    

Pulldown reversal (inverse telecine) filter, capable of handling mixed hard-telecine, 24000/1001 fps progressive, and 30000/1001 fps progressive content.

The pullup filter is designed to take advantage of future context in making its decisions. This filter is stateless in the sense that it does not lock onto a pattern to follow, but it instead looks forward to the following fields in order to identify matches and rebuild progressive frames.

To produce content with an even framerate, insert the fps filter after pullup: use "fps=24000/1001" if the input frame rate is 29.97fps, and "fps=24" for 30fps and the (rare) telecined 25fps input.

The filter accepts the following options:

These options set the amount of "junk" to ignore at the left, right, top, and bottom of the image, respectively. Left and right are in units of 8 pixels, while top and bottom are in units of 2 lines. The default is 8 pixels on each side.
Set the strict breaks. Setting this option to 1 will reduce the chances of the filter generating an occasional mismatched frame, but it may also cause an excessive number of frames to be dropped during high motion sequences. Conversely, setting it to -1 will make the filter match fields more easily. This may help processing of video where there is slight blurring between the fields, but may also cause there to be interlaced frames in the output. Default value is 0.
Set the metric plane to use. It accepts the following values:
Use luma plane.
Use chroma blue plane.
Use chroma red plane.

This option may be set to use a chroma plane instead of the default luma plane for doing the filter's computations. This may improve accuracy on very clean source material, but more likely will decrease accuracy, especially if there is chroma noise (rainbow effect) or any grayscale video. The main purpose of setting mp to a chroma plane is to reduce CPU load and make pullup usable in realtime on slow machines.

For best results (without duplicated frames in the output file) it is necessary to change the output frame rate. For example, to inverse telecine NTSC input:

ffmpeg -i input -vf pullup -r 24000/1001 ...

Change video quantization parameters (QP).

The filter accepts the following option:

qp
Set expression for quantization parameter.

The expression is evaluated through the eval API and can contain, among others, the following constants:

1 if index is not 129, 0 otherwise.
qp
Sequential index starting from -129 to 128.

Examples

Some equation like:
qp=2+2*sin(PI*qp)

Generate a QR code using the libqrencode library (see https://fukuchi.org/works/qrencode/), and overlay it on top of the current frame.

To enable the compilation of this filter, you need to configure FFmpeg with "--enable-libqrencode".

The QR code is generated from the provided text or text pattern. The corresponding QR code is scaled and overlaid into the video output according to the specified options.

In case no text is specified, no QR code is overlaid.

This filter accepts the following options:

Specify an expression for the width of the rendered QR code, with and without padding. The qrcode_width expression can reference the value set by the padded_qrcode_width expression, and vice versa. By default padded_qrcode_width is set to qrcode_width, meaning that there is no padding.

These expressions are evaluated for each new frame.

See the qrencode Expressions section for details.

Specify an expression for positioning the padded QR code top-left corner. The x expression can reference the value set by the y expression, and vice versa.

By default x and y are set to 0, meaning that the QR code is placed in the top left corner of the input.

These expressions are evaluated for each new frame.

See the qrencode Expressions section for details.

Instruct libqrencode to use case-sensitive encoding. This is enabled by default. It can be disabled to reduce the QR encoding size.
Specify the QR encoding error correction level. With a higher correction level, the encoding size will increase but the code will be more robust to corruption. The lowest level is L.

It accepts the following values:

Select how the input text is expanded. Can be either "none", or "normal" (default). See the qrencode Text expansion section below for details.
Define the text to be rendered. In case no text is specified, no QR code is encoded (just an empty colored frame).

In case expansion is enabled, the text is treated as a text template, using the qrencode expansion mechanism. See the qrencode Text expansion section below for details.

Set the QR code and background color. The default value of foreground_color is "black", the default value of background_color is "white".

For the syntax of the color options, check the "Color" section in the ffmpeg-utils manual.

qrencode Expressions

The expressions set by the options contain the following constants and functions.

input display aspect ratio, it is the same as (w / h) * sar
the current frame's duration, in seconds
horizontal and vertical chroma subsample values. For example for the pixel format "yuv422p" hsub is 2 and vsub is 1.
the input height
the input width
the ordinal index of the input frame, starting from 0
a number representing the picture type
the width of the encoded QR code
the width of the rendered QR code, with and without padding.

These parameters allow the q and Q expressions to refer to each other, so you can for example specify "q=3/4*Q".

return a random number included between min and max
the input sample aspect ratio
timestamp expressed in seconds, NAN if the input timestamp is unknown
the x and y offset coordinates where the text is drawn.

These parameters allow the x and y expressions to refer to each other, so you can for example specify "y=x/dar".

qrencode Text expansion

If expansion is set to "none", the text is printed verbatim.

If expansion is set to "normal" (which is the default), the following expansion mechanism is used.

The backslash character \, followed by any character, always expands to the second character.

Sequences of the form "%{...}" are expanded. The text between the braces is a function name, possibly followed by arguments separated by ':'. If the arguments contain special characters or delimiters (':' or '}'), they should be escaped.

Note that they probably must also be escaped as the value for the text option in the filter argument string and as the filter argument in the filtergraph description, and possibly also for the shell, that makes up to four levels of escaping; using a text file with the textfile option avoids these problems.

The following functions are available:

return the frame number
Return the presentation timestamp of the current frame.

It can take up to two arguments.

The first argument is the format of the timestamp; it defaults to "flt" for seconds as a decimal number with microsecond accuracy; "hms" stands for a formatted [-]HH:MM:SS.mmm timestamp with millisecond accuracy. "gmtime" stands for the timestamp of the frame formatted as UTC time; "localtime" stands for the timestamp of the frame formatted as local time zone time. If the format is set to "hms24hh", the time is formatted in 24h format (00-23).

The second argument is an offset added to the timestamp.

If the format is set to "localtime" or "gmtime", a third argument may be supplied: a "strftime" C function format string. By default, YYYY-MM-DD HH:MM:SS format will be used.

Evaluate the expression's value and output as a double.

It must take one argument specifying the expression to be evaluated, accepting the constants and functions defined in the qrencode Expressions section.

Evaluate the expression's value and output as a formatted string.

The first argument is the expression to be evaluated, just as for the expr function. The second argument specifies the output format. Allowed values are x, X, d and u. They are treated exactly as in the "printf" function. The third parameter is optional and sets the number of positions taken by the output. It can be used to add padding with zeros from the left.

The time at which the filter is running, expressed in UTC. It can accept an argument: a "strftime" C function format string. The format string is extended to support the variable %[1-6]N which prints fractions of the second with optionally specified number of digits.
The time at which the filter is running, expressed in the local time zone. It can accept an argument: a "strftime" C function format string. The format string is extended to support the variable %[1-6]N which prints fractions of the second with optionally specified number of digits.
Frame metadata. Takes one or two arguments.

The first argument is mandatory and specifies the metadata key.

The second argument is optional and specifies a default value, used when the metadata key is not found or empty.

Available metadata can be identified by inspecting entries starting with TAG included within each frame section printed by running "ffprobe -show_frames".

String metadata generated in filters leading to the qrencode filter are also available.

return a random number included between min and max

Examples

  • Generate a QR code encoding the specified text, overlaid in the top left corner of the input video with the default size:
    qrencode=text=www.ffmpeg.org
    
  • Same as above, but select blue on pink colors:
    qrencode=text=www.ffmpeg.org:bc=pink@0.5:fc=blue
    
  • Place the QR code in the bottom right corner of the input video:
    qrencode=text=www.ffmpeg.org:x=W-Q:y=H-Q
    
  • Generate a QR code with width of 200 pixels and padding, making the padded width 4/3 of the QR code width:
    qrencode=text=www.ffmpeg.org:q=200:Q=4/3*q
    
  • Generate a QR code with padded width of 200 pixels and padding, making the QR code width 3/4 of the padded width:
    qrencode=text=www.ffmpeg.org:Q=200:q=3/4*Q
    
  • Make the QR code a fraction of the input video width:
    qrencode=text=www.ffmpeg.org:q=W/5
    
  • Generate a QR code encoding the frame number:
    qrencode=text=%{n}
    
  • Generate a QR code encoding the GMT timestamp:
    qrencode=text=%{gmtime}
    
  • Generate a QR code encoding the timestamp expressed as a float:
    qrencode=text=%{pts}
    

Identify and decode a QR code using the libquirc library (see https://github.com/dlbeer/quirc/), and print the identified QR codes positions and payload as metadata.

To enable the compilation of this filter, you need to configure FFmpeg with "--enable-libquirc".

For each QR code found in the input video, some metadata entries are added with the prefix lavfi.quirc.N, where N is the index, starting from 0, associated with the QR code.

A description of each metadata value follows:

the number of QR codes found; it is not set in case none was found
the x/y positions of the four corners of the square containing the QR code, where M is the index of the corner starting from 0
the payload of the QR code
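
A hedged usage sketch in the spirit of the readeia608 example further below, assuming the metadata keys lavfi.quirc.count and lavfi.quirc.0.payload and a placeholder input file:
ffprobe -f lavfi -i movie=input.mp4,quirc -show_entries frame=pts_time:frame_tags=lavfi.quirc.count,lavfi.quirc.0.payload -of csv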

Flush video frames from an internal cache of frames into a random order. No frame is discarded. Inspired by the frei0r nervous filter.

Set the size of the internal cache, in number of frames, in the range from 2 to 512. Default is 30.
Set seed for random number generator, must be an integer included between 0 and "UINT32_MAX". If not specified, or if explicitly set to less than 0, the filter will try to use a good random seed on a best effort basis.
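
For example, a hedged sketch shuffling frames through a 60-frame cache with a fixed seed for reproducible output (the values are illustrative):
random=frames=60:seed=42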

Read closed captioning (EIA-608) information from the top lines of a video frame.

This filter adds frame metadata for "lavfi.readeia608.X.cc" and "lavfi.readeia608.X.line", where "X" is the number of the identified line with EIA-608 data (starting from 0). A description of each metadata value follows:

The two bytes stored as EIA-608 data (printed in hexadecimal).
The number of the line on which the EIA-608 data was identified and read.

This filter accepts the following options:

Set the line to start scanning for EIA-608 data. Default is 0.
Set the line to end scanning for EIA-608 data. Default is 29.
Set the ratio of width reserved for sync code detection. Default is 0.27. Allowed range is "[0.1 - 0.7]".
Enable checking the parity bit. In the event of a parity error, the filter will output 0x00 for that character. Default is false.
Lowpass lines prior to further processing. Default is enabled.

Commands

This filter supports all the above options as commands.

Examples

Output a csv with presentation time and the first two lines of identified EIA-608 captioning data.
ffprobe -f lavfi -i movie=captioned_video.mov,readeia608 -show_entries frame=pts_time:frame_tags=lavfi.readeia608.0.cc,lavfi.readeia608.1.cc -of csv

Read vertical interval timecode (VITC) information from the top lines of a video frame.

The filter adds frame metadata key "lavfi.readvitc.tc_str" with the timecode value, if a valid timecode has been detected. Further metadata key "lavfi.readvitc.found" is set to 0/1 depending on whether timecode data has been found or not.

This filter accepts the following options:

Set the maximum number of lines to scan for VITC data. If the value is set to -1 the full video frame is scanned. Default is 45.
Set the luma threshold for black. Accepts float numbers in the range [0.0,1.0], default value is 0.2. The value must be equal to or less than "thr_w".
Set the luma threshold for white. Accepts float numbers in the range [0.0,1.0], default value is 0.6. The value must be equal to or greater than "thr_b".

Examples

Detect and draw VITC data onto the video frame; if no valid VITC is detected, draw "--:--:--:--" as a placeholder:
ffmpeg -i input.avi -filter:v 'readvitc,drawtext=fontfile=FreeMono.ttf:text=%{metadata\\:lavfi.readvitc.tc_str\\:--\\\\\\:--\\\\\\:--\\\\\\:--}:x=(w-tw)/2:y=400-ascent'

Remap pixels, using the second input video stream as Xmap and the third input video stream as Ymap.

The destination pixel at position (X, Y) will be picked from the source position (x, y) where x = Xmap(X, Y) and y = Ymap(X, Y). If the mapping values are out of range, a zero value will be used for the destination pixel.

Xmap and Ymap input video streams must have the same dimensions. The output video stream will have the Xmap/Ymap video stream dimensions. Xmap and Ymap input video streams are 16-bit depth, single channel.

format
Specify pixel format of output from this filter. Can be "color" or "gray". Default is "color".
Specify the color of the unmapped pixels. For the syntax of this option, check the "Color" section in the ffmpeg-utils manual. Default color is "black".
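
A hedged usage sketch, assuming xmap.pgm and ymap.pgm are hypothetical precomputed 16-bit single-channel map images of the desired output size:
ffmpeg -i input.mp4 -i xmap.pgm -i ymap.pgm -lavfi '[0:v][1:v][2:v]remap' output.mp4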

The removegrain filter is a spatial denoiser for progressive video.

Set mode for the first plane.
Set mode for the second plane.
Set mode for the third plane.
Set mode for the fourth plane.

The range of mode is from 0 to 24. A description of each mode follows:

0
Leave input plane unchanged. Default.
1
Clips the pixel with the minimum and maximum of the 8 neighbour pixels.
2
Clips the pixel with the second minimum and maximum of the 8 neighbour pixels.
3
Clips the pixel with the third minimum and maximum of the 8 neighbour pixels.
4
Clips the pixel with the fourth minimum and maximum of the 8 neighbour pixels. This is equivalent to a median filter.
5
Line-sensitive clipping giving the minimal change.
6
Line-sensitive clipping, intermediate.
7
Line-sensitive clipping, intermediate.
8
Line-sensitive clipping, intermediate.
9
Line-sensitive clipping on a line where the neighbouring pixels are the closest.
10
Replaces the target pixel with the closest neighbour.
11
[1 2 1] horizontal and vertical kernel blur.
12
Same as mode 11.
13
Bob mode, interpolates the top field from the line where the neighbouring pixels are the closest.
14
Bob mode, interpolates the bottom field from the line where the neighbouring pixels are the closest.
15
Bob mode, interpolates top field. Same as 13 but with a more complicated interpolation formula.
16
Bob mode, interpolates bottom field. Same as 14 but with a more complicated interpolation formula.
17
Clips the pixel with the minimum and maximum of respectively the maximum and minimum of each pair of opposite neighbour pixels.
18
Line-sensitive clipping using opposite neighbours whose greatest distance from the current pixel is minimal.
19
Replaces the pixel with the average of its 8 neighbours.
20
Averages the 9 pixels ([1 1 1] horizontal and vertical blur).
21
Clips pixels using the averages of opposite neighbours.
22
Same as mode 21 but simpler and faster.
23
Small edge and halo removal, but reputed useless.
24
Similar to 23.
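
For example, a hedged sketch applying the median-like mode 4 to the first plane while leaving the other planes unchanged:
removegrain=m0=4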

Suppress a TV station logo, using an image file to determine which pixels comprise the logo. It works by filling in the pixels that comprise the logo with neighboring pixels.

The filter accepts the following options:

Set the filter bitmap file, which can be any image format supported by libavformat. The width and height of the image file must match those of the video stream being processed.

Pixels in the provided bitmap image with a value of zero are not considered part of the logo; non-zero pixels are considered part of the logo. If you use white (255) for the logo and black (0) for the rest, you will be safe. For making the filter bitmap, it is recommended to take a screen capture of a black frame with the logo visible, and then use a threshold filter followed by the erode filter once or twice.

If needed, little splotches can be fixed manually. Remember that if logo pixels are not covered, the filter quality will be much reduced. Marking too many pixels as part of the logo does not hurt as much, but it will increase the amount of blurring needed to cover the image and will destroy more information than necessary, and extra pixels will slow things down on a large logo.
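
A minimal hedged sketch, where logo_mask.png is a placeholder bitmap prepared as described above:
ffmpeg -i input.mp4 -vf removelogo=filename=logo_mask.png output.mp4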

This filter uses the repeat_field flag from the Video ES headers and hard repeats fields based on its value.

Reverse a video clip.

Warning: This filter requires memory to buffer the entire clip, so trimming is suggested.

Examples

Take the first 5 seconds of a clip, and reverse it.
trim=end=5,reverse

Shift R/G/B/A pixels horizontally and/or vertically.

The filter accepts the following options:

Set amount to shift red horizontally.
Set amount to shift red vertically.
Set amount to shift green horizontally.
Set amount to shift green vertically.
Set amount to shift blue horizontally.
Set amount to shift blue vertically.
Set amount to shift alpha horizontally.
Set amount to shift alpha vertically.
Set the edge mode; it can be smear (the default) or warp.

Commands

This filter supports all the above options as commands.
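
For example, a hedged sketch producing a slight chromatic-aberration look by shifting red and blue in opposite horizontal directions (the amounts are illustrative):
rgbashift=rh=-2:bh=2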

Apply roberts cross operator to input video stream.

The filter accepts the following option:

Set which planes will be processed, unprocessed planes will be copied. By default value 0xf, all planes will be processed.
scale
Set value which will be multiplied with filtered result.
Set value which will be added to filtered result.

Commands

This filter supports all the above options as commands.

Rotate video by an arbitrary angle expressed in radians.

The filter accepts the following options:

A description of the optional parameters follows.

Set an expression for the angle by which to rotate the input video clockwise, expressed as a number of radians. A negative value will result in a counter-clockwise rotation. By default it is set to "0".

This expression is evaluated for each frame.

Set the output width expression, default value is "iw". This expression is evaluated just once during configuration.
Set the output height expression, default value is "ih". This expression is evaluated just once during configuration.
Enable bilinear interpolation if set to 1, a value of 0 disables it. Default value is 1.
Set the color used to fill the output area not covered by the rotated image. For the general syntax of this option, check the "Color" section in the ffmpeg-utils manual. If the special value "none" is selected then no background is printed (useful for example if the background is never shown).

Default value is "black".

The expressions for the angle and the output size can contain the following constants and functions:

sequential number of the input frame, starting from 0. It is always NAN before the first frame is filtered.
time in seconds of the input frame; it is set to 0 when the filter is configured. It is always NAN before the first frame is filtered.
horizontal and vertical chroma subsample values. For example for the pixel format "yuv422p" hsub is 2 and vsub is 1.
the input video width and height
the output width and height, that is the size of the padded area as specified by the width and height expressions
the minimal width/height required for completely containing the input video rotated by a radians.

These are only available when computing the out_w and out_h expressions.

Examples

  • Rotate the input by PI/6 radians clockwise:
    rotate=PI/6
    
  • Rotate the input by PI/6 radians counter-clockwise:
    rotate=-PI/6
    
  • Rotate the input by 45 degrees clockwise:
    rotate=45*PI/180
    
  • Apply a constant rotation with period T, starting from an angle of PI/3:
    rotate=PI/3+2*PI*t/T
    
  • Make the input video rotation oscillating with a period of T seconds and an amplitude of A radians:
    rotate=A*sin(2*PI/T*t)
    
  • Rotate the video, output size is chosen so that the whole rotating input video is always completely contained in the output:
    rotate='2*PI*t:ow=hypot(iw,ih):oh=ow'
    
  • Rotate the video, reduce the output size so that no background is ever shown:
    rotate=2*PI*t:ow='min(iw,ih)/sqrt(2)':oh=ow:c=none
    

Commands

The filter supports the following commands:

Set the angle expression. The command accepts the same syntax of the corresponding option.

If the specified expression is not valid, it is kept at its current value.

Apply Shape Adaptive Blur.

The filter accepts the following options:

Set luma blur filter strength, must be a value in the range 0.1-4.0, default value is 1.0. A greater value will result in a more blurred image, and in slower processing.
Set luma pre-filter radius, must be a value in the range 0.1-2.0, default value is 1.0.
Set luma maximum difference between pixels to still be considered, must be a value in the range 0.1-100.0, default value is 1.0.
Set chroma blur filter strength, must be a value in the range -0.9-4.0. A greater value will result in a more blurred image, and in slower processing.
Set chroma pre-filter radius, must be a value in the range -0.9-2.0.
Set chroma maximum difference between pixels to still be considered, must be a value in the range -0.9-100.0.

Each chroma option value, if not explicitly specified, is set to the corresponding luma option value.
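
For example, a hedged sketch with a stronger luma blur and a larger luma difference threshold, assuming lr and ls are the short names of the first and third luma options (the values are illustrative):
sab=lr=2.0:ls=4.0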

Scale (resize) the input video, using the libswscale library.

The scale filter forces the output display aspect ratio to be the same as that of the input, by changing the output sample aspect ratio.

If the input image format is different from the format requested by the next filter, the scale filter will convert the input to the requested format.

Options

The filter accepts the following options, any of the options supported by the libswscale scaler, as well as any of the framesync options.

See the ffmpeg-scaler manual for the complete list of scaler options.

Set the output video dimension expression. Default value is the input dimension.

If the width or w value is 0, the input width is used for the output. If the height or h value is 0, the input height is used for the output.

If one and only one of the values is -n with n >= 1, the scale filter will use a value that maintains the aspect ratio of the input image, calculated from the other specified dimension. After that it will, however, make sure that the calculated dimension is divisible by n and adjust the value if necessary.

If both values are -n with n >= 1, the behavior will be identical to both values being set to 0 as previously detailed.

See below for the list of accepted constants for use in the dimension expression.

Specify when to evaluate width and height expression. It accepts the following values:
Only evaluate expressions once during the filter initialization or when a command is processed.
Evaluate expressions for each incoming frame.

Default value is init.

Set the interlacing mode. It accepts the following values:
1
Force interlaced aware scaling.
0
Do not apply interlaced scaling.
-1
Select interlaced aware scaling depending on whether the source frames are flagged as interlaced or not.

Default value is 0.

Set libswscale scaling flags. See the ffmpeg-scaler manual for the complete list of values. If not explicitly specified the filter applies the default flags.
Set libswscale input parameters for scaling algorithms that need them. See the ffmpeg-scaler manual for the complete documentation. If not explicitly specified the filter applies empty parameters.
Set the video size. For the syntax of this option, check the "Video size" section in the ffmpeg-utils manual.
Set in/output YCbCr color space type.

This allows the autodetected value to be overridden as well as allows forcing a specific value used for the output and encoder.

If not specified, the color space type depends on the pixel format.

Possible values:

Choose automatically.
Format conforming to International Telecommunication Union (ITU) Recommendation BT.709.
Set color space conforming to the United States Federal Communications Commission (FCC) Code of Federal Regulations (CFR) Title 47 (2003) 73.682 (a).
Set color space conforming to:
  • ITU Radiocommunication Sector (ITU-R) Recommendation BT.601
  • ITU-R Rec. BT.470-6 (1998) Systems B, B1, and G
  • Society of Motion Picture and Television Engineers (SMPTE) ST 170:2004
Set color space conforming to SMPTE ST 240:1999.
Set color space conforming to ITU-R BT.2020 non-constant luminance system.
Set in/output YCbCr sample range.

This allows the autodetected value to be overridden as well as allows forcing a specific value used for the output and encoder. If not specified, the range depends on the pixel format. Possible values:

Choose automatically.
Set full range (0-255 in case of 8-bit luma).
Set "MPEG" range (16-235 in case of 8-bit luma).
Set in/output chroma sample location. If not specified, center-sited chroma is used by default. Possible values:
Enable decreasing or increasing output video width or height if necessary to keep the original aspect ratio. Possible values:
Scale the video as specified and disable this feature.
The output video dimensions will automatically be decreased if needed.
The output video dimensions will automatically be increased if needed.

One useful instance of this option is when you know a specific device's maximum allowed resolution: you can use it to limit the output video to that, while retaining the aspect ratio. For example, device A allows 1280x720 playback, and your video is 1920x800. Using this option (set it to decrease) and specifying 1280x720 to the command line makes the output 1280x533.

Please note that this is different from specifying -1 for w or h; you still need to specify the output resolution for this option to work.

Ensures that both the output dimensions, width and height, are divisible by the given integer when used together with force_original_aspect_ratio. This works similarly to using "-n" in the w and h options.

This option respects the value set for force_original_aspect_ratio, increasing or decreasing the resolution accordingly. The video's aspect ratio may be slightly modified.

This option can be handy if you need to have a video fit within or exceed a defined resolution using force_original_aspect_ratio but also have encoder restrictions on width or height divisibility.

The values of the w and h options are expressions containing the following constants:

The input width and height
These are the same as in_w and in_h.
The output (scaled) width and height
These are the same as out_w and out_h
The same as iw / ih
input sample aspect ratio
The input display aspect ratio. Calculated from "(iw / ih) * sar".
horizontal and vertical input chroma subsample values. For example for the pixel format "yuv422p" hsub is 2 and vsub is 1.
horizontal and vertical output chroma subsample values. For example for the pixel format "yuv422p" hsub is 2 and vsub is 1.
The (sequential) number of the input frame, starting from 0. Only available with "eval=frame".
The presentation timestamp of the input frame, expressed as a number of seconds. Only available with "eval=frame".
The position (byte offset) of the frame in the input stream, or NaN if this information is unavailable and/or meaningless (for example in case of synthetic video). Only available with "eval=frame". Deprecated, do not use.
Equivalent to the above, but for a second reference input. If any of these variables are present, this filter accepts two inputs.

Examples

  • Scale the input video to a size of 200x100
    scale=w=200:h=100
    

    This is equivalent to:

    scale=200:100
    

    or:

    scale=200x100
    
  • Specify a size abbreviation for the output size:
    scale=qcif
    

    which can also be written as:

    scale=size=qcif
    
  • Scale the input to 2x:
    scale=w=2*iw:h=2*ih
    
  • The above is the same as:
    scale=2*in_w:2*in_h
    
  • Scale the input to 2x with forced interlaced scaling:
    scale=2*iw:2*ih:interl=1
    
  • Scale the input to half size:
    scale=w=iw/2:h=ih/2
    
  • Increase the width, and set the height to the same size:
    scale=3/2*iw:ow
    
  • Seek Greek harmony:
    scale=iw:1/PHI*iw
    scale=ih*PHI:ih
    
  • Increase the height, and set the width to 3/2 of the height:
    scale=w=3/2*oh:h=3/5*ih
    
  • Increase the size, making the size a multiple of the chroma subsample values:
    scale="trunc(3/2*iw/hsub)*hsub:trunc(3/2*ih/vsub)*vsub"
    
  • Increase the width to a maximum of 500 pixels, keeping the same aspect ratio as the input:
    scale=w='min(500\, iw*3/2)':h=-1
    
  • Make pixels square by combining scale and setsar:
    scale='trunc(ih*dar):ih',setsar=1/1
    
  • Make pixels square by combining scale and setsar, making sure the resulting resolution is even (required by some codecs):
    scale='trunc(ih*dar/2)*2:trunc(ih/2)*2',setsar=1/1
    
  • Scale a subtitle stream (sub) to match the main video (main) in size before overlaying. ("scale2ref")
    '[main]split[a][b]; [ref][a]scale=rw:rh[c]; [b][c]overlay'
    
  • Scale a logo to 1/10th the height of a video, while preserving its display aspect ratio.
    [logo-in][video-in]scale=w=oh*dar:h=rh/10[logo-out]
    

Commands

This filter supports the following commands:

Set the output video dimension expression. The command accepts the same syntax of the corresponding option.

If the specified expression is not valid, it is kept at its current value.

Scale (resize) and convert (pixel format) the input video, using accelerated CUDA kernels. Setting the output width and height works in the same way as for the scale filter.

The filter accepts the following options:

Set the output video dimension expression. Default value is the input dimension.

Allows for the same expressions as the scale filter.

Sets the algorithm used for scaling:
Nearest neighbour

Used by default if input parameters match the desired output.

Bilinear
Bicubic

This is the default.

Lanczos
format
Controls the output pixel format. By default, or if none is specified, the input pixel format is used.

The filter does not support converting between YUV and RGB pixel formats.

If set to 0, every frame is processed, even if no conversion is necessary. This mode can be useful to use the filter as a buffer for a downstream frame-consumer that exhausts the limited decoder frame pool.

If set to 1, frames are passed through as-is if they match the desired output parameters. This is the default behaviour.

Algorithm-specific parameter.

Affects the curves of the bicubic algorithm.

These work the same as the identically named scale filter options.

Examples

  • Scale input to 720p, keeping aspect ratio and ensuring the output is yuv420p.
    scale_cuda=-2:720:format=yuv420p
    
  • Upscale to 4K using nearest neighbour algorithm.
    scale_cuda=4096:2160:interp_algo=nearest
    
  • Don't do any conversion or scaling, but copy all input frames into newly allocated ones. This can be useful to deal with a filter and encode chain that otherwise exhausts the decoder's frame pool.
    scale_cuda=passthrough=0
    

Use the NVIDIA Performance Primitives (libnpp) to perform scaling and/or pixel format conversion on CUDA video frames. Setting the output width and height works in the same way as for the scale filter.

The following additional options are accepted:

format
The pixel format of the output CUDA frames. If set to the string "same" (the default), the input format will be kept. Note that automatic format negotiation and conversion is not yet supported for hardware frames.
The interpolation algorithm used for resizing. One of the following:
Nearest neighbour.
2-parameter cubic (B=1, C=0)
2-parameter cubic (B=0, C=1/2)
2-parameter cubic (B=1/2, C=3/10)
Supersampling
Enable decreasing or increasing output video width or height if necessary to keep the original aspect ratio. Possible values:
Scale the video as specified and disable this feature.
The output video dimensions will automatically be decreased if needed.
The output video dimensions will automatically be increased if needed.

One useful instance of this option is when you know a specific device's maximum allowed resolution: you can use it to limit the output video to that, while retaining the aspect ratio. For example, device A allows 1280x720 playback, and your video is 1920x800. Using this option (set it to decrease) and specifying 1280x720 to the command line makes the output 1280x533.

Please note that this is different from specifying -1 for w or h; you still need to specify the output resolution for this option to work.

Ensures that both the output dimensions, width and height, are divisible by the given integer when used together with force_original_aspect_ratio. This works similarly to using "-n" in the w and h options.

This option respects the value set for force_original_aspect_ratio, increasing or decreasing the resolution accordingly. The video's aspect ratio may be slightly modified.

This option can be handy if you need to have a video fit within or exceed a defined resolution using force_original_aspect_ratio but also have encoder restrictions on width or height divisibility.
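
For example, to fit the video within 1280x720 while preserving the aspect ratio and keeping both dimensions divisible by 4 (a sketch combining the two options above):

scale_npp=w=1280:h=720:force_original_aspect_ratio=decrease:force_divisible_by=4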

Specify when to evaluate width and height expression. It accepts the following values:
Only evaluate expressions once during the filter initialization or when a command is processed.
Evaluate expressions for each incoming frame.

The values of the w and h options are expressions containing the following constants:

The input width and height
These are the same as in_w and in_h.
The output (scaled) width and height
These are the same as out_w and out_h.
The same as iw / ih.
The input sample aspect ratio.
The input display aspect ratio. Calculated from "(iw / ih) * sar".
The (sequential) number of the input frame, starting from 0. Only available with "eval=frame".
The presentation timestamp of the input frame, expressed as a number of seconds. Only available with "eval=frame".
The position (byte offset) of the frame in the input stream, or NaN if this information is unavailable and/or meaningless (for example in case of synthetic video). Only available with "eval=frame". Deprecated, do not use.

Use the NVIDIA Performance Primitives (libnpp) to scale (resize) the input video, based on a reference video.

See the scale_npp filter for available options; scale2ref_npp supports the same, but uses the reference video instead of the main input as basis. scale2ref_npp also supports the following additional constants for the w and h options:

The main input video's width and height
The same as main_w / main_h
The main input video's sample aspect ratio
The main input video's display aspect ratio. Calculated from "(main_w / main_h) * main_sar".
The (sequential) number of the main input frame, starting from 0. Only available with "eval=frame".
The presentation timestamp of the main input frame, expressed as a number of seconds. Only available with "eval=frame".
The position (byte offset) of the frame in the main input stream, or NaN if this information is unavailable and/or meaningless (for example in case of synthetic video). Only available with "eval=frame".

Examples

  • Scale a subtitle stream (b) to match the main video (a) in size before overlaying
    'scale2ref_npp[b][a];[a][b]overlay_cuda'
    
  • Scale a logo to 1/10th the height of a video, while preserving its display aspect ratio.
    [logo-in][video-in]scale2ref_npp=w=oh*mdar:h=ih/10[logo-out][video-out]
    

Scale and convert the color parameters using VTPixelTransferSession.

The filter accepts the following options:

Set the output video dimension expression. Default value is the input dimension.
Set the output colorspace matrix.
Set the output color primaries.
Set the output transfer characteristics.
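
For example, to halve the resolution and tag the output as BT.709 (a sketch; the option names w, h, color_matrix, color_primaries and color_transfer are assumed, as the text above does not spell them out):

scale_vt=w=iw/2:h=ih/2:color_matrix=bt709:color_primaries=bt709:color_transfer=bt709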

Apply scharr operator to input video stream.

The filter accepts the following option:

Set which planes will be processed; unprocessed planes will be copied. The default value is 0xf, meaning all planes will be processed.
scale
Set the value which will be multiplied with the filtered result.
Set the value which will be added to the filtered result.
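
For example, to run the operator on the first plane only and double the filtered result (a sketch; the planes and scale option names are assumed):

scharr=planes=1:scale=2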

Commands

This filter supports all the above options as commands.

Scroll input video horizontally and/or vertically by constant speed.

The filter accepts the following options:

Set the horizontal scrolling speed. Default is 0. Allowed range is from -1 to 1. Negative values change the scrolling direction.
Set the vertical scrolling speed. Default is 0. Allowed range is from -1 to 1. Negative values change the scrolling direction.
Set the initial horizontal scrolling position. Default is 0. Allowed range is from 0 to 1.
Set the initial vertical scrolling position. Default is 0. Allowed range is from 0 to 1.
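
For example, to scroll slowly along the horizontal axis, starting from the horizontal middle (a sketch; the horizontal and hpos option names are assumed):

scroll=horizontal=0.001:hpos=0.5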

Commands

This filter supports the following commands:

Set the horizontal scrolling speed.
Set the vertical scrolling speed.

Detect video scene change.

This filter sets frame metadata with the mafd (mean absolute frame difference) between frames and the scene change score, and forwards the frame to the next filter, so downstream filters can use this metadata to detect scene changes and similar events.

In addition, this filter logs a message and sets frame metadata when it detects a scene change by threshold.

"lavfi.scd.mafd" metadata keys are set with mafd for every frame.

"lavfi.scd.score" metadata keys are set with scene change score for every frame to detect scene change.

"lavfi.scd.time" metadata keys are set with current filtered frame time which detect scene change with threshold.

The filter accepts the following options:

Set the scene change detection threshold as a percentage of maximum change. Good values are in the "[8.0, 14.0]" range. The range for threshold is "[0., 100.]".

Default value is 10.

Set the flag to pass scene change frames to the next filter. Default value is 0. You can enable it if you want to get snapshots of scene change frames only.
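
For example, to log detected scene changes without producing output (input.mp4 is a placeholder):

ffmpeg -i input.mp4 -vf scdet=threshold=10 -f null -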

Adjust cyan, magenta, yellow and black (CMYK) to certain ranges of colors (such as "reds", "yellows", "greens", "cyans", ...). The adjustment range is defined by the "purity" of the color (that is, how saturated it already is).

This filter is similar to the Adobe Photoshop Selective Color tool.

The filter accepts the following options:

Select color correction method.

Available values are:

Specified adjustments are applied "as-is" (added to/subtracted from the original pixel component value).
Specified adjustments are relative to the original component value.

Default is "absolute".

Adjustments for red pixels (pixels where the red component is the maximum)
Adjustments for yellow pixels (pixels where the blue component is the minimum)
Adjustments for green pixels (pixels where the green component is the maximum)
Adjustments for cyan pixels (pixels where the red component is the minimum)
Adjustments for blue pixels (pixels where the blue component is the maximum)
Adjustments for magenta pixels (pixels where the green component is the minimum)
Adjustments for white pixels (pixels where all components are greater than 128)
Adjustments for all pixels except pure black and pure white
Adjustments for black pixels (pixels where all components are lesser than 128)
Specify a Photoshop selective color file (".asv") to import the settings from.

All the adjustment settings (reds, yellows, ...) accept up to 4 space separated floating point adjustment values in the [-1,1] range, respectively to adjust the amount of cyan, magenta, yellow and black for the pixels of its range.

Examples

  • Increase cyan by 50% and reduce yellow by 33% in green areas, and increase magenta by 27% in blue areas:
    selectivecolor=greens=.5 0 -.33 0:blues=0 .27
    
  • Use a Photoshop selective color preset:
    selectivecolor=psfile=MySelectiveColorPresets/Misty.asv
    

The "separatefields" takes a frame-based video input and splits each frame into its components fields, producing a new half height clip with twice the frame rate and twice the frame count.

This filter uses field-dominance information in the frame to decide which of each pair of fields to place first in the output. If it gets it wrong, use the setfield filter before the "separatefields" filter, as in the sketch below.
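
For example, to force top-field-first before splitting (a sketch; the usual tff value name of setfield is assumed):

setfield=tff,separatefields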

The "setdar" filter sets the Display Aspect Ratio for the filter output video.

This is done by changing the specified Sample (aka Pixel) Aspect Ratio, according to the following equation:

<DAR> = <HORIZONTAL_RESOLUTION> / <VERTICAL_RESOLUTION> * <SAR>

Keep in mind that the "setdar" filter does not modify the pixel dimensions of the video frame. Also, the display aspect ratio set by this filter may be changed by later filters in the filterchain, e.g. in case of scaling or if another "setdar" or a "setsar" filter is applied.

The "setsar" filter sets the Sample (aka Pixel) Aspect Ratio for the filter output video.

Note that as a consequence of the application of this filter, the output display aspect ratio will change according to the equation above.

Keep in mind that the sample aspect ratio set by the "setsar" filter may be changed by later filters in the filterchain, e.g. if another "setsar" or a "setdar" filter is applied.

It accepts the following parameters:

Set the aspect ratio used by the filter.

The parameter can be a floating point number string, or an expression. If the parameter is not specified, the value "0" is assumed, meaning that the same input value is used.

Set the maximum integer value to use for expressing numerator and denominator when reducing the expressed aspect ratio to a rational. Default value is 100.

The parameter sar is an expression containing the following constants:

The input width and height.
Same as w / h.
The input sample aspect ratio.
The input display aspect ratio. It is the same as (w / h) * sar.
Horizontal and vertical chroma subsample values. For example, for the pixel format "yuv422p" hsub is 2 and vsub is 1.

Examples

  • To change the display aspect ratio to 16:9, specify one of the following:
    setdar=dar=1.77777
    setdar=dar=16/9
    
  • To change the sample aspect ratio to 10:11, specify:
    setsar=sar=10/11
    
  • To set a display aspect ratio of 16:9, and specify a maximum integer value of 1000 in the aspect ratio reduction, use the command:
    setdar=ratio=16/9:max=1000
    

Force field for the output video frame.

The "setfield" filter marks the interlace type field for the output frames. It does not change the input frame, but only sets the corresponding property, which affects how the frame is treated by following filters (e.g. "fieldorder" or "yadif").

The filter accepts the following options:

Available values are:
Keep the same field property.
Mark the frame as bottom-field-first.
Mark the frame as top-field-first.
Mark the frame as progressive.
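
For example, to mark all frames as progressive (a sketch; the mode option name and prog value name are assumed):

setfield=mode=prog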

Force frame parameter for the output video frame.

The "setparams" filter marks interlace and color range for the output frames. It does not change the input frame, but only sets the corresponding property, which affects how the frame is treated by filters/encoders.

Available values are:
Keep the same field property (default).
Mark the frame as bottom-field-first.
Mark the frame as top-field-first.
Mark the frame as progressive.
Available values are:
Keep the same color range property (default).
Mark the frame as unspecified color range.
Mark the frame as limited range.
Mark the frame as full range.
Set the color primaries. Available values are:
Keep the same color primaries property (default).
Set the color transfer. Available values are:
colorspace
Set the colorspace. Available values are:
Set the chroma sample location. Available values are:
Keep the same chroma location (default).
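
For example, to mark frames as progressive, full range and BT.709 (a sketch; the field_mode and range option names and the value identifiers are assumed):

setparams=field_mode=prog:range=pc:colorspace=bt709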

Use the NVIDIA Performance Primitives (libnpp) to perform image sharpening with border control.

The following additional options are accepted:

Type of sampling to be used at frame borders. One of the following:
Replicate pixel values.
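
For example (a sketch; the border_type option name and replicate value name are assumed):

sharpen_npp=border_type=replicate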

Apply shear transform to input video.

This filter supports the following options:

Shear factor in X-direction. Default value is 0. Allowed range is from -2 to 2.
Shear factor in Y-direction. Default value is 0. Allowed range is from -2 to 2.
Set the color used to fill the output area not covered by the transformed video. For the general syntax of this option, check the "Color" section in the ffmpeg-utils manual. If the special value "none" is selected then no background is printed (useful for example if the background is never shown).

Default value is "black".

Set interpolation type. Can be "bilinear" or "nearest". Default is "bilinear".
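
For example, to shear horizontally by 0.5 over a gray background with nearest interpolation (a sketch; the shx, fillcolor and interp option names are assumed):

shear=shx=0.5:fillcolor=gray:interp=nearest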

Commands

This filter supports the all above options as commands.

Show a line containing various information for each input video frame. The input video is not modified.

This filter supports the following options:

Calculate checksums of each plane. Enabled by default.
Try to print user data unregistered SEI as ASCII characters when possible, in hex format otherwise.

The shown line contains a sequence of key/value pairs of the form key:value.

The following values are shown in the output:

The (sequential) number of the input frame, starting from 0.
The Presentation TimeStamp of the input frame, expressed as a number of time base units. The time base unit depends on the filter input pad.
The Presentation TimeStamp of the input frame, expressed as a number of seconds.
The pixel format name.
The sample aspect ratio of the input frame, expressed in the form num/den.
The size of the input frame. For the syntax of this option, check the "Video size" section in the ffmpeg-utils manual.
The type of interlaced mode ("P" for "progressive", "T" for top field first, "B" for bottom field first).
This is 1 if the frame is a key frame, 0 otherwise.
The picture type of the input frame ("I" for an I-frame, "P" for a P-frame, "B" for a B-frame, or "?" for an unknown type). Also refer to the documentation of the "AVPictureType" enum and of the "av_get_picture_type_char" function defined in libavutil/avutil.h.
The Adler-32 checksum (printed in hexadecimal) of all the planes of the input frame.
The Adler-32 checksum (printed in hexadecimal) of each plane of the input frame, expressed in the form "[c0 c1 c2 c3]".
The mean value of pixels in each plane of the input frame, expressed in the form "[mean0 mean1 mean2 mean3]".
The standard deviation of pixel values in each plane of the input frame, expressed in the form "[stdev0 stdev1 stdev2 stdev3]".
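
For example, to print one line per frame while discarding the output (input.mp4 is a placeholder):

ffmpeg -i input.mp4 -vf showinfo -f null -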

Displays the 256 colors palette of each frame. This filter is only relevant for pal8 pixel format frames.

It accepts the following option:

Set the size of the box used to represent one palette color entry. Default is 30 (for a "30x30" pixel box).
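
For example, to display the palette with a 64x64 pixel box per entry (a sketch; video.gif is a placeholder and the s option name is assumed):

ffplay video.gif -vf showpalette=s=64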

Reorder and/or duplicate and/or drop video frames.

It accepts the following parameters:

Set the destination indexes of input frames. This is a space- or '|'-separated list of indexes that maps input frames to output frames. The number of indexes also sets the maximal value that each index may have. The '-1' index has a special meaning: it drops the frame.

The first frame has the index 0. The default is to keep the input unchanged.

Examples

  • Swap second and third frame of every three frames of the input:
    ffmpeg -i INPUT -vf "shuffleframes=0 2 1" OUTPUT
    
  • Swap 10th and 1st frame of every ten frames of the input:
    ffmpeg -i INPUT -vf "shuffleframes=9 1 2 3 4 5 6 7 8 0" OUTPUT
    

Reorder pixels in video frames.

This filter accepts the following options:

Set shuffle direction. Can be forward or inverse direction. Default direction is forward.
Set shuffle mode. Can be horizontal, vertical or block mode.
Set shuffle block_size. In case of horizontal shuffle mode only width part of size is used, and in case of vertical shuffle mode only height part of size is used.
Set the random seed used for shuffling pixels. Mainly useful for making the filtering process reversible, so that the original input can be recovered. For example, to reverse a forward shuffle you need to use the same parameters and the exact same seed, and set the direction to inverse.
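
For example, to shuffle pixels in 16x16 blocks and then immediately undo the shuffle, recovering the input (a sketch; the direction, mode, width, height and seed option names are assumed):

shufflepixels=direction=forward:mode=block:width=16:height=16:seed=42,shufflepixels=direction=inverse:mode=block:width=16:height=16:seed=42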

Reorder and/or duplicate video planes.

It accepts the following parameters:

The index of the input plane to be used as the first output plane.
The index of the input plane to be used as the second output plane.
The index of the input plane to be used as the third output plane.
The index of the input plane to be used as the fourth output plane.

The first plane has the index 0. The default is to keep the input unchanged.

Examples

Swap the second and third planes of the input:
ffmpeg -i INPUT -vf shuffleplanes=0:2:1:3 OUTPUT

Evaluate various visual metrics that assist in determining issues associated with the digitization of analog video media.

By default the filter will log these metadata values:

Display the minimal Y value contained within the input frame. Expressed in range of [0-255].
Display the Y value at the 10% percentile within the input frame. Expressed in range of [0-255].
Display the average Y value within the input frame. Expressed in range of [0-255].
Display the Y value at the 90% percentile within the input frame. Expressed in range of [0-255].
Display the maximum Y value contained within the input frame. Expressed in range of [0-255].
Display the minimal U value contained within the input frame. Expressed in range of [0-255].
Display the U value at the 10% percentile within the input frame. Expressed in range of [0-255].
Display the average U value within the input frame. Expressed in range of [0-255].
Display the U value at the 90% percentile within the input frame. Expressed in range of [0-255].
Display the maximum U value contained within the input frame. Expressed in range of [0-255].
Display the minimal V value contained within the input frame. Expressed in range of [0-255].
Display the V value at the 10% percentile within the input frame. Expressed in range of [0-255].
Display the average V value within the input frame. Expressed in range of [0-255].
Display the V value at the 90% percentile within the input frame. Expressed in range of [0-255].
Display the maximum V value contained within the input frame. Expressed in range of [0-255].
Display the minimal saturation value contained within the input frame. Expressed in range of [0-~181.02].
Display the saturation value at the 10% percentile within the input frame. Expressed in range of [0-~181.02].
Display the average saturation value within the input frame. Expressed in range of [0-~181.02].
Display the saturation value at the 90% percentile within the input frame. Expressed in range of [0-~181.02].
Display the maximum saturation value contained within the input frame. Expressed in range of [0-~181.02].
Display the median value for hue within the input frame. Expressed in range of [0-360].
Display the average value for hue within the input frame. Expressed in range of [0-360].
Display the average of sample value difference between all values of the Y plane in the current frame and corresponding values of the previous input frame. Expressed in range of [0-255].
Display the average of sample value difference between all values of the U plane in the current frame and corresponding values of the previous input frame. Expressed in range of [0-255].
Display the average of sample value difference between all values of the V plane in the current frame and corresponding values of the previous input frame. Expressed in range of [0-255].
Display bit depth of Y plane in current frame. Expressed in range of [0-16].
Display bit depth of U plane in current frame. Expressed in range of [0-16].
Display bit depth of V plane in current frame. Expressed in range of [0-16].

The filter accepts the following options:

stat
Specify an additional form of image analysis.
out
Output video with the specified type of pixel highlighted.

Both options accept the following values:

Identify temporal outlier pixels. A temporal outlier is a pixel unlike the neighboring pixels of the same field. Examples of temporal outliers include the results of video dropouts, head clogs, or tape tracking issues.
Identify vertical line repetition. Vertical line repetition includes similar rows of pixels within a frame. In born-digital video vertical line repetition is common, but this pattern is uncommon in video digitized from an analog source. When it occurs in video that results from the digitization of an analog source it can indicate concealment from a dropout compensator.
Identify pixels that fall outside of legal broadcast range.
Set the highlight color for the out option. The default color is yellow.

Examples

  • Output data of various video metrics:
    ffprobe -f lavfi movie=example.mov,signalstats="stat=tout+vrep+brng" -show_frames
    
  • Output specific data about the minimum and maximum values of the Y plane per frame:
    ffprobe -f lavfi movie=example.mov,signalstats -show_entries frame_tags=lavfi.signalstats.YMAX,lavfi.signalstats.YMIN
    
  • Playback video while highlighting pixels that are outside of broadcast range in red.
    ffplay example.mov -vf signalstats="out=brng:color=red"
    
  • Playback video with signalstats metadata drawn over the frame.
    ffplay example.mov -vf signalstats=stat=brng+vrep+tout,drawtext=fontfile=FreeSerif.ttf:textfile=signalstat_drawtext.txt
    

    The contents of signalstat_drawtext.txt used in the command are:

    time %{pts:hms}
    Y (%{metadata:lavfi.signalstats.YMIN}-%{metadata:lavfi.signalstats.YMAX})
    U (%{metadata:lavfi.signalstats.UMIN}-%{metadata:lavfi.signalstats.UMAX})
    V (%{metadata:lavfi.signalstats.VMIN}-%{metadata:lavfi.signalstats.VMAX})
    saturation maximum: %{metadata:lavfi.signalstats.SATMAX}
    

Calculates the MPEG-7 Video Signature. The filter can handle more than one input. In this case the matching between the inputs can be calculated additionally. The filter always passes through the first input. The signature of each stream can be written into a file.

It accepts the following options:

Enable or disable the matching process.

Available values are:

Disable the calculation of a matching (default).
Calculate the matching for the whole video and output whether the whole video matches or only parts.
Calculate only until a matching is found or the video ends. Should be faster in some cases.
Set the number of inputs. The option value must be a non negative integer. Default value is 1.
Set the path to which the output is written. If there is more than one input, the path must be a prototype, i.e. must contain %d or %0nd (where n is a positive integer), that will be replaced with the input number. If no filename is specified, no output will be written. This is the default.
format
Choose the output format.

Available values are:

Use the specified binary representation (default).
Use the specified xml representation.
Set threshold to detect one word as similar. The option value must be an integer greater than zero. The default value is 9000.
Set threshold to detect all words as similar. The option value must be an integer greater than zero. The default value is 60000.
Set threshold to detect frames as similar. The option value must be an integer greater than zero. The default value is 116.
Set the minimum length of a sequence in frames to recognize it as matching sequence. The option value must be a non negative integer value. The default value is 0.
Set the minimum relation, that matching frames to all frames must have. The option value must be a double value between 0 and 1. The default value is 0.5.

Examples

  • To calculate the signature of an input video and store it in signature.bin:
    ffmpeg -i input.mkv -vf signature=filename=signature.bin -map 0:v -f null -
    
  • To detect whether two videos match and store the signatures in XML format in signature0.xml and signature1.xml:
    ffmpeg -i input1.mkv -i input2.mkv -filter_complex "[0:v][1:v] signature=nb_inputs=2:detectmode=full:format=xml:filename=signature%d.xml" -map :v -f null -
    

Calculate Spatial Information (SI) and Temporal Information (TI) scores for a video, as defined in ITU-T Rec. P.910 (11/21): Subjective video quality assessment methods for multimedia applications. Available PDF at https://www.itu.int/rec/T-REC-P.910-202111-S/en. Note that this is a legacy implementation that corresponds to a superseded recommendation. Refer to ITU-T Rec. P.910 (07/22) for the latest version: https://www.itu.int/rec/T-REC-P.910-202207-I/en

It accepts the following option:

If set to 1, summary statistics will be printed to the console. Default 0.

Examples

To calculate SI/TI metrics and print summary:
ffmpeg -i input.mp4 -vf siti=print_summary=1 -f null -

Blur the input video without impacting the outlines.

It accepts the following options:

Set the luma radius. The option value must be a float number in the range [0.1,5.0] that specifies the variance of the gaussian filter used to blur the image (slower if larger). Default value is 1.0.
Set the luma strength. The option value must be a float number in the range [-1.0,1.0] that configures the blurring. A value included in [0.0,1.0] will blur the image whereas a value included in [-1.0,0.0] will sharpen the image. Default value is 1.0.
Set the luma threshold used as a coefficient to determine whether a pixel should be blurred or not. The option value must be an integer in the range [-30,30]. A value of 0 will filter all the image, a value included in [0,30] will filter flat areas and a value included in [-30,0] will filter edges. Default value is 0.
Set the chroma radius. The option value must be a float number in the range [0.1,5.0] that specifies the variance of the gaussian filter used to blur the image (slower if larger). Default value is luma_radius.
Set the chroma strength. The option value must be a float number in the range [-1.0,1.0] that configures the blurring. A value included in [0.0,1.0] will blur the image whereas a value included in [-1.0,0.0] will sharpen the image. Default value is luma_strength.
Set the chroma threshold used as a coefficient to determine whether a pixel should be blurred or not. The option value must be an integer in the range [-30,30]. A value of 0 will filter all the image, a value included in [0,30] will filter flat areas and a value included in [-30,0] will filter edges. Default value is luma_threshold.
Set the alpha radius. The option value must be a float number in the range [0.1,5.0] that specifies the variance of the gaussian filter used to blur the image (slower if larger). Default value is luma_radius.
Set the alpha strength. The option value must be a float number in the range [-1.0,1.0] that configures the blurring. A value included in [0.0,1.0] will blur the image whereas a value included in [-1.0,0.0] will sharpen the image. Default value is luma_strength.
Set the alpha threshold used as a coefficient to determine whether a pixel should be blurred or not. The option value must be an integer in the range [-30,30]. A value of 0 will filter all the image, a value included in [0,30] will filter flat areas and a value included in [-30,0] will filter edges. Default value is luma_threshold.

If a chroma or alpha option is not explicitly set, the corresponding luma value is set.
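
For example, to mildly blur flat areas while leaving edges mostly intact (a sketch):

smartblur=luma_radius=1.0:luma_strength=0.5:luma_threshold=10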

Apply sobel operator to input video stream.

The filter accepts the following option:

Set which planes will be processed; unprocessed planes will be copied. The default value is 0xf, meaning all planes will be processed.
scale
Set the value which will be multiplied with the filtered result.
Set the value which will be added to the filtered result.
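
For example, to apply the operator to the first plane only (a sketch; the planes option name is assumed):

sobel=planes=1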

Commands

This filter supports all the above options as commands.

Apply a simple postprocessing filter that compresses and decompresses the image at several (or - in the case of quality level 6 - all) shifts and averages the results.

The filter accepts the following options:

Set quality. This option defines the number of levels for averaging. It accepts an integer in the range 0-6. If set to 0, the filter will have no effect. A value of 6 means the highest quality. For each increment of that value the speed drops by a factor of approximately 2. Default value is 3.
qp
Force a constant quantization parameter. If not set, the filter will use the QP from the video stream (if available).
Set thresholding mode. Available modes are:
Set hard thresholding (default).
Set soft thresholding (better de-ringing effect, but likely blurrier).
Enable the use of the QP from the B-Frames if set to 1. Using this option may cause flicker since the B-Frames often have a larger QP. Default is 0 (not enabled).
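
For example, to run at the highest quality with a forced quantizer and soft thresholding (a sketch; the hard and soft value names are assumed):

spp=quality=6:qp=10:mode=soft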

Commands

This filter supports the following commands:

Set quality level. The value "max" can be used to set the maximum level, currently 6.

Scale the input by applying one of the super-resolution methods based on convolutional neural networks. Supported models:

Training scripts as well as scripts for model file (.pb) saving can be found at https://github.com/XueweiMeng/sr/tree/sr_dnn_native. Original repository is at https://github.com/HighVoltageRocknRoll/sr.git.

The filter accepts the following options:

Specify which DNN backend to use for model loading and execution. This option accepts the following values:
TensorFlow backend. To enable this backend you need to install the TensorFlow for C library (see https://www.tensorflow.org/install/lang_c) and configure FFmpeg with "--enable-libtensorflow"
Set path to model file specifying network architecture and its parameters. Note that different backends use different file formats; each backend can only load files in its own format.
Set scale factor for SRCNN model. Allowed values are 2, 3 and 4. Default value is 2. Scale factor is necessary for SRCNN model, because it accepts input upscaled using bicubic upscaling with proper scale factor.
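
For example, to upscale by a factor of 2 with the TensorFlow backend (a sketch; srcnn.pb is a placeholder model file and the dnn_backend and model option names are assumed):

sr=dnn_backend=tensorflow:scale_factor=2:model=srcnn.pb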

To get full functionality (such as async execution), please use the dnn_processing filter.

Obtain the SSIM (Structural SImilarity Metric) between two input videos.

This filter takes two input videos; the first input is considered the "main" source and is passed unchanged to the output. The second input is used as a "reference" video for computing the SSIM.

Both video inputs must have the same resolution and pixel format for this filter to work correctly. Also it assumes that both inputs have the same number of frames, which are compared one by one.

The filter stores the calculated SSIM of each frame.

The description of the accepted parameters follows.

If specified, the filter will use the named file to save the SSIM of each individual frame. When filename equals "-" the data is sent to standard output.

The file printed if stats_file is selected contains a sequence of key/value pairs of the form key:value for each compared couple of frames.

A description of each shown parameter follows:

sequential number of the input frame, starting from 1
SSIM of the compared frames for the component specified by the suffix.
SSIM of the compared frames for the whole frame.
Same as above but in dB representation.

This filter also supports the framesync options.

Examples

  • For example:
    movie=ref_movie.mpg, setpts=PTS-STARTPTS [main];
    [main][ref] ssim="stats_file=stats.log" [out]
    

    In this example the input file being processed is compared with the reference file ref_movie.mpg. The SSIM of each individual frame is stored in stats.log.

  • Another example with both psnr and ssim at same time:
    ffmpeg -i main.mpg -i ref.mpg -lavfi  "ssim;[0:v][1:v]psnr" -f null -
    
  • Another example with different containers:
    ffmpeg -i main.mpg -i ref.mkv -lavfi  "[0:v]settb=AVTB,setpts=PTS-STARTPTS[main];[1:v]settb=AVTB,setpts=PTS-STARTPTS[ref];[main][ref]ssim" -f null -
    

Convert between different stereoscopic image formats.

The filter accepts the following options:

Set stereoscopic image format of input.

Available values for input image formats are:

side by side parallel (left eye left, right eye right)
side by side crosseye (right eye left, left eye right)
side by side parallel with half width resolution (left eye left, right eye right)
side by side crosseye with half width resolution (right eye left, left eye right)
above-below (left eye above, right eye below)
above-below (right eye above, left eye below)
above-below with half height resolution (left eye above, right eye below)
above-below with half height resolution (right eye above, left eye below)
alternating frames (left eye first, right eye second)
alternating frames (right eye first, left eye second)
interleaved rows (left eye has top row, right eye starts on next row)
interleaved rows (right eye has top row, left eye starts on next row)
interleaved columns, left eye first
interleaved columns, right eye first

Default value is sbsl.

Set stereoscopic image format of output.
side by side parallel (left eye left, right eye right)
side by side crosseye (right eye left, left eye right)
side by side parallel with half width resolution (left eye left, right eye right)
side by side crosseye with half width resolution (right eye left, left eye right)
above-below (left eye above, right eye below)
above-below (right eye above, left eye below)
above-below with half height resolution (left eye above, right eye below)
above-below with half height resolution (right eye above, left eye below)
alternating frames (left eye first, right eye second)
alternating frames (right eye first, left eye second)
interleaved rows (left eye has top row, right eye starts on next row)
interleaved rows (right eye has top row, left eye starts on next row)
anaglyph red/blue gray (red filter on left eye, blue filter on right eye)
anaglyph red/green gray (red filter on left eye, green filter on right eye)
anaglyph red/cyan gray (red filter on left eye, cyan filter on right eye)
anaglyph red/cyan half colored (red filter on left eye, cyan filter on right eye)
anaglyph red/cyan color (red filter on left eye, cyan filter on right eye)
anaglyph red/cyan color optimized with the least squares projection of dubois (red filter on left eye, cyan filter on right eye)
anaglyph green/magenta gray (green filter on left eye, magenta filter on right eye)
anaglyph green/magenta half colored (green filter on left eye, magenta filter on right eye)
anaglyph green/magenta colored (green filter on left eye, magenta filter on right eye)
anaglyph green/magenta color optimized with the least squares projection of dubois (green filter on left eye, magenta filter on right eye)
anaglyph yellow/blue gray (yellow filter on left eye, blue filter on right eye)
anaglyph yellow/blue half colored (yellow filter on left eye, blue filter on right eye)
anaglyph yellow/blue colored (yellow filter on left eye, blue filter on right eye)
anaglyph yellow/blue color optimized with the least squares projection of dubois (yellow filter on left eye, blue filter on right eye)
mono output (left eye only)
mono output (right eye only)
checkerboard, left eye first
checkerboard, right eye first
interleaved columns, left eye first
interleaved columns, right eye first
HDMI frame pack

Default value is arcd.

Examples

  • Convert input video from side by side parallel to anaglyph yellow/blue dubois:
    stereo3d=sbsl:aybd
    
  • Convert input video from above below (left eye above, right eye below) to side by side crosseye.
    stereo3d=abl:sbsr
    

Select video or audio streams.

The filter accepts the following options:

Set number of inputs. Default is 2.
Set input indexes to remap to outputs.

Commands

The "streamselect" and "astreamselect" filter supports the following commands:

Set input indexes to remap to outputs.

Examples

  • Select the 1st stream for the first 5 seconds, and the 2nd stream for the rest of the time:
    sendcmd='5.0 streamselect map 1',streamselect=inputs=2:map=0
    
  • Same as above, but for audio:
    asendcmd='5.0 astreamselect map 1',astreamselect=inputs=2:map=0
    

Draw subtitles on top of input video using the libass library.

To enable compilation of this filter you need to configure FFmpeg with "--enable-libass". This filter also requires a build with libavcodec and libavformat to convert the passed subtitles file to ASS (Advanced SubStation Alpha) subtitles format.

The filter accepts the following options:

Set the filename of the subtitle file to read. It must be specified.
Specify the size of the original video, the video for which the ASS file was composed. For the syntax of this option, check the "Video size" section in the ffmpeg-utils manual. Due to a misdesign in ASS aspect ratio arithmetic, this is necessary to correctly scale the fonts if the aspect ratio has been changed.
Set a directory path containing fonts that can be used by the filter. These fonts will be used in addition to whatever the font provider uses.
Process alpha channel, by default alpha channel is untouched.
Set subtitles input character encoding. "subtitles" filter only. Only useful if not UTF-8.
Set subtitles stream index. "subtitles" filter only.
Override default style or script info parameters of the subtitles. It accepts a string containing ASS style format "KEY=VALUE" couples separated by ",".
Break lines according to the Unicode Line Breaking Algorithm. Availability requires at least libass release 0.17.0 (or LIBASS_VERSION 0x01600010), and libass must have been built with libunibreak.

The option is enabled by default except for native ASS.

If the first key is not specified, it is assumed that the first value specifies the filename.

For example, to render the file sub.srt on top of the input video, use the command:

subtitles=sub.srt

which is equivalent to:

subtitles=filename=sub.srt

To render the default subtitles stream from file video.mkv, use:

subtitles=video.mkv

To render the second subtitles stream from that file, use:

subtitles=video.mkv:si=1

To make the subtitles stream from sub.srt appear in 80% transparent blue "DejaVu Serif", use:

subtitles=sub.srt:force_style='Fontname=DejaVu Serif,PrimaryColour=&HCCFF0000'

Scale the input by 2x and smooth using the Super2xSaI (Scale and Interpolate) pixel art scaling algorithm.

Useful for enlarging pixel art images without reducing sharpness.
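
For example (in.png and out.png are placeholders):

ffmpeg -i in.png -vf super2xsai out.png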

Swap two rectangular objects in video.

This filter accepts the following options:

Set object width.
Set object height.
Set 1st rect x coordinate.
Set 1st rect y coordinate.
Set 2nd rect x coordinate.
Set 2nd rect y coordinate.

All expressions are evaluated once for each frame.

All options are expressions containing the following constants:

The input width and height.
The same as w / h.
The input sample aspect ratio.
The input display aspect ratio; it is the same as (w / h) * sar.
The number of the input frame, starting from 0.
The timestamp expressed in seconds. It's NAN if the input timestamp is unknown.
The position in the file of the input frame, NAN if unknown. Deprecated, do not use.
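
For example, to swap the top-left and bottom-right quadrants of each frame (a sketch; the w, h, x1, y1, x2 and y2 option names are assumed):

swaprect=w=iw/2:h=ih/2:x1=0:y1=0:x2=iw/2:y2=ih/2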

Commands

This filter supports all the above options as commands.

Swap U & V plane.

Blend successive video frames.

See the blend filter.
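
For example, to visualize differences between successive frames (a sketch using blend's all_mode option):

tblend=all_mode=difference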

Apply telecine process to the video.

This filter accepts the following options:

top field first
bottom field first
The default value is "top".
A string of numbers representing the pulldown pattern you wish to apply. The default value is 23.
Some typical patterns:

NTSC output (30i):
27.5p: 32222
24p: 23 (classic)
24p: 2332 (preferred)
20p: 33
18p: 334
16p: 3444

PAL output (25i):
27.5p: 12222
24p: 222222222223 ("Euro pulldown")
16.67p: 33
16p: 33333334
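
For example, to apply the classic 2:3 pulldown to 24 fps material for NTSC output (a sketch; the pattern option name is assumed):

telecine=pattern=23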

Compute and draw a color distribution histogram for the input video across time.

Unlike the histogram video filter, which only shows the histogram of a single input frame at a certain time, this filter also shows the past histograms for a number of frames defined by the "width" option.

The computed histogram is a representation of the color component distribution in an image.

The filter accepts the following options:

Set the width of a single color component output. Default value is 0, meaning the width will be picked from the input video. This also sets the number of past histograms to keep. Allowed range is [0, 8192].
Set display mode. It accepts the following values:
Per color component graphs are placed below each other.
Per color component graphs are placed side by side.
overlay
Presents information identical to that in the "parade", except that the graphs representing color components are superimposed directly over one another.

Default is "stack".

Set mode. Can be either "linear", or "logarithmic". Default is "linear".
Set what color components to display. Default is 7.
Set background opacity. Default is 0.9.
Show envelope. Default is disabled.
Set envelope color. Default is "gold".
Set slide mode.

Available values for slide are:

Draw new frame when right border is reached.
Replace old columns with new ones.
scroll
Scroll from right to left.
Scroll from left to right.
Draw single picture.

Default is "replace".

Apply threshold effect to video stream.

This filter needs four video streams to perform thresholding. The first stream is the stream we are filtering. The second stream holds the threshold values, the third stream holds the min values, and the last, fourth stream holds the max values.

The filter accepts the following option:

Set which planes will be processed; unprocessed planes will be copied. The default value is 0xf, meaning all planes will be processed.

For example, if the first stream's pixel component value is less than the threshold value of the corresponding pixel component from the 2nd (threshold) stream, the third stream's value will be picked, otherwise the fourth stream's pixel component value will be picked.

Using the color source filter, one can perform various types of thresholding, as the examples below show:

Commands

This filter supports all the above options as commands.

Examples

  • Binary threshold, using gray color as threshold:
    ffmpeg -i 320x240.avi -f lavfi -i color=gray -f lavfi -i color=black -f lavfi -i color=white -lavfi threshold output.avi
    
  • Inverted binary threshold, using gray color as threshold:
    ffmpeg -i 320x240.avi -f lavfi -i color=gray -f lavfi -i color=white -f lavfi -i color=black -lavfi threshold output.avi
    
  • Truncate binary threshold, using gray color as threshold:
    ffmpeg -i 320x240.avi -f lavfi -i color=gray -i 320x240.avi -f lavfi -i color=gray -lavfi threshold output.avi
    
  • Threshold to zero, using gray color as threshold:
    ffmpeg -i 320x240.avi -f lavfi -i color=gray -f lavfi -i color=white -i 320x240.avi -lavfi threshold output.avi
    
  • Inverted threshold to zero, using gray color as threshold:
    ffmpeg -i 320x240.avi -f lavfi -i color=gray -i 320x240.avi -f lavfi -i color=white -lavfi threshold output.avi
    

Select the most representative frame in a given sequence of consecutive frames.

The filter accepts the following options:

Set the frames batch size to analyze; in a set of n frames, the filter will pick one of them, and then handle the next batch of n frames until the end. Default is 100.
Set the log level to display picked frame stats. Default is "info".

Since the filter keeps track of the whole frames sequence, a bigger n value will result in a higher memory usage, so a high value is not recommended.

Examples

  • Extract one picture every 50 frames:
    thumbnail=50
    
  • Complete example of a thumbnail creation with ffmpeg:
    ffmpeg -i in.avi -vf thumbnail,scale=300:200 -frames:v 1 out.png
    

Tile several successive frames together.

The untile filter can do the reverse.

The filter accepts the following options:

Set the grid size in the form "COLUMNSxROWS". Range is up to UINT_MAX cells. Default is "6x5".
Set the maximum number of frames to render in the given area. It must be less than or equal to wxh. The default value is 0, meaning all the area will be used.
Set the outer border margin in pixels. Range is 0 to 1024. Default is 0.
Set the inner border thickness (i.e. the number of pixels between frames). For more advanced padding options (such as having different values for the edges), refer to the pad video filter. Range is 0 to 1024. Default is 0.
Specify the color of the unused area. For the syntax of this option, check the "Color" section in the ffmpeg-utils manual. The default value of color is "black".
Set the number of frames to overlap when tiling several successive frames together. The value must be between 0 and nb_frames - 1. Default is 0.
Set the number of frames to initially be empty before displaying the first output frame. This controls how soon one gets the first output frame. The value must be between 0 and nb_frames - 1. Default is 0.

Examples

  • Produce 8x8 PNG tiles of all keyframes (-skip_frame nokey) in a movie:
    ffmpeg -skip_frame nokey -i file.avi -vf 'scale=128:72,tile=8x8' -an -vsync 0 keyframes%03d.png
    

    The -vsync 0 is necessary to prevent ffmpeg from duplicating each output frame to accommodate the originally detected frame rate.

  • Display 5 pictures in an area of "3x2" frames, with 7 pixels between them, and 2 pixels of initial margin, using mixed flat and named options:
    tile=3x2:nb_frames=5:padding=7:margin=2
    

Apply tilt-and-shift effect.

What happens when you invert time and space?

Normally a video is composed of several frames, each representing a different instant of time and showing a scene that evolves in the space captured by the frame. This filter is the antipode of that concept, taking inspiration from tilt and shift photography.

A filtered frame contains the whole timeline of events composing the sequence, and this is obtained by placing a slice of pixels from each frame into a single one. However, since there are no infinite-width frames, this is done up to the width of the input frame, and a video is recomposed by shifting away one column for each subsequent frame. In order to map space to time, the filter tilts each input frame as well, so that motion is preserved. This is accomplished by progressively selecting a different column from each input frame.

The end result is a sort of inverted parallax, so that far away objects move much faster than the ones in the front. The ideal conditions for this video effect are when there is either very little motion and the background is static, or when there is a lot of motion and a very wide depth of field (e.g. wide panorama, while moving on a train).

The filter accepts the following parameters:

Tilt video while shifting (default). When unset, the video will slide over a static image composed of the first column of each frame.
What to do at the start of filtering (see below).
What to do at the end of filtering (see below).
How many columns should pass through before start of filtering.
pad
How many columns should be inserted before end of filtering.

Normally the filter shifts and tilts from the very first frame, and stops when the last one is received. However, before filtering starts, normal video may be preserved, so that the effect is slowly shifted into place. Similarly, the last video frame may be reconstructed at the end. Alternatively it is possible to just start and end with black.

Filtering starts immediately and ends when the last frame is received.
The first frames or the very last frame are kept intact during processing.
Black is padded at the beginning or at the end of filtering.
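
For example, to keep the first frames intact while the effect shifts in, and to end with black padding (a sketch; the start and end option names and the frame and black value names are assumed):

tiltandshift=start=frame:end=black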

Perform various types of temporal field interlacing.

Frames are counted starting from 1, so the first input frame is considered odd.

The filter accepts the following options:

Specify the mode of the interlacing. This option can also be specified as a value alone. See below for a list of values for this option.

Available values are:

Move odd frames into the upper field, even into the lower field, generating a double height frame at half frame rate.
 ------> time
Input:
Frame 1         Frame 2         Frame 3         Frame 4

11111           22222           33333           44444
11111           22222           33333           44444
11111           22222           33333           44444
11111           22222           33333           44444

Output:
11111                           33333
22222                           44444
11111                           33333
22222                           44444
11111                           33333
22222                           44444
11111                           33333
22222                           44444
Only output odd frames, even frames are dropped, generating a frame with unchanged height at half frame rate.
 ------> time
Input:
Frame 1         Frame 2         Frame 3         Frame 4

11111           22222           33333           44444
11111           22222           33333           44444
11111           22222           33333           44444
11111           22222           33333           44444

Output:
11111                           33333
11111                           33333
11111                           33333
11111                           33333
Only output even frames, odd frames are dropped, generating a frame with unchanged height at half frame rate.
 ------> time
Input:
Frame 1         Frame 2         Frame 3         Frame 4

11111           22222           33333           44444
11111           22222           33333           44444
11111           22222           33333           44444
11111           22222           33333           44444

Output:
                22222                           44444
                22222                           44444
                22222                           44444
                22222                           44444
Expand each frame to full height, but pad alternate lines with black, generating a frame with double height at the same input frame rate.
 ------> time
Input:
Frame 1         Frame 2         Frame 3         Frame 4

11111           22222           33333           44444
11111           22222           33333           44444
11111           22222           33333           44444
11111           22222           33333           44444

Output:
11111           .....           33333           .....
.....           22222           .....           44444
11111           .....           33333           .....
.....           22222           .....           44444
11111           .....           33333           .....
.....           22222           .....           44444
11111           .....           33333           .....
.....           22222           .....           44444
Interleave the upper field from odd frames with the lower field from even frames, generating a frame with unchanged height at half frame rate.
 ------> time
Input:
Frame 1         Frame 2         Frame 3         Frame 4

11111<-         22222           33333<-         44444
11111           22222<-         33333           44444<-
11111<-         22222           33333<-         44444
11111           22222<-         33333           44444<-

Output:
11111                           33333
22222                           44444
11111                           33333
22222                           44444
Interleave the lower field from odd frames with the upper field from even frames, generating a frame with unchanged height at half frame rate.
 ------> time
Input:
Frame 1         Frame 2         Frame 3         Frame 4

11111           22222<-         33333           44444<-
11111<-         22222           33333<-         44444
11111           22222<-         33333           44444<-
11111<-         22222           33333<-         44444

Output:
22222                           44444
11111                           33333
22222                           44444
11111                           33333
Double frame rate with unchanged height. Frames are inserted each containing the second temporal field from the previous input frame and the first temporal field from the next input frame. This mode relies on the top_field_first flag. Useful for interlaced video displays with no field synchronisation.
 ------> time
Input:
Frame 1         Frame 2         Frame 3         Frame 4

11111           22222           33333           44444
 11111           22222           33333           44444
11111           22222           33333           44444
 11111           22222           33333           44444

Output:
11111   22222   22222   33333   33333   44444   44444
 11111   11111   22222   22222   33333   33333   44444
11111   22222   22222   33333   33333   44444   44444
 11111   11111   22222   22222   33333   33333   44444
Move odd frames into the upper field, even into the lower field, generating a double height frame at the same frame rate.
 ------> time
Input:
Frame 1         Frame 2         Frame 3         Frame 4

11111           22222           33333           44444
11111           22222           33333           44444
11111           22222           33333           44444
11111           22222           33333           44444

Output:
11111           33333           33333           55555
22222           22222           44444           44444
11111           33333           33333           55555
22222           22222           44444           44444
11111           33333           33333           55555
22222           22222           44444           44444
11111           33333           33333           55555
22222           22222           44444           44444

Numeric values are deprecated but are accepted for backward compatibility reasons.

Default mode is "merge".

Specify flags influencing the filter process.

Available value for flags is:

Enable linear vertical low-pass filtering in the filter. Vertical low-pass filtering is required when creating an interlaced destination from a progressive source which contains high-frequency vertical detail. Filtering will reduce interlace 'twitter' and Moire patterning.
Enable complex vertical low-pass filtering. This will reduce interlace 'twitter' and Moire patterning slightly less, but better retain detail and the subjective sharpness impression.
Bypass already interlaced frames, only adjust the frame rate.

Vertical low-pass filtering and bypassing already interlaced frames can only be enabled for mode interleave_top and interleave_bottom.
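
For example, to interlace a progressive source with vertical low-pass filtering enabled (a sketch; the vlpf flag name is assumed):

tinterlace=interleave_top:flags=vlpf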

Pick median pixels from several successive input video frames.

The filter accepts the following options:

Set radius of median filter. Default is 1. Allowed range is from 1 to 127.
Set which planes to filter. Default value is 15, meaning all planes are processed.
Set median percentile. Default value is 0.5. The default value of 0.5 will always pick median values, while 0 will pick minimum values, and 1 maximum values.

Commands

This filter supports all the above options as commands, excluding the "radius" option.
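
For example, to pick the 90th-percentile value over a 5-frame window (radius 2) (a sketch):

tmedian=radius=2:percentile=0.9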

Apply Temporal Midway Video Equalization effect.

Midway Video Equalization adjusts a sequence of video frames to have the same histograms, while maintaining their dynamics as much as possible. It's useful for e.g. matching exposures from a video frames sequence.

This filter accepts the following options:

Set filtering radius. Default is 5. Allowed range is from 1 to 127.
Set filtering sigma. Default is 0.5. This controls strength of filtering. Setting this option to 0 effectively does nothing.
Set which planes to process. Default is 15, which is all available planes.
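
For example (a sketch; the radius and sigma option names are assumed):

tmidequalizer=radius=10:sigma=0.3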

Mix successive video frames.

A description of the accepted options follows.

The number of successive frames to mix. If unspecified, it defaults to 3.
Specify the weight of each input video frame. Weights are separated by spaces. If the number of weights is smaller than the number of frames, the last specified weight will be used for all remaining unset weights.
scale
Specify the scale. If set, it will be multiplied with the sum of each weight multiplied with pixel values to give the final destination pixel value. By default scale is automatically scaled to the sum of the weights.
Set which planes to filter. Default is all. Allowed range is from 0 to 15.

Examples

  • Average 7 successive frames:
    tmix=frames=7:weights="1 1 1 1 1 1 1"
    
  • Apply simple temporal convolution:
    tmix=frames=3:weights="-1 3 -1"
    
  • Similar to the above, but only showing temporal differences:
    tmix=frames=3:weights="-1 2 -1":scale=1
    

Commands

This filter supports the following commands:

scale
The syntax is the same as that of the option with the same name.

Tone map colors from different dynamic ranges.

This filter expects data in single precision floating point, as it needs to operate on (and can output) out-of-range values. Another filter, such as zscale, is needed to convert the resulting frame to a usable format.

The tonemapping algorithms implemented only work on linear light, so input data should be linearized beforehand (and possibly correctly tagged).

ffmpeg -i INPUT -vf zscale=transfer=linear,tonemap=clip,zscale=transfer=bt709,format=yuv420p OUTPUT

Options

The filter accepts the following options.

tonemap
Set the tone map algorithm to use.

Possible values are:

Do not apply any tone map, only desaturate overbright pixels.
Hard-clip any out-of-range values. Use it for perfect color accuracy for in-range values, while distorting out-of-range values.
Stretch the entire reference gamut to a linear multiple of the display.
Fit a logarithmic transfer between the tone curves.
Preserve overall image brightness with a simple curve, using nonlinear contrast, which results in flattening details and degrading color accuracy.
Preserve both dark and bright details better than reinhard, at the cost of slightly darkening everything. Use it when detail preservation is more important than color and brightness accuracy.
Smoothly map out-of-range values, while retaining contrast and colors for in-range material as much as possible. Use it when color accuracy is more important than detail preservation.

Default is none.

Tune the tone mapping algorithm.

This affects the following algorithms:

Ignored.
Specifies the scale factor to use while stretching. Defaults to 1.0.
Specifies the exponent of the function. Defaults to 1.8.
Specify an extra linear coefficient to multiply into the signal before clipping. Defaults to 1.0.
Specify the local contrast coefficient at the display peak. Defaults to 0.5, which means that in-gamut values will be about half as bright as when clipping.
Ignored.
Specify the transition point from linear to mobius transform. Every value below this point is guaranteed to be mapped 1:1. The higher the value, the more accurate the result will be, at the cost of losing bright details. Default to 0.3, which due to the steep initial slope still preserves in-range colors fairly accurately.
Apply desaturation for highlights that exceed this level of brightness. The higher the parameter, the more color information will be preserved. This setting helps prevent unnaturally blown-out colors for super-highlights, by (smoothly) turning them into white instead. This makes images feel more natural, at the cost of reducing information about out-of-range colors.

The default of 2.0 is somewhat conservative and will mostly just apply to skies or directly sunlit surfaces. A setting of 0.0 disables this option.

This option works only if the input frame has a supported color tag.

Override signal/nominal/reference peak with this value. Useful when the embedded peak information in display metadata is not reliable or when tone mapping from a lower range to a higher range.
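
For example, assuming the algorithm and desaturation options above are named tonemap and desat, a complete chain using the hable curve with desaturation disabled might look like:

ffmpeg -i INPUT -vf zscale=transfer=linear,tonemap=hable:desat=0,zscale=transfer=bt709,format=yuv420p OUTPUT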

Temporarily pad video frames.

The filter accepts the following options:

Specify number of delay frames before input video stream. Default is 0.
Specify number of padding frames after input video stream. Set to -1 to pad indefinitely. Default is 0.
Set the kind of frames added to the beginning of the stream. Can be either add or clone. With add, frames of solid color are added. With clone, frames are clones of the first frame. Default is add.
Set the kind of frames added to the end of the stream. Can be either add or clone. With add, frames of solid color are added. With clone, frames are clones of the last frame. Default is add.
Specify the duration of the start/stop delay. See the Time duration section in the ffmpeg-utils(1) manual for the accepted syntax. These options override start and stop. Default is 0.
Specify the color of the padded area. For the syntax of this option, check the "Color" section in the ffmpeg-utils manual.

The default value of color is "black".
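
For example, assuming the end-of-stream options above are named stop_mode and stop_duration, the last frame can be frozen for 2 extra seconds with:

ffmpeg -i INPUT -vf tpad=stop_mode=clone:stop_duration=2 OUTPUT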

Transpose rows with columns in the input video and optionally flip it.

It accepts the following parameters:

Specify the transposition direction.

Can assume the following values:

0, 4, cclock_flip
Rotate by 90 degrees counterclockwise and vertically flip (default), that is:
L.R     L.l
. . ->  . .
l.r     R.r
1, 5, clock
Rotate by 90 degrees clockwise, that is:
L.R     l.L
. . ->  . .
l.r     r.R
2, 6, cclock
Rotate by 90 degrees counterclockwise, that is:
L.R     R.r
. . ->  . .
l.r     L.l
3, 7, clock_flip
Rotate by 90 degrees clockwise and vertically flip, that is:
L.R     r.R
. . ->  . .
l.r     l.L

For values between 4 and 7, the transposition is only done if the input video geometry is portrait and not landscape. These values are deprecated; the "passthrough" option should be used instead.

Numerical values are deprecated, and should be dropped in favor of symbolic constants.

Do not apply the transposition if the input geometry matches the specified value. It accepts the following values:
Always apply transposition.
Preserve portrait geometry (when height >= width).
Preserve landscape geometry (when width >= height).

Default value is "none".

For example, to rotate by 90 degrees clockwise and preserve portrait layout:

transpose=dir=1:passthrough=portrait

The command above can also be specified as:

transpose=1:portrait

Transpose rows with columns in the input video and optionally flip it. For more in-depth examples see the transpose video filter, which shares mostly the same options.

It accepts the following parameters:

Specify the transposition direction.

Can assume the following values:

Rotate by 90 degrees counterclockwise and vertically flip. (default)
Rotate by 90 degrees clockwise.
Rotate by 90 degrees counterclockwise.
Rotate by 90 degrees clockwise and vertically flip.
Do not apply the transposition if the input geometry matches the specified value. It accepts the following values:
Always apply transposition. (default)
Preserve portrait geometry (when height >= width).
Preserve landscape geometry (when width >= height).

Trim the input so that the output contains one continuous subpart of the input.

It accepts the following parameters:

Specify the time of the start of the kept section, i.e. the frame with the timestamp start will be the first frame in the output.
Specify the time of the first frame that will be dropped, i.e. the frame immediately preceding the one with the timestamp end will be the last frame in the output.
This is the same as start, except this option sets the start timestamp in timebase units instead of seconds.
This is the same as end, except this option sets the end timestamp in timebase units instead of seconds.
The maximum duration of the output in seconds.
The number of the first frame that should be passed to the output.
The number of the first frame that should be dropped.

start, end, and duration are expressed as time duration specifications; see the Time duration section in the ffmpeg-utils(1) manual for the accepted syntax.

Note that the first two sets of the start/end options and the duration option look at the frame timestamp, while the _frame variants simply count the frames that pass through the filter. Also note that this filter does not modify the timestamps. If you wish for the output timestamps to start at zero, insert a setpts filter after the trim filter.

If multiple start or end options are set, this filter tries to be greedy and keep all the frames that match at least one of the specified constraints. To keep only the part that matches all the constraints at once, chain multiple trim filters.

The defaults are such that all the input is kept. So it is possible to set e.g. just the end values to keep everything before the specified time.

Examples:

  • Drop everything except the second minute of input:
    ffmpeg -i INPUT -vf trim=60:120
    
  • Keep only the first second:
    ffmpeg -i INPUT -vf trim=duration=1
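    
  • Trim to the 10-20 second range and, as noted above, reset timestamps with setpts so the output starts at zero:
    ffmpeg -i INPUT -vf "trim=start=10:end=20,setpts=PTS-STARTPTS" OUTPUT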
    

Apply alpha unpremultiply effect to input video stream using first plane of second stream as alpha.

Both streams must have the same dimensions and the same pixel format.

The filter accepts the following option:

Set which planes will be processed; unprocessed planes will be copied. Default value is 0xf, meaning all planes will be processed.

If the format has 1 or 2 components, then luma is bit 0. If the format has 3 or 4 components: for RGB formats bit 0 is green, bit 1 is blue and bit 2 is red; for YUV formats bit 0 is luma, bit 1 is chroma-U and bit 2 is chroma-V. If present, the alpha channel is always the last bit.

Do not require a 2nd input for processing; instead, use the alpha plane from the input stream.
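
For example, assuming the single-input option above is named inplace, the input's own alpha plane can be used without a second stream:

ffmpeg -i INPUT -vf unpremultiply=inplace=1 OUTPUT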

Sharpen or blur the input video.

It accepts the following parameters:

Set the luma matrix horizontal size. It must be an odd integer between 3 and 23. The default value is 5.
Set the luma matrix vertical size. It must be an odd integer between 3 and 23. The default value is 5.
Set the luma effect strength. It must be a floating point number; reasonable values lie between -1.5 and 1.5.

Negative values will blur the input video, while positive values will sharpen it, a value of zero will disable the effect.

Default value is 1.0.

Set the chroma matrix horizontal size. It must be an odd integer between 3 and 23. The default value is 5.
Set the chroma matrix vertical size. It must be an odd integer between 3 and 23. The default value is 5.
Set the chroma effect strength. It must be a floating point number; reasonable values lie between -1.5 and 1.5.

Negative values will blur the input video, while positive values will sharpen it, a value of zero will disable the effect.

Default value is 0.0.

Set the alpha matrix horizontal size. It must be an odd integer between 3 and 23. The default value is 5.
Set the alpha matrix vertical size. It must be an odd integer between 3 and 23. The default value is 5.
Set the alpha effect strength. It must be a floating point number; reasonable values lie between -1.5 and 1.5.

Negative values will blur the input video, while positive values will sharpen it, a value of zero will disable the effect.

Default value is 0.0.

All parameters are optional and default to the equivalent of the string '5:5:1.0:5:5:0.0'.

Examples

  • Apply strong luma sharpen effect:
    unsharp=luma_msize_x=7:luma_msize_y=7:luma_amount=2.5
    
  • Apply a strong blur of both luma and chroma parameters:
    unsharp=7:7:-2:7:7:-2
    

Decompose a video made of tiled images into the individual images.

The frame rate of the output video is the frame rate of the input video multiplied by the number of tiles.

This filter does the reverse of tile.

The filter accepts the following options:

Set the grid size (i.e. the number of lines and columns). For the syntax of this option, check the "Video size" section in the ffmpeg-utils manual.

Examples

Produce a 1-second video from a still image file made of 25 frames stacked vertically, like an analog film reel:
ffmpeg -r 1 -i image.jpg -vf untile=1x25 movie.mkv

Apply an ultra slow/simple postprocessing filter that compresses and decompresses the image at several (or, in the case of quality level 8, all) shifts and averages the results.

The way this differs from the behavior of spp is that uspp actually encodes & decodes each case with libavcodec Snow, whereas spp uses a simplified intra only 8x8 DCT similar to MJPEG.

This filter is not available in ffmpeg versions between 5.0 and 6.0.

The filter accepts the following options:

Set quality. This option defines the number of levels for averaging. It accepts an integer in the range 0-8. If set to 0, the filter will have no effect. A value of 8 means the highest quality. For each increment of that value the speed drops by a factor of approximately 2. Default value is 3.
qp
Force a constant quantization parameter. If not set, the filter will use the QP from the video stream (if available).
Use the specified codec instead of snow.
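
For example, assuming the quality option above is named quality, a slower but higher-quality pass can be requested with:

ffmpeg -i INPUT -vf uspp=quality=6 OUTPUT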

Convert 360 videos between various formats.

The filter accepts the following options:

Set format of the input/output video.

Available formats:

Equirectangular projection.
Cubemap with 3x2/6x1/1x6 layout.

Format specific options:

Set padding proportion for the input/output cubemap. Values in decimals.

Example values:

0
No padding.
0.01
1% of the face is padding. For example, with 1920x1280 resolution the face size would be 640x640 and the padding would be about 3 pixels on each side (640 * 0.01 ≈ 6 pixels in total).

Default value is 0. Maximum value is 0.1.

Set fixed padding for the input/output cubemap. Values in pixels.

Default value is 0. If greater than zero it overrides other padding options.

Set order of faces for the input/output cubemap. Choose one direction for each position.

Designation of directions:

right
left
up
down
forward
back

Default value is "rludfb".

Set rotation of faces for the input/output cubemap. Choose one angle for each position.

Designation of angles:

0
0 degrees clockwise
1
90 degrees clockwise
2
180 degrees clockwise
3
270 degrees clockwise

Default value is "000000".

Equi-Angular Cubemap.
Regular video.

Format specific options:

Set output horizontal/vertical/diagonal field of view. Values in degrees.

If diagonal field of view is set it overrides horizontal and vertical field of view.

Set input horizontal/vertical/diagonal field of view. Values in degrees.

If diagonal field of view is set it overrides horizontal and vertical field of view.

Dual fisheye.

Format specific options:

Set output horizontal/vertical/diagonal field of view. Values in degrees.

If diagonal field of view is set it overrides horizontal and vertical field of view.

Set input horizontal/vertical/diagonal field of view. Values in degrees.

If diagonal field of view is set it overrides horizontal and vertical field of view.

Facebook's 360 formats.
Stereographic format.

Format specific options:

Set output horizontal/vertical/diagonal field of view. Values in degrees.

If diagonal field of view is set it overrides horizontal and vertical field of view.

Set input horizontal/vertical/diagonal field of view. Values in degrees.

If diagonal field of view is set it overrides horizontal and vertical field of view.

Mercator format.
Ball format, gives significant distortion toward the back.
Hammer-Aitoff map projection format.
Sinusoidal map projection format.
Fisheye projection.

Format specific options:

Set output horizontal/vertical/diagonal field of view. Values in degrees.

If diagonal field of view is set it overrides horizontal and vertical field of view.

Set input horizontal/vertical/diagonal field of view. Values in degrees.

If diagonal field of view is set it overrides horizontal and vertical field of view.

Pannini projection.

Format specific options:

Set output pannini parameter.
Set input pannini parameter.
Cylindrical projection.

Format specific options:

Set output horizontal/vertical/diagonal field of view. Values in degrees.

If diagonal field of view is set it overrides horizontal and vertical field of view.

Set input horizontal/vertical/diagonal field of view. Values in degrees.

If diagonal field of view is set it overrides horizontal and vertical field of view.

perspective
Perspective projection. (output only)

Format specific options:

Set perspective parameter.
Tetrahedron projection.
Truncated square pyramid projection.
Half equirectangular projection.
Equisolid format.

Format specific options:

Set output horizontal/vertical/diagonal field of view. Values in degrees.

If diagonal field of view is set it overrides horizontal and vertical field of view.

Set input horizontal/vertical/diagonal field of view. Values in degrees.

If diagonal field of view is set it overrides horizontal and vertical field of view.

Orthographic format.

Format specific options:

Set output horizontal/vertical/diagonal field of view. Values in degrees.

If diagonal field of view is set it overrides horizontal and vertical field of view.

Set input horizontal/vertical/diagonal field of view. Values in degrees.

If diagonal field of view is set it overrides horizontal and vertical field of view.

Octahedron projection.
Cylindrical Equal Area projection.
Set interpolation method. Note: more complex interpolation methods require much more memory to run.

Available methods:

Nearest neighbour.
Bilinear interpolation.
Lagrange9 interpolation.
Bicubic interpolation.
Lanczos interpolation.
Spline16 interpolation.
Gaussian interpolation.
Mitchell interpolation.

Default value is "line".

Set the output video resolution.

Default resolution depends on formats.

Set the input/output stereo format.
2d
2D mono
Side by side
Top bottom

Default value is "2d" for input and output format.

Set rotation for the output video. Values in degrees.
Set rotation order for the output video. Choose one item for each position.
yaw
pitch
roll

Default value is "ypr".

Flip the output video horizontally (swaps left-right), vertically (swaps up-down), or in-depth (swaps back-forward). Boolean values.
Set if input video is flipped horizontally/vertically. Boolean values.
Set if input video is transposed. Boolean value, by default disabled.
Set if output video needs to be transposed. Boolean value, by default disabled.
Set output horizontal/vertical off-axis offset. Default is set to 0. Allowed range is from -1 to 1.
Build mask in alpha plane for all unmapped pixels by marking them fully transparent. Boolean value, by default disabled.
Reset rotation of output video. Boolean value, by default disabled.

Examples

  • Convert equirectangular video to cubemap with 3x2 layout and 1% padding using bicubic interpolation:
    ffmpeg -i input.mkv -vf v360=e:c3x2:cubic:out_pad=0.01 output.mkv
    
  • Extract back view of Equi-Angular Cubemap:
    ffmpeg -i input.mkv -vf v360=eac:flat:yaw=180 output.mkv
    
  • Convert transposed and horizontally flipped Equi-Angular Cubemap in side-by-side stereo format to equirectangular top-bottom stereo format:
    v360=eac:equirect:in_stereo=sbs:in_trans=1:ih_flip=1:out_stereo=tb
    

Commands

This filter supports a subset of the above options as commands.

Apply a wavelet based denoiser.

It transforms each frame from the video input into the wavelet domain, using Cohen-Daubechies-Feauveau 9/7, then applies some filtering to the obtained coefficients, and finally performs an inverse wavelet transform. Due to wavelet properties, it should give a nicely smoothed result with reduced noise, without blurring picture features.

This filter accepts the following options:

threshold
The filtering strength. The higher, the more filtered the video will be. Hard thresholding can use a higher threshold than soft thresholding before the video looks overfiltered. Default value is 2.
The filtering method the filter will use.

It accepts the following values:

All values under the threshold will be zeroed.
All values under the threshold will be zeroed. All values above will be reduced by the threshold.
Scales or nullifies coefficients - intermediary between (more) soft and (less) hard thresholding.

Default is garrote.

The number of times the wavelet will decompose the picture. The picture can't be decomposed beyond a particular point (typically 8 for a 640x480 frame, as 2^9 = 512 > 480). Valid values are integers between 1 and 32. Default value is 6.
Percentage of full denoising (limited coefficient shrinking), from 0 to 100. Default value is 85.
A list of the planes to process. By default all planes are processed.
The threshold type the filter will use.

It accepts the following values:

The threshold used is the same for all decompositions.
The threshold used also depends on the coefficients of each decomposition.

Default is universal.
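
For example, assuming the strength and method options above are named threshold and method (with the soft value described above), a gentle soft-thresholding pass might look like:

ffmpeg -i INPUT -vf vaguedenoiser=threshold=3:method=soft OUTPUT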

Apply variable blur filter by using 2nd video stream to set blur radius. The 2nd stream must have the same dimensions.

This filter accepts the following options:

Set min allowed radius. Allowed range is from 0 to 254. Default is 0.
Set max allowed radius. Allowed range is from 1 to 255. Default is 8.
Set which planes to process. By default, all are used.

The "varblur" filter also supports the framesync options.

Commands

This filter supports all the above options as commands.
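
For example, a minimal sketch that derives the per-pixel blur radius from the input's own luma; the lutyuv val/16 mapping is only an illustration, and the maximum-radius option is assumed to be named max_r:

ffmpeg -i INPUT -filter_complex "split[a][b];[b]lutyuv=y='val/16'[r];[a][r]varblur=max_r=16" OUTPUT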

Display 2 color component values in a two-dimensional graph (which is called a vectorscope).

This filter accepts the following options:

Set vectorscope mode.

It accepts the following values:

Gray values are displayed on the graph; higher brightness means more pixels have the same component color value at that location in the graph. This is the default mode.
Gray values are displayed on the graph. Surrounding pixel values which are not present in the video frame are drawn in a gradient of the 2 color components set by the "x" and "y" options. The 3rd color component is static.
Actual color component values present in the video frame are displayed on the graph.
Similar to color2, but a higher frequency of the same "x" and "y" values on the graph increases the value of the third color component, which is luminance for the default values of "x" and "y".
Actual colors present in the video frame are displayed on the graph. If two different colors map to the same position on the graph, then the color with the higher value of the component not present in the graph is picked.
Gray values are displayed on the graph. Similar to color, but with the 3rd color component picked from a radial gradient.
Set which color component will be represented on X-axis. Default is 1.
Set which color component will be represented on Y-axis. Default is 2.
Set intensity. Used by the gray, color, color3 and color5 modes to increase the brightness of the color component which represents the frequency of the (X, Y) location in the graph.
No envelope, this is default.
Instant envelope, even darkest single pixel will be clearly highlighted.
Hold maximum and minimum values presented in graph over time. This way you can still spot out-of-range values without constantly looking at the vectorscope.
Peak and instant envelope combined together.
Set what kind of graticule to draw.
Set graticule opacity.
Set graticule flags.
Draw graticule for white point.
Draw graticule for black point.
Draw color points short names.
Set background opacity.
Set low threshold for color component not represented on X or Y axis. Values lower than this value will be ignored. Default is 0. Note that this value is multiplied by the actual maximum possible value one pixel component can have, so for 8-bit input and a low threshold value of 0.1 the actual threshold is 0.1 * 255 = 25.
Set high threshold for color component not represented on X or Y axis. Values higher than this value will be ignored. Default is 1. Note that this value is multiplied by the actual maximum possible value one pixel component can have, so for 8-bit input and a high threshold value of 0.9 the actual threshold is 0.9 * 255 = 230.
Set what kind of colorspace to use when drawing graticule.
601
709

Default is auto.

Set color tint for gray/tint vectorscope mode. By default both options are zero. This means no tint, and output will remain gray.
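
For example, to plot the actual frame colors (mode names as referenced above):

ffmpeg -i INPUT -vf vectorscope=mode=color3 OUTPUT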

Analyze video stabilization/deshaking. Perform pass 1 of 2, see vidstabtransform for pass 2.

This filter generates a file with relative translation and rotation transform information about subsequent frames, which is then used by the vidstabtransform filter.

To enable compilation of this filter you need to configure FFmpeg with "--enable-libvidstab".

This filter accepts the following options:

Set the path to the file used to write the transforms information. Default value is transforms.trf.
Set how shaky the video is and how quick the camera is. It accepts an integer in the range 1-10, a value of 1 means little shakiness, a value of 10 means strong shakiness. Default value is 5.
Set the accuracy of the detection process. It must be a value in the range 1-15. A value of 1 means low accuracy, a value of 15 means high accuracy. Default value is 15.
Set stepsize of the search process. The region around minimum is scanned with 1 pixel resolution. Default value is 6.
Set minimum contrast. Below this value a local measurement field is discarded. Must be a floating point value in the range 0-1. Default value is 0.3.
Set reference frame number for tripod mode.

If enabled, the motion of the frames is compared to a reference frame in the filtered stream, identified by the specified number. The idea is to compensate for all movements in a more-or-less static scene and keep the camera view absolutely still.

If set to 0, it is disabled. The frames are counted starting from 1.

Show fields and transforms in the resulting frames. It accepts an integer in the range 0-2. Default value is 0, which disables any visualization.
Format for the transforms data file to be written. Acceptable values are:
Human-readable plain text
Binary format, roughly 40% smaller than "ascii". (default)

Examples

  • Use default values:
    vidstabdetect
    
  • Analyze strongly shaky movie and put the results in file mytransforms.trf:
    vidstabdetect=shakiness=10:accuracy=15:result="mytransforms.trf"
    
  • Visualize the result of internal transformations in the resulting video:
    vidstabdetect=show=1
    
  • Analyze a video with medium shakiness using ffmpeg:
    ffmpeg -i input -vf vidstabdetect=shakiness=5:show=1 dummy.avi
    

Video stabilization/deshaking: pass 2 of 2, see vidstabdetect for pass 1.

Read a file with transform information for each frame and apply/compensate them. Together with the vidstabdetect filter this can be used to deshake videos. See also http://public.hronopik.de/vid.stab. It is important to also use the unsharp filter, see below.

To enable compilation of this filter you need to configure FFmpeg with "--enable-libvidstab".

Options

Set path to the file used to read the transforms. Default value is transforms.trf.
Set the number of frames (value*2 + 1) used for lowpass filtering the camera movements. Default value is 10.

For example a number of 10 means that 21 frames are used (10 in the past and 10 in the future) to smoothen the motion in the video. A larger value leads to a smoother video, but limits the acceleration of the camera (pan/tilt movements). 0 is a special case where a static camera is simulated.

Set the camera path optimization algorithm.

Accepted values are:

gaussian kernel low-pass filter on camera motion (default)
averaging on transformations
Set maximal number of pixels to translate frames. Default value is -1, meaning no limit.
Set maximal angle in radians (degree*PI/180) to rotate frames. Default value is -1, meaning no limit.
crop
Specify how to deal with borders that may be visible due to movement compensation.

Available values are:

keep image information from previous frame (default)
fill the border black
Invert transforms if set to 1. Default value is 0.
Consider transforms as relative to previous frame if set to 1, absolute if set to 0. Default value is 0.
Set percentage to zoom. A positive value will result in a zoom-in effect, a negative value in a zoom-out effect. Default value is 0 (no zoom).
Set optimal zooming to avoid borders.

Accepted values are:

0
disabled
1
optimal static zoom value is determined (only very strong movements will lead to visible borders) (default)
2
optimal adaptive zoom value is determined (no borders will be visible), see zoomspeed

Note that the value given at zoom is added to the one calculated here.

Set percent to zoom maximally each frame (enabled when optzoom is set to 2). Range is from 0 to 5, default value is 0.25.
Specify type of interpolation.

Available values are:

no interpolation
linear only horizontal
linear in both directions (default)
cubic in both directions (slow)
Enable virtual tripod mode if set to 1, which is equivalent to "relative=0:smoothing=0". Default value is 0.

Use also "tripod" option of vidstabdetect.

Increase log verbosity if set to 1. Also the detected global motions are written to the temporary file global_motions.trf. Default value is 0.

Examples

  • Use ffmpeg for a typical stabilization with default values:
    ffmpeg -i inp.mpeg -vf vidstabtransform,unsharp=5:5:0.8:3:3:0.4 inp_stabilized.mpeg
    

    Note the use of the unsharp filter which is always recommended.

  • Zoom in a bit more and load transform data from a given file:
    vidstabtransform=zoom=5:input="mytransforms.trf"
    
  • Smoothen the video even more:
    vidstabtransform=smoothing=30
    

Flip the input video vertically.

For example, to vertically flip a video with ffmpeg:

ffmpeg -i in.avi -vf "vflip" out.avi

Detect variable frame rate video.

This filter tries to detect if the input is variable or constant frame rate.

At the end it will output the number of frames detected as having variable delta PTS, and the number with constant delta PTS. If there were frames with variable delta, it will also show the minimum, maximum and average delta encountered.
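
For example, to analyze a file and print the result:

ffmpeg -i INPUT -vf vfrdet -f null -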

Boost or alter saturation.

The filter accepts the following options:

Set strength of boost if positive value or strength of alter if negative value. Default is 0. Allowed range is from -2 to 2.
Set the red balance. Default is 1. Allowed range is from -10 to 10.
Set the green balance. Default is 1. Allowed range is from -10 to 10.
Set the blue balance. Default is 1. Allowed range is from -10 to 10.
Set the red luma coefficient.
Set the green luma coefficient.
Set the blue luma coefficient.
If "intensity" is negative and this is set to 1, colors will change, otherwise colors will be less saturated, more towards gray.

Commands

This filter supports all the above options as commands.
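
For example, assuming the boost option above is named intensity, saturation can be boosted with:

ffmpeg -i INPUT -vf vibrance=intensity=0.8 OUTPUT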

Obtain the average VIF (Visual Information Fidelity) between two input videos.

This filter takes two input videos.

Both input videos must have the same resolution and pixel format for this filter to work correctly. Also it assumes that both inputs have the same number of frames, which are compared one by one.

The obtained average VIF score is printed through the logging system.

The filter stores the calculated VIF score of each frame.

This filter also supports the framesync options.

In the example below, the processed input file main.mpg is compared with the reference file ref.mpg.

ffmpeg -i main.mpg -i ref.mpg -lavfi vif -f null -

Make or reverse a natural vignetting effect.

The filter accepts the following options:

Set lens angle expression as a number of radians.

The value is clipped in the "[0,PI/2]" range.

Default value: "PI/5"

Set center coordinates expressions. Respectively "w/2" and "h/2" by default.
Set forward/backward mode.

Available modes are:

The larger the distance from the central point, the darker the image becomes.
The larger the distance from the central point, the brighter the image becomes. This can be used to reverse a vignette effect, though there is no automatic detection to extract the lens angle and other settings (yet). It can also be used to create a burning effect.

Default value is forward.

Set evaluation mode for the expressions (angle, x0, y0).

It accepts the following values:

Evaluate expressions only once during the filter initialization.
Evaluate expressions for each incoming frame. This is way slower than the init mode since it requires all the scalers to be re-computed, but it allows advanced dynamic expressions.

Default value is init.

Set dithering to reduce the circular banding effects. Default is 1 (enabled).
Set vignette aspect. This setting allows one to adjust the shape of the vignette. Setting this value to the SAR of the input will make a rectangular vignetting following the dimensions of the video.

Default is "1/1".

Expressions

The angle, x0 and y0 expressions can contain the following parameters.

input width and height
the number of input frame, starting from 0
the PTS (Presentation TimeStamp) time of the filtered video frame, expressed in TB units, NAN if undefined
frame rate of the input video, NAN if the input frame rate is unknown
the PTS (Presentation TimeStamp) of the filtered video frame, expressed in seconds, NAN if undefined
time base of the input video

Examples

  • Apply simple strong vignetting effect:
    vignette=PI/4
    
  • Make a flickering vignetting:
    vignette='PI/4+random(1)*PI/50':eval=frame
    

Obtain the average VMAF motion score of a video. It is one of the component metrics of VMAF.

The obtained average motion score is printed through the logging system.

The filter accepts the following options:

If specified, the filter will use the named file to save the motion score of each frame with respect to the previous frame. When filename equals "-" the data is sent to standard output.

Example:

ffmpeg -i ref.mpg -vf vmafmotion -f null -

Stack input videos vertically.

All streams must be of the same pixel format and of the same width.

Note that this filter is faster than using the overlay and pad filters to create the same output.

The filter accepts the following options:

Set number of input streams. Default is 2.
If set to 1, force the output to terminate when the shortest input terminates. Default value is 0.
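
For example, to stack two videos on top of each other:

ffmpeg -i top.mkv -i bottom.mkv -filter_complex vstack=inputs=2 OUTPUT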

Deinterlace the input video ("w3fdif" stands for "Weston 3 Field Deinterlacing Filter").

Based on the process described by Martin Weston for BBC R&D, and implemented based on the de-interlace algorithm written by Jim Easterbrook for BBC R&D, the Weston 3 field deinterlacing filter uses filter coefficients calculated by BBC R&D.

This filter uses the field-dominance information in the frame to decide which of each pair of fields to place first in the output. If it gets this wrong, use the setfield filter before the "w3fdif" filter.

There are two sets of filter coefficients, so-called "simple" and "complex". Which set of filter coefficients is used can be set by passing an optional parameter:

Set the interlacing filter coefficients. Accepts one of the following values:
Simple filter coefficient set.
More-complex filter coefficient set.

Default value is complex.

The interlacing mode to adopt. It accepts one of the following values:
Output one frame for each frame.
field
Output one frame for each field.

The default value is "field".

The picture field parity assumed for the input interlaced video. It accepts one of the following values:
Assume the top field is first.
Assume the bottom field is first.
Enable automatic detection of field parity.

The default value is "auto". If the interlacing is unknown or the decoder does not export this information, top field first will be assumed.

Specify which frames to deinterlace. Accepts one of the following values:
Deinterlace all frames,
Only deinterlace frames marked as interlaced.

Default value is all.
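
For example, assuming the options above are named filter and deint, only frames marked as interlaced can be deinterlaced with the simple coefficient set:

ffmpeg -i INPUT -vf w3fdif=filter=simple:deint=interlaced OUTPUT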

Commands

This filter supports the same commands as options.

Video waveform monitor.

The waveform monitor plots color component intensity. By default luma only. Each column of the waveform corresponds to a column of pixels in the source video.

It accepts the following options:

Can be either "row", or "column". Default is "column". In row mode, the graph on the left side represents color component value 0 and the right side represents value = 255. In column mode, the top side represents color component value = 0 and bottom side represents value = 255.
Set intensity. Smaller values are useful to find out how many values of the same luminance are distributed across input rows/columns. Default value is 0.04. Allowed range is [0, 1].
Set mirroring mode. 0 means unmirrored, 1 means mirrored. In mirrored mode, higher values will be represented on the left side for "row" mode and at the top for "column" mode. Default is 1 (mirrored).
Set display mode. It accepts the following values:
overlay
Presents information identical to that in the "parade", except that the graphs representing color components are superimposed directly over one another.

This display mode makes it easier to spot relative differences or similarities in overlapping areas of the color components that are supposed to be identical, such as neutral whites, grays, or blacks.

Display separate graph for the color components side by side in "row" mode or one below the other in "column" mode.
Display separate graph for the color components side by side in "column" mode or one below the other in "row" mode.

Using this display mode makes it easy to spot color casts in the highlights and shadows of an image, by comparing the contours of the top and the bottom graphs of each waveform. Since whites, grays, and blacks are characterized by exactly equal amounts of red, green, and blue, neutral areas of the picture should display three waveforms of roughly equal width/height. If not, the correction is easy to perform by making level adjustments to the three waveforms.

Default is "stack".

Set which color components to display. Default is 1, which means only the luma (or red, if the input is in RGB colorspace) component. If it is set, for example, to 7 it will display all 3 color components (if available).
No envelope, this is default.
Instant envelope, minimum and maximum values presented in graph will be easily visible even with small "step" value.
Hold minimum and maximum values presented in graph across time. This way you can still spot out-of-range values without constantly looking at waveforms.
Peak and instant envelope combined together.
lowpass
No filtering, this is default.
Luma and chroma combined together.
Similar to the above, but shows the difference between blue and red chroma.
Similar to the above, but uses different colors.
Similar to the above, but again with different colors.
Displays only chroma.
Displays actual color value on waveform.
Similar to the above, but with luma showing the frequency of chroma values.
Set which graticule to display.
Do not display graticule.
Display green graticule showing legal broadcast ranges.
Display orange graticule showing legal broadcast ranges.
Display inverted graticule showing legal broadcast ranges.
Set graticule opacity.
Set graticule flags.
Draw numbers above lines. By default enabled.
Draw dots instead of lines.
Set scale used for displaying graticule.

Default is digital.

Set background opacity.
Set tint for output. Only used with lowpass filter and when display is not overlay and input pixel formats are not RGB.
Set the sample aspect ratio of the video output frames. Can be used to configure the waveform so it is not stretched too much in one direction.
Set sample aspect ratio to 1/1.
Set sample aspect ratio to match input size of video.

Default is none.

Set input formats for filter to pick from. Can be all, for selecting from all available formats, or first, for selecting first available format. Default is first.

The "weave" takes a field-based video input and join each two sequential fields into single frame, producing a new double height clip with half the frame rate and half the frame count.

The "doubleweave" works same as "weave" but without halving frame rate and frame count.

It accepts the following option:

Set first field. Available values are:
Set the frame as top-field-first.
Set the frame as bottom-field-first.

Examples

Interlace video using the select and separatefields filters:
separatefields,select=eq(mod(n,4),0)+eq(mod(n,4),3),weave

Apply the xBR high-quality magnification filter which is designed for pixel art. It follows a set of edge-detection rules, see https://forums.libretro.com/t/xbr-algorithm-tutorial/123.

It accepts the following option:

Set the scaling dimension: 2 for "2xBR", 3 for "3xBR" and 4 for "4xBR". Default is 3.
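
For example, to upscale pixel art by 4x:

ffmpeg -i INPUT -vf xbr=4 OUTPUT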

Apply normalized cross-correlation between first and second input video stream.

The dimensions of the second input video stream must be smaller than those of the first input video stream.

The filter accepts the following options:

Set which planes to process.
Set which secondary video frames will be processed from the second input video stream; can be first or all. Default is all.

The "xcorrelate" filter also supports the framesync options.

Apply a cross fade from one input video stream to another input video stream. The cross fade is applied for the specified duration.

Both inputs must be constant frame-rate and have the same resolution, pixel format, frame rate and timebase.

The filter accepts the following options:

Set one of the available transition effects:

Default transition effect is fade.

Set cross fade duration in seconds. Range is 0 to 60 seconds. Default duration is 1 second.
Set cross fade start relative to first input stream in seconds. Default offset is 0.
Set expression for custom transition effect.

The expressions can use the following variables and functions:

The coordinates of the current sample.
The width and height of the image.
Progress of transition effect.
Currently processed plane.
Return value of first input at current location and plane.
Return value of second input at current location and plane.
Return the value of the pixel at location (x,y) of the first/second/third/fourth component of first input.
Return the value of the pixel at location (x,y) of the first/second/third/fourth component of second input.

Examples

Cross fade from one input video to another input video, with a fade transition lasting 2 seconds, starting at an offset of 5 seconds:
ffmpeg -i first.mp4 -i second.mp4 -filter_complex xfade=transition=fade:duration=2:offset=5 output.mp4

Pick median pixels from several input videos.

The filter accepts the following options:

Set the number of inputs. Default is 3. Allowed range is from 3 to 255. If the number of inputs is an even number, the result will be the mean of the two middle values.
Set which planes to filter. Default value is 15, meaning all planes are processed.
Set median percentile. Default value is 0.5. The default value of 0.5 always picks the median value, while 0 picks the minimum value and 1 the maximum value.

Commands

This filter supports all above options as commands, excluding option "inputs".

Obtain the average (across all input frames) and minimum (across all color plane averages) eXtended Perceptually weighted peak Signal-to-Noise Ratio (XPSNR) between two input videos.

The XPSNR is a low-complexity psychovisually motivated distortion measurement algorithm for assessing the difference between two video streams or images. This is especially useful for objectively quantifying the distortions caused by video and image codecs, as an alternative to a formal subjective test. The logarithmic XPSNR output values are in a similar range as those of traditional psnr assessments but better reflect human impressions of visual coding quality. More details on the XPSNR measure, which essentially represents a blockwise weighted variant of the PSNR measure, can be found in the following freely available papers:

  • C. R. Helmrich, M. Siekmann, S. Becker, S. Bosse, D. Marpe, and T. Wiegand, "XPSNR: A Low-Complexity Extension of the Perceptually Weighted Peak Signal-to-Noise Ratio for High-Resolution Video Quality Assessment," in Proc. IEEE Int. Conf. Acoustics, Speech, Sig. Process. (ICASSP), virt./online, May 2020. <www.ecodis.de/xpsnr.htm>
  • C. R. Helmrich, S. Bosse, H. Schwarz, D. Marpe, and T. Wiegand, "A Study of the Extended Perceptually Weighted Peak Signal-to-Noise Ratio (XPSNR) for Video Compression with Different Resolutions and Bit Depths," ITU Journal: ICT Discoveries, vol. 3, no. 1, pp. 65 - 72, May 2020. http://handle.itu.int/11.1002/pub/8153d78b-en

When publishing the results of XPSNR assessments obtained using, e.g., this FFmpeg filter, a reference to the above papers as a means of documentation is strongly encouraged. The filter requires two input videos. The first input is considered a (usually not distorted) reference source and is passed unchanged to the output, whereas the second input is a (distorted) test signal. Except for the bit depth, these two video inputs must have the same pixel format. In addition, for best performance, both compared input videos should be in YCbCr color format.

The obtained overall XPSNR values mentioned above are printed through the logging system. In case of input with multiple color planes, we suggest reporting the minimum XPSNR average.

The following parameter, which behaves like the one for the psnr filter, is accepted:

If specified, the filter will use the named file to save the XPSNR value of each individual frame and color plane. When the file name equals "-", that data is sent to standard output.

This filter also supports the framesync options.

Examples

  • XPSNR analysis of two 1080p HD videos, ref_source.yuv and test_video.yuv, both at 24 frames per second, with color format 4:2:0, bit depth 8, and output of a logfile named "xpsnr.log":
    ffmpeg -s 1920x1080 -framerate 24 -pix_fmt yuv420p -i ref_source.yuv -s 1920x1080 -framerate
    24 -pix_fmt yuv420p -i test_video.yuv -lavfi xpsnr="stats_file=xpsnr.log" -f null -
    
  • XPSNR analysis of two 2160p UHD videos, ref_source.yuv with bit depth 8 and test_video.yuv with bit depth 10, both at 60 frames per second with color format 4:2:0, no logfile output:
    ffmpeg -s 3840x2160 -framerate 60 -pix_fmt yuv420p -i ref_source.yuv -s 3840x2160 -framerate
    60 -pix_fmt yuv420p10le -i test_video.yuv -lavfi xpsnr="stats_file=-" -f null -
    

Stack video inputs into custom layout.

All streams must be of same pixel format.

The filter accepts the following options:

Set number of input streams. Default is 2.
Specify the layout of inputs. This option requires the desired layout configuration to be explicitly set by the user. It sets the position of each video input in the output. Each input is separated by '|'. The first number represents the column, and the second number represents the row. Numbers start at 0 and are separated by '_'. Optionally one can use wX and hX, where X is the video input from which to take the width or height. Multiple values can be used when separated by '+'; in such a case values are summed together.

Note that if inputs are of different sizes gaps may appear, as not all of the output video frame will be filled. Similarly, videos can overlap each other if their position doesn't leave enough space for the full frame of adjoining videos.

For 2 inputs, a default layout of "0_0|w0_0" (equivalent to "grid=2x1") is set. In all other cases, a layout or a grid must be set by the user. Only one of "grid" or "layout" may be specified at a time; specifying both will result in an error.

Specify a fixed size grid of inputs. This option is used to create a fixed size grid of the input streams. Set the grid size in the form "COLUMNSxROWS". There must be "ROWS * COLUMNS" input streams and they will be arranged as a grid with "ROWS" rows and "COLUMNS" columns. When using this option, each input stream within a row must have the same height and all the rows must have the same width.

If "grid" is set, then "inputs" option is ignored and is implicitly set to "ROWS * COLUMNS".

For 2 inputs, a default grid of "2x1" (equivalent to "layout=0_0|w0_0") is set. In all other cases, a layout or a grid must be set by the user. Only one of "grid" or "layout" may be specified at a time; specifying both will result in an error.

If set to 1, force the output to terminate when the shortest input terminates. Default value is 0.
If set to valid color, all unused pixels will be filled with that color. By default fill is set to none, so it is disabled.

Examples

  • Display 4 inputs into 2x2 grid.

    Layout:

    input1(0, 0)  | input3(w0, 0)
    input2(0, h0) | input4(w0, h0)
    
    xstack=inputs=4:layout=0_0|0_h0|w0_0|w0_h0
    

    Note that if inputs are of different sizes, gaps or overlaps may occur.

  • Display 4 inputs into 1x4 grid.

    Layout:

    input1(0, 0)
    input2(0, h0)
    input3(0, h0+h1)
    input4(0, h0+h1+h2)
    
    xstack=inputs=4:layout=0_0|0_h0|0_h0+h1|0_h0+h1+h2
    

    Note that if inputs are of different widths, unused space will appear.

  • Display 9 inputs into 3x3 grid.

    Layout:

    input1(0, 0)       | input4(w0, 0)      | input7(w0+w3, 0)
    input2(0, h0)      | input5(w0, h0)     | input8(w0+w3, h0)
    input3(0, h0+h1)   | input6(w0, h0+h1)  | input9(w0+w3, h0+h1)
    
    xstack=inputs=9:layout=0_0|0_h0|0_h0+h1|w0_0|w0_h0|w0_h0+h1|w0+w3_0|w0+w3_h0|w0+w3_h0+h1
    

    Note that if inputs are of different sizes, gaps or overlaps may occur.

  • Display 16 inputs into 4x4 grid.

    Layout:

    input1(0, 0)       | input5(w0, 0)       | input9 (w0+w4, 0)       | input13(w0+w4+w8, 0)
    input2(0, h0)      | input6(w0, h0)      | input10(w0+w4, h0)      | input14(w0+w4+w8, h0)
    input3(0, h0+h1)   | input7(w0, h0+h1)   | input11(w0+w4, h0+h1)   | input15(w0+w4+w8, h0+h1)
    input4(0, h0+h1+h2)| input8(w0, h0+h1+h2)| input12(w0+w4, h0+h1+h2)| input16(w0+w4+w8, h0+h1+h2)
    
    xstack=inputs=16:layout=0_0|0_h0|0_h0+h1|0_h0+h1+h2|w0_0|w0_h0|w0_h0+h1|w0_h0+h1+h2|w0+w4_0|
    w0+w4_h0|w0+w4_h0+h1|w0+w4_h0+h1+h2|w0+w4+w8_0|w0+w4+w8_h0|w0+w4+w8_h0+h1|w0+w4+w8_h0+h1+h2
    

    Note that if inputs are of different sizes, gaps or overlaps may occur.

Deinterlace the input video ("yadif" means "yet another deinterlacing filter").

It accepts the following parameters:

The interlacing mode to adopt. It accepts one of the following values:
0, send_frame
Output one frame for each frame.
1, send_field
Output one frame for each field.
2, send_frame_nospatial
Like "send_frame", but it skips the spatial interlacing check.
3, send_field_nospatial
Like "send_field", but it skips the spatial interlacing check.

The default value is "send_frame".

The picture field parity assumed for the input interlaced video. It accepts one of the following values:
0, tff
Assume the top field is first.
1, bff
Assume the bottom field is first.
-1, auto
Enable automatic detection of field parity.

The default value is "auto". If the interlacing is unknown or the decoder does not export this information, top field first will be assumed.

Specify which frames to deinterlace. Accepts one of the following values:
0, all
Deinterlace all frames.
1, interlaced
Only deinterlace frames marked as interlaced.

The default value is "all".

Deinterlace the input video using the yadif algorithm, but implemented in CUDA so that it can work as part of a GPU accelerated pipeline with nvdec and/or nvenc.

It accepts the following parameters:

The interlacing mode to adopt. It accepts one of the following values:
0, send_frame
Output one frame for each frame.
1, send_field
Output one frame for each field.
2, send_frame_nospatial
Like "send_frame", but it skips the spatial interlacing check.
3, send_field_nospatial
Like "send_field", but it skips the spatial interlacing check.

The default value is "send_frame".

The picture field parity assumed for the input interlaced video. It accepts one of the following values:
0, tff
Assume the top field is first.
1, bff
Assume the bottom field is first.
-1, auto
Enable automatic detection of field parity.

The default value is "auto". If the interlacing is unknown or the decoder does not export this information, top field first will be assumed.

Specify which frames to deinterlace. Accepts one of the following values:
0, all
Deinterlace all frames.
1, interlaced
Only deinterlace frames marked as interlaced.

The default value is "all".

Apply blur filter while preserving edges ("yaepblur" means "yet another edge preserving blur filter"). The algorithm is described in "J. S. Lee, Digital image enhancement and noise filtering by use of local statistics, IEEE Trans. Pattern Anal. Mach. Intell. PAMI-2, 1980."

It accepts the following parameters:

Set the window radius. Default value is 3.
Set which planes to filter. Default is only the first plane.
Set blur strength. Default value is 128.
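
For example, assuming the options above are named radius and sigma, a stronger edge-preserving blur might be:

ffmpeg -i INPUT -vf yaepblur=radius=5:sigma=200 OUTPUT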

Commands

This filter supports same commands as options.

Apply Zoom & Pan effect.

This filter accepts the following options:

Set the zoom expression. Range is 1-10. Default is 1.
Set the x and y expression. Default is 0.
Set the duration expression in number of frames. This sets how many output frames the effect will last for a single input image. Default is 90.
Set the output image size, default is 'hd720'.
fps
Set the output frame rate, default is '25'.

Each expression can contain the following constants:

Input width.
Input height.
Output width.
Output height.
Input frame count.
Output frame count.
The input timestamp expressed in seconds. It's NAN if the input timestamp is unknown.
The output timestamp expressed in seconds.
Last calculated 'x' and 'y' position from 'x' and 'y' expression for the current input frame.
'x' and 'y' of the last output frame of the previous input frame, or 0 when there was no such frame yet (first input frame).
Last calculated zoom from 'z' expression for the current input frame.
Last calculated zoom of the last output frame of the previous input frame.
Number of output frames for the current input frame. Calculated from 'd' expression for each input frame.
Number of output frames created for the previous input frame.
Rational number: input width / input height.
Sample aspect ratio.
Display aspect ratio.

Examples

  • Zoom in up to 1.5x and pan at same time to some spot near center of picture:
    zoompan=z='min(zoom+0.0015,1.5)':d=700:x='if(gte(zoom,1.5),x,x+1/a)':y='if(gte(zoom,1.5),y,y+1)':s=640x360
    
  • Zoom in up to 1.5x and pan always at center of picture:
    zoompan=z='min(zoom+0.0015,1.5)':d=700:x='iw/2-(iw/zoom/2)':y='ih/2-(ih/zoom/2)'
    
  • Same as above but without pausing:
    zoompan=z='min(max(zoom,pzoom)+0.0015,1.5)':d=1:x='iw/2-(iw/zoom/2)':y='ih/2-(ih/zoom/2)'
    
  • Zoom in 2x into center of picture only for the first second of the input video:
    zoompan=z='if(between(in_time,0,1),2,1)':d=1:x='iw/2-(iw/zoom/2)':y='ih/2-(ih/zoom/2)'
    

Scale (resize) the input video, using the z.lib library: https://github.com/sekrit-twc/zimg. To enable compilation of this filter, you need to configure FFmpeg with "--enable-libzimg".

The zscale filter forces the output display aspect ratio to be the same as the input, by changing the output sample aspect ratio.

If the input image format is different from the format requested by the next filter, the zscale filter will convert the input to the requested format.

Options

The filter accepts the following options.

Set the output video dimension expression. Default value is the input dimension.

If the width or w value is 0, the input width is used for the output. If the height or h value is 0, the input height is used for the output.

If one and only one of the values is -n with n >= 1, the zscale filter will use a value that maintains the aspect ratio of the input image, calculated from the other specified dimension. After that it will, however, make sure that the calculated dimension is divisible by n and adjust the value if necessary.

If both values are -n with n >= 1, the behavior will be identical to both values being set to 0 as previously detailed.

See below for the list of accepted constants for use in the dimension expression.

Set the video size. For the syntax of this option, check the "Video size" section in the ffmpeg-utils manual.
Set the dither type.

Possible values are:

Default is none.

Set the resize filter type.

Possible values are:

Default is bilinear.

Set the color range.

Possible values are:

Default is same as input.

Set the color primaries.

Possible values are:

709
170m
240m
2020

Default is same as input.

Set the transfer characteristics.

Possible values are:

Default is same as input.

Set the colorspace matrix.

Possible value are:

709
470bg
170m
2020_ncl
2020_cl

Default is same as input.

Set the input color range.

Possible values are:

Default is same as input.

Set the input color primaries.

Possible values are:

709
170m
240m
2020

Default is same as input.

Set the input transfer characteristics.

Possible values are:

709
601
2020_10
2020_12

Default is same as input.

Set the input colorspace matrix.

Possible value are:

709
470bg
170m
2020_ncl
2020_cl
Set the output chroma location.

Possible values are:

Set the input chroma location.

Possible values are:

Set the nominal peak luminance.
Parameter A for scaling filters. Parameter "b" for bicubic, and the number of filter taps for lanczos.
Parameter B for scaling filters. Parameter "c" for bicubic.

The values of the w and h options are expressions containing the following constants:

The input width and height
These are the same as in_w and in_h.
The output (scaled) width and height
These are the same as out_w and out_h
The same as iw / ih
input sample aspect ratio
The input display aspect ratio. Calculated from "(iw / ih) * sar".
horizontal and vertical input chroma subsample values. For example for the pixel format "yuv422p" hsub is 2 and vsub is 1.
horizontal and vertical output chroma subsample values. For example for the pixel format "yuv422p" hsub is 2 and vsub is 1.
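
For example, assuming the resize filter option above is named filter, the input can be scaled to a width of 1280 while preserving the aspect ratio (height kept divisible by 2) with:

ffmpeg -i INPUT -vf zscale=w=1280:h=-2:filter=lanczos OUTPUT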

Commands

This filter supports the following commands:

Set the output video dimension expression. The command accepts the same syntax of the corresponding option.

If the specified expression is not valid, it is kept at its current value.

Below is a description of the currently available OpenCL video filters.

To enable compilation of these filters you need to configure FFmpeg with "--enable-opencl".

Running OpenCL filters requires you to initialize a hardware device and to pass that device to all filters in any filter graph.

Initialise a new hardware device of type opencl called name, using the given device parameters.
Pass the hardware device called name to all filters in any filter graph.

For more detailed information see https://www.ffmpeg.org/ffmpeg.html#Advanced-Video-options

Example of choosing the first device on the second platform and running avgblur_opencl filter with default parameters on it.
-init_hw_device opencl=gpu:1.0 -filter_hw_device gpu -i INPUT -vf "hwupload, avgblur_opencl, hwdownload" OUTPUT

Since OpenCL filters are not able to access frame data in normal memory, all frame data needs to be uploaded (hwupload) to hardware surfaces connected to the appropriate device before being used, and then downloaded (hwdownload) back to normal memory. Note that hwupload will upload to a surface with the same layout as the software frame, so it may be necessary to add a format filter immediately before it to get the input into the right format. Likewise, hwdownload does not support all formats on the output, so it may be necessary to insert an additional format filter immediately after it in the graph to get the output into a supported format.

Apply average blur filter.

The filter accepts the following options:

Set horizontal radius size. Range is "[1, 1024]" and default value is 1.
Set which planes to filter. Default value is 0xf, by which all planes are processed.
Set vertical radius size. Range is "[1, 1024]" and default value is 0. If zero, "sizeX" value will be used.

Example

Apply average blur filter with horizontal and vertical size of 3, setting each pixel of the output to the average value of the 7x7 region centered on it in the input. For pixels on the edges of the image, the region does not extend beyond the image boundaries, and so out-of-range coordinates are not used in the calculations.
-i INPUT -vf "hwupload, avgblur_opencl=3, hwdownload" OUTPUT

Apply a boxblur algorithm to the input video.

It accepts the following parameters:

A description of the accepted options follows.

Set an expression for the box radius in pixels used for blurring the corresponding input plane.

The radius value must be a non-negative number, and must not be greater than the value of the expression "min(w,h)/2" for the luma and alpha planes, and of "min(cw,ch)/2" for the chroma planes.

Default value for luma_radius is "2". If not specified, chroma_radius and alpha_radius default to the corresponding value set for luma_radius.

The expressions can contain the following constants:

The input width and height in pixels.
The input chroma image width and height in pixels.
The horizontal and vertical chroma subsample values. For example, for the pixel format "yuv422p", hsub is 2 and vsub is 1.
Specify how many times the boxblur filter is applied to the corresponding plane.

Default value for luma_power is 2. If not specified, chroma_power and alpha_power default to the corresponding value set for luma_power.

A value of 0 will disable the effect.

Examples

Apply boxblur filter, setting each pixel of the output to the average value of box-radiuses luma_radius, chroma_radius, alpha_radius for each plane respectively. The filter will apply luma_power, chroma_power, alpha_power times onto the corresponding plane. For pixels on the edges of the image, the radius does not extend beyond the image boundaries, and so out-of-range coordinates are not used in the calculations.

  • Apply a boxblur filter with the luma, chroma, and alpha radius set to 2 and luma, chroma, and alpha power set to 3. The filter will run 3 times with box-radius set to 2 for every plane of the image.
    -i INPUT -vf "hwupload, boxblur_opencl=luma_radius=2:luma_power=3, hwdownload" OUTPUT
    -i INPUT -vf "hwupload, boxblur_opencl=2:3, hwdownload" OUTPUT
    
  • Apply a boxblur filter with luma radius set to 2, luma_power to 1, chroma_radius to 4, chroma_power to 5, alpha_radius to 3 and alpha_power to 7.

    For the luma plane, a 2x2 box radius will be run once.

    For the chroma plane, a 4x4 box radius will be run 5 times.

    For the alpha plane, a 3x3 box radius will be run 7 times.

    -i INPUT -vf "hwupload, boxblur_opencl=2:1:4:5:3:7, hwdownload" OUTPUT
    

RGB colorspace color keying.

The filter accepts the following options:

The color which will be replaced with transparency.
Similarity percentage with the key color.

0.01 matches only the exact key color, while 1.0 matches everything.

blend
Blend percentage.

0.0 makes pixels either fully transparent, or not transparent at all.

Higher values result in semi-transparent pixels, with higher transparency the more similar the pixel's color is to the key color.

Examples

Make every semi-green pixel in the input transparent with some slight blending:
-i INPUT -vf "hwupload, colorkey_opencl=green:0.3:0.1, hwdownload" OUTPUT

Apply a 3x3, 5x5 or 7x7 convolution matrix.

The filter accepts the following options:

0m
1m
2m
3m
Set the matrix for each plane. A matrix is a sequence of 9, 25 or 49 signed numbers. Default value for each plane is "0 0 0 0 1 0 0 0 0".
0rdiv
1rdiv
2rdiv
3rdiv
Set the multiplier for the calculated value for each plane. If unset or 0, it will be the sum of all matrix elements. The option value must be a float number greater than or equal to 0.0. Default value is 1.0.
0bias
1bias
2bias
3bias
Set the bias for each plane. This value is added to the result of the multiplication. It is useful for making the overall image brighter or darker. The option value must be a float number greater than or equal to 0.0. Default value is 0.0.

Examples

  • Apply sharpen:
    -i INPUT -vf "hwupload, convolution_opencl=0 -1 0 -1 5 -1 0 -1 0:0 -1 0 -1 5 -1 0 -1 0:0 -1 0 -1 5 -1 0 -1 0:0 -1 0 -1 5 -1 0 -1 0, hwdownload" OUTPUT
    
  • Apply blur:
    -i INPUT -vf "hwupload, convolution_opencl=1 1 1 1 1 1 1 1 1:1 1 1 1 1 1 1 1 1:1 1 1 1 1 1 1 1 1:1 1 1 1 1 1 1 1 1:1/9:1/9:1/9:1/9, hwdownload" OUTPUT
    
  • Apply edge enhance:
    -i INPUT -vf "hwupload, convolution_opencl=0 0 0 -1 1 0 0 0 0:0 0 0 -1 1 0 0 0 0:0 0 0 -1 1 0 0 0 0:0 0 0 -1 1 0 0 0 0:5:1:1:1:0:128:128:128, hwdownload" OUTPUT
    
  • Apply edge detect:
    -i INPUT -vf "hwupload, convolution_opencl=0 1 0 1 -4 1 0 1 0:0 1 0 1 -4 1 0 1 0:0 1 0 1 -4 1 0 1 0:0 1 0 1 -4 1 0 1 0:5:5:5:1:0:128:128:128, hwdownload" OUTPUT
    
  • Apply laplacian edge detector which includes diagonals:
    -i INPUT -vf "hwupload, convolution_opencl=1 1 1 1 -8 1 1 1 1:1 1 1 1 -8 1 1 1 1:1 1 1 1 -8 1 1 1 1:1 1 1 1 -8 1 1 1 1:5:5:5:1:0:128:128:0, hwdownload" OUTPUT
    
  • Apply emboss:
    -i INPUT -vf "hwupload, convolution_opencl=-2 -1 0 -1 1 1 0 1 2:-2 -1 0 -1 1 1 0 1 2:-2 -1 0 -1 1 1 0 1 2:-2 -1 0 -1 1 1 0 1 2, hwdownload" OUTPUT
    

Apply erosion effect to the video.

This filter replaces each pixel by the local (3x3) minimum.

It accepts the following options:

Limit the maximum change for each plane. Range is "[0, 65535]" and default value is 65535. If 0, plane will remain unchanged.
Flag which specifies the pixel to refer to. Range is "[0, 255]" and default value is 255, i.e. all eight pixels are used.

Flags to local 3x3 coordinates region centered on "x":

1 2 3
4 x 5
6 7 8

Example

Apply the erosion filter with threshold0 set to 30, threshold1 set to 40, threshold2 set to 50 and coordinates set to 231, setting each pixel of the output to the local minimum between pixels 1, 2, 3, 6, 7, 8 of the 3x3 region centered on it in the input. If the difference between the input pixel and the local minimum is more than the threshold of the corresponding plane, the output pixel will be set to the input pixel - threshold of the corresponding plane.
-i INPUT -vf "hwupload, erosion_opencl=30:40:50:coordinates=231, hwdownload" OUTPUT

Feature-point based video stabilization filter.

The filter accepts the following options:

Simulates a tripod by preventing any camera movement whatsoever from the original frame. Defaults to 0.
Whether or not additional debug info should be displayed, both in the processed output and in the console.

Note that in order to see console debug output you will also need to pass "-v verbose" to ffmpeg.

Viewing point matches in the output video is only supported for RGB input.

Defaults to 0.

Whether or not to do a tiny bit of cropping at the borders to cut down on the amount of mirrored pixels.

Defaults to 1.

Whether or not feature points should be refined at a sub-pixel level.

This can be turned off for a slight performance gain at the cost of precision.

Defaults to 1.

The strength of the smoothing applied to the camera path from 0.0 to 1.0.

1.0 is the maximum smoothing strength while values less than that result in less smoothing.

0.0 causes the filter to adaptively choose a smoothing strength on a per-frame basis.

Defaults to 0.0.

Controls the size of the smoothing window (the number of frames buffered to determine motion information from).

The size of the smoothing window is determined by multiplying the framerate of the video by this number.

Acceptable values range from 0.1 to 10.0.

Larger values increase the amount of motion data available for determining how to smooth the camera path, potentially improving smoothness, but also increase latency and memory usage.

Defaults to 2.0.

Examples

  • Stabilize a video with a fixed, medium smoothing strength:
    -i INPUT -vf "hwupload, deshake_opencl=smooth_strength=0.5, hwdownload" OUTPUT
    
  • Stabilize a video with debugging (both in console and in rendered video):
    -i INPUT -filter_complex "[0:v]format=rgba, hwupload, deshake_opencl=debug=1, hwdownload, format=rgba, format=yuv420p" -v verbose OUTPUT
    

Apply dilation effect to the video.

This filter replaces each pixel by the local (3x3) maximum.

It accepts the following options:

Limit the maximum change for each plane. Range is "[0, 65535]" and default value is 65535. If 0, plane will remain unchanged.
Flag which specifies the pixel to refer to. Range is "[0, 255]" and default value is 255, i.e. all eight pixels are used.

Flags to local 3x3 coordinates region centered on "x":

1 2 3
4 x 5
6 7 8

Example

Apply the dilation filter with threshold0 set to 30, threshold1 set to 40, threshold2 set to 50 and coordinates set to 231, setting each pixel of the output to the local maximum between pixels 1, 2, 3, 6, 7, 8 of the 3x3 region centered on it in the input. If the difference between the input pixel and the local maximum is more than the threshold of the corresponding plane, the output pixel will be set to the input pixel + threshold of the corresponding plane.
-i INPUT -vf "hwupload, dilation_opencl=30:40:50:coordinates=231, hwdownload" OUTPUT

Non-local Means denoise filter through OpenCL. This filter accepts the same options as nlmeans.
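
For example, a minimal sketch denoising with the default parameters:
-i INPUT -vf "hwupload, nlmeans_opencl, hwdownload" OUTPUT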

Overlay one video on top of another.

It takes two inputs and has one output. The first input is the "main" video on which the second input is overlaid. This filter requires the same memory layout for all of its inputs, so format conversion may be needed.

The filter accepts the following options:

Set the x coordinate of the overlaid video on the main video. Default value is 0.
Set the y coordinate of the overlaid video on the main video. Default value is 0.

Examples

  • Overlay an image LOGO at the top-left corner of the INPUT video. Both inputs are yuv420p format.
    -i INPUT -i LOGO -filter_complex "[0:v]hwupload[a], [1:v]format=yuv420p, hwupload[b], [a][b]overlay_opencl, hwdownload" OUTPUT
    
  • The inputs have the same memory layout for the color channels, but the overlay has an additional alpha plane; for example, INPUT is yuv420p and LOGO is yuva420p.
    -i INPUT -i LOGO -filter_complex "[0:v]hwupload[a], [1:v]format=yuva420p, hwupload[b], [a][b]overlay_opencl, hwdownload" OUTPUT
    

Add padding to the input image, and place the original input at the provided x, y coordinates.

It accepts the following options:

Specify an expression for the size of the output image with the paddings added. If the value for width or height is 0, the corresponding input size is used for the output.

The width expression can reference the value set by the height expression, and vice versa.

The default value of width and height is 0.

Specify the offsets to place the input image at within the padded area, with respect to the top/left border of the output image.

The x expression can reference the value set by the y expression, and vice versa.

The default value of x and y is 0.

If x or y evaluate to a negative number, they'll be changed so the input image is centered on the padded area.

Specify the color of the padded area. For the syntax of this option, check the "Color" section in the ffmpeg-utils manual.
Pad to an aspect ratio instead of a resolution.

The values of the width, height, x, and y options are expressions containing the following constants:

The input video width and height.
These are the same as in_w and in_h.
The output width and height (the size of the padded area), as specified by the width and height expressions.
These are the same as out_w and out_h.
The x and y offsets as specified by the x and y expressions, or NAN if not yet specified.
The same as iw / ih.
The input sample aspect ratio.
The input display aspect ratio; it is the same as (iw / ih) * sar.
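
Example

A minimal sketch (assuming the OpenCL pad variant is invoked as pad_opencl): double the frame size and center the input on a black background.
-i INPUT -vf "hwupload, pad_opencl=w=iw*2:h=ih*2:x=(ow-iw)/2:y=(oh-ih)/2:color=black, hwdownload" OUTPUT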

Apply the Prewitt operator (https://en.wikipedia.org/wiki/Prewitt_operator) to the input video stream.

The filter accepts the following option:

Set which planes to filter. Default value is 0xf, by which all planes are processed.
scale
Set the value which will be multiplied with the filtered result. Range is "[0.0, 65535]" and default value is 1.0.
Set the value which will be added to the filtered result. Range is "[-65535, 65535]" and default value is 0.0.

Example

Apply the Prewitt operator with scale set to 2 and delta set to 10.
-i INPUT -vf "hwupload, prewitt_opencl=scale=2:delta=10, hwdownload" OUTPUT

Filter video using an OpenCL program.

OpenCL program source file.
Kernel name in program.
Number of inputs to the filter. Defaults to 1.
Size of output frames. Defaults to the same as the first input.

The "program_opencl" filter also supports the framesync options.

The program source file must contain a kernel function with the given name, which will be run once for each plane of the output. Each run on a plane gets enqueued as a separate 2D global NDRange with one work-item for each pixel to be generated. The global ID offset for each work-item is therefore the coordinates of a pixel in the destination image.

The kernel function needs to take the following arguments:

  • Destination image, __write_only image2d_t.

    This image will become the output; the kernel should write all of it.

  • Frame index, unsigned int.

    This is a counter starting from zero and increasing by one for each frame.

  • Source images, __read_only image2d_t.

    These are the most recent images on each input. The kernel may read from them to generate the output, but they can't be written to.

Example programs:

  • Copy the input to the output (output must be the same size as the input).
    __kernel void copy(__write_only image2d_t destination,
                       unsigned int index,
                       __read_only  image2d_t source)
    {
        const sampler_t sampler = CLK_NORMALIZED_COORDS_FALSE;
    
        int2 location = (int2)(get_global_id(0), get_global_id(1));
    
        float4 value = read_imagef(source, sampler, location);
    
        write_imagef(destination, location, value);
    }
    
  • Apply a simple transformation, rotating the input by an amount increasing with the index counter. Pixel values are linearly interpolated by the sampler, and the output need not have the same dimensions as the input.
    __kernel void rotate_image(__write_only image2d_t dst,
                               unsigned int index,
                               __read_only  image2d_t src)
    {
        const sampler_t sampler = (CLK_NORMALIZED_COORDS_FALSE |
                                   CLK_FILTER_LINEAR);
    
        float angle = (float)index / 100.0f;
    
        float2 dst_dim = convert_float2(get_image_dim(dst));
        float2 src_dim = convert_float2(get_image_dim(src));
    
        float2 dst_cen = dst_dim / 2.0f;
        float2 src_cen = src_dim / 2.0f;
    
        int2   dst_loc = (int2)(get_global_id(0), get_global_id(1));
    
        float2 dst_pos = convert_float2(dst_loc) - dst_cen;
        float2 src_pos = {
            cos(angle) * dst_pos.x - sin(angle) * dst_pos.y,
            sin(angle) * dst_pos.x + cos(angle) * dst_pos.y
        };
        src_pos = src_pos * src_dim / dst_dim;
    
        float2 src_loc = src_pos + src_cen;
    
        if (src_loc.x < 0.0f      || src_loc.y < 0.0f ||
            src_loc.x > src_dim.x || src_loc.y > src_dim.y)
            write_imagef(dst, dst_loc, 0.5f);
        else
            write_imagef(dst, dst_loc, read_imagef(src, sampler, src_loc));
    }
    
  • Blend two inputs together, with the amount of each input used varying with the index counter.
    __kernel void blend_images(__write_only image2d_t dst,
                               unsigned int index,
                               __read_only  image2d_t src1,
                               __read_only  image2d_t src2)
    {
        const sampler_t sampler = (CLK_NORMALIZED_COORDS_FALSE |
                                   CLK_FILTER_LINEAR);
    
        float blend = (cos((float)index / 50.0f) + 1.0f) / 2.0f;
    
        int2  dst_loc = (int2)(get_global_id(0), get_global_id(1));
        int2 src1_loc = dst_loc * get_image_dim(src1) / get_image_dim(dst);
        int2 src2_loc = dst_loc * get_image_dim(src2) / get_image_dim(dst);
    
        float4 val1 = read_imagef(src1, sampler, src1_loc);
        float4 val2 = read_imagef(src2, sampler, src2_loc);
    
        write_imagef(dst, dst_loc, val1 * blend + val2 * (1.0f - blend));
    }
    

Remap pixels using the second (Xmap) and third (Ymap) input video streams.

The destination pixel at position (X, Y) will be picked from the source position (x, y), where x = Xmap(X, Y) and y = Ymap(X, Y). If the mapping values are out of range, a zero value is used for the destination pixel.

The Xmap and Ymap input video streams must have the same dimensions. The output video stream will have the dimensions of the Xmap/Ymap video streams. The Xmap and Ymap input video streams must use a single-channel, 32-bit float pixel format.

Specify the interpolation used for remapping pixels. Allowed values are "near" and "linear". Default value is "linear".
Specify the color of the unmapped pixels. For the syntax of this option, check the "Color" section in the ffmpeg-utils manual. Default color is "black".
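
Example

A minimal sketch, assuming XMAP and YMAP are hypothetical inputs holding the x and y displacement maps in a single-channel float format:
-i INPUT -i XMAP -i YMAP -filter_complex "[0:v]hwupload[a]; [1:v]format=grayf32,hwupload[x]; [2:v]format=grayf32,hwupload[y]; [a][x][y]remap_opencl=interp=linear, hwdownload" OUTPUT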

Apply the Roberts cross operator (https://en.wikipedia.org/wiki/Roberts_cross) to the input video stream.

The filter accepts the following option:

Set which planes to filter. Default value is 0xf, by which all planes are processed.
scale
Set the value which will be multiplied with the filtered result. Range is "[0.0, 65535]" and default value is 1.0.
Set the value which will be added to the filtered result. Range is "[-65535, 65535]" and default value is 0.0.

Example

Apply the Roberts cross operator with scale set to 2 and delta set to 10.
-i INPUT -vf "hwupload, roberts_opencl=scale=2:delta=10, hwdownload" OUTPUT

Apply the Sobel operator (https://en.wikipedia.org/wiki/Sobel_operator) to the input video stream.

The filter accepts the following option:

Set which planes to filter. Default value is 0xf, by which all planes are processed.
scale
Set the value which will be multiplied with the filtered result. Range is "[0.0, 65535]" and default value is 1.0.
Set the value which will be added to the filtered result. Range is "[-65535, 65535]" and default value is 0.0.

Example

Apply the Sobel operator with scale set to 2 and delta set to 10.
-i INPUT -vf "hwupload, sobel_opencl=scale=2:delta=10, hwdownload" OUTPUT

Perform HDR (PQ/HLG) to SDR conversion with tone-mapping.

It accepts the following parameters:

tonemap
Specify the tone-mapping operator to be used. Same as the tonemap option in tonemap.
Tune the tone mapping algorithm. Same as the param option in tonemap.
Apply desaturation for highlights that exceed this level of brightness. The higher the parameter, the more color information will be preserved. This setting helps prevent unnaturally blown-out colors for super-highlights, by (smoothly) turning them into white instead. This makes images feel more natural, at the cost of reducing information about out-of-range colors.

The default value is 0.5. The algorithm here currently differs slightly from the CPU version of tonemap. A setting of 0.0 disables this option.

threshold
The tone-mapping algorithm parameters are fine-tuned for each scene, and a threshold is used to detect whether the scene has changed. If the distance between the current frame's average brightness and the current running average exceeds the threshold value, the scene average and peak brightness are re-calculated. The default value is 0.2.
format
Specify the output pixel format.

Currently supported formats are:

Set the output color range.

Possible values are:

Default is same as input.

Set the output color primaries.

Possible values are:

Default is same as input.

Set the output transfer characteristics.

Possible values are:

Default is bt709.

Set the output colorspace matrix.

Possible values are:

Default is same as input.

Example

Convert HDR (PQ/HLG) video to bt2020-transfer-characteristic p010 format using the linear operator.
-i INPUT -vf "format=p010,hwupload,tonemap_opencl=t=bt2020:tonemap=linear:format=p010,hwdownload,format=p010" OUTPUT

Sharpen or blur the input video.

It accepts the following parameters:

Set the luma matrix horizontal size. Range is "[1, 23]" and default value is 5.
Set the luma matrix vertical size. Range is "[1, 23]" and default value is 5.
Set the luma effect strength. Range is "[-10, 10]" and default value is 1.0.

Negative values will blur the input video, while positive values will sharpen it; a value of zero will disable the effect.

Set the chroma matrix horizontal size. Range is "[1, 23]" and default value is 5.
Set the chroma matrix vertical size. Range is "[1, 23]" and default value is 5.
Set the chroma effect strength. Range is "[-10, 10]" and default value is 0.0.

Negative values will blur the input video, while positive values will sharpen it; a value of zero will disable the effect.

All parameters are optional and default to the equivalent of the string '5:5:1.0:5:5:0.0'.

Examples

  • Apply strong luma sharpen effect:
    -i INPUT -vf "hwupload, unsharp_opencl=luma_msize_x=7:luma_msize_y=7:luma_amount=2.5, hwdownload" OUTPUT
    
  • Apply a strong blur of both luma and chroma parameters:
    -i INPUT -vf "hwupload, unsharp_opencl=7:7:-2:7:7:-2, hwdownload" OUTPUT
    

Cross-fade two videos with a custom transition effect using OpenCL.

It accepts the following options:

Set one of the possible transition effects.
Select a custom transition effect; the actual transition description will be picked from the source and kernel options.
fade
Default transition is fade.
OpenCL program source file for the custom transition.
Set the name of the kernel to use for the custom transition from the program source file.
Set the duration of the video transition.
Set the start time of the transition relative to the first video.

The program source file must contain a kernel function with the given name, which will be run once for each plane of the output. Each run on a plane gets enqueued as a separate 2D global NDRange with one work-item for each pixel to be generated. The global ID offset for each work-item is therefore the coordinates of a pixel in the destination image.

The kernel function needs to take the following arguments:

  • Destination image, __write_only image2d_t.

    This image will become the output; the kernel should write all of it.

  • First Source image, __read_only image2d_t. Second Source image, __read_only image2d_t.

    These are the most recent images on each input. The kernel may read from them to generate the output, but they can't be written to.

  • Transition progress, float. This value is always between 0 and 1 inclusive.

Example programs:

Apply dots curtain transition effect:
__kernel void blend_images(__write_only image2d_t dst,
                           __read_only  image2d_t src1,
                           __read_only  image2d_t src2,
                           float progress)
{
    const sampler_t sampler = (CLK_NORMALIZED_COORDS_FALSE |
                               CLK_FILTER_LINEAR);
    int2  p = (int2)(get_global_id(0), get_global_id(1));
    float2 rp = (float2)(get_global_id(0), get_global_id(1));
    float2 dim = (float2)(get_image_dim(src1).x, get_image_dim(src1).y);
    rp = rp / dim;

    float2 dots = (float2)(20.0, 20.0);
    float2 center = (float2)(0,0);
    float2 unused;

    float4 val1 = read_imagef(src1, sampler, p);
    float4 val2 = read_imagef(src2, sampler, p);
    bool next = distance(fract(rp * dots, &unused), (float2)(0.5, 0.5)) < (progress / distance(rp, center));

    write_imagef(dst, p, next ? val1 : val2);
}
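
A possible invocation using the kernel above (a sketch; transition.cl is a hypothetical file containing the program source):
-i INPUT1 -i INPUT2 -filter_complex "[0:v]hwupload[a]; [1:v]hwupload[b]; [a][b]xfade_opencl=transition=custom:source=transition.cl:kernel=blend_images:duration=2:offset=5, hwdownload" OUTPUT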

VAAPI video filters are usually used with a VAAPI decoder and a VAAPI encoder. Below is a description of the currently available VAAPI video filters.

To enable compilation of these filters you need to configure FFmpeg with "--enable-vaapi".

To use VAAPI filters, you need to set up the VAAPI device correctly. For more information, please read https://trac.ffmpeg.org/wiki/Hardware/VAAPI

Overlay one video on top of another.

It takes two inputs and has one output. The first input is the "main" video on which the second input is overlaid.

The filter accepts the following options:

Set expressions for the x and y coordinates of the overlaid video on the main video.

Default value is "0" for both expressions.

Set expressions for the width and height of the overlaid video on the main video.

Default values are 'overlay_iw' for 'w' and 'overlay_ih*w/overlay_iw' for 'h'.

The expressions can contain the following parameters:

The main input width and height.
The overlay input width and height.
The overlay output width and height.
The position of the overlay layer inside the main video.
Set transparency of overlaid video. Allowed range is 0.0 to 1.0. Higher value means lower transparency. Default value is 1.0.
See framesync.
See framesync.
See framesync.

This filter also supports the framesync options.

Examples

  • Overlay an image LOGO at the top-left corner of the INPUT video. Both inputs for this filter are yuv420p format.
    -i INPUT -i LOGO -filter_complex "[0:v]hwupload[a], [1:v]format=yuv420p, hwupload[b], [a][b]overlay_vaapi" OUTPUT
    
  • Overlay an image LOGO at the offset (200, 100) from the top-left corner of the INPUT video. The inputs have the same memory layout for the color channels, but the overlay has an additional alpha plane; for example, INPUT is yuv420p and LOGO is yuva420p.
    -i INPUT -i LOGO -filter_complex "[0:v]hwupload[a], [1:v]format=yuva420p, hwupload[b], [a][b]overlay_vaapi=x=200:y=100:w=400:h=300:alpha=1.0, hwdownload, format=nv12" OUTPUT
    

Perform HDR-to-SDR or HDR-to-HDR tone-mapping. It currently only accepts HDR10 as input.

It accepts the following parameters:

format
Specify the output pixel format.

Default is nv12 for HDR-to-SDR tone-mapping and p010 for HDR-to-HDR tone-mapping.

Set the output color primaries.

Default is bt709 for HDR-to-SDR tone-mapping and same as input for HDR-to-HDR tone-mapping.

Set the output transfer characteristics.

Default is bt709 for HDR-to-SDR tone-mapping and same as input for HDR-to-HDR tone-mapping.

Set the output colorspace matrix.

Default is bt709 for HDR-to-SDR tone-mapping and same as input for HDR-to-HDR tone-mapping.

Set the output mastering display colour volume. It is given as a '|'-separated list of groups, where the values within each group are space-separated: the display primaries x and y in G, B, R order, then the white point x and y, then the nominal minimum and maximum display luminances.

HDR-to-HDR tone-mapping will be performed when this option is set.

Set the output content light level information. It accepts 2 space-separated values: the first is the maximum light level and the second is the maximum average light level.

It is ignored for HDR-to-SDR tone-mapping, and optional for HDR-to-HDR tone-mapping.

Example

  • Convert HDR (HDR10) video to bt2020-transfer-characteristic p010 format:
    tonemap_vaapi=format=p010:t=bt2020-10
    
  • Convert HDR video to HDR video:
    tonemap_vaapi=display=7500\ 3000|34000\ 16000|13250\ 34500|15635\ 16450|500\ 10000000
    

Stack input videos horizontally.

This is the VA-API variant of the hstack filter. Each input stream may have a different height; this filter will scale each input stream up or down while keeping the original aspect ratio.

It accepts the following options:

See hstack.
See hstack.
Set height of output. If set to 0, this filter will set height of output to height of the first input stream. Default value is 0.
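
For example, a minimal sketch stacking two inputs side by side (assuming a VAAPI device has been initialised as described above):
-init_hw_device vaapi=va -filter_hw_device va -i INPUT1 -i INPUT2 -filter_complex "[0:v]format=nv12,hwupload[a]; [1:v]format=nv12,hwupload[b]; [a][b]hstack_vaapi=inputs=2" OUTPUT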

Stack input videos vertically.

This is the VA-API variant of the vstack filter. Each input stream may have a different width; this filter will scale each input stream up or down while keeping the original aspect ratio.

It accepts the following options:

See vstack.
See vstack.
Set width of output. If set to 0, this filter will set width of output to width of the first input stream. Default value is 0.

Stack video inputs into custom layout.

This is the VA-API variant of the xstack filter. Each input stream may have a different size; this filter will scale each input stream up or down to the given output size, or to the size of the first input stream.

It accepts the following options:

See xstack.
See xstack.
See xstack. Moreover, this permits the user to supply an output size for each input stream, for example:
xstack_vaapi=inputs=4:layout=0_0_1920x1080|0_h0_1920x1080|w0_0_1920x1080|w0_h0_1920x1080
See xstack.
Set output size for each input stream when grid is set. If this option is not set, this filter will set output size by default to the size of the first input stream. For the syntax of this option, check the "Video size" section in the ffmpeg-utils manual.
See xstack.

Add padding to the input image, and place the original input at the provided x, y coordinates.

It accepts the following options:

Specify an expression for the size of the output image with the paddings added. If the value for width or height is 0, the corresponding input size is used for the output.

The width expression can reference the value set by the height expression, and vice versa.

The default value of width and height is 0.

Specify the offsets to place the input image at within the padded area, with respect to the top/left border of the output image.

The x expression can reference the value set by the y expression, and vice versa.

The default value of x and y is 0.

If x or y evaluate to a negative number, they'll be changed so the input image is centered on the padded area.

Specify the color of the padded area. For the syntax of this option, check the "Color" section in the ffmpeg-utils manual.
Pad to an aspect ratio instead of a resolution.

The values of the width, height, x, and y options are expressions containing the following constants:

The input video width and height.
These are the same as in_w and in_h.
The output width and height (the size of the padded area), as specified by the width and height expressions.
These are the same as out_w and out_h.
The x and y offsets as specified by the x and y expressions, or NAN if not yet specified.
The same as iw / ih.
The input sample aspect ratio.
The input display aspect ratio; it is the same as (iw / ih) * sar.
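
Example

A minimal sketch (assuming the VA-API pad variant is invoked as pad_vaapi): add a 50-pixel blue border on all sides.
-i INPUT -vf "format=nv12, hwupload, pad_vaapi=w=iw+100:h=ih+100:x=50:y=50:color=blue" OUTPUT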

Draw a colored box on the input image.

It accepts the following parameters:

The expressions which specify the top left corner coordinates of the box. They default to 0.
The expressions which specify the width and height of the box; if 0 they are interpreted as the input width and height. They default to 0.
Specify the color of the box to write. For the general syntax of this option, check the "Color" section in the ffmpeg-utils manual.
The expression which sets the thickness of the box edge. A value of "fill" will create a filled box. Default value is 3.

See below for the list of accepted constants.

With value 1, the pixels of the painted box will overwrite the video's color and alpha pixels. Default is 0, which composites the box onto the input video.

The parameters x, y, w, h and t are expressions containing the following constants:

The input width and height.
The x and y offset coordinates where the box is drawn.
The width and height of the drawn box.
The thickness of the drawn box.

Examples

  • Draw a black box around the edge of the input image:
    drawbox
    
  • Draw a box with color red and an opacity of 50%:
    drawbox=10:20:200:60:red@0.5
    

    The previous example can be specified as:

    drawbox=x=10:y=20:w=200:h=60:color=red@0.5
    
  • Fill the box with pink color:
    drawbox=x=10:y=10:w=100:h=100:color=pink@0.5:t=fill
    
  • Draw a 2-pixel red 2.40:1 mask:
    drawbox=x=-t:y=0.5*(ih-iw/2.4)-t:w=iw+t*2:h=iw/2.4+t*2:t=2:c=red
    

Below is a description of the currently available Vulkan video filters.

To enable compilation of these filters you need to configure FFmpeg with "--enable-vulkan" and either "--enable-libglslang" or "--enable-libshaderc".

Running Vulkan filters requires you to initialize a hardware device and to pass that device to all filters in any filter graph.

Initialise a new hardware device of type vulkan called name, using the given device parameters and options in key=value. The following options are supported:
Switches validation layers on if set to 1.
Allocates linear images. Does not apply to decoding.
Disables multiplane images. Does not apply to decoding.
Pass the hardware device called name to all filters in any filter graph.

For more detailed information see https://www.ffmpeg.org/ffmpeg.html#Advanced-Video-options

Example of choosing the first device and running nlmeans_vulkan filter with default parameters on it.
-init_hw_device vulkan=vk:0 -filter_hw_device vk -i INPUT -vf "hwupload,nlmeans_vulkan,hwdownload" OUTPUT

As Vulkan filters are not able to access frame data in normal memory, all frame data needs to be uploaded (hwupload) to hardware surfaces connected to the appropriate device before being used, and then downloaded (hwdownload) back to normal memory. Note that hwupload will upload to a frame with the same layout as the software frame, so it may be necessary to add a format filter immediately before it to get the input into the right format. Likewise, hwdownload does not support all formats on the output, so it is usually necessary to insert an additional format filter immediately following it in the graph to get the output into a supported format.

Apply an average blur filter, implemented on the GPU using Vulkan.

The filter accepts the following options:

Set horizontal radius size. Range is "[1, 32]" and default value is 3.
Set vertical radius size. Range is "[1, 32]" and default value is 3.
Set which planes to filter. Default value is 0xf, by which all planes are processed.
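
Example

A minimal sketch, assuming a Vulkan device initialised as in the introduction above:
-init_hw_device vulkan=vk:0 -filter_hw_device vk -i INPUT -vf "hwupload, avgblur_vulkan=sizeX=3:sizeY=3, hwdownload, format=yuv420p" OUTPUT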

Blend two Vulkan frames into each other.

The "blend" filter takes two input streams and outputs one stream, the first input is the "top" layer and second input is "bottom" layer. By default, the output terminates when the longest input terminates.

A description of the accepted options follows.

Set the blend mode for a specific pixel component, or for all pixel components in case of all_mode. Default value is "normal".

Available values for component modes are:

multiply
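
For example, a minimal sketch multiplying two inputs together:
-i INPUT1 -i INPUT2 -filter_complex "[0:v]hwupload[a]; [1:v]hwupload[b]; [a][b]blend_vulkan=all_mode=multiply, hwdownload, format=yuv420p" OUTPUT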

Deinterlacer using bwdif, the "Bob Weaver Deinterlacing Filter" algorithm, implemented on the GPU using Vulkan.

It accepts the following parameters:

The interlacing mode to adopt. It accepts one of the following values:
0, send_frame
Output one frame for each frame.
1, send_field
Output one frame for each field.

The default value is "send_field".

The picture field parity assumed for the input interlaced video. It accepts one of the following values:
0, tff
Assume the top field is first.
1, bff
Assume the bottom field is first.
-1, auto
Enable automatic detection of field parity.

The default value is "auto". If the interlacing is unknown or the decoder does not export this information, top field first will be assumed.

Specify which frames to deinterlace. Accepts one of the following values:
0, all
Deinterlace all frames.
1, interlaced
Only deinterlace frames marked as interlaced.

The default value is "all".

Apply an effect that emulates chromatic aberration. Works best with RGB inputs, but provides a similar effect with YCbCr inputs too.

Horizontal displacement multiplier. Each chroma pixel's position will be multiplied by this amount, starting from the center of the image. Default is 0.
Similarly, this sets the vertical displacement multiplier. Default is 0.
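
For example, a minimal sketch (assuming the displacement options are named dist_x and dist_y):
-i INPUT -vf "hwupload, chromaber_vulkan=dist_x=0.01:dist_y=0.01, hwdownload, format=yuv420p" OUTPUT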

Video source that creates a Vulkan frame of a solid color. Useful for benchmarking, or overlaying.

It accepts the following parameters:

The color to use. Either a name, or a hexadecimal value. The default value is "black".
The size of the output frame. Default value is "1920x1080".
The framerate to output at. Default value is 60 frames per second.
The video duration. Default value is -0.000001.
The video signal aspect ratio. Default value is "1/1".
format
The pixel format of the output Vulkan frames. Default value is "yuv444p".
Set the output YCbCr sample range.

This allows the autodetected value to be overridden as well as allows forcing a specific value used for the output and encoder. If not specified, the range depends on the pixel format. Possible values:

Choose automatically.
Set full range (0-255 in case of 8-bit luma).
Set "MPEG" range (16-235 in case of 8-bit luma).

Flips an image vertically.

Flips an image horizontally.

Flips an image along both the vertical and horizontal axis.

Apply Gaussian blur filter on Vulkan frames.

The filter accepts the following options:

Set horizontal sigma, standard deviation of Gaussian blur. Default is 0.5.
Set vertical sigma; if negative, it will be the same as sigma. Default is -1.
Set which planes to filter. By default all planes are filtered.
Set the kernel size along the horizontal axis. Default is 19.
Set the kernel size along the vertical axis. Default is 0, which uses the same value as size.
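
For example, a minimal sketch applying a stronger blur:
-i INPUT -vf "hwupload, gblur_vulkan=sigma=3:sigmaV=3, hwdownload, format=yuv420p" OUTPUT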

Denoise frames using Non-Local Means algorithm, implemented on the GPU using Vulkan. Supports more pixel formats than nlmeans or nlmeans_opencl, including alpha channel support.

The filter accepts the following options.

Set denoising strength for all components. Default is 1.0. Must be in range [1.0, 100.0].
Set patch size for all planes. Default is 7. Must be an odd number in range [0, 99].
Set research size. Default is 15. Must be an odd number in range [0, 99].
Set parallelism. Default is 36. Must be a number in the range [1, 168]. Larger values may speed up processing, at the cost of more VRAM. Lower values will slow it down, reducing VRAM usage. Only supported on GPUs with atomic float operations (RDNA3+, Ampere+).
Set denoising strength for a specific component. Default is 1, equal to s. Must be an odd number in range [1, 100].
Set patch size for a specific component. Default is 7, equal to p. Must be an odd number in range [0, 99].

Overlay one video on top of another.

It takes two inputs and has one output. The first input is the "main" video on which the second input is overlaid. This filter requires all inputs to use the same pixel format, so format conversion may be needed.

The filter accepts the following options:

Set the x coordinate of the overlaid video on the main video. Default value is 0.
Set the y coordinate of the overlaid video on the main video. Default value is 0.
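
Example

A minimal sketch overlaying LOGO at offset (10, 10); both inputs are converted to the same pixel format first:
-i INPUT -i LOGO -filter_complex "[0:v]format=yuv420p,hwupload[a]; [1:v]format=yuv420p,hwupload[b]; [a][b]overlay_vulkan=x=10:y=10, hwdownload, format=yuv420p" OUTPUT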

Transpose rows with columns in the input video and optionally flip it. For more in-depth examples see the transpose video filter, which shares mostly the same options.

It accepts the following parameters:

Specify the transposition direction.

Can assume the following values:

Rotate by 90 degrees counterclockwise and vertically flip. (default)
Rotate by 90 degrees clockwise.
Rotate by 90 degrees counterclockwise.
Rotate by 90 degrees clockwise and vertically flip.
hflip
Flip the input video horizontally.
vflip
Flip the input video vertically.
Do not apply the transposition if the input geometry matches the one specified by the given value. It accepts the following values:
Always apply transposition. (default)
Preserve portrait geometry (when height >= width).
Preserve landscape geometry (when width >= height).

Transpose rows with columns in the input video and optionally flip it. For more in-depth examples see the transpose video filter, which shares mostly the same options.

It accepts the following parameters:

Specify the transposition direction.

Can assume the following values:

Rotate by 90 degrees counterclockwise and vertically flip. (default)
Rotate by 90 degrees clockwise.
Rotate by 90 degrees counterclockwise.
Rotate by 90 degrees clockwise and vertically flip.
Do not apply the transposition if the input geometry matches the one specified by the given value. It accepts the following values:
Always apply transposition. (default)
Preserve portrait geometry (when height >= width).
Preserve landscape geometry (when width >= height).

Below is a description of the currently available QSV video filters.

To enable compilation of these filters you need to configure FFmpeg with "--enable-libmfx" or "--enable-libvpl".

To use QSV filters, you need to set up the QSV device correctly. For more information, please read https://trac.ffmpeg.org/wiki/Hardware/QuickSync

Stack input videos horizontally.

This is the QSV variant of the hstack filter. Each input stream may have a different height; this filter will scale each input stream up or down while keeping the original aspect ratio.

It accepts the following options:

See hstack.
See hstack.
Set height of output. If set to 0, this filter will set height of output to height of the first input stream. Default value is 0.

Stack input videos vertically.

This is the QSV variant of the vstack filter. Each input stream may have a different width; this filter will scale each input stream up or down while keeping the original aspect ratio.

It accepts the following options:

See vstack.
See vstack.
Set width of output. If set to 0, this filter will set width of output to width of the first input stream. Default value is 0.

Stack video inputs into custom layout.

This is the QSV variant of the xstack filter.

It accepts the following options:

See xstack.
See xstack.
See xstack. Moreover, this permits the user to supply an output size for each input stream, for example:
xstack_qsv=inputs=4:layout=0_0_1920x1080|0_h0_1920x1080|w0_0_1920x1080|w0_h0_1920x1080
See xstack.
Set output size for each input stream when grid is set. If this option is not set, this filter will set output size by default to the size of the first input stream. For the syntax of this option, check the "Video size" section in the ffmpeg-utils manual.
See xstack.

Below is a description of the currently available video sources.

Buffer video frames, and make them available to the filter chain.

This source is mainly intended for programmatic use, in particular through the interface defined in libavfilter/buffersrc.h.

It accepts the following parameters:

Specify the size (width and height) of the buffered video frames. For the syntax of this option, check the "Video size" section in the ffmpeg-utils manual.
The input video width.
The input video height.
A string representing the pixel format of the buffered video frames. It may be a number corresponding to a pixel format, or a pixel format name.
Specify the timebase assumed by the timestamps of the buffered frames.
Specify the frame rate expected for the video stream.
colorspace
A string representing the color space of the buffered video frames. It may be a number corresponding to a color space, or a color space name.
A string representing the color range of the buffered video frames. It may be a number corresponding to a color range, or a color range name.
The sample (pixel) aspect ratio of the input video.
When using a hardware pixel format, this should be a reference to an AVHWFramesContext describing input frames.

For example:

buffer=width=320:height=240:pix_fmt=yuv410p:time_base=1/24:sar=1

will instruct the source to accept video frames with size 320x240 and with format "yuv410p", assuming 1/24 as the timestamps timebase and square pixels (1:1 sample aspect ratio). Since the pixel format with name "yuv410p" corresponds to the number 6 (check the enum AVPixelFormat definition in libavutil/pixfmt.h), this example corresponds to:

buffer=size=320x240:pixfmt=6:time_base=1/24:pixel_aspect=1/1

Alternatively, the options can be specified as a flat string, but this syntax is deprecated:

width:height:pix_fmt:time_base.num:time_base.den:pixel_aspect.num:pixel_aspect.den

Create a pattern generated by an elementary cellular automaton.

The initial state of the cellular automaton can be defined through the filename and pattern options. If such options are not specified an initial state is created randomly.

At each new frame a new row in the video is filled with the result of the cellular automaton next generation. The behavior when the whole frame is filled is defined by the scroll option.

This source accepts the following options:

Read the initial cellular automaton state, i.e. the starting row, from the specified file. In the file, each non-whitespace character is considered an alive cell, a newline will terminate the row, and further characters in the file will be ignored.
Read the initial cellular automaton state, i.e. the starting row, from the specified string.

Each non-whitespace character in the string is considered an alive cell, a newline will terminate the row, and further characters in the string will be ignored.

Set the video rate, that is the number of frames generated per second. Default is 25.
Set the random fill ratio for the initial cellular automaton row. It is a floating point number value ranging from 0 to 1, defaults to 1/PHI.

This option is ignored when a file or a pattern is specified.

Set the seed for filling the initial row randomly; it must be an integer between 0 and UINT32_MAX. If not specified, or if explicitly set to -1, the filter will try to use a good random seed on a best effort basis.
Set the cellular automaton rule; it is a number ranging from 0 to 255. Default value is 110.
Set the size of the output video. For the syntax of this option, check the "Video size" section in the ffmpeg-utils manual.

If filename or pattern is specified, the size is set by default to the width of the specified initial state row, and the height is set to width * PHI.

If size is set, it must contain the width of the specified pattern string, and the specified pattern will be centered in the larger row.

If a filename or a pattern string is not specified, the size value defaults to "320x518" (used for a randomly generated initial state).

scroll
If set to 1, scroll the output upward when all the rows in the output have been already filled. If set to 0, the new generated row will be written over the top row just after the bottom row is filled. Defaults to 1.
If set to 1, completely fill the output with generated rows before outputting the first frame. This is the default behavior; to disable it, set the value to 0.
If set to 1, stitch the left and right row edges together. This is the default behavior; to disable it, set the value to 0.

Examples

  • Read the initial state from pattern, and specify an output of size 200x400.
    cellauto=f=pattern:s=200x400
    
  • Generate a random initial row with a width of 200 cells, with a fill ratio of 2/3:
    cellauto=ratio=2/3:s=200x200
    
  • Create a pattern generated by rule 18 starting by a single alive cell centered on an initial row with width 100:
    cellauto=p=@s=100x400:full=0:rule=18
    
  • Specify a more elaborate initial pattern:
    cellauto=p='@@ @ @@':s=100x400:full=0:rule=18
    

Video source generated on GPU using Apple's CoreImage API on OSX.

This video source is a specialized version of the coreimage video filter. Use a core image generator at the beginning of the applied filterchain to generate the content.

The coreimagesrc video source accepts the following options:

List all available generators along with all their respective options as well as possible minimum and maximum values along with the default values.
list_generators=true
Specify the size of the sourced video. For the syntax of this option, check the "Video size" section in the ffmpeg-utils manual. The default value is "320x240".
Specify the frame rate of the sourced video, as the number of frames generated per second. It has to be a string in the format frame_rate_num/frame_rate_den, an integer number, a floating point number or a valid video frame rate abbreviation. The default value is "25".
Set the sample aspect ratio of the sourced video.
Set the duration of the sourced video. See the Time duration section in the ffmpeg-utils(1) manual for the accepted syntax.

If not specified, or the expressed duration is negative, the video is supposed to be generated forever.

Additionally, all options of the coreimage video filter are accepted. A complete filterchain can be used for further processing of the generated input without CPU-HOST transfer. See coreimage documentation and examples for details.

Examples

Use CIQRCodeGenerator to create a QR code for the FFmpeg homepage, given as complete and escaped command-line for Apple's standard bash shell:
ffmpeg -f lavfi -i coreimagesrc=s=100x100:filter=CIQRCodeGenerator@inputMessage=https\\\\\://FFmpeg.org/@inputCorrectionLevel=H -frames:v 1 QRCode.png

This example is equivalent to the QRCode example of coreimage without the need for a nullsrc video source.

Captures the Windows desktop via the Desktop Duplication API.

The filter exclusively returns D3D11 hardware frames, for on-GPU encoding or processing, so an explicit hwdownload is needed for any kind of software processing.

It accepts the following options:

DXGI Output Index to capture.

Usually corresponds to the index Windows has given the screen minus one, so it starts at 0.

Defaults to output 0.

Whether to draw the mouse cursor.

Defaults to true.

Only affects hardware cursors. If a game or application renders its own cursor, it'll always be captured.

framerate
Maximum framerate at which the desktop will be captured - the interval between successive frames will not be smaller than the inverse of the framerate. When dup_frames is true (the default) and the desktop is not being updated often enough, the filter will duplicate a previous frame. Note that there is no background buffering going on, so when the filter is not polled often enough then the actual inter-frame interval may be significantly larger.

Defaults to 30 FPS.

Specify the size of the captured video.

Defaults to the full size of the screen.

Cropped from the bottom/right if smaller than screen size.

Horizontal offset of the captured video.
Vertical offset of the captured video.
Desired filter output format. Defaults to 8 Bit BGRA.

It accepts the following values:

Passes all supported output formats to DDA and returns what DDA decides to use.
8bit
8 Bit formats always work, and DDA will convert to them if necessary.
10bit
Filter initialization will fail if 10 bit format is requested but unavailable.
When this option is set to true (the default), the filter will duplicate frames when the desktop has not been updated in order to maintain approximately constant target framerate. When this option is set to false, the filter will wait for the desktop to be updated (inter-frame intervals may vary significantly in this case).

Examples

Capture primary screen and encode using nvenc:

ffmpeg -f lavfi -i ddagrab -c:v h264_nvenc -cq 18 output.mp4

You can also skip the lavfi device and directly use the filter. Also demonstrates downloading the frame and encoding with libx264. Explicit output format specification is required in this case:

ffmpeg -filter_complex ddagrab=output_idx=1:framerate=60,hwdownload,format=bgra -c:v libx264 -crf 18 output.mp4

If you want to capture only a subsection of the desktop, this can be achieved by specifying a smaller size and its offsets into the screen:

ddagrab=video_size=800x600:offset_x=100:offset_y=100

Generate several gradients.

Set frame size. For the syntax of this option, check the "Video size" section in the ffmpeg-utils manual. Default value is "640x480".
Set frame rate, expressed as number of frames per second. Default value is "25".
Set the 8 colors. The default is to pick a random color for each.
Set gradient line source and destination points. If negative or out of range, random ones are picked.
Set number of colors to use at once. Allowed range is from 2 to 8. Default value is 2.
Set seed for picking gradient line points.
Set the duration of the sourced video. See the Time duration section in the ffmpeg-utils(1) manual for the accepted syntax.

If not specified, or the expressed duration is negative, the video is supposed to be generated forever.

Set the speed of gradient rotation.
Set the type of gradients. Available values are:

Default type is linear.

Commands

This source supports some of the above options as commands.
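
For example, a minimal sketch previewing a slow two-color animated gradient with ffplay:
ffplay -f lavfi gradients=s=640x480:c0=red:c1=blue:nb_colors=2:speed=0.05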

Generate a Mandelbrot set fractal, and progressively zoom towards the point specified with start_x and start_y.

This source accepts the following options:

Set the terminal pts value. Default value is 400.
Set the terminal scale value. Must be a floating point value. Default value is 0.3.
Set the inner coloring mode, that is the algorithm used to draw the Mandelbrot fractal internal region.

It shall assume one of the following values:

Set black mode.
Show time until convergence.
Set color based on point closest to the origin of the iterations.
Set period mode.

Default value is mincol.

Set the bailout value. Default value is 10.0.
Set the maximum of iterations performed by the rendering algorithm. Default value is 7189.
Set the outer coloring mode. It shall assume one of the following values:
Set iteration count mode.
Set normalized iteration count mode.

Default value is normalized_iteration_count.

Set frame rate, expressed as number of frames per second. Default value is "25".
Set frame size. For the syntax of this option, check the "Video size" section in the ffmpeg-utils manual. Default value is "640x480".
Set the initial scale value. Default value is 3.0.
Set the initial x position. Must be a floating point value between -100 and 100. Default value is -0.743643887037158704752191506114774.
Set the initial y position. Must be a floating point value between -100 and 100. Default value is -0.131825904205311970493132056385139.
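
For example, a minimal sketch previewing the default zoom with a higher iteration limit:
ffplay -f lavfi mandelbrot=s=640x480:maxiter=2000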

Generate various test patterns, as generated by the MPlayer test filter.

The size of the generated video is fixed, and is 256x256. This source is useful in particular for testing encoding features.

This source accepts the following options:

Specify the frame rate of the sourced video, as the number of frames generated per second. It has to be a string in the format frame_rate_num/frame_rate_den, an integer number, a floating point number or a valid video frame rate abbreviation. The default value is "25".
Set the duration of the sourced video. See the Time duration section in the ffmpeg-utils(1) manual for the accepted syntax.

If not specified, or the expressed duration is negative, the video is supposed to be generated forever.

Set the number or the name of the test to perform. Supported tests are:
Set the maximum number of frames generated for each test, default value is 30.

Default value is "all", which will cycle through the list of all tests.

Some examples:

mptestsrc=t=dc_luma

will generate a "dc_luma" test pattern.

Provide a frei0r source.

To enable compilation of this filter you need to install the frei0r header and configure FFmpeg with "--enable-frei0r".

This source accepts the following parameters:

The size of the video to generate. For the syntax of this option, check the "Video size" section in the ffmpeg-utils manual.
framerate
The framerate of the generated video. It may be a string of the form num/den or a frame rate abbreviation.
The name of the frei0r source to load. For more information regarding frei0r and how to set the parameters, read the frei0r section in the video filters documentation.
A '|'-separated list of parameters to pass to the frei0r source.

For example, to generate a frei0r partik0l source with size 200x200 and frame rate 10 which is overlaid on the overlay filter main input:

frei0r_src=size=200x200:framerate=10:filter_name=partik0l:filter_params=1234 [overlay]; [in][overlay] overlay

Generate a life pattern.

This source is based on a generalization of John Conway's life game.

The sourced input represents a life grid, each pixel represents a cell which can be in one of two possible states, alive or dead. Every cell interacts with its eight neighbours, which are the cells that are horizontally, vertically, or diagonally adjacent.

At each iteration the grid evolves according to the adopted rule, which specifies the number of alive neighbor cells which will make a cell stay alive or be born. The rule option allows one to specify the rule to adopt.

This source accepts the following options:

Set the file from which to read the initial grid state. In the file, each non-whitespace character is considered an alive cell, and newline is used to delimit the end of each row.

If this option is not specified, the initial grid is generated randomly.

Set the video rate, that is the number of frames generated per second. Default is 25.
Set the random fill ratio for the initial random grid. It is a floating point number value ranging from 0 to 1, defaults to 1/PHI. It is ignored when a file is specified.
Set the seed for filling the initial random grid; it must be an integer between 0 and UINT32_MAX. If not specified, or if explicitly set to -1, the filter will try to use a good random seed on a best effort basis.
Set the life rule.

A rule can be specified with a code of the kind "SNS/BNB", where NS and NB are sequences of numbers in the range 0-8. NS specifies the number of alive neighbor cells which make a live cell stay alive, and NB the number of alive neighbor cells which make a dead cell become alive (i.e. be "born"). "s" and "b" can be used in place of "S" and "B", respectively.

Alternatively a rule can be specified by an 18-bit integer. The 9 high order bits are used to encode the next cell state if it is alive, for each number of alive neighbor cells; the low order bits specify the rule for new cells being born. Higher order bits encode a higher number of neighbor cells. For example the number 6153 = "(12<<9)+9" specifies a stay alive rule of 12 and a born rule of 9, which corresponds to "S23/B03".

Default value is "S23/B3", which is the original Conway's game of life rule, and will keep a cell alive if it has 2 or 3 neighbor alive cells, and will born a new cell if there are three alive cells around a dead cell.

Set the size of the output video. For the syntax of this option, check the "Video size" section in the ffmpeg-utils manual.

If filename is specified, the size is set by default to the same size of the input file. If size is set, it must be large enough to contain the grid specified in the input file; in that case the initial grid defined in the file is centered in the larger resulting area.

If a filename is not specified, the size value defaults to "320x240" (used for a randomly generated initial grid).

If set to 1, stitch the left and right grid edges together, and the top and bottom edges also. Defaults to 1.
Set cell mold speed. If set, a dead cell will go from death_color to mold_color with a step of mold. mold can have a value from 0 to 255.
Set the color of living (or new born) cells.
Set the color of dead cells. If mold is set, this is the first color used to represent a dead cell.
Set mold color, for definitely dead and moldy cells.

For the syntax of these 3 color options, check the "Color" section in the ffmpeg-utils manual.

Examples

  • Read a grid from pattern, and center it on a grid of size 300x300 pixels:
    life=f=pattern:s=300x300
    
  • Generate a random grid of size 200x200, with a fill ratio of 2/3:
    life=ratio=2/3:s=200x200
    
  • Specify a custom rule for evolving a randomly generated grid:
    life=rule=S14/B34
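    
  • Per the integer encoding described above, the default rule "S23/B3" can equivalently be selected numerically, since (12<<9)+8 = 6152:
    life=rule=6152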
    
  • Full example with slow death effect (mold) using ffplay:
    ffplay -f lavfi life=s=300x200:mold=10:r=60:ratio=0.1:death_color=#C83232:life_color=#00ff00,scale=1200:800:flags=16
    

Generate Perlin noise.

Perlin noise is a kind of noise with local continuity in space. This can be used to generate patterns with continuity in space and time, e.g. to simulate smoke, fluids, or terrain.

In case more than one octave is specified through the octaves option, Perlin noise is generated as a sum of components, each one with doubled frequency. In this case the persistence option specifies the ratio of the amplitude with respect to the previous component. More octave components make it possible to specify more high frequency detail in the generated noise (e.g. small size variations due to boulders in a generated terrain).

Options

Specify the size (width and height) of the buffered video frames. For the syntax of this option, check the "Video size" section in the ffmpeg-utils manual. Default value is "320x240".
Specify the frame rate expected for the video stream, expressed as a number of frames per second. Default value is 25.
Specify the total number of components making up the noise, each one with doubled frequency. Default value is 1.
Set the ratio used to compute the amplitude of the next octave component with respect to the previous component amplitude. Default value is 1.
Define a scale factor used to multiply the x, y coordinates. This can be useful to define an effect with a pattern stretched along the x or y axis. Default value is 1.
Define a scale factor used to multiply the time coordinate. This can be useful to change the time variation speed. Default value is 1.
Set random mode used to compute initial pattern.

Supported values are:

random
Compute and use random seed.
Use the predefined initial pattern defined by Ken Perlin in the original article; this can be useful to compare the output with other sources.
Use the value specified by random_seed option.

Default value is "random".

When random_mode is set to random_seed, use this value to compute the initial pattern. Default value is 0.

Examples

  • Generate single component:
    perlin
    
  • Use Perlin noise with 7 components, each one with a halved contribution to total amplitude:
    perlin=octaves=7:persistence=0.5
    
  • Chain Perlin noise with the lutyuv filter to generate a black&white effect:
    perlin=octaves=3:tscale=0.3,lutyuv=y='if(lt(val\,128)\,255\,0)'
    
  • Stretch noise along the y axis, and convert gray level to red-only signal:
    perlin=octaves=7:tscale=0.4:yscale=0.3,lutrgb=r=val:b=0:g=0
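    
  • Preview one of these graphs directly with ffplay (a minimal sketch):
    ffplay -f lavfi perlin=octaves=7:persistence=0.5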
    

Generate a QR code using the libqrencode library (see https://fukuchi.org/works/qrencode/).

To enable the compilation of this source, you need to configure FFmpeg with "--enable-libqrencode".

The QR code is generated from the provided text or text pattern. The corresponding QR code is scaled and put in the video output according to the specified output size options.

In case no text is specified, the QR code is not generated, but an empty colored output is returned instead.

This source accepts the following options:

Specify an expression for the width of the rendered QR code, with and without padding. The qrcode_width expression can reference the value set by the padded_qrcode_width expression, and vice versa. By default padded_qrcode_width is set to qrcode_width, meaning that there is no padding.

These expressions are evaluated only once, when initializing the source. See the qrencode Expressions section for details.

Note that some of the constants are missing for the source (for example x, t or n), since they only make sense when evaluating the expression for each frame rather than at initialization time.

Specify the frame rate of the sourced video, as the number of frames generated per second. It has to be a string in the format frame_rate_num/frame_rate_den, an integer number, a floating point number or a valid video frame rate abbreviation. The default value is "25".
Instruct libqrencode to use case sensitive encoding. This is enabled by default. This can be disabled to reduce the QR encoding size.
Specify the QR encoding error correction level. With a higher correction level, the encoding size will increase but the code will be more robust to corruption. The lowest level is L.

It accepts the following values:

Select how the input text is expanded. Can be either "none", or "normal" (default). See the qrencode Text expansion section for details.
Define the text to be rendered. If no text is specified, no QR code is encoded (just an empty colored frame).

In case expansion is enabled, the text is treated as a text template, using the qrencode expansion mechanism. See the qrencode Text expansion section for details.

Set the QR code and background color. The default value of foreground_color is "black", the default value of background_color is "white".

For the syntax of the color options, check the "Color" section in the ffmpeg-utils manual.

Examples

  • Generate a QR code encoding the specified text with the default size:
    qrencodesrc=text=www.ffmpeg.org
    
  • Same as above, but select blue on pink colors:
    qrencodesrc=text=www.ffmpeg.org:bc=pink:fc=blue
    
  • Generate a QR code with width of 200 pixels and padding, making the padded width 4/3 of the QR code width:
    qrencodesrc=text=www.ffmpeg.org:q=200:Q=4/3*q
    
  • Generate a QR code with padded width of 200 pixels and padding, making the QR code width 3/4 of the padded width:
    qrencodesrc=text=www.ffmpeg.org:Q=200:q=3/4*Q
    
  • Generate a QR code encoding the frame number:
    qrencodesrc=text=%{n}
    
  • Generate a QR code encoding the GMT timestamp:
    qrencodesrc=text=%{gmtime}
    
  • Generate a QR code encoding the timestamp expressed as a float:
    qrencodesrc=text=%{pts}
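    
  • Preview the QR code live with ffplay (a minimal sketch):
    ffplay -f lavfi qrencodesrc=text=%{pts}:q=200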
    

The "allrgb" source returns frames of size 4096x4096 of all rgb colors.

The "allyuv" source returns frames of size 4096x4096 of all yuv colors.

The "color" source provides an uniformly colored input.

The "colorchart" source provides a colors checker chart.

The "colorspectrum" source provides a color spectrum input.

The "haldclutsrc" source provides an identity Hald CLUT. See also haldclut filter.

The "nullsrc" source returns unprocessed video frames. It is mainly useful to be employed in analysis / debugging tools, or as the source for filters which ignore the input data.

The "pal75bars" source generates a color bars pattern, based on EBU PAL recommendations with 75% color levels.

The "pal100bars" source generates a color bars pattern, based on EBU PAL recommendations with 100% color levels.

The "rgbtestsrc" source generates an RGB test pattern useful for detecting RGB vs BGR issues. You should see a red, green and blue stripe from top to bottom.

The "smptebars" source generates a color bars pattern, based on the SMPTE Engineering Guideline EG 1-1990.

The "smptehdbars" source generates a color bars pattern, based on the SMPTE RP 219-2002.

The "testsrc" source generates a test video pattern, showing a color pattern, a scrolling gradient and a timestamp. This is mainly intended for testing purposes.

The "testsrc2" source is similar to testsrc, but supports more pixel formats instead of just "rgb24". This allows using it as an input for other tests without requiring a format conversion.

The "yuvtestsrc" source generates an YUV test pattern. You should see a y, cb and cr stripe from top to bottom.

The sources accept the following parameters:

Specify the level of the Hald CLUT, only available in the "haldclutsrc" source. A level of "N" generates a picture of "N*N*N" by "N*N*N" pixels to be used as identity matrix for 3D lookup tables. Each component is coded on a "1/(N*N)" scale.
Specify the color of the source, only available in the "color" source. For the syntax of this option, check the "Color" section in the ffmpeg-utils manual.
Specify the size of the sourced video. For the syntax of this option, check the "Video size" section in the ffmpeg-utils manual. The default value is "320x240".

This option is not available with the "allrgb", "allyuv", and "haldclutsrc" sources.

Specify the frame rate of the sourced video, as the number of frames generated per second. It has to be a string in the format frame_rate_num/frame_rate_den, an integer number, a floating point number or a valid video frame rate abbreviation. The default value is "25".
Set the duration of the sourced video. See the Time duration section in the ffmpeg-utils(1) manual for the accepted syntax.

If not specified, or the expressed duration is negative, the video is supposed to be generated forever.

Since the frame rate is used as time base, all frames including the last one will have their full duration. If the specified duration is not a multiple of the frame duration, it will be rounded up.

Set the sample aspect ratio of the sourced video.
Specify the alpha (opacity) of the background, only available in the "testsrc2" source. The value must be between 0 (fully transparent) and 255 (fully opaque, the default).
Set the number of decimals to show in the timestamp, only available in the "testsrc" source.

The displayed timestamp value will correspond to the original timestamp value multiplied by the power of 10 of the specified value. Default value is 0.

Set the type of the color spectrum, only available in the "colorspectrum" source. Can be one of the following:
Set patch size of single color patch, only available in the "colorchart" source. Default is "64x64".
Set colorchecker colors preset, only available in the "colorchart" source.

Available values are:

Default value is "reference".

Examples

  • Generate a video with a duration of 5.3 seconds, with size 176x144 and a frame rate of 10 frames per second:
    testsrc=duration=5.3:size=qcif:rate=10
    
  • The following graph description will generate a red source with an opacity of 0.2, with size "qcif" and a frame rate of 10 frames per second:
    color=c=red@0.2:s=qcif:r=10
    
  • If the input content is to be ignored, "nullsrc" can be used. The following command generates noise in the luma plane by employing the "geq" filter:
    nullsrc=s=256x256, geq=random(1)*255:128:128
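    
  • Show three decimal digits in the testsrc timestamp (a sketch; the decimals option name is assumed for the option described above):
    testsrc=decimals=3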
    

Commands

The "color" source supports the following commands:

Set the color of the created image. Accepts the same syntax as the corresponding color option.
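
For example, a minimal sketch that switches the source color at second 3 via the sendcmd filter described below (the command name c is assumed to match the color option's short name):

ffplay -f lavfi "color=c=red,sendcmd=c='3.0 color c blue'"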

Generate video using an OpenCL program.

OpenCL program source file.
Kernel name in program.
Size of frames to generate. This must be set.
format
Pixel format to use for the generated frames. This must be set.
Number of frames generated every second. Default value is '25'.

For details of how the program loading works, see the program_opencl filter.

Example programs:

  • Generate a color ramp by setting pixel values from the position of the pixel in the output image. (Note that this will work with all pixel formats, but the generated output will not be the same.)
    __kernel void ramp(__write_only image2d_t dst,
                       unsigned int index)
    {
        int2 loc = (int2)(get_global_id(0), get_global_id(1));
    
        float4 val;
        val.xy = val.zw = convert_float2(loc) / convert_float2(get_image_dim(dst));
    
        write_imagef(dst, loc, val);
    }
    
  • Generate a Sierpinski carpet pattern, panning by a single pixel each frame.
    __kernel void sierpinski_carpet(__write_only image2d_t dst,
                                    unsigned int index)
    {
        int2 loc = (int2)(get_global_id(0), get_global_id(1));
    
        float4 value = 0.0f;
        int x = loc.x + index;
        int y = loc.y + index;
        while (x > 0 || y > 0) {
            if (x % 3 == 1 && y % 3 == 1) {
                value = 1.0f;
                break;
            }
            x /= 3;
            y /= 3;
        }
    
        write_imagef(dst, loc, value);
    }
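    
To run one of these programs, an OpenCL device must be initialized and the generated frames downloaded to system memory before encoding. A minimal sketch, assuming the first kernel above is stored in a hypothetical file ramp.cl (the source and kernel option names are assumed):

ffmpeg -init_hw_device opencl=ocl -filter_hw_device ocl \
       -f lavfi -i openclsrc=source=ramp.cl:kernel=ramp:size=640x480:format=yuv420p \
       -vf hwdownload,format=yuv420p -t 5 output.mp4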
    

Generate a Sierpinski carpet/triangle fractal, and randomly pan around.

This source accepts the following options:

Set frame size. For the syntax of this option, check the "Video size" section in the ffmpeg-utils manual. Default value is "640x480".
Set frame rate, expressed as number of frames per second. Default value is "25".
Set seed which is used for random panning.
Set max jump for single pan destination. Allowed range is from 1 to 10000.
Set fractal type, can be default "carpet" or "triangle".

Generate a zoneplate test video pattern.

This source accepts the following options:

Set frame size. For the syntax of this option, check the "Video size" section in the ffmpeg-utils manual. Default value is "320x240".
Set frame rate, expressed as number of frames per second. Default value is "25".
Set the duration of the sourced video. See the Time duration section in the ffmpeg-utils(1) manual for the accepted syntax.

If not specified, or the expressed duration is negative, the video is supposed to be generated forever.

Set the sample aspect ratio of the sourced video.
Set precision in bits for look-up table for sine calculations. Default value is 10. Allowed range is from 4 to 16.
Set horizontal axis offset for output signal. Default value is 0.
Set vertical axis offset for output signal. Default value is 0.
Set time axis offset for output signal. Default value is 0.
Set 0-order, constant added to signal phase. Default value is 0.
Set 1-order, phase factor multiplier for horizontal axis. Default value is 0.
Set 1-order, phase factor multiplier for vertical axis. Default value is 0.
Set 1-order, phase factor multiplier for time axis. Default value is 0.
Set phase factor multipliers for combination of spatial and temporal axis. Default value is 0.
Set 2-order, phase factor multiplier for horizontal axis. Default value is 0.
Set 2-order, phase factor multiplier for vertical axis. Default value is 0.
Set 2-order, phase factor multiplier for time axis. Default value is 0.
Set the constant added to final phase to produce chroma-blue component of signal. Default value is 0.
Set the constant added to final phase to produce chroma-red component of signal. Default value is 0.

Commands

This source supports some of the above options as commands.

Examples

  • Generate horizontal color sine sweep:
    zoneplate=ku=512:kv=0:kt2=0:kx2=256:s=wvga:xo=-426:kt=11
    
  • Generate vertical color sine sweep:
    zoneplate=ku=512:kv=0:kt2=0:ky2=156:s=wvga:yo=-240:kt=11
    
  • Generate circular zone-plate:
    zoneplate=ku=512:kv=100:kt2=0:ky2=256:kx2=556:s=wvga:yo=0:kt=11
    

Below is a description of the currently available video sinks.

Buffer video frames, and make them available to the end of the filter graph.

This sink is mainly intended for programmatic use, in particular through the interface defined in libavfilter/buffersink.h or the options system.

It accepts a pointer to an AVBufferSinkContext structure, which defines the incoming buffers' formats, to be passed as the opaque parameter to "avfilter_init_filter" for initialization.

Null video sink: do absolutely nothing with the input video. It is mainly useful as a template and for use in analysis / debugging tools.

Below is a description of the currently available multimedia filters.

Convert input audio to a 3D scope video output.

The filter accepts the following options:

Set frame rate, expressed as number of frames per second. Default value is "25".
Specify the video size for the output. For the syntax of this option, check the "Video size" section in the ffmpeg-utils manual. Default value is "hd720".
Set the camera field of view. Default is 90 degrees. Allowed range is from 40 to 150.
Set the camera roll.
Set the camera pitch.
Set the camera yaw.
Set the camera zoom on X-axis.
Set the camera zoom on Y-axis.
Set the camera zoom on Z-axis.
Set the camera position on X-axis.
Set the camera position on Y-axis.
Set the camera position on Z-axis.
Set the length of displayed audio waves in number of frames.
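
For example, to visualize an audio file while listening to it (a minimal sketch; the fov option name is assumed for the field of view option above):

ffplay -f lavfi 'amovie=input.mp3, asplit [a][out1]; [a] a3dscope=fov=60 [out0]'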

Commands

The filter supports some of the above options as commands.

Convert input audio to a video output, displaying the audio bit scope.

The filter accepts the following options:

Set frame rate, expressed as number of frames per second. Default value is "25".
Specify the video size for the output. For the syntax of this option, check the "Video size" section in the ffmpeg-utils manual. Default value is "1024x256".
Specify a list of colors, separated by space or by '|', which will be used to draw the channels. Unrecognized or missing colors will be replaced by white.
Set output mode. Can be "bars" or "trace". Default is "bars".
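
For example, to display the bit scope of an audio file (a minimal sketch; the mode option name is assumed from the description above):

ffplay -f lavfi 'amovie=input.mp3, asplit [a][out1]; [a] abitscope=mode=trace [out0]'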

Draw a graph using input audio metadata.

See the drawgraph filter.

See the graphmonitor filter.

Convert input audio to a video output, displaying the volume histogram.

The filter accepts the following options:

Specify how the histogram is calculated.

It accepts the following values:

Use single histogram for all channels.
Use separate histogram for each channel.

Default is "single".

Set frame rate, expressed as number of frames per second. Default value is "25".
Specify the video size for the output. For the syntax of this option, check the "Video size" section in the ffmpeg-utils manual. Default value is "hd720".
scale
Set display scale.

It accepts the following values:

logarithmic
square root
cubic root
linear
reverse logarithmic

Default is "log".

Set amplitude scale.

It accepts the following values:

logarithmic
linear

Default is "log".

Set how many frames to accumulate in the histogram. Default is 1. Setting this to -1 accumulates all frames.
Set histogram ratio of window height.
Set sonogram sliding.

It accepts the following values:

Replace old rows with new ones.
scroll
Scroll from top to bottom.

Default is "replace".

Set histogram mode.

It accepts the following values:

Use absolute values of samples.
Use untouched values of samples.

Default is "abs".

Measures the phase of the input audio, which is exported as metadata "lavfi.aphasemeter.phase", representing the mean phase of the current audio frame. A video output can also be produced and is enabled by default. The audio is passed through as the first output.

Audio will be rematrixed to stereo if it has a different channel layout. Phase value is in range "[-1, 1]" where -1 means left and right channels are completely out of phase and 1 means channels are in phase.

The filter accepts the following options, all related to its video output:

Set the output frame rate. Default value is 25.
Set the video size for the output. For the syntax of this option, check the "Video size" section in the ffmpeg-utils manual. Default value is "800x400".
Specify the red, green, blue contrast. Default values are 2, 7 and 1. Allowed range is "[0, 255]".
Set color which will be used for drawing median phase. If color is "none" which is default, no median phase value will be drawn.
Enable video output. Default is enabled.

phasing detection

The filter also detects out of phase and mono sequences in stereo streams. It logs the sequence start, end and duration when the sequence lasts at least as long as the set minimum duration.

The filter accepts the following options for this detection:

Enable mono and out of phase detection. Default is disabled.
Set phase tolerance for mono detection, in amplitude ratio. Default is 0. Allowed range is "[0, 1]".
Set the angle threshold for out of phase detection, in degrees. Default is 170. Allowed range is "[90, 180]".
Set mono or out of phase duration until notification, expressed in seconds. Default is 2.

Examples

Complete example with ffmpeg to detect 1 second of mono with 0.001 phase tolerance:
ffmpeg -i stereo.wav -af aphasemeter=video=0:phasing=1:duration=1:tolerance=0.001 -f null -

Convert input audio to a video output, representing the audio vector scope.

The filter is used to measure the difference between the channels of a stereo audio stream. A monaural signal, consisting of identical left and right signals, results in a straight vertical line. Any stereo separation is visible as a deviation from this line, creating a Lissajous figure. If a straight but horizontal line appears, this indicates that the left and right channels are out of phase.

The filter accepts the following options:

Set the vectorscope mode.

Available values are:

Lissajous rotated by 45 degrees.
Same as above but not rotated.
Shape resembling half of circle.

Default value is lissajous.

Set the video size for the output. For the syntax of this option, check the "Video size" section in the ffmpeg-utils manual. Default value is "400x400".
Set the output frame rate. Default value is 25.
Specify the red, green, blue and alpha contrast. Default values are 40, 160, 80 and 255. Allowed range is "[0, 255]".
Specify the red, green, blue and alpha fade. Default values are 15, 10, 5 and 5. Allowed range is "[0, 255]".
Set the zoom factor. Default value is 1. Allowed range is "[0, 10]". Values lower than 1 will auto adjust zoom factor to maximal possible value.
Set the vectorscope drawing mode.

Available values are:

Draw dot for each sample.
Draw line between previous and current sample.
Draw anti-aliased line between previous and current sample.

Default value is dot.

scale
Specify amplitude scale of audio samples.

Available values are:

Linear.
Square root.
Cubic root.
Logarithmic.
Swap left channel axis with right channel axis.
Mirror axis.
No mirror.
Mirror only x axis.
Mirror only y axis.
Mirror both axes.

Examples

Complete example using ffplay:
ffplay -f lavfi 'amovie=input.mp3, asplit [a][out1];
             [a] avectorscope=zoom=1.3:rc=2:gc=200:bc=10:rf=1:gf=8:bf=7 [out0]'

Commands

This filter supports all the above options as commands, except "size" and "rate".

Benchmark part of a filtergraph.

The filter accepts the following options:

Start or stop a timer.

Available values are:

Get the current time, set it as frame metadata (using the key "lavfi.bench.start_time"), and forward the frame to the next filter.
Get the current time and fetch the "lavfi.bench.start_time" metadata from the input frame metadata to get the time difference. Time difference, average, maximum and minimum time (respectively "t", "avg", "max" and "min") are then printed. The timestamps are expressed in seconds.

Examples

Benchmark selectivecolor filter:
bench=start,selectivecolor=reds=-.2 .12 -.49,bench=stop
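
The same graph can be run end-to-end with ffmpeg, discarding the output (a minimal sketch; input.mkv is a placeholder):

ffmpeg -i input.mkv -vf 'bench=start,selectivecolor=reds=-.2 .12 -.49,bench=stop' -f null -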

Concatenate audio and video streams, joining them together one after the other.

The filter works on segments of synchronized video and audio streams. All segments must have the same number of streams of each type, and that will also be the number of streams at output.

The filter accepts the following options:

Set the number of segments. Default is 2.
Set the number of output video streams, that is also the number of video streams in each segment. Default is 1.
Set the number of output audio streams, that is also the number of audio streams in each segment. Default is 0.
Activate unsafe mode: do not fail if segments have a different format.

The filter has v+a outputs: first v video outputs, then a audio outputs.

There are nx(v+a) inputs: first the inputs for the first segment, in the same order as the outputs, then the inputs for the second segment, etc.

Related streams do not always have exactly the same duration, for various reasons including codec frame size or sloppy authoring. For that reason, related synchronized streams (e.g. a video and its audio track) should be concatenated at once. The concat filter will use the duration of the longest stream in each segment (except the last one), and if necessary pad shorter audio streams with silence.

For this filter to work correctly, all segments must start at timestamp 0.

All corresponding streams must have the same parameters in all segments; the filtering system will automatically select a common pixel format for video streams, and a common sample format, sample rate and channel layout for audio streams, but other settings, such as resolution, must be converted explicitly by the user.

Different frame rates are acceptable but will result in variable frame rate at output; be sure to configure the output file to handle it.

Examples

  • Concatenate an opening, an episode and an ending, all in bilingual version (video in stream 0, audio in streams 1 and 2):
    ffmpeg -i opening.mkv -i episode.mkv -i ending.mkv -filter_complex \
      '[0:0] [0:1] [0:2] [1:0] [1:1] [1:2] [2:0] [2:1] [2:2]
       concat=n=3:v=1:a=2 [v] [a1] [a2]' \
      -map '[v]' -map '[a1]' -map '[a2]' output.mkv
    
  • Concatenate two parts, handling audio and video separately, using the (a)movie sources, and adjusting the resolution:
    movie=part1.mp4, scale=512:288 [v1] ; amovie=part1.mp4 [a1] ;
    movie=part2.mp4, scale=512:288 [v2] ; amovie=part2.mp4 [a2] ;
    [v1] [v2] concat [outv] ; [a1] [a2] concat=v=0:a=1 [outa]
    

    Note that a desync will happen at the stitch if the audio and video streams do not have exactly the same duration in the first file.

Commands

This filter supports the following commands:

Close the current segment and step to the next one.

EBU R128 scanner filter. This filter takes an audio stream and analyzes its loudness level. By default, it logs a message at a frequency of 10Hz with the Momentary loudness (identified by "M"), Short-term loudness ("S"), Integrated loudness ("I") and Loudness Range ("LRA").

The filter can only analyze streams which have a double-precision floating point sample format. The input stream will be converted to this format, if needed. Users may need to insert the aformat and/or aresample filters after this filter to restore the original parameters.

The filter also has a video output (see the video option) with a real time graph to observe the loudness evolution. The graphic contains the logged message mentioned above, so it is no longer printed when this option is set, unless verbose logging is enabled. The main graphing area contains the short-term loudness (3 seconds of analysis), and the gauge on the right is for the momentary loudness (400 milliseconds), but can optionally be configured to instead display the short-term loudness (see gauge).

The green area marks a +/- 1LU target range around the target loudness (-23LUFS by default, unless modified through target).

More information about the Loudness Recommendation EBU R128 on http://tech.ebu.ch/loudness.

The filter accepts the following options:

Activate the video output. The audio stream is passed unchanged whether this option is set or not. The video stream will be the first output stream if activated. Default is 0.
Set the video size. This option is for video only. For the syntax of this option, check the "Video size" section in the ffmpeg-utils manual. Default and minimum resolution is "640x480".
Set the EBU scale meter. Default is 9. Common values are 9 and 18, respectively for EBU scale meter +9 and EBU scale meter +18. Any other integer value in this range is allowed.
Set metadata injection. If set to 1, the audio input will be segmented into 100ms output frames, each of them containing various loudness information in metadata. All the metadata keys are prefixed with "lavfi.r128.".

Default is 0.

Force the frame logging level.

Available values are:

logging disabled
information logging level
verbose logging level

By default, the logging level is set to info. If the video or the metadata options are set, it switches to verbose.

Set peak mode(s).

Available modes can be cumulated (the option is a "flag" type). Possible values are:

Disable any peak mode (default).
Enable sample-peak mode.

Simple peak mode looking for the highest sample value. It logs a message for sample-peak (identified by "SPK").

Enable true-peak mode.

If enabled, the peak lookup is done on an over-sampled version of the input stream for better peak accuracy. It logs a message for true-peak (identified by "TPK") and true-peak per frame (identified by "FTPK"). This mode requires a build with "libswresample".

Treat mono input files as "dual mono". If a mono file is intended for playback on a stereo system, its EBU R128 measurement will be perceptually incorrect. If set to "true", this option will compensate for this effect. Multi-channel input files are not affected by this option.
Set a specific pan law to be used for the measurement of dual mono files. This parameter is optional, and has a default value of -3.01dB.
Set a specific target level (in LUFS) used as relative zero in the visualization. This parameter is optional and has a default value of -23LUFS as specified by EBU R128. However, material published online may prefer a level of -16LUFS (e.g. for use with podcasts or video platforms).
Set the value displayed by the gauge. Valid values are "momentary" and "shortterm". By default the momentary value will be used, but in certain scenarios it may be more useful to observe the short term value instead (e.g. live mixing).
scale
Set the display scale for the loudness. Valid parameters are "absolute" (in LUFS) or "relative" (LU) relative to the target. This only affects the video output, not the summary or continuous log output.
Read-only exported value for measured integrated loudness, in LUFS.
Read-only exported value for measured loudness range, in LU.
Read-only exported value for measured LRA low, in LUFS.
Read-only exported value for measured LRA high, in LUFS.
Read-only exported value for measured sample peak, in dBFS.
Read-only exported value for measured true peak, in dBFS.

Examples

  • Real-time graph using ffplay, with a EBU scale meter +18:
    ffplay -f lavfi -i "amovie=input.mp3,ebur128=video=1:meter=18 [out0][out1]"
    
  • Run an analysis with ffmpeg:
    ffmpeg -nostats -i input.mp3 -filter_complex ebur128 -f null -
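    
  • Print the per-frame loudness values injected by the metadata option, using the ametadata filter (a sketch; see the metadata filter below):
    ffmpeg -nostats -i input.mp3 -af ebur128=metadata=1,ametadata=mode=print -f null -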
    

Temporally interleave frames from several inputs.

"interleave" works with video inputs, "ainterleave" with audio.

These filters read frames from several inputs and send the oldest queued frame to the output.

Input streams must have well defined, monotonically increasing frame timestamp values.

In order to submit one frame to the output, these filters need to queue at least one frame for each input, so they cannot work if one input is not yet terminated and will not receive incoming frames.

For example consider the case when one input is a "select" filter which always drops input frames. The "interleave" filter will keep reading from that input, but it will never be able to send new frames to output until the input sends an end-of-stream signal.

Also, depending on inputs synchronization, the filters will drop frames in case one input receives more frames than the other ones, and the queue is already filled.

These filters accept the following options:

Set the number of different inputs; it is 2 by default.
How to determine the end-of-stream.
The duration of the longest input. (default)
The duration of the shortest input.
The duration of the first input.

Examples

  • Interleave frames belonging to different streams using ffmpeg:
    ffmpeg -i bambi.avi -i pr0n.mkv -filter_complex "[0:v][1:v] interleave" out.avi
    
  • Add flickering blur effect:
    select='if(gt(random(0), 0.2), 1, 2)':n=2 [tmp], boxblur=2:2, [tmp] interleave
    

Measure filtering latency.

Report the filtering latency of the previous filter: the delay in number of audio samples for audio filters, or in number of video frames for video filters.

At the end of the input stream, the filter will report the minimum and maximum measured latency for the previous filter in the filtergraph.
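
A minimal sketch that reports the latency introduced by a preceding filter (hqdn3d is just an arbitrary example filter):

ffmpeg -i input.mkv -vf hqdn3d,latency -f null -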

Manipulate frame metadata.

This filter accepts the following options:

Set mode of operation of the filter.

Can be one of the following:

If both "value" and "key" is set, select frames which have such metadata. If only "key" is set, select every frame that has such key in metadata.
Add new metadata "key" and "value". If key is already available do nothing.
Modify value of already present key.
If "value" is set, delete only keys that have such value. Otherwise, delete key. If "key" is not set, delete all metadata values in the frame.
Print key and its value if metadata was found. If "key" is not set print all metadata values available in frame.
Set key used with all modes. Must be set for all modes except "print" and "delete".
Set the metadata value which will be used. This option is mandatory for the "modify" and "add" modes.
Which function to use when comparing metadata value and "value".

Can be one of following:

Values are interpreted as strings, returns true if the metadata value is the same as "value".
Values are interpreted as strings, returns true if metadata value starts with the "value" option string.
Values are interpreted as floats, returns true if metadata value is less than "value".
Values are interpreted as floats, returns true if "value" is equal with metadata value.
Values are interpreted as floats, returns true if metadata value is greater than "value".
Values are interpreted as floats, returns true if expression from option "expr" evaluates to true.
Values are interpreted as strings, returns true if metadata value ends with the "value" option string.
Set expression which is used when "function" is set to "expr". The expression is evaluated through the eval API and can contain the following constants:
Float representation of "value" from metadata key.
Float representation of "value" as supplied by user in "value" option.
file
If specified in "print" mode, output is written to the named file. Instead of plain filename any writable url can be specified. Filename ``-'' is a shorthand for standard output. If "file" option is not set, output is written to the log with AV_LOG_INFO loglevel.
Reduces buffering in print mode when output is written to a URL set using file.

Examples

  • Print all metadata values for frames with key "lavfi.signalstats.YDIF" with values between 0 and 1.
    signalstats,metadata=print:key=lavfi.signalstats.YDIF:value=0:function=expr:expr='between(VALUE1,0,1)'
    
  • Print silencedetect output to file metadata.txt.
    silencedetect,ametadata=mode=print:file=metadata.txt
    
  • Direct all metadata to a pipe with file descriptor 4.
    metadata=mode=print:file='pipe\:4'
    

Set read/write permissions for the output frames.

These filters are mainly aimed at developers, to test the direct path in the following filter in the filtergraph.

The filters accept the following options:

Select the permissions mode.

It accepts the following values:

Do nothing. This is the default.
Set all the output frames read-only.
Set all the output frames directly writable.
Make the frame read-only if writable, and writable if read-only.
random
Set each output frame read-only or writable randomly.
Set the seed for the random mode, which must be an integer between 0 and UINT32_MAX. If not specified, or if explicitly set to -1, the filter will try to use a good random seed on a best effort basis.

Note: in case of an auto-inserted filter between the permission filter and the following one, the permission might not be received as expected in that following filter. Inserting a format or aformat filter before the perms/aperms filter can avoid this problem.
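
For example, to randomly toggle the writability of frames while exercising a following filter (a minimal sketch; the mode and seed option names are assumed from the descriptions above, and gblur is just an arbitrary example filter):

ffmpeg -i input.mkv -vf perms=mode=random:seed=42,gblur -f null -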

Slow down filtering to match real time approximately.

These filters will pause the filtering for a variable amount of time to match the output rate with the input timestamps. They are similar to the -re option of ffmpeg.

They accept the following options:

Time limit for the pauses. Any pause longer than that will be considered a timestamp discontinuity and reset the timer. Default is 2 seconds.
Speed factor for processing. The value must be a float larger than zero. Values larger than 1.0 will result in faster than realtime processing, smaller will slow processing down. The limit is automatically adapted accordingly. Default is 1.0.

A processing speed faster than what is possible without these filters cannot be achieved.
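
For example, to slow processing down to half of real time (a minimal sketch; the speed option is described above):

ffmpeg -i input.mkv -vf realtime=speed=0.5 -f null -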

Commands

Both filters support all the above options as commands.

Split single input stream into multiple streams.

This filter does the opposite of the concat filter.

"segment" works on video frames, "asegment" on audio samples.

This filter accepts the following options:

Timestamps of output segments, separated by '|'. The first segment will run from the beginning of the input stream. The last segment will run until the end of the input stream.
Exact frame/sample count to split the segments.

In all cases, prefixing a segment with '+' will make it relative to the previous segment.

Examples

Split the input audio stream into three output audio streams: the first runs from the start of the input to the 60th second, the second from the 60th second to the 150th second, and the third from the 150th second to the end of the input stream:
asegment=timestamps="60|150"
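
Similarly, split a video stream after exactly 100 and 200 frames (a sketch; the frames option name is assumed for the frame count option described above):

segment=frames="100|200"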

Select frames to pass in output.

This filter accepts the following options:

Set expression, which is evaluated for each input frame.

If the expression is evaluated to zero, the frame is discarded.

If the evaluation result is negative or NaN, the frame is sent to the first output; otherwise it is sent to the output with index "ceil(val)-1", assuming that the input index starts from 0.

For example a value of 1.2 corresponds to the output with index "ceil(1.2)-1 = 2-1 = 1", that is the second output.

Set the number of outputs. The output to which to send the selected frame is based on the result of the evaluation. Default value is 1.

The expression can contain the following constants:

The (sequential) number of the filtered frame, starting from 0.
The (sequential) number of the selected frame, starting from 0.
The sequential number of the last selected frame. It's NAN if undefined.
The timebase of the input timestamps.
The PTS (Presentation TimeStamp) of the filtered frame, expressed in TB units. It's NAN if undefined.
The PTS of the filtered frame, expressed in seconds. It's NAN if undefined.
The PTS of the previously filtered frame. It's NAN if undefined.
The PTS of the last previously filtered frame. It's NAN if undefined.
The PTS of the last previously selected frame, expressed in seconds. It's NAN if undefined.
The first PTS in the stream which is not NAN. It remains NAN if not found.
The first PTS, in seconds, in the stream which is not NAN. It remains NAN if not found.
The type of the filtered frame. It can assume one of the following values:
The frame interlace type. It can assume one of the following values:
The frame is progressive (not interlaced).
The frame is top-field-first.
The frame is bottom-field-first.
the number of selected samples before the current frame
the number of samples in the current frame
the input sample rate
This is 1 if the filtered frame is a key-frame, 0 otherwise.
the position in the file of the filtered frame, -1 if the information is not available (e.g. for synthetic video); deprecated, do not use
value between 0 and 1 to indicate a new scene; a low value reflects a low probability for the current frame to introduce a new scene, while a higher value means the current frame is more likely to be one (see the example below)
The concat demuxer can select only part of a concat input file by setting an inpoint and an outpoint, but the output packets may not be entirely contained in the selected interval. By using this variable, it is possible to skip frames generated by the concat demuxer which are not exactly contained in the selected interval.

This works by comparing the frame pts against the lavf.concat.start_time and the lavf.concat.duration packet metadata values which are also present in the decoded frames.

The concatdec_select variable is -1 if the frame pts is at least start_time and either the duration metadata is missing or the frame pts is less than start_time + duration, 0 otherwise, and NaN if the start_time metadata is missing.

That basically means that an input frame is selected if its pts is within the interval set by the concat demuxer.

Represents the width of the input video frame.
Represents the height of the input video frame.
View ID for multi-view video.

The default value of the select expression is "1".

Examples

  • Select all frames in input:
    select
    

    The example above is the same as:

    select=1
    
  • Skip all frames:
    select=0
    
  • Select only I-frames:
    select='eq(pict_type\,I)'
    
  • Select one frame every 100:
    select='not(mod(n\,100))'
    
  • Select only frames contained in the 10-20 time interval:
    select=between(t\,10\,20)
    
  • Select only I-frames contained in the 10-20 time interval:
    select=between(t\,10\,20)*eq(pict_type\,I)
    
  • Select frames with a minimum distance of 10 seconds:
    select='isnan(prev_selected_t)+gte(t-prev_selected_t\,10)'
    
  • Use aselect to select only audio frames with samples number > 100:
    aselect='gt(samples_n\,100)'
    
  • Create a mosaic of the first scenes:
    ffmpeg -i video.avi -vf select='gt(scene\,0.4)',scale=160:120,tile -frames:v 1 preview.png
    

    Comparing scene against a value between 0.3 and 0.5 is generally a sane choice.

  • Send even and odd frames to separate outputs, and compose them:
    select=n=2:e='mod(n, 2)+1' [odd][even]; [odd] pad=h=2*ih [tmp]; [tmp][even] overlay=y=h
    
  • Select useful frames from an ffconcat file which is using inpoints and outpoints but where the source files are not intra frame only.
    ffmpeg -copyts -vsync 0 -segment_time_metadata 1 -i input.ffconcat -vf select=concatdec_select -af aselect=concatdec_select output.avi
    

Send commands to filters in the filtergraph.

These filters read commands to be sent to other filters in the filtergraph.

"sendcmd" must be inserted between two video filters, "asendcmd" must be inserted between two audio filters, but apart from that they act the same way.

The specification of commands can be provided in the filter arguments with the commands option, or in a file specified by the filename option.

These filters accept the following options:

Set the commands to be read and sent to the other filters.
Set the filename of the commands to be read and sent to the other filters.

Commands syntax

A commands description consists of a sequence of interval specifications, comprising a list of commands to be executed when a particular event related to that interval occurs. The occurring event is typically the current frame time entering or leaving a given time interval.

An interval is specified by the following syntax:

<START>[-<END>] <COMMANDS>;

The time interval is specified by the START and END times. END is optional and defaults to the maximum time.

The current frame time is considered within the specified interval if it is included in the interval [START, END), that is when the time is greater than or equal to START and is less than END.

COMMANDS consists of a sequence of one or more command specifications, separated by ",", relating to that interval. The syntax of a command specification is given by:

[<FLAGS>] <TARGET> <COMMAND> <ARG>

FLAGS is optional and specifies the type of events relating to the time interval which enable sending the specified command, and must be a non-null sequence of identifier flags separated by "+" or "|" and enclosed between "[" and "]".

The following flags are recognized:

The command is sent when the current frame timestamp enters the specified interval. In other words, the command is sent when the previous frame timestamp was not in the given interval, and the current is.
The command is sent when the current frame timestamp leaves the specified interval. In other words, the command is sent when the previous frame timestamp was in the given interval, and the current is not.
The command ARG is interpreted as an expression, and the result of the expression is passed as ARG.

The expression is evaluated through the eval API and can contain the following constants:

Original position in the file of the frame, or undefined if undefined for the current frame. Deprecated, do not use.
The presentation timestamp in input.
The count of the input frame for video or audio, starting from 0.
The time in seconds of the current frame.
The start time in seconds of the current command interval.
The end time in seconds of the current command interval.
The interpolated time of the current command interval, TI = (T - TS) / (TE - TS).
The video frame width.
The video frame height.

If FLAGS is not specified, a default value of "[enter]" is assumed.

TARGET specifies the target of the command, usually the name of the filter class or a specific filter instance name.

COMMAND specifies the name of the command for the target filter.

ARG is optional and specifies the optional list of arguments for the given COMMAND.

Between one interval specification and another, whitespace, or sequences of characters starting with "#" until the end of the line, are ignored and can be used to add comments.

A simplified BNF description of the commands specification syntax follows:

<COMMAND_FLAG>  ::= "enter" | "leave"
<COMMAND_FLAGS> ::= <COMMAND_FLAG> [(+|"|")<COMMAND_FLAG>]
<COMMAND>       ::= ["[" <COMMAND_FLAGS> "]"] <TARGET> <COMMAND> [<ARG>]
<COMMANDS>      ::= <COMMAND> [,<COMMANDS>]
<INTERVAL>      ::= <START>[-<END>] <COMMANDS>
<INTERVALS>     ::= <INTERVAL>[;<INTERVALS>]

Examples

  • Specify audio tempo change at second 4:
    asendcmd=c='4.0 atempo tempo 1.5',atempo
    
  • Target a specific filter instance:
    asendcmd=c='4.0 atempo@my tempo 1.5',atempo@my
    
  • Specify a list of drawtext and hue commands in a file.
    # show text in the interval 5-10
    5.0-10.0 [enter] drawtext reinit 'fontfile=FreeSerif.ttf:text=hello world',
             [leave] drawtext reinit 'fontfile=FreeSerif.ttf:text=';
    
    # desaturate the image in the interval 15-20
    15.0-20.0 [enter] hue s 0,
              [enter] drawtext reinit 'fontfile=FreeSerif.ttf:text=nocolor',
              [leave] hue s 1,
              [leave] drawtext reinit 'fontfile=FreeSerif.ttf:text=color';
    
    # apply an exponential saturation fade-out effect, starting from time 25
    25 [enter] hue s exp(25-t)
    

A filtergraph that reads and processes the above command list stored in the file test.cmd can be specified with:

    sendcmd=f=test.cmd,drawtext=fontfile=FreeSerif.ttf:text='',hue
    

Change the PTS (presentation timestamp) of the input frames.

"setpts" works on video frames, "asetpts" on audio frames.

This filter accepts the following options:

The expression which is evaluated for each frame to construct its timestamp.

The expression is evaluated through the eval API and can contain the following constants:

frame rate, only defined for constant frame-rate video
The presentation timestamp in input
The count of the input frame for video, or the number of consumed samples (not including the current frame) for audio, starting from 0.
The number of consumed samples, not including the current frame (only audio)
The number of samples in the current frame (only audio)
The audio sample rate.
The PTS of the first frame.
the time in seconds of the first frame
State whether the current frame is interlaced.
the time in seconds of the current frame
original position in the file of the frame, or undefined if undefined for the current frame; deprecated, do not use
The previous input PTS.
previous input time in seconds
The previous output PTS.
previous output time in seconds
The wallclock (RTC) time in microseconds. This is deprecated, use time(0) instead.
The wallclock (RTC) time at the start of the movie in microseconds.
The timebase of the input timestamps.
The time of the first frame after a command was applied, or the time of the first frame if no commands have been applied.

Examples

  • Start counting PTS from zero
    setpts=PTS-STARTPTS
    
  • Apply fast motion effect:
    setpts=0.5*PTS
    
  • Apply slow motion effect:
    setpts=2.0*PTS
    
  • Set fixed rate of 25 frames per second:
    setpts=N/(25*TB)
    
  • Apply a random jitter effect of +/-100 TB units:
    setpts=PTS+randomi(0\, -100\, 100)
    
  • Set fixed rate 25 fps with some jitter:
    setpts='1/(25*TB) * (N + 0.05 * sin(N*2*PI/25))'
    
  • Apply an offset of 10 seconds to the input PTS:
    setpts=PTS+10/TB
    
  • Generate timestamps from a "live source" and rebase onto the current timebase:
    setpts='(RTCTIME - RTCSTART) / (TB * 1000000)'
    
  • Generate timestamps by counting samples:
    asetpts=N/SR/TB
    

Commands

Both filters support all the above options as commands.

Force color range for the output video frame.

The "setrange" filter marks the color range property for the output frames. It does not change the input frame, but only sets the corresponding property, which affects how the frame is treated by following filters.

The filter accepts the following options:

Available values are:
Keep the same color range property.
Set the color range as unspecified.
Set the color range as limited.
Set the color range as full.

Set the timebase to use for the output frames timestamps. It is mainly useful for testing timebase configuration.

It accepts the following parameters:

The expression which is evaluated into the output timebase.

The value for tb is an arithmetic expression representing a rational. The expression can contain the constants "AVTB" (the default timebase), "intb" (the input timebase) and "sr" (the sample rate, audio only). Default value is "intb".

Examples

  • Set the timebase to 1/25:
    settb=expr=1/25
    
  • Set the timebase to 1/10:
    settb=expr=0.1
    
  • Set the timebase to 1001/1000:
    settb=1+0.001
    
  • Set the timebase to 2*intb:
    settb=2*intb
    
  • Set the default timebase value:
    settb=AVTB
    

Convert input audio to a video output representing the frequency spectrum logarithmically, using the Brown-Puckette constant Q transform algorithm with direct frequency domain coefficient calculation (though the transform itself is not really constant Q; the Q factor is actually variable/clamped), with a musical tone scale from E0 to D#10.

The filter accepts the following options:

Specify the video size for the output. It must be even. For the syntax of this option, check the "Video size" section in the ffmpeg-utils manual. Default value is "1920x1080".
Set the output frame rate. Default value is 25.
Set the bargraph height. It must be even. Default value is -1 which computes the bargraph height automatically.
Set the axis height. It must be even. Default value is -1 which computes the axis height automatically.
Set the sonogram height. It must be even. Default value is -1 which computes the sonogram height automatically.
Set the fullhd resolution. This option is deprecated; use size, s instead. Default value is 1.
Specify the sonogram volume expression. It can contain variables:
the bar_v evaluated expression
the frequency where it is evaluated
the value of timeclamp option

and functions:

A-weighting of equal loudness
B-weighting of equal loudness
C-weighting of equal loudness.

Default value is 16.

Specify the bargraph volume expression. It can contain variables:
the sono_v evaluated expression
the frequency where it is evaluated
the value of timeclamp option

and functions:

A-weighting of equal loudness
B-weighting of equal loudness
C-weighting of equal loudness.

Default value is "sono_v".

Specify the sonogram gamma. Lower gamma makes the spectrum more contrasted, higher gamma gives the spectrum more range. Default value is 3. Acceptable range is "[1, 7]".
Specify the bargraph gamma. Default value is 1. Acceptable range is "[1, 7]".
Specify the bargraph transparency level. Lower value makes the bargraph sharper. Default value is 1. Acceptable range is "[0, 1]".
Specify the transform timeclamp. At low frequency, there is a trade-off between accuracy in the time domain and the frequency domain. If timeclamp is lower, events in the time domain are represented more accurately (such as a fast bass drum); otherwise, events in the frequency domain are represented more accurately (such as a bass guitar). Acceptable range is "[0.002, 1]". Default value is 0.17.
Set attack time in seconds. The default is 0 (disabled). Otherwise, it limits future samples by applying asymmetric windowing in time domain, useful when low latency is required. Accepted range is "[0, 1]".
Specify the transform base frequency. Default value is 20.01523126408007475, which is frequency 50 cents below E0. Acceptable range is "[10, 100000]".
Specify the transform end frequency. Default value is 20495.59681441799654, which is frequency 50 cents above D#10. Acceptable range is "[10, 100000]".
This option is deprecated and ignored.
Specify the transform length in time domain. Use this option to control accuracy trade-off between time domain and frequency domain at every frequency sample. It can contain variables:
the frequency where it is evaluated
the value of timeclamp option.

Default value is "384*tc/(384+tc*f)".

Specify the transform count for every video frame. Default value is 6. Acceptable range is "[1, 30]".
Specify the transform count for every single pixel. Default value is 0, which makes it computed automatically. Acceptable range is "[0, 10]".
Specify the font file for use with freetype to draw the axis. If not specified, the embedded font is used. Note that drawing with a font file or the embedded font is not implemented with custom basefreq and endfreq; use the axisfile option instead.
Specify fontconfig pattern. This has lower priority than fontfile. The ":" in the pattern may be replaced by "|" to avoid unnecessary escaping.
Specify font color expression. This is arithmetic expression that should return integer value 0xRRGGBB. It can contain variables:
the frequency where it is evaluated
the value of timeclamp option

and functions:

midi number of frequency f, some midi numbers: E0(16), C1(24), C2(36), A4(69)
red, green, and blue value of intensity x.

Default value is "st(0, (midi(f)-59.5)/12); st(1, if(between(ld(0),0,1), 0.5-0.5*cos(2*PI*ld(0)), 0)); r(1-ld(1)) + b(ld(1))".

Specify an image file to draw the axis. This option overrides the fontfile and fontcolor options.
Enable/disable drawing text to the axis. If it is set to 0, drawing to the axis is disabled, ignoring the fontfile and axisfile options. Default value is 1.
Set colorspace. The accepted values are:
Unspecified (default)
BT.709
FCC
BT.470BG or BT.601-6 625
SMPTE-170M or BT.601-6 525
SMPTE-240M
BT.2020 with non-constant luminance
Set spectrogram color scheme. This is list of floating point values with format "left_r|left_g|left_b|right_r|right_g|right_b". The default is "1|0.5|0|0|0.5|1".

Examples

  • Playing audio while showing the spectrum:
    ffplay -f lavfi 'amovie=a.mp3, asplit [a][out1]; [a] showcqt [out0]'
    
  • Same as above, but with frame rate 30 fps:
    ffplay -f lavfi 'amovie=a.mp3, asplit [a][out1]; [a] showcqt=fps=30:count=5 [out0]'
    
  • Playing at 1280x720:
    ffplay -f lavfi 'amovie=a.mp3, asplit [a][out1]; [a] showcqt=s=1280x720:count=4 [out0]'
    
  • Disable sonogram display:
    sono_h=0
    
  • A1 and its harmonics: A1, A2, (near)E3, A3:
    ffplay -f lavfi 'aevalsrc=0.1*sin(2*PI*55*t)+0.1*sin(4*PI*55*t)+0.1*sin(6*PI*55*t)+0.1*sin(8*PI*55*t),
                     asplit[a][out1]; [a] showcqt [out0]'
    
  • Same as above, but with more accuracy in frequency domain:
    ffplay -f lavfi 'aevalsrc=0.1*sin(2*PI*55*t)+0.1*sin(4*PI*55*t)+0.1*sin(6*PI*55*t)+0.1*sin(8*PI*55*t),
                     asplit[a][out1]; [a] showcqt=timeclamp=0.5 [out0]'
    
  • Custom volume:
    bar_v=10:sono_v=bar_v*a_weighting(f)
    
  • Custom gamma, making the spectrum linear in amplitude:
    bar_g=2:sono_g=2
    
  • Custom tlength equation:
    tc=0.33:tlength='st(0,0.17); 384*tc / (384 / ld(0) + tc*f /(1-ld(0))) + 384*tc / (tc*f / ld(0) + 384 /(1-ld(0)))'
    
  • Custom fontcolor and fontfile, C-note is colored green, others are colored blue:
    fontcolor='if(mod(floor(midi(f)+0.5),12), 0x0000FF, g(1))':fontfile=myfont.ttf
    
  • Custom font using fontconfig:
    font='Courier New,Monospace,mono|bold'
    
  • Custom frequency range with custom axis using image file:
    axisfile=myaxis.png:basefreq=40:endfreq=10000
    

Convert input audio to video output representing the frequency spectrum, using the Continuous Wavelet Transform with a Morlet wavelet.

The filter accepts the following options:

Specify the video size for the output. For the syntax of this option, check the "Video size" section in the ffmpeg-utils manual. Default value is "640x512".
Set the output frame rate. Default value is 25.
scale
Set the frequency scale used. Allowed values are:

Default value is "linear".

Set the intensity scale used. Allowed values are:

Default value is "log".

Set the minimum frequency that will be used in output. Default is 20 Hz.
Set the maximum frequency that will be used in output. Default is 20000 Hz. The actual upper limit depends on the input audio's sample rate: values greater than the Nyquist frequency are clamped to it.
Set the minimum intensity that will be used in output.
Set the maximum intensity that will be used in output.
Set the logarithmic basis for brightness strength when mapping calculated magnitude values to pixel values. Allowed range is from 0 to 1. Default value is 0.0001.
Set the frequency deviation. Values lower than 1 are more frequency-oriented, while values higher than 1 are more time-oriented. Allowed range is from 0 to 10. Default value is 1.
Set the number of pixels output per second in one row. Allowed range is from 1 to 1024. Default value is 64.
Set the output visual mode. Allowed values are:
Show magnitude.
phase
Show only phase.
Show combination of magnitude and phase. Magnitude is mapped to brightness and phase to color.
Show unique color per channel magnitude.
Show unique color per stereo difference.

Default value is "magnitude".

Set the output slide method. Allowed values are:
scroll
Set the direction method for output slide method. Allowed values are:
Direction from left to right.
Direction from right to left.
Direction from up to down.
Direction from down to up.
Set the ratio of bargraph display to display size. Default is 0.
Set color rotation, must be in [-1.0, 1.0] range. Default value is 0.
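
For example, a minimal invocation displaying the CWT spectrum of an audio file, assuming the s and scale options described above (the input name a.mp3 is a placeholder):

ffplay -f lavfi 'amovie=a.mp3, asplit [a][out1]; [a] showcwt=s=1280x720:scale=log [out0]'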

Convert input audio to video output representing the audio power spectrum. Audio amplitude is on the Y-axis while frequency is on the X-axis.

The filter accepts the following options:

Specify size of video. For the syntax of this option, check the "Video size" section in the ffmpeg-utils manual. Default is "1024x512".
Set video rate. Default is 25.
Set display mode. This sets how each frequency bin will be represented.

It accepts the following values:

Default is "bar".

Set amplitude scale.

It accepts the following values:

Linear scale.
Square root scale.
Cubic root scale.
Logarithmic scale.

Default is "log".

Set frequency scale.

It accepts the following values:

Linear scale.
Logarithmic scale.
Reverse logarithmic scale.

Default is "lin".

Set window size. Allowed range is from 16 to 65536.

Default is 2048.

Set windowing function.

It accepts the following values:

Default is "hanning".

Set window overlap. In range "[0, 1]". Default is 1, which means optimal overlap for selected window function will be picked.
Set time averaging. Setting this to 0 will display current maximal peaks. Default is 1, which means time averaging is disabled.
Specify a list of colors, separated by spaces or by '|', which will be used to draw the channel frequencies. Unrecognized or missing colors will be replaced by white.
Set channel display mode.

It accepts the following values:

Default is "combined".

Set minimum amplitude used in "log" amplitude scaler.
data
Set data display mode.

It accepts the following values:

Default is "magnitude".

Set channels to use when processing audio. By default all are processed.
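
For example, a minimal sketch assuming the option names ascale and fscale for the amplitude and frequency scales described above (the input name a.mp3 is a placeholder):

ffplay -f lavfi 'amovie=a.mp3, asplit [a][out1]; [a] showfreqs=s=1024x512:ascale=log:fscale=log [out0]'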

Convert stereo input audio to a video output, representing the spatial relationship between two channels.

The filter accepts the following options:

Specify the video size for the output. For the syntax of this option, check the "Video size" section in the ffmpeg-utils manual. Default value is "512x512".
Set window size. Allowed range is from 1024 to 65536. Default size is 4096.
Set window function.

It accepts the following values:

Default value is "hann".

Set output framerate.
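
For example, a minimal sketch showing the spatial relationship of a stereo file (the input name stereo.wav is a placeholder):

ffplay -f lavfi 'amovie=stereo.wav, asplit [a][out1]; [a] showspatial=s=512x512 [out0]'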

Convert input audio to a video output, representing the audio frequency spectrum.

The filter accepts the following options:

Specify the video size for the output. For the syntax of this option, check the "Video size" section in the ffmpeg-utils manual. Default value is "640x512".
Specify how the spectrum should slide along the window.

It accepts the following values:

the samples start again on the left when they reach the right
scroll
the samples scroll from right to left
frames are only produced when the samples reach the right
the samples scroll from left to right
the samples start again on the right when they reach the left

Default value is "replace".

Specify display mode.

It accepts the following values:

all channels are displayed in the same row
all channels are displayed in separate rows

Default value is combined.

Specify display color mode.

It accepts the following values:

each channel is displayed in a separate color
each channel is displayed using the same color scheme
each channel is displayed using the rainbow color scheme
each channel is displayed using the moreland color scheme
each channel is displayed using the nebulae color scheme
each channel is displayed using the fire color scheme
each channel is displayed using the fiery color scheme
each channel is displayed using the fruit color scheme
each channel is displayed using the cool color scheme
each channel is displayed using the magma color scheme
each channel is displayed using the green color scheme
each channel is displayed using the viridis color scheme
each channel is displayed using the plasma color scheme
each channel is displayed using the cividis color scheme
each channel is displayed using the terrain color scheme

Default value is channel.

scale
Specify scale used for calculating intensity color values.

It accepts the following values:

linear
square root, default
cubic root
logarithmic
4thrt
4th root
5thrt
5th root

Default value is sqrt.

Specify frequency scale.

It accepts the following values:

linear
logarithmic

Default value is lin.

Set saturation modifier for displayed colors. Negative values provide alternative color scheme. 0 is no saturation at all. Saturation must be in [-10.0, 10.0] range. Default value is 1.
Set window function.

It accepts the following values:

Default value is "hann".

Set orientation of time vs frequency axis. Can be "vertical" or "horizontal". Default is "vertical".
Set the ratio of the overlap window. Default value is 0. When the value is 1, overlap is set to the recommended size for the specific window function currently used.
Set scale gain for calculating intensity color values. Default value is 1.
data
Set which data to display. Can be "magnitude" (default), "phase", or unwrapped phase: "uphase".
Set color rotation, must be in [-1.0, 1.0] range. Default value is 0.
Set start frequency from which to display spectrogram. Default is 0.
Set stop frequency to which to display spectrogram. Default is 0.
fps
Set upper frame rate limit. Default is "auto", unlimited.
Draw time and frequency axes and legends. Default is disabled.
Set dynamic range used to calculate intensity color values. Default is 120 dBFS. Allowed range is from 10 to 200.
Set upper limit of input audio samples volume in dBFS. Default is 0 dBFS. Allowed range is from -100 to 100.
Set opacity strength when using pixel format output with alpha component.

The usage is very similar to the showwaves filter; see the examples in that section.

Examples

  • Large window with logarithmic color scaling:
    showspectrum=s=1280x480:scale=log
    
  • Complete example for a colored and sliding spectrum per channel using ffplay:
    ffplay -f lavfi 'amovie=input.mp3, asplit [a][out1];
                 [a] showspectrum=mode=separate:color=intensity:slide=1:scale=cbrt [out0]'
    

Convert input audio to a single video frame, representing the audio frequency spectrum.

The filter accepts the following options:

Specify the video size for the output. For the syntax of this option, check the "Video size" section in the ffmpeg-utils manual. Default value is "4096x2048".
Specify display mode.

It accepts the following values:

all channels are displayed in the same row
all channels are displayed in separate rows

Default value is combined.

Specify display color mode.

It accepts the following values:

each channel is displayed in a separate color
each channel is displayed using the same color scheme
each channel is displayed using the rainbow color scheme
each channel is displayed using the moreland color scheme
each channel is displayed using the nebulae color scheme
each channel is displayed using the fire color scheme
each channel is displayed using the fiery color scheme
each channel is displayed using the fruit color scheme
each channel is displayed using the cool color scheme
each channel is displayed using the magma color scheme
each channel is displayed using the green color scheme
each channel is displayed using the viridis color scheme
each channel is displayed using the plasma color scheme
each channel is displayed using the cividis color scheme
each channel is displayed using the terrain color scheme

Default value is intensity.

scale
Specify scale used for calculating intensity color values.

It accepts the following values:

linear
square root, default
cubic root
logarithmic
4thrt
4th root
5thrt
5th root

Default value is log.

Specify frequency scale.

It accepts the following values:

linear
logarithmic

Default value is lin.

Set saturation modifier for displayed colors. Negative values provide alternative color scheme. 0 is no saturation at all. Saturation must be in [-10.0, 10.0] range. Default value is 1.
Set window function.

It accepts the following values:

Default value is "hann".

Set orientation of time vs frequency axis. Can be "vertical" or "horizontal". Default is "vertical".
Set scale gain for calculating intensity color values. Default value is 1.
Draw time and frequency axes and legends. Default is enabled.
Set color rotation, must be in [-1.0, 1.0] range. Default value is 0.
Set start frequency from which to display spectrogram. Default is 0.
Set stop frequency to which to display spectrogram. Default is 0.
Set dynamic range used to calculate intensity color values. Default is 120 dBFS. Allowed range is from 10 to 200.
Set upper limit of input audio samples volume in dBFS. Default is 0 dBFS. Allowed range is from -100 to 100.
Set opacity strength when using pixel format output with alpha component.

Examples

Extract an audio spectrogram of a whole audio track in a 1024x1024 picture using ffmpeg:
ffmpeg -i audio.flac -lavfi showspectrumpic=s=1024x1024 spectrogram.png

Convert input audio volume to a video output.

The filter accepts the following options:

Set video rate.
Set border width, allowed range is [0, 5]. Default is 1.
Set channel width, allowed range is [80, 8192]. Default is 400.
Set channel height, allowed range is [1, 900]. Default is 20.
Set fade, allowed range is [0, 1]. Default is 0.95.
Set volume color expression.

The expression can use the following variables:

Current max volume of channel in dB.
Current peak.
Current channel number, starting from 0.
If set, displays channel names. Default is enabled.
If set, displays volume values. Default is enabled.
Set orientation, can be horizontal: "h" or vertical: "v", default is "h".
Set step size, allowed range is [0, 5]. Default is 0, which means step is disabled.
Set background opacity, allowed range is [0, 1]. Default is 0.
Set metering mode, can be peak: "p" or rms: "r", default is "p".
Set display scale, can be linear: "lin" or log: "log", default is "lin".
In seconds. If set to a value greater than 0, display a line for the max level reached in the previous dm seconds. Default is 0 (disabled).
The color of the max line. Used when the "dm" option is set to a value greater than 0. Default is "orange".
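
For example, a minimal sketch drawing a 600x30 meter per channel, assuming the w and h option names for the channel width and height described above (the input name a.mp3 is a placeholder):

ffplay -f lavfi 'amovie=a.mp3, asplit [a][out1]; [a] showvolume=w=600:h=30 [out0]'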

Convert input audio to a video output representing the sample waveforms.

The filter accepts the following options:

Specify the video size for the output. For the syntax of this option, check the "Video size" section in the ffmpeg-utils manual. Default value is "600x240".
Set display mode.

Available values are:

Draw a point for each sample.
Draw a vertical line for each sample.
Draw a point for each sample and a line between them.
Draw a centered vertical line for each sample.

Default value is "point".

Set the number of samples which are printed on the same column. A larger value will decrease the frame rate. Must be a positive integer. This option can be set only if the value for rate is not explicitly specified.
Set the (approximate) output frame rate. This is done by setting the option n. Default value is "25".
Set if channels should be drawn separately or overlap. Default value is 0.
Set the colors, separated by '|', which are going to be used for drawing each channel.
scale
Set amplitude scale.

Available values are:

Linear.
Logarithmic.
Square root.
Cubic root.

Default is linear.

Set the draw mode. This is mostly useful for high values of n.

Available values are:

scale
Scale pixel values for each drawn sample.
Draw every sample directly.

Default value is "scale".

Examples

  • Output the input file audio and the corresponding video representation at the same time:
    amovie=a.mp3,asplit[out0],showwaves[out1]
    
  • Create a synthetic signal and show it with showwaves, forcing a frame rate of 30 frames per second:
    aevalsrc=sin(1*2*PI*t)*sin(880*2*PI*t):cos(2*PI*200*t),asplit[out0],showwaves=r=30[out1]
    

Convert input audio to a single video frame representing the sample waveforms.

The filter accepts the following options:

Specify the video size for the output. For the syntax of this option, check the "Video size" section in the ffmpeg-utils manual. Default value is "600x240".
Set if channels should be drawn separately or overlap. Default value is 0.
Set the colors, separated by '|', which are going to be used for drawing each channel.
scale
Set amplitude scale.

Available values are:

Linear.
Logarithmic.
Square root.
Cubic root.

Default is linear.

Set the draw mode.

Available values are:

scale
Scale pixel values for each drawn sample.
Draw every sample directly.

Default value is "scale".

Set the filter mode.

Available values are:

Use average sample values for each drawn sample.
Use peak sample values for each drawn sample.

Default value is "average".

Examples

Extract a channel split representation of the wave form of a whole audio track in a 1024x800 picture using ffmpeg:
ffmpeg -i audio.flac -lavfi showwavespic=split_channels=1:s=1024x800 waveform.png

Delete frame side data, or select frames based on it.

This filter accepts the following options:

Set mode of operation of the filter.

Can be one of the following:

Select every frame with side data of "type".
Delete side data of "type". If "type" is not set, delete all side data in the frame.
Set side data type used with all modes. Must be set for "select" mode. For the list of frame side data types, refer to the "AVFrameSideDataType" enum in libavutil/frame.h. For example, to choose "AV_FRAME_DATA_PANSCAN" side data, you must specify "PANSCAN".
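
For example, a sketch deleting all side data from a video stream (INPUT and OUTPUT are placeholders; the video variant of the filter is assumed to be named sidedata):

ffmpeg -i INPUT -vf sidedata=mode=delete OUTPUT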

Synthesize audio from two input video spectrums; the first input stream represents magnitude across time and the second represents phase across time. The filter transforms the spectrum from the frequency domain, as displayed in the videos, back to the time domain, as presented in the audio output.

This filter is primarily intended for reversing processed showspectrum filter outputs, but it can synthesize sound from other spectrograms too. In that case, however, results will be poor if phase data is not available, because the phase then has to be recreated, usually from random noise. For best results, use gray-only output ("channel" color mode in the showspectrum filter), "log" scale for the magnitude video and "lin" scale for the phase video. To produce the phase for the second video, use the "data" option. Input videos should generally use the "fullframe" slide mode, as that saves the resources needed for decoding the video.

The filter accepts the following options:

Specify the sample rate of the output audio; the sample rate of the audio from which the spectrum was generated may differ.
Set number of channels represented in input video spectrums.
scale
Set scale which was used when generating magnitude input spectrum. Can be "lin" or "log". Default is "log".
Set slide which was used when generating inputs spectrums. Can be "replace", "scroll", "fullframe" or "rscroll". Default is "fullframe".
Set window function used for resynthesis.
Set window overlap. In range "[0, 1]". Default is 1, which means optimal overlap for selected window function will be picked.
Set orientation of input videos. Can be "vertical" or "horizontal". Default is "vertical".

Examples

First create magnitude and phase videos from audio, assuming audio is stereo with 44100 sample rate, then resynthesize videos back to audio with spectrumsynth:
ffmpeg -i input.flac -lavfi showspectrum=mode=separate:scale=log:overlap=0.875:color=channel:slide=fullframe:data=magnitude -an -c:v rawvideo magnitude.nut
ffmpeg -i input.flac -lavfi showspectrum=mode=separate:scale=lin:overlap=0.875:color=channel:slide=fullframe:data=phase -an -c:v rawvideo phase.nut
ffmpeg -i magnitude.nut -i phase.nut -lavfi spectrumsynth=channels=2:sample_rate=44100:win_func=hann:overlap=0.875:slide=fullframe output.flac

Split input into several identical outputs.

"asplit" works with audio input, "split" with video.

The filter accepts a single parameter which specifies the number of outputs. If unspecified, it defaults to 2.

Examples

  • Create two separate outputs from the same input:
    [in] split [out0][out1]
    
  • To create 3 or more outputs, you need to specify the number of outputs, like in:
    [in] asplit=3 [out0][out1][out2]
    
  • Create two separate outputs from the same input, one cropped and one padded:
    [in] split [splitout1][splitout2];
    [splitout1] crop=100:100:0:0    [cropout];
    [splitout2] pad=200:200:100:100 [padout];
    
  • Create 5 copies of the input audio with ffmpeg:
    ffmpeg -i INPUT -filter_complex asplit=5 OUTPUT
    

Receive commands sent through a libzmq client, and forward them to filters in the filtergraph.

"zmq" and "azmq" work as a pass-through filters. "zmq" must be inserted between two video filters, "azmq" between two audio filters. Both are capable to send messages to any filter type.

To enable these filters you need to install the libzmq library and headers and configure FFmpeg with "--enable-libzmq".

For more information about libzmq see: http://www.zeromq.org/

The "zmq" and "azmq" filters work as a libzmq server, which receives messages sent through a network interface defined by the bind_address (or the abbreviation "b") option. Default value of this option is tcp://localhost:5555. You may want to alter this value to your needs, but do not forget to escape any ':' signs (see filtergraph escaping).

The received message must be in the form:

<TARGET> <COMMAND> [<ARG>]

TARGET specifies the target of the command, usually the name of the filter class or a specific filter instance name. The default filter instance name uses the pattern Parsed_<filter_name>_<index>, but you can override this by using the filter_name@id syntax (see Filtergraph syntax).

COMMAND specifies the name of the command for the target filter.

ARG is optional and specifies the optional argument list for the given COMMAND.

Upon reception, the message is processed and the corresponding command is injected into the filtergraph. Depending on the result, the filter will send a reply to the client, adopting the format:

<ERROR_CODE> <ERROR_REASON>
<MESSAGE>

MESSAGE is optional.

Examples

Look at tools/zmqsend for an example of a zmq client which can be used to send commands processed by these filters.

Consider the following filtergraph generated by ffplay. In this example the last overlay filter has an instance name. All other filters will have default instance names.

ffplay -dumpgraph 1 -f lavfi "
color=s=100x100:c=red  [l];
color=s=100x100:c=blue [r];
nullsrc=s=200x100, zmq [bg];
[bg][l]   overlay     [bg+l];
[bg+l][r] overlay@my=x=100 "

To change the color of the left side of the video, the following command can be used:

echo Parsed_color_0 c yellow | tools/zmqsend

To change the right side:

echo Parsed_color_1 c pink | tools/zmqsend

To change the position of the right side:

echo overlay@my x 150 | tools/zmqsend

Below is a description of the currently available multimedia sources.

This is the same as movie source, except it selects an audio stream by default.

Generate an Audio/Video Sync Test.

The generated stream periodically shows a flash video frame and emits an audio beep. It is useful for inspecting A/V sync issues.

It accepts the following options:

Set output video size. Default value is "hd720".
Set output video frame rate. Default value is 30.
Set output audio sample rate. Default value is 44100.
Set output audio beep amplitude. Default value is 0.7.
Set output audio beep period in seconds. Default value is 3.
Set output video flash delay in number of frames. Default value is 0.
Enable cycling of video delays, by default is disabled.
Set stream output duration. By default duration is unlimited.
Set foreground/background/additional color.

Commands

This source supports some of the above options as commands.
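
For example, a minimal sketch playing a 10-second test (the option name duration is an assumption for the duration option described above; the source's two output pads carry the video and the audio):

ffplay -f lavfi 'avsynctest=duration=10 [out0][out1]'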

Read audio and/or video stream(s) from a movie container.

It accepts the following parameters:

The name of the resource to read (not necessarily a file; it can also be a device or a stream accessed through some protocol).
Specifies the format assumed for the movie to read, and can be either the name of a container or an input device. If not specified, the format is guessed from movie_name or by probing.
Specifies the seek point in seconds. The frames will be output starting from this seek point. The parameter is evaluated with "av_strtod", so the numerical value may be suffixed by an IS postfix. The default value is "0".
Specifies the streams to read. Several streams can be specified, separated by "+". The source will then have as many outputs, in the same order. The syntax is explained in the "Stream specifiers" section in the ffmpeg manual. Two special names, "dv" and "da" specify respectively the default (best suited) video and audio stream. Default is "dv", or "da" if the filter is called as "amovie".
Specifies the index of the video stream to read. If the value is -1, the most suitable video stream will be automatically selected. The default value is "-1". Deprecated. If the filter is called "amovie", it will select audio instead of video.
loop
Specifies how many times to read the stream in sequence. If the value is 0, the stream will be looped infinitely. Default value is "1".

Note that when the movie is looped, the source timestamps are not changed, so it will generate non-monotonically increasing timestamps.

Specifies the time difference between frames above which the point is considered a timestamp discontinuity which is removed by adjusting the later timestamps.
Specifies the number of threads for decoding.
Specify format options for the opened file. Format options can be specified as a list of key=value pairs separated by ':'. The following example shows how to add protocol_whitelist and protocol_blacklist options:
ffplay -f lavfi
"movie=filename='1.sdp':format_opts='protocol_whitelist=file,rtp,udp\:protocol_blacklist=http'"

It allows overlaying a second video on top of the main input of a filtergraph, as shown in this graph:

input -----------> deltapts0 --> overlay --> output
                                    ^
                                    |
movie --> scale--> deltapts1 -------+

Examples

  • Skip 3.2 seconds from the start of the AVI file in.avi, and overlay it on top of the input labelled "in":
    movie=in.avi:seek_point=3.2, scale=180:-1, setpts=PTS-STARTPTS [over];
    [in] setpts=PTS-STARTPTS [main];
    [main][over] overlay=16:16 [out]
    
  • Read from a video4linux2 device, and overlay it on top of the input labelled "in":
    movie=/dev/video0:f=video4linux2, scale=180:-1, setpts=PTS-STARTPTS [over];
    [in] setpts=PTS-STARTPTS [main];
    [main][over] overlay=16:16 [out]
    
  • Read the first video stream and the audio stream with id 0x81 from dvd.vob; the video is connected to the pad named "video" and the audio is connected to the pad named "audio":
    movie=dvd.vob:s=v:0+#0x81 [video] [audio]
    

Commands

Both movie and amovie support the following commands:

Perform seek using "av_seek_frame". The syntax is: seek stream_index|timestamp|flags
  • stream_index: If stream_index is -1, a default stream is selected, and timestamp is automatically converted from AV_TIME_BASE units to the stream specific time_base.
  • timestamp: Timestamp in AVStream.time_base units or, if no stream is specified, in AV_TIME_BASE units.
  • flags: Flags which select direction and seeking mode.
Get movie duration in AV_TIME_BASE units.
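
For example, a sketch sending a backward seek to the start of the movie through the zmq filter described above (the instance name movie@m and the flag value 1, AVSEEK_FLAG_BACKWARD, are assumptions):

echo 'movie@m seek -1|0|1' | tools/zmqsend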

FFmpeg can be hooked up with a number of external libraries to add support for more formats. None of them are used by default, their use has to be explicitly requested by passing the appropriate flags to ./configure.
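
For example, a hypothetical configure invocation enabling a few of the libraries described below (assuming they are installed on the build system; x264 and x265 additionally require the GPL license upgrade):

./configure --enable-gpl --enable-libx264 --enable-libx265 --enable-libmp3lame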

FFmpeg can make use of the AOM library for AV1 decoding and encoding.

Go to http://aomedia.org/ and follow the instructions for installing the library. Then pass "--enable-libaom" to configure to enable it.

FFmpeg can use the AMD Advanced Media Framework library for accelerated H.264 and HEVC (Windows only) encoding on hardware with Video Coding Engine (VCE).

To enable support, you must obtain the AMF framework header files (version 1.4.9 or later) from https://github.com/GPUOpen-LibrariesAndSDKs/AMF.git.

Create an "AMF/" directory in the system include path. Copy the contents of "AMF/amf/public/include/" into that directory. Then configure FFmpeg with "--enable-amf".

Initialization of the AMF encoder is attempted in the following order: 1) through DX11 (Windows only), 2) through DX9 (Windows only), 3) through Vulkan.

To use the H.264 (AMD VCE) encoder on Linux, amdgpu-pro version 19.20+ and the amf-amdgpu-pro package are required (the amdgpu-pro archive contains the package, but does not install it automatically).

This driver can be installed using the amdgpu-pro-install script in the official AMD driver archive.

FFmpeg can read AviSynth scripts as input. To enable support, pass "--enable-avisynth" to configure after installing the headers provided by https://github.com/AviSynth/AviSynthPlus. AviSynth+ can be configured to install only the headers by either passing "-DHEADERS_ONLY:bool=on" to the normal CMake-based build system, or by using the supplied "GNUmakefile".

For Windows, supported AviSynth variants are http://avisynth.nl for 32-bit builds and http://avisynth.nl/index.php/AviSynth+ for 32-bit and 64-bit builds.

For Linux, macOS, and BSD, the only supported AviSynth variant is https://github.com/AviSynth/AviSynthPlus, starting with version 3.5.

In 2016, AviSynth+ added support for building with GCC. However, due to the eccentricities of Windows' calling conventions, 32-bit GCC builds of AviSynth+ are not compatible with typical 32-bit builds of FFmpeg.

By default, FFmpeg assumes compatibility with 32-bit MSVC builds of AviSynth+ since that is the most widely-used and entrenched build configuration. Users can override this and enable support for 32-bit GCC builds of AviSynth+ by passing "-DAVSC_WIN32_GCC32" to "--extra-cflags" when configuring FFmpeg.

64-bit builds of FFmpeg are not affected, and can use either MSVC or GCC builds of AviSynth+ without any special flags.

AviSynth(+) is loaded dynamically. Distributors can build FFmpeg with "--enable-avisynth", and the binaries will work regardless of the end user having AviSynth installed. If/when an end user would like to use AviSynth scripts, then they can install AviSynth(+) and FFmpeg will be able to find and use it to open scripts.

FFmpeg can make use of the Chromaprint library for generating audio fingerprints. Pass "--enable-chromaprint" to configure to enable it. See https://acoustid.org/chromaprint.

FFmpeg can make use of the codec2 library for codec2 decoding and encoding. There is currently no native decoder, so libcodec2 must be used for decoding.

Go to http://freedv.org/, download "Codec 2 source archive". Build and install using CMake. Debian users can install the libcodec2-dev package instead. Once libcodec2 is installed you can pass "--enable-libcodec2" to configure to enable it.

The easiest way to use codec2 is with .c2 files, since they contain the mode information required for decoding. To encode such a file, use a .c2 file extension and give the libcodec2 encoder the -mode option: "ffmpeg -i input.wav -mode 700C output.c2". Playback is as simple as "ffplay output.c2". For a list of supported modes, run "ffmpeg -h encoder=libcodec2". Raw codec2 files are also supported. To make sense of them the mode in use needs to be specified as a format option: "ffmpeg -f codec2raw -mode 1300 -i input.raw output.wav".

FFmpeg can make use of the dav1d library for AV1 video decoding.

Go to https://code.videolan.org/videolan/dav1d and follow the instructions for installing the library. Then pass "--enable-libdav1d" to configure to enable it.

FFmpeg can make use of the davs2 library for AVS2-P2/IEEE1857.4 video decoding.

Go to https://github.com/pkuvcl/davs2 and follow the instructions for installing the library. Then pass "--enable-libdavs2" to configure to enable it.

libdavs2 is under the GNU General Public License Version 2 or later (see http://www.gnu.org/licenses/old-licenses/gpl-2.0.html for details); you must upgrade FFmpeg's license to GPL in order to use it.

FFmpeg can make use of the uavs3d library for AVS3-P2/IEEE1857.10 video decoding.

Go to https://github.com/uavs3/uavs3d and follow the instructions for installing the library. Then pass "--enable-libuavs3d" to configure to enable it.

FFmpeg can make use of the Game Music Emu library to read audio from supported video game music file formats. Pass "--enable-libgme" to configure to enable it. See https://bitbucket.org/mpyne/game-music-emu/overview.

FFmpeg can use Intel QuickSync Video (QSV) for accelerated decoding and encoding of multiple codecs. To use QSV, FFmpeg must be linked against the "libmfx" dispatcher, which loads the actual decoding libraries.

The dispatcher is open source and can be downloaded from https://github.com/lu-zero/mfx_dispatch.git. FFmpeg needs to be configured with the "--enable-libmfx" option and "pkg-config" needs to be able to locate the dispatcher's ".pc" files.

FFmpeg can make use of the Kvazaar library for HEVC encoding.

Go to https://github.com/ultravideo/kvazaar and follow the instructions for installing the library. Then pass "--enable-libkvazaar" to configure to enable it.

FFmpeg can make use of the LAME library for MP3 encoding.

Go to http://lame.sourceforge.net/ and follow the instructions for installing the library. Then pass "--enable-libmp3lame" to configure to enable it.

FFmpeg can make use of the liblcevc_dec library for LCEVC enhancement layer decoding on supported bitstreams.

Go to https://github.com/v-novaltd/LCEVCdec and follow the instructions for installing the library. Then pass "--enable-liblcevc-dec" to configure to enable it.

LCEVCdec is under the BSD-3-Clause-Clear License.

iLBC is a narrowband speech codec that has been made freely available by Google as part of the WebRTC project. libilbc is a packaging friendly copy of the iLBC codec. FFmpeg can make use of the libilbc library for iLBC decoding and encoding.

Go to https://github.com/TimothyGu/libilbc and follow the instructions for installing the library. Then pass "--enable-libilbc" to configure to enable it.

JPEG XL is an image format intended to fully replace legacy JPEG for an extended period of life. See https://jpegxl.info/ for more information, and see https://github.com/libjxl/libjxl for the library source. You can pass "--enable-libjxl" to configure in order to enable the libjxl wrapper.

FFmpeg can make use of the libvpx library for VP8/VP9 decoding and encoding.

Go to http://www.webmproject.org/ and follow the instructions for installing the library. Then pass "--enable-libvpx" to configure to enable it.

FFmpeg can make use of this library, originating in Modplug-XMMS, to read from MOD-like music files. See https://github.com/Konstanty/libmodplug. Pass "--enable-libmodplug" to configure to enable it.

Spun off from the Google Android sources, the OpenCORE, VisualOn and Fraunhofer libraries provide encoders for a number of audio codecs.

The OpenCORE and VisualOn libraries are under the Apache License 2.0 (see http://www.apache.org/licenses/LICENSE-2.0 for details), which is incompatible with LGPL version 2.1 and GPL version 2. You have to upgrade FFmpeg's license to LGPL version 3 (or, if you have enabled GPL components, GPL version 3) by passing "--enable-version3" to configure in order to use them.

The license of the Fraunhofer AAC library is incompatible with the GPL. Therefore, for GPL builds, you have to pass "--enable-nonfree" to configure in order to use it. To the best of our knowledge, it is compatible with the LGPL.

OpenCORE AMR

FFmpeg can make use of the OpenCORE libraries for AMR-NB decoding/encoding and AMR-WB decoding.

Go to http://sourceforge.net/projects/opencore-amr/ and follow the instructions for installing the libraries. Then pass "--enable-libopencore-amrnb" and/or "--enable-libopencore-amrwb" to configure to enable them.

VisualOn AMR-WB encoder library

FFmpeg can make use of the VisualOn AMR-WBenc library for AMR-WB encoding.

Go to http://sourceforge.net/projects/opencore-amr/ and follow the instructions for installing the library. Then pass "--enable-libvo-amrwbenc" to configure to enable it.

Fraunhofer AAC library

FFmpeg can make use of the Fraunhofer AAC library for AAC decoding & encoding.

Go to http://sourceforge.net/projects/opencore-amr/ and follow the instructions for installing the library. Then pass "--enable-libfdk-aac" to configure to enable it.

LC3 library

FFmpeg can make use of the Google LC3 library for LC3 decoding & encoding.

Go to https://github.com/google/liblc3/ and follow the instructions for installing the library. Then pass "--enable-liblc3" to configure to enable it.

FFmpeg can make use of the OpenH264 library for H.264 decoding and encoding.

Go to http://www.openh264.org/ and follow the instructions for installing the library. Then pass "--enable-libopenh264" to configure to enable it.

For decoding, this library is much more limited than the built-in decoder in libavcodec; currently, this library lacks support for decoding B-frames and some other main/high profile features. (It currently only supports constrained baseline profile and CABAC.) Using it is mostly useful for testing and for taking advantage of Cisco's patent portfolio license (http://www.openh264.org/BINARY_LICENSE.txt).

FFmpeg can use the OpenJPEG libraries for decoding/encoding J2K videos. Go to http://www.openjpeg.org/ to get the libraries and follow the installation instructions. To enable using OpenJPEG in FFmpeg, pass "--enable-libopenjpeg" to ./configure.

FFmpeg can make use of rav1e (Rust AV1 Encoder) via its C bindings to encode videos. Go to https://github.com/xiph/rav1e/ and follow the instructions to build the C library. To enable using rav1e in FFmpeg, pass "--enable-librav1e" to ./configure.

FFmpeg can make use of the Scalable Video Technology for AV1 library for AV1 encoding.

Go to https://gitlab.com/AOMediaCodec/SVT-AV1/ and follow the instructions for installing the library. Then pass "--enable-libsvtav1" to configure to enable it.

FFmpeg can make use of the TwoLAME library for MP2 encoding.

Go to http://www.twolame.org/ and follow the instructions for installing the library. Then pass "--enable-libtwolame" to configure to enable it.

FFmpeg can read VapourSynth scripts as input. To enable support, pass "--enable-vapoursynth" to configure. VapourSynth is detected via "pkg-config". Versions 42 or greater are supported. See http://www.vapoursynth.com/.

Due to security concerns, VapourSynth scripts will not be autodetected, so the input format has to be forced. For ff* CLI tools, add "-f vapoursynth" before the input "-i yourscript.vpy".
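
For example (yourscript.vpy and OUTPUT are placeholders):

ffmpeg -f vapoursynth -i yourscript.vpy OUTPUT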

FFmpeg can make use of the x264 library for H.264 encoding.

Go to http://www.videolan.org/developers/x264.html and follow the instructions for installing the library. Then pass "--enable-libx264" to configure to enable it.

x264 is under the GNU General Public License Version 2 or later (see http://www.gnu.org/licenses/old-licenses/gpl-2.0.html for details); you must upgrade FFmpeg's license to GPL in order to use it.

FFmpeg can make use of the x265 library for HEVC encoding.

Go to http://x265.org/developers.html and follow the instructions for installing the library. Then pass "--enable-libx265" to configure to enable it.

x265 is under the GNU General Public License Version 2 or later (see http://www.gnu.org/licenses/old-licenses/gpl-2.0.html for details); you must upgrade FFmpeg's license to GPL in order to use it.

FFmpeg can make use of the xavs library for AVS encoding.

Go to http://xavs.sf.net/ and follow the instructions for installing the library. Then pass "--enable-libxavs" to configure to enable it.

FFmpeg can make use of the xavs2 library for AVS2-P2/IEEE1857.4 video encoding.

Go to https://github.com/pkuvcl/xavs2 and follow the instructions for installing the library. Then pass "--enable-libxavs2" to configure to enable it.

libxavs2 is under the GNU General Public License Version 2 or later (see http://www.gnu.org/licenses/old-licenses/gpl-2.0.html for details); you must upgrade FFmpeg's license to GPL in order to use it.

FFmpeg can make use of the XEVE library for EVC video encoding.

Go to https://github.com/mpeg5/xeve and follow the instructions for installing the XEVE library. Then pass "--enable-libxeve" to configure to enable it.

FFmpeg can make use of the XEVD library for EVC video decoding.

Go to https://github.com/mpeg5/xevd and follow the instructions for installing the XEVD library. Then pass "--enable-libxevd" to configure to enable it.

ZVBI is a VBI decoding library which can be used by FFmpeg to decode DVB teletext pages and DVB teletext subtitles.

Go to http://sourceforge.net/projects/zapping/ and follow the instructions for installing the library. Then pass "--enable-libzvbi" to configure to enable it.

You can use the "-formats" and "-codecs" options to have an exhaustive list.

FFmpeg supports the following file formats through the "libavformat" library:

[The table of supported file formats was garbled in conversion (the format-name column was lost, leaving only stray "@tab" support marks and notes) and is omitted here. In the original table, "X" marked support for the feature in each column (encoding / decoding).]

FFmpeg can read and write images for each frame of a video sequence. The following image formats are supported:

[The table of supported image formats was garbled in conversion and is omitted here. In the original table, "X" marked support for the feature in each column (encoding / decoding), and "E" marked support provided through an external library.]

[The table of supported video codecs was garbled in conversion and is omitted here. In the original table, "X" marked support for the feature in each column (encoding / decoding), and "E" marked support provided through an external library.]

[The table of supported audio codecs was garbled in conversion and is omitted here. In the original table, "X" marked support for the feature in each column (encoding / decoding), "E" marked support provided through an external library, and "I" marked the availability of an integer-only version (ensuring high performance on systems without hardware floating point support).]

[The table of supported subtitle formats was garbled in conversion and is omitted here; 3GPP Timed Text is among the supported formats. In the original table, "X" marked a supported feature and "E" marked support provided through an external library.]

[The table of supported protocols was garbled in conversion and is omitted here; file, Icecast and pipe are among the protocols marked as supported ("X"). In the original table, "E" marked support provided through an external library.]

ffplay(1), ffmpeg(1), ffprobe(1), ffmpeg-utils(1), ffmpeg-scaler(1), ffmpeg-resampler(1), ffmpeg-codecs(1), ffmpeg-bitstream-filters(1), ffmpeg-formats(1), ffmpeg-devices(1), ffmpeg-protocols(1), ffmpeg-filters(1)

The FFmpeg developers.

For details about the authorship, see the Git history of the project (https://git.ffmpeg.org/ffmpeg), e.g. by typing the command git log in the FFmpeg source directory, or browsing the online repository at https://git.ffmpeg.org/ffmpeg.

Maintainers for the specific components are listed in the file MAINTAINERS in the source code tree.