1\input texinfo @c -*- texinfo -*-
2
3@settitle ffmpeg Documentation
4@titlepage
5@center @titlefont{ffmpeg Documentation}
6@end titlepage
7
8@top
9
10@contents
11
12@chapter Synopsis
13
14ffmpeg [@var{global_options}] @{[@var{input_file_options}] -i @file{input_file}@} ... @{[@var{output_file_options}] @file{output_file}@} ...
15
16@chapter Description
17@c man begin DESCRIPTION
18
19@command{ffmpeg} is a very fast video and audio converter that can also grab from
20a live audio/video source. It can also convert between arbitrary sample
21rates and resize video on the fly with a high quality polyphase filter.
22
23@command{ffmpeg} reads from an arbitrary number of input "files" (which can be regular
24files, pipes, network streams, grabbing devices, etc.), specified by the
25@code{-i} option, and writes to an arbitrary number of output "files", which are
26specified by a plain output filename. Anything found on the command line which
27cannot be interpreted as an option is considered to be an output filename.
28
29Each input or output file can, in principle, contain any number of streams of
30different types (video/audio/subtitle/attachment/data). The allowed number and/or
31types of streams may be limited by the container format. Selecting which
32streams from which inputs will go into which output is either done automatically
33or with the @code{-map} option (see the Stream selection chapter).
34
35To refer to input files in options, you must use their indices (0-based). E.g.
36the first input file is @code{0}, the second is @code{1}, etc. Similarly, streams
37within a file are referred to by their indices. E.g. @code{2:3} refers to the
38fourth stream in the third input file. Also see the Stream specifiers chapter.
39
40As a general rule, options are applied to the next specified
41file. Therefore, order is important, and you can have the same
42option on the command line multiple times. Each occurrence is
43then applied to the next input or output file.
44Exceptions from this rule are the global options (e.g. verbosity level),
45which should be specified first.
46
47Do not mix input and output files -- first specify all input files, then all
48output files. Also do not mix options which belong to different files. All
49options apply ONLY to the next input or output file and are reset between files.
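
For example, in the following sketch (file names and bitrates are illustrative)
the first @code{-b:v} applies only to @file{small.avi} and the second only to
@file{big.avi}, since options are reset after each output file:
@example
ffmpeg -i input.avi -b:v 64k small.avi -b:v 256k big.avi
@end example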
50
51@itemize
52@item
53To set the video bitrate of the output file to 64 kbit/s:
54@example
55ffmpeg -i input.avi -b:v 64k -bufsize 64k output.avi
56@end example
57
58@item
59To force the frame rate of the output file to 24 fps:
60@example
61ffmpeg -i input.avi -r 24 output.avi
62@end example
63
64@item
65To force the frame rate of the input file (valid for raw formats only)
66to 1 fps and the frame rate of the output file to 24 fps:
67@example
68ffmpeg -r 1 -i input.m2v -r 24 output.avi
69@end example
70@end itemize
71
72The format option may be needed for raw input files.
73
74@c man end DESCRIPTION
75
76@chapter Detailed description
77@c man begin DETAILED DESCRIPTION
78
79The transcoding process in @command{ffmpeg} for each output can be described by
80the following diagram:
81
82@example
83 _______ ______________
84| | | |
85| input | demuxer | encoded data | decoder
86| file | ---------> | packets | -----+
87|_______| |______________| |
88 v
89 _________
90 | |
91 | decoded |
92 | frames |
93 |_________|
94 ________ ______________ |
95| | | | |
96| output | <-------- | encoded data | <----+
97| file | muxer | packets | encoder
98|________| |______________|
99
100
101@end example
102
103@command{ffmpeg} calls the libavformat library (containing demuxers) to read
104input files and get packets containing encoded data from them. When there are
105multiple input files, @command{ffmpeg} tries to keep them synchronized by
tracking the lowest timestamp on any active input stream.
107
108Encoded packets are then passed to the decoder (unless streamcopy is selected
109for the stream, see further for a description). The decoder produces
110uncompressed frames (raw video/PCM audio/...) which can be processed further by
111filtering (see next section). After filtering, the frames are passed to the
112encoder, which encodes them and outputs encoded packets. Finally those are
113passed to the muxer, which writes the encoded packets to the output file.
114
115@section Filtering
116Before encoding, @command{ffmpeg} can process raw audio and video frames using
117filters from the libavfilter library. Several chained filters form a filter
118graph. @command{ffmpeg} distinguishes between two types of filtergraphs:
119simple and complex.
120
121@subsection Simple filtergraphs
122Simple filtergraphs are those that have exactly one input and output, both of
123the same type. In the above diagram they can be represented by simply inserting
124an additional step between decoding and encoding:
125
126@example
127 _________ ______________
128| | | |
129| decoded | | encoded data |
130| frames |\ _ | packets |
131|_________| \ /||______________|
132 \ __________ /
133 simple _\|| | / encoder
134 filtergraph | filtered |/
135 | frames |
136 |__________|
137
138@end example
139
140Simple filtergraphs are configured with the per-stream @option{-filter} option
141(with @option{-vf} and @option{-af} aliases for video and audio respectively).
142A simple filtergraph for video can look for example like this:
143
144@example
145 _______ _____________ _______ ________
146| | | | | | | |
147| input | ---> | deinterlace | ---> | scale | ---> | output |
148|_______| |_____________| |_______| |________|
149
150@end example
151
Note that some filters change frame properties but not frame contents. E.g. the
@code{fps} filter changes the number of frames, but does not
touch the frame contents. Another example is the @code{setpts} filter, which
155only sets timestamps and otherwise passes the frames unchanged.
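
As a concrete sketch (file names and filter parameters are illustrative), a
simple video filtergraph combining the @code{scale} and @code{fps} filters can
be applied with:
@example
ffmpeg -i input.avi -vf 'scale=640:480,fps=25' output.avi
@end example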
156
157@subsection Complex filtergraphs
158Complex filtergraphs are those which cannot be described as simply a linear
159processing chain applied to one stream. This is the case, for example, when the graph has
more than one input and/or output, or when the output stream type is different
from the input type. They can be represented with the following diagram:
162
163@example
164 _________
165| |
166| input 0 |\ __________
167|_________| \ | |
168 \ _________ /| output 0 |
169 \ | | / |__________|
170 _________ \| complex | /
171| | | |/
172| input 1 |---->| filter |\
173|_________| | | \ __________
174 /| graph | \ | |
175 / | | \| output 1 |
176 _________ / |_________| |__________|
177| | /
178| input 2 |/
179|_________|
180
181@end example
182
183Complex filtergraphs are configured with the @option{-filter_complex} option.
184Note that this option is global, since a complex filtergraph, by its nature,
185cannot be unambiguously associated with a single stream or file.
186
187The @option{-lavfi} option is equivalent to @option{-filter_complex}.
188
189A trivial example of a complex filtergraph is the @code{overlay} filter, which
190has two video inputs and one video output, containing one video overlaid on top
191of the other. Its audio counterpart is the @code{amix} filter.
192
193@section Stream copy
194Stream copy is a mode selected by supplying the @code{copy} parameter to the
195@option{-codec} option. It makes @command{ffmpeg} omit the decoding and encoding
196step for the specified stream, so it does only demuxing and muxing. It is useful
197for changing the container format or modifying container-level metadata. The
198diagram above will, in this case, simplify to this:
199
200@example
201 _______ ______________ ________
202| | | | | |
203| input | demuxer | encoded data | muxer | output |
204| file | ---------> | packets | -------> | file |
205|_______| |______________| |________|
206
207@end example
208
209Since there is no decoding or encoding, it is very fast and there is no quality
210loss. However, it might not work in some cases because of many factors. Applying
211filters is obviously also impossible, since filters work on uncompressed data.
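
For example, to change the container from MP4 to Matroska without re-encoding
(file names are illustrative; whether this works depends on the codecs and the
target container):
@example
ffmpeg -i input.mp4 -c copy output.mkv
@end example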
212
213@c man end DETAILED DESCRIPTION
214
215@chapter Stream selection
216@c man begin STREAM SELECTION
217
218By default, @command{ffmpeg} includes only one stream of each type (video, audio, subtitle)
219present in the input files and adds them to each output file. It picks the
220"best" of each based upon the following criteria: for video, it is the stream
with the highest resolution; for audio, it is the stream with the most channels; for
222subtitles, it is the first subtitle stream. In the case where several streams of
223the same type rate equally, the stream with the lowest index is chosen.
224
225You can disable some of those defaults by using the @code{-vn/-an/-sn} options. For
226full manual control, use the @code{-map} option, which disables the defaults just
227described.
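
For example, the following sketch (file names are illustrative) ignores the
default selection and instead picks the first video stream and the second
audio stream explicitly:
@example
ffmpeg -i input.mkv -map 0:v:0 -map 0:a:1 output.mkv
@end example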
228
229@c man end STREAM SELECTION
230
231@chapter Options
232@c man begin OPTIONS
233
234@include fftools-common-opts.texi
235
236@section Main options
237
238@table @option
239
240@item -f @var{fmt} (@emph{input/output})
241Force input or output file format. The format is normally auto detected for input
242files and guessed from the file extension for output files, so this option is not
243needed in most cases.
244
245@item -i @var{filename} (@emph{input})
246input file name
247
248@item -y (@emph{global})
249Overwrite output files without asking.
250
251@item -n (@emph{global})
252Do not overwrite output files, and exit immediately if a specified
253output file already exists.
254
255@item -c[:@var{stream_specifier}] @var{codec} (@emph{input/output,per-stream})
256@itemx -codec[:@var{stream_specifier}] @var{codec} (@emph{input/output,per-stream})
257Select an encoder (when used before an output file) or a decoder (when used
258before an input file) for one or more streams. @var{codec} is the name of a
259decoder/encoder or a special value @code{copy} (output only) to indicate that
260the stream is not to be re-encoded.
261
262For example
263@example
264ffmpeg -i INPUT -map 0 -c:v libx264 -c:a copy OUTPUT
265@end example
266encodes all video streams with libx264 and copies all audio streams.
267
268For each stream, the last matching @code{c} option is applied, so
269@example
270ffmpeg -i INPUT -map 0 -c copy -c:v:1 libx264 -c:a:137 libvorbis OUTPUT
271@end example
272will copy all the streams except the second video, which will be encoded with
273libx264, and the 138th audio, which will be encoded with libvorbis.
274
275@item -t @var{duration} (@emph{input/output})
276When used as an input option (before @code{-i}), limit the @var{duration} of
277data read from the input file.
278
279When used as an output option (before an output filename), stop writing the
280output after its duration reaches @var{duration}.
281
282@var{duration} may be a number in seconds, or in @code{hh:mm:ss[.xxx]} form.
283
284-to and -t are mutually exclusive and -t has priority.
285
286@item -to @var{position} (@emph{output})
287Stop writing the output at @var{position}.
288@var{position} may be a number in seconds, or in @code{hh:mm:ss[.xxx]} form.
289
290-to and -t are mutually exclusive and -t has priority.
291
292@item -fs @var{limit_size} (@emph{output})
293Set the file size limit, expressed in bytes.
294
295@item -ss @var{position} (@emph{input/output})
296When used as an input option (before @code{-i}), seeks in this input file to
297@var{position}. Note the in most formats it is not possible to seek exactly, so
298@command{ffmpeg} will seek to the closest seek point before @var{position}.
299When transcoding and @option{-accurate_seek} is enabled (the default), this
300extra segment between the seek point and @var{position} will be decoded and
301discarded. When doing stream copy or when @option{-noaccurate_seek} is used, it
302will be preserved.
303
304When used as an output option (before an output filename), decodes but discards
305input until the timestamps reach @var{position}.
306
307@var{position} may be either in seconds or in @code{hh:mm:ss[.xxx]} form.
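
For example, the following sketch (file names are illustrative) copies roughly
30 seconds of the input starting about one minute in; with stream copy the cut
starts at the closest seek point before the requested position:
@example
ffmpeg -ss 00:01:00 -i input.mp4 -t 30 -c copy clip.mp4
@end example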
308
309@item -itsoffset @var{offset} (@emph{input})
310Set the input time offset.
311
312@var{offset} must be a time duration specification,
313see @ref{time duration syntax,,the Time duration section in the ffmpeg-utils(1) manual,ffmpeg-utils}.
314
315The offset is added to the timestamps of the input files. Specifying
316a positive offset means that the corresponding streams are delayed by
317the time duration specified in @var{offset}.
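
For example, the following sketch (file names are illustrative) delays the
audio taken from the second input by half a second while copying the video
from the first input:
@example
ffmpeg -i video.mp4 -itsoffset 0.5 -i audio.wav -map 0:v -map 1:a -c:v copy output.mp4
@end example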
318
319@item -timestamp @var{date} (@emph{output})
320Set the recording timestamp in the container.
321
322@var{date} must be a time duration specification,
323see @ref{date syntax,,the Date section in the ffmpeg-utils(1) manual,ffmpeg-utils}.
324
325@item -metadata[:metadata_specifier] @var{key}=@var{value} (@emph{output,per-metadata})
326Set a metadata key/value pair.
327
328An optional @var{metadata_specifier} may be given to set metadata
329on streams or chapters. See @code{-map_metadata} documentation for
330details.
331
332This option overrides metadata set with @code{-map_metadata}. It is
333also possible to delete metadata by using an empty value.
334
335For example, for setting the title in the output file:
336@example
337ffmpeg -i in.avi -metadata title="my title" out.flv
338@end example
339
340To set the language of the first audio stream:
341@example
342ffmpeg -i INPUT -metadata:s:a:0 language=eng OUTPUT
343@end example
344
345@item -target @var{type} (@emph{output})
346Specify target file type (@code{vcd}, @code{svcd}, @code{dvd}, @code{dv},
347@code{dv50}). @var{type} may be prefixed with @code{pal-}, @code{ntsc-} or
348@code{film-} to use the corresponding standard. All the format options
349(bitrate, codecs, buffer sizes) are then set automatically. You can just type:
350
351@example
352ffmpeg -i myfile.avi -target vcd /tmp/vcd.mpg
353@end example
354
355Nevertheless you can specify additional options as long as you know
356they do not conflict with the standard, as in:
357
358@example
359ffmpeg -i myfile.avi -target vcd -bf 2 /tmp/vcd.mpg
360@end example
361
362@item -dframes @var{number} (@emph{output})
Set the number of data frames to output. This is an alias for @code{-frames:d}.
364
365@item -frames[:@var{stream_specifier}] @var{framecount} (@emph{output,per-stream})
366Stop writing to the stream after @var{framecount} frames.
367
368@item -q[:@var{stream_specifier}] @var{q} (@emph{output,per-stream})
369@itemx -qscale[:@var{stream_specifier}] @var{q} (@emph{output,per-stream})
370Use fixed quality scale (VBR). The meaning of @var{q}/@var{qscale} is
371codec-dependent.
If @var{qscale} is used without a @var{stream_specifier} then it applies only
to the video stream. This is to maintain compatibility with previous behavior,
since specifying the same codec-specific value for two different codecs
(audio and video) is generally not what is intended when no stream_specifier is
used.
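
For example, the following sketch (file name and value are illustrative)
encodes the video with a fixed quantizer scale of 3; remember that the meaning
of the value depends on the selected encoder:
@example
ffmpeg -i input.avi -q:v 3 output.avi
@end example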
377
378@anchor{filter_option}
379@item -filter[:@var{stream_specifier}] @var{filtergraph} (@emph{output,per-stream})
380Create the filtergraph specified by @var{filtergraph} and use it to
381filter the stream.
382
383@var{filtergraph} is a description of the filtergraph to apply to
384the stream, and must have a single input and a single output of the
same type as the stream. In the filtergraph, the input is associated
386to the label @code{in}, and the output to the label @code{out}. See
387the ffmpeg-filters manual for more information about the filtergraph
388syntax.
389
390See the @ref{filter_complex_option,,-filter_complex option} if you
391want to create filtergraphs with multiple inputs and/or outputs.
392
393@item -filter_script[:@var{stream_specifier}] @var{filename} (@emph{output,per-stream})
This option is similar to @option{-filter}, the only difference being that its
395argument is the name of the file from which a filtergraph description is to be
396read.
397
398@item -pre[:@var{stream_specifier}] @var{preset_name} (@emph{output,per-stream})
399Specify the preset for matching stream(s).
400
401@item -stats (@emph{global})
402Print encoding progress/statistics. It is on by default, to explicitly
403disable it you need to specify @code{-nostats}.
404
405@item -progress @var{url} (@emph{global})
406Send program-friendly progress information to @var{url}.
407
408Progress information is written approximately every second and at the end of
409the encoding process. It is made of "@var{key}=@var{value}" lines. @var{key}
410consists of only alphanumeric characters. The last key of a sequence of
411progress information is always "progress".
412
413@item -stdin
414Enable interaction on standard input. On by default unless standard input is
415used as an input. To explicitly disable interaction you need to specify
416@code{-nostdin}.
417
418Disabling interaction on standard input is useful, for example, if
419ffmpeg is in the background process group. Roughly the same result can
420be achieved with @code{ffmpeg ... < /dev/null} but it requires a
421shell.
422
423@item -debug_ts (@emph{global})
424Print timestamp information. It is off by default. This option is
425mostly useful for testing and debugging purposes, and the output
426format may change from one version to another, so it should not be
427employed by portable scripts.
428
429See also the option @code{-fdebug ts}.
430
431@item -attach @var{filename} (@emph{output})
432Add an attachment to the output file. This is supported by a few formats
like Matroska, e.g. for fonts used in rendering subtitles. Attachments
434are implemented as a specific type of stream, so this option will add
435a new stream to the file. It is then possible to use per-stream options
436on this stream in the usual way. Attachment streams created with this
437option will be created after all the other streams (i.e. those created
438with @code{-map} or automatic mappings).
439
440Note that for Matroska you also have to set the mimetype metadata tag:
441@example
442ffmpeg -i INPUT -attach DejaVuSans.ttf -metadata:s:2 mimetype=application/x-truetype-font out.mkv
443@end example
444(assuming that the attachment stream will be third in the output file).
445
446@item -dump_attachment[:@var{stream_specifier}] @var{filename} (@emph{input,per-stream})
447Extract the matching attachment stream into a file named @var{filename}. If
448@var{filename} is empty, then the value of the @code{filename} metadata tag
449will be used.
450
451E.g. to extract the first attachment to a file named 'out.ttf':
452@example
453ffmpeg -dump_attachment:t:0 out.ttf -i INPUT
454@end example
455To extract all attachments to files determined by the @code{filename} tag:
456@example
457ffmpeg -dump_attachment:t "" -i INPUT
458@end example
459
460Technical note -- attachments are implemented as codec extradata, so this
461option can actually be used to extract extradata from any stream, not just
462attachments.
463
464@end table
465
466@section Video Options
467
468@table @option
469@item -vframes @var{number} (@emph{output})
Set the number of video frames to output. This is an alias for @code{-frames:v}.
471@item -r[:@var{stream_specifier}] @var{fps} (@emph{input/output,per-stream})
472Set frame rate (Hz value, fraction or abbreviation).
473
474As an input option, ignore any timestamps stored in the file and instead
475generate timestamps assuming constant frame rate @var{fps}.
476This is not the same as the @option{-framerate} option used for some input formats
477like image2 or v4l2 (it used to be the same in older versions of FFmpeg).
478If in doubt use @option{-framerate} instead of the input option @option{-r}.
479
480As an output option, duplicate or drop input frames to achieve constant output
481frame rate @var{fps}.
482
483@item -s[:@var{stream_specifier}] @var{size} (@emph{input/output,per-stream})
484Set frame size.
485
486As an input option, this is a shortcut for the @option{video_size} private
487option, recognized by some demuxers for which the frame size is either not
488stored in the file or is configurable -- e.g. raw video or video grabbers.
489
490As an output option, this inserts the @code{scale} video filter to the
491@emph{end} of the corresponding filtergraph. Please use the @code{scale} filter
492directly to insert it at the beginning or some other place.
493
494The format is @samp{wxh} (default - same as source).
495
496@item -aspect[:@var{stream_specifier}] @var{aspect} (@emph{output,per-stream})
497Set the video display aspect ratio specified by @var{aspect}.
498
499@var{aspect} can be a floating point number string, or a string of the
500form @var{num}:@var{den}, where @var{num} and @var{den} are the
501numerator and denominator of the aspect ratio. For example "4:3",
502"16:9", "1.3333", and "1.7777" are valid argument values.
503
504If used together with @option{-vcodec copy}, it will affect the aspect ratio
505stored at container level, but not the aspect ratio stored in encoded
506frames, if it exists.
507
508@item -vn (@emph{output})
509Disable video recording.
510
511@item -vcodec @var{codec} (@emph{output})
512Set the video codec. This is an alias for @code{-codec:v}.
513
514@item -pass[:@var{stream_specifier}] @var{n} (@emph{output,per-stream})
515Select the pass number (1 or 2). It is used to do two-pass
516video encoding. The statistics of the video are recorded in the first
517pass into a log file (see also the option -passlogfile),
518and in the second pass that log file is used to generate the video
519at the exact requested bitrate.
520On pass 1, you may just deactivate audio and set output to null,
521examples for Windows and Unix:
522@example
523ffmpeg -i foo.mov -c:v libxvid -pass 1 -an -f rawvideo -y NUL
524ffmpeg -i foo.mov -c:v libxvid -pass 1 -an -f rawvideo -y /dev/null
525@end example
526
527@item -passlogfile[:@var{stream_specifier}] @var{prefix} (@emph{output,per-stream})
Set the two-pass log file name prefix to @var{prefix}; the default file name
prefix is ``ffmpeg2pass''. The complete file name will be
@file{PREFIX-N.log}, where N is a number specific to the output
stream.
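
A complete two-pass run might look like the following sketch (encoder, bitrate
and file names are illustrative; the chosen encoder must support two-pass
encoding):
@example
ffmpeg -i input.mov -c:v libx264 -b:v 1M -pass 1 -passlogfile mypass -an -f rawvideo -y /dev/null
ffmpeg -i input.mov -c:v libx264 -b:v 1M -pass 2 -passlogfile mypass output.mp4
@end example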
532
533@item -vf @var{filtergraph} (@emph{output})
534Create the filtergraph specified by @var{filtergraph} and use it to
535filter the stream.
536
537This is an alias for @code{-filter:v}, see the @ref{filter_option,,-filter option}.
538@end table
539
540@section Advanced Video options
541
542@table @option
543@item -pix_fmt[:@var{stream_specifier}] @var{format} (@emph{input/output,per-stream})
544Set pixel format. Use @code{-pix_fmts} to show all the supported
545pixel formats.
If the requested pixel format cannot be selected, ffmpeg will print a
warning and select the best pixel format supported by the encoder.
548If @var{pix_fmt} is prefixed by a @code{+}, ffmpeg will exit with an error
if the requested pixel format cannot be selected, and automatic conversions
550inside filtergraphs are disabled.
551If @var{pix_fmt} is a single @code{+}, ffmpeg selects the same pixel format
552as the input (or graph output) and automatic conversions are disabled.
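
For example, the following sketch (file names are illustrative) requests the
widely supported @code{yuv420p} pixel format for the output video:
@example
ffmpeg -i input.mov -pix_fmt yuv420p output.mp4
@end example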
553
554@item -sws_flags @var{flags} (@emph{input/output})
555Set SwScaler flags.
556@item -vdt @var{n}
557Discard threshold.
558
559@item -rc_override[:@var{stream_specifier}] @var{override} (@emph{output,per-stream})
560Rate control override for specific intervals, formatted as "int,int,int"
list separated with slashes. The first two values are the beginning and
end frame numbers, the last one is the quantizer to use if positive, or the
quality factor if negative.
564
565@item -ilme
566Force interlacing support in encoder (MPEG-2 and MPEG-4 only).
567Use this option if your input file is interlaced and you want
568to keep the interlaced format for minimum losses.
569The alternative is to deinterlace the input stream with
570@option{-deinterlace}, but deinterlacing introduces losses.
571@item -psnr
572Calculate PSNR of compressed frames.
573@item -vstats
574Dump video coding statistics to @file{vstats_HHMMSS.log}.
575@item -vstats_file @var{file}
576Dump video coding statistics to @var{file}.
577@item -top[:@var{stream_specifier}] @var{n} (@emph{output,per-stream})
578top=1/bottom=0/auto=-1 field first
579@item -dc @var{precision}
580Intra_dc_precision.
581@item -vtag @var{fourcc/tag} (@emph{output})
582Force video tag/fourcc. This is an alias for @code{-tag:v}.
583@item -qphist (@emph{global})
584Show QP histogram
585@item -vbsf @var{bitstream_filter}
Deprecated, see -bsf
587
588@item -force_key_frames[:@var{stream_specifier}] @var{time}[,@var{time}...] (@emph{output,per-stream})
589@item -force_key_frames[:@var{stream_specifier}] expr:@var{expr} (@emph{output,per-stream})
590Force key frames at the specified timestamps, more precisely at the first
591frames after each specified time.
592
593If the argument is prefixed with @code{expr:}, the string @var{expr}
594is interpreted like an expression and is evaluated for each frame. A
595key frame is forced in case the evaluation is non-zero.
596
597If one of the times is "@code{chapters}[@var{delta}]", it is expanded into
598the time of the beginning of all chapters in the file, shifted by
599@var{delta}, expressed as a time in seconds.
600This option can be useful to ensure that a seek point is present at a
601chapter mark or any other designated place in the output file.
602
603For example, to insert a key frame at 5 minutes, plus key frames 0.1 second
604before the beginning of every chapter:
605@example
606-force_key_frames 0:05:00,chapters-0.1
607@end example
608
609The expression in @var{expr} can contain the following constants:
610@table @option
@item n
the number of the currently processed frame, starting from 0
@item n_forced
the number of forced frames
@item prev_forced_n
the number of the previous forced frame; it is @code{NAN} when no
keyframe has been forced yet
@item prev_forced_t
the time of the previous forced frame; it is @code{NAN} when no
keyframe has been forced yet
@item t
the time of the currently processed frame
623@end table
624
For example, to force a key frame every 5 seconds, you can specify:
626@example
627-force_key_frames expr:gte(t,n_forced*5)
628@end example
629
630To force a key frame 5 seconds after the time of the last forced one,
631starting from second 13:
632@example
633-force_key_frames expr:if(isnan(prev_forced_t),gte(t,13),gte(t,prev_forced_t+5))
634@end example
635
636Note that forcing too many keyframes is very harmful for the lookahead
637algorithms of certain encoders: using fixed-GOP options or similar
638would be more efficient.
639
640@item -copyinkf[:@var{stream_specifier}] (@emph{output,per-stream})
641When doing stream copy, copy also non-key frames found at the
642beginning.
643
644@item -hwaccel[:@var{stream_specifier}] @var{hwaccel} (@emph{input,per-stream})
645Use hardware acceleration to decode the matching stream(s). The allowed values
646of @var{hwaccel} are:
647@table @option
648@item none
649Do not use any hardware acceleration (the default).
650
651@item auto
652Automatically select the hardware acceleration method.
653
654@item vda
655Use Apple VDA hardware acceleration.
656
657@item vdpau
658Use VDPAU (Video Decode and Presentation API for Unix) hardware acceleration.
659
660@item dxva2
661Use DXVA2 (DirectX Video Acceleration) hardware acceleration.
662@end table
663
664This option has no effect if the selected hwaccel is not available or not
665supported by the chosen decoder.
666
667Note that most acceleration methods are intended for playback and will not be
668faster than software decoding on modern CPUs. Additionally, @command{ffmpeg}
669will usually need to copy the decoded frames from the GPU memory into the system
670memory, resulting in further performance loss. This option is thus mainly
671useful for testing.
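
A minimal sketch, assuming one of the listed acceleration methods is available
in your build and supported by the decoder (file names are illustrative):
@example
ffmpeg -hwaccel auto -i input.mp4 output.mkv
@end example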
672
673@item -hwaccel_device[:@var{stream_specifier}] @var{hwaccel_device} (@emph{input,per-stream})
674Select a device to use for hardware acceleration.
675
676This option only makes sense when the @option{-hwaccel} option is also
677specified. Its exact meaning depends on the specific hardware acceleration
678method chosen.
679
680@table @option
681@item vdpau
682For VDPAU, this option specifies the X11 display/screen to use. If this option
683is not specified, the value of the @var{DISPLAY} environment variable is used
684
685@item dxva2
686For DXVA2, this option should contain the number of the display adapter to use.
687If this option is not specified, the default adapter is used.
688@end table
689@end table
690
691@section Audio Options
692
693@table @option
694@item -aframes @var{number} (@emph{output})
Set the number of audio frames to output. This is an alias for @code{-frames:a}.
696@item -ar[:@var{stream_specifier}] @var{freq} (@emph{input/output,per-stream})
697Set the audio sampling frequency. For output streams it is set by
698default to the frequency of the corresponding input stream. For input
699streams this option only makes sense for audio grabbing devices and raw
700demuxers and is mapped to the corresponding demuxer options.
701@item -aq @var{q} (@emph{output})
702Set the audio quality (codec-specific, VBR). This is an alias for -q:a.
703@item -ac[:@var{stream_specifier}] @var{channels} (@emph{input/output,per-stream})
704Set the number of audio channels. For output streams it is set by
705default to the number of input audio channels. For input streams
706this option only makes sense for audio grabbing devices and raw demuxers
707and is mapped to the corresponding demuxer options.
708@item -an (@emph{output})
709Disable audio recording.
710@item -acodec @var{codec} (@emph{input/output})
711Set the audio codec. This is an alias for @code{-codec:a}.
712@item -sample_fmt[:@var{stream_specifier}] @var{sample_fmt} (@emph{output,per-stream})
713Set the audio sample format. Use @code{-sample_fmts} to get a list
714of supported sample formats.
715
716@item -af @var{filtergraph} (@emph{output})
717Create the filtergraph specified by @var{filtergraph} and use it to
718filter the stream.
719
720This is an alias for @code{-filter:a}, see the @ref{filter_option,,-filter option}.
721@end table
722
723@section Advanced Audio options
724
725@table @option
726@item -atag @var{fourcc/tag} (@emph{output})
727Force audio tag/fourcc. This is an alias for @code{-tag:a}.
728@item -absf @var{bitstream_filter}
729Deprecated, see -bsf
730@item -guess_layout_max @var{channels} (@emph{input,per-stream})
731If some input channel layout is not known, try to guess only if it
732corresponds to at most the specified number of channels. For example, 2
tells @command{ffmpeg} to recognize 1 channel as mono and 2 channels as
734stereo but not 6 channels as 5.1. The default is to always try to guess. Use
7350 to disable all guessing.
736@end table
737
738@section Subtitle options
739
740@table @option
741@item -scodec @var{codec} (@emph{input/output})
742Set the subtitle codec. This is an alias for @code{-codec:s}.
743@item -sn (@emph{output})
744Disable subtitle recording.
745@item -sbsf @var{bitstream_filter}
746Deprecated, see -bsf
747@end table
748
749@section Advanced Subtitle options
750
751@table @option
752
753@item -fix_sub_duration
Fix subtitle durations. For each subtitle, wait for the next packet in the
same stream and adjust the duration of the first to avoid overlap. This is
necessary with some subtitle codecs, especially DVB subtitles, because the
757duration in the original packet is only a rough estimate and the end is
758actually marked by an empty subtitle frame. Failing to use this option when
759necessary can result in exaggerated durations or muxing failures due to
760non-monotonic timestamps.
761
762Note that this option will delay the output of all data until the next
763subtitle packet is decoded: it may increase memory consumption and latency a
764lot.
765
766@item -canvas_size @var{size}
767Set the size of the canvas used to render subtitles.
768
769@end table
770
771@section Advanced options
772
773@table @option
774@item -map [-]@var{input_file_id}[:@var{stream_specifier}][,@var{sync_file_id}[:@var{stream_specifier}]] | @var{[linklabel]} (@emph{output})
775
776Designate one or more input streams as a source for the output file. Each input
777stream is identified by the input file index @var{input_file_id} and
778the input stream index @var{input_stream_id} within the input
779file. Both indices start at 0. If specified,
780@var{sync_file_id}:@var{stream_specifier} sets which input stream
781is used as a presentation sync reference.
782
783The first @code{-map} option on the command line specifies the
784source for output stream 0, the second @code{-map} option specifies
785the source for output stream 1, etc.
786
787A @code{-} character before the stream identifier creates a "negative" mapping.
788It disables matching streams from already created mappings.
789
790An alternative @var{[linklabel]} form will map outputs from complex filter
791graphs (see the @option{-filter_complex} option) to the output file.
792@var{linklabel} must correspond to a defined output link label in the graph.
793
794For example, to map ALL streams from the first input file to output
795@example
796ffmpeg -i INPUT -map 0 output
797@end example
798
799For example, if you have two audio streams in the first input file,
800these streams are identified by "0:0" and "0:1". You can use
801@code{-map} to select which streams to place in an output file. For
802example:
803@example
804ffmpeg -i INPUT -map 0:1 out.wav
805@end example
806will map the input stream in @file{INPUT} identified by "0:1" to
807the (single) output stream in @file{out.wav}.
808
809For example, to select the stream with index 2 from input file
810@file{a.mov} (specified by the identifier "0:2"), and stream with
811index 6 from input @file{b.mov} (specified by the identifier "1:6"),
812and copy them to the output file @file{out.mov}:
813@example
814ffmpeg -i a.mov -i b.mov -c copy -map 0:2 -map 1:6 out.mov
815@end example
816
817To select all video and the third audio stream from an input file:
818@example
819ffmpeg -i INPUT -map 0:v -map 0:a:2 OUTPUT
820@end example
821
822To map all the streams except the second audio, use negative mappings
823@example
824ffmpeg -i INPUT -map 0 -map -0:a:1 OUTPUT
825@end example
826
827To pick the English audio stream:
828@example
829ffmpeg -i INPUT -map 0:m:language:eng OUTPUT
830@end example
831
832Note that using this option disables the default mappings for this output file.
833
834@item -map_channel [@var{input_file_id}.@var{stream_specifier}.@var{channel_id}|-1][:@var{output_file_id}.@var{stream_specifier}]
835Map an audio channel from a given input to an output. If
836@var{output_file_id}.@var{stream_specifier} is not set, the audio channel will
837be mapped on all the audio streams.
838
839Using "-1" instead of
840@var{input_file_id}.@var{stream_specifier}.@var{channel_id} will map a muted
841channel.
842
843For example, assuming @var{INPUT} is a stereo audio file, you can switch the
844two audio channels with the following command:
845@example
846ffmpeg -i INPUT -map_channel 0.0.1 -map_channel 0.0.0 OUTPUT
847@end example
848
849If you want to mute the first channel and keep the second:
850@example
851ffmpeg -i INPUT -map_channel -1 -map_channel 0.0.1 OUTPUT
852@end example
853
854The order of the "-map_channel" option specifies the order of the channels in
855the output stream. The output channel layout is guessed from the number of
856channels mapped (mono if one "-map_channel", stereo if two, etc.). Using "-ac"
in combination with "-map_channel" causes the channel gain levels to be updated if
858input and output channel layouts don't match (for instance two "-map_channel"
859options and "-ac 6").
860
861You can also extract each channel of an input to specific outputs; the following
862command extracts two channels of the @var{INPUT} audio stream (file 0, stream 0)
863to the respective @var{OUTPUT_CH0} and @var{OUTPUT_CH1} outputs:
864@example
865ffmpeg -i INPUT -map_channel 0.0.0 OUTPUT_CH0 -map_channel 0.0.1 OUTPUT_CH1
866@end example
867
868The following example splits the channels of a stereo input into two separate
869streams, which are put into the same output file:
870@example
871ffmpeg -i stereo.wav -map 0:0 -map 0:0 -map_channel 0.0.0:0.0 -map_channel 0.0.1:0.1 -y out.ogg
872@end example
873
874Note that currently each output stream can only contain channels from a single
875input stream; you can't for example use "-map_channel" to pick multiple input
876audio channels contained in different streams (from the same or different files)
877and merge them into a single output stream. It is therefore not currently
878possible, for example, to turn two separate mono streams into a single stereo
879stream. However splitting a stereo stream into two single channel mono streams
880is possible.
881
882If you need this feature, a possible workaround is to use the @emph{amerge}
883filter. For example, if you need to merge a media (here @file{input.mkv}) with 2
884mono audio streams into one single stereo channel audio stream (and keep the
885video stream), you can use the following command:
886@example
887ffmpeg -i input.mkv -filter_complex "[0:1] [0:2] amerge" -c:a pcm_s16le -c:v copy output.mkv
888@end example
889
890@item -map_metadata[:@var{metadata_spec_out}] @var{infile}[:@var{metadata_spec_in}] (@emph{output,per-metadata})
891Set metadata information of the next output file from @var{infile}. Note that
892those are file indices (zero-based), not filenames.
Optional @var{metadata_spec_in/out} parameters specify which metadata to copy.
894A metadata specifier can have the following forms:
895@table @option
896@item @var{g}
897global metadata, i.e. metadata that applies to the whole file
898
899@item @var{s}[:@var{stream_spec}]
900per-stream metadata. @var{stream_spec} is a stream specifier as described
901in the @ref{Stream specifiers} chapter. In an input metadata specifier, the first
902matching stream is copied from. In an output metadata specifier, all matching
903streams are copied to.
904
905@item @var{c}:@var{chapter_index}
906per-chapter metadata. @var{chapter_index} is the zero-based chapter index.
907
908@item @var{p}:@var{program_index}
909per-program metadata. @var{program_index} is the zero-based program index.
910@end table
911If metadata specifier is omitted, it defaults to global.
912
913By default, global metadata is copied from the first input file,
914per-stream and per-chapter metadata is copied along with streams/chapters. These
915default mappings are disabled by creating any mapping of the relevant type. A negative
916file index can be used to create a dummy mapping that just disables automatic copying.
917
918For example to copy metadata from the first stream of the input file to global metadata
919of the output file:
920@example
921ffmpeg -i in.ogg -map_metadata 0:s:0 out.mp3
922@end example
923
924To do the reverse, i.e. copy global metadata to all audio streams:
925@example
926ffmpeg -i in.mkv -map_metadata:s:a 0:g out.mkv
927@end example
928Note that simple @code{0} would work as well in this example, since global
929metadata is assumed by default.
930
931@item -map_chapters @var{input_file_index} (@emph{output})
932Copy chapters from input file with index @var{input_file_index} to the next
933output file. If no chapter mapping is specified, then chapters are copied from
934the first input file with at least one chapter. Use a negative file index to
935disable any chapter copying.
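
For example, the following sketch (file names are illustrative) takes all
streams from the second input but the chapters from the first:
@example
ffmpeg -i chapters.mkv -i video.mkv -map 1 -map_chapters 0 -c copy output.mkv
@end example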
936
937@item -benchmark (@emph{global})
938Show benchmarking information at the end of an encode.
939Shows CPU time used and maximum memory consumption.
940Maximum memory consumption is not supported on all systems,
941it will usually display as 0 if not supported.
942@item -benchmark_all (@emph{global})
943Show benchmarking information during the encode.
944Shows CPU time used in various steps (audio/video encode/decode).
945@item -timelimit @var{duration} (@emph{global})
946Exit after ffmpeg has been running for @var{duration} seconds.
947@item -dump (@emph{global})
948Dump each input packet to stderr.
949@item -hex (@emph{global})
950When dumping packets, also dump the payload.
951@item -re (@emph{input})
Read input at native frame rate. Mainly used to simulate a grab device
or live input stream (e.g. when reading from a file). Should not be used
954with actual grab devices or live input streams (where it can cause packet
955loss).
956By default @command{ffmpeg} attempts to read the input(s) as fast as possible.
957This option will slow down the reading of the input(s) to the native frame rate
958of the input(s). It is useful for real-time output (e.g. live streaming).
959@item -loop_input
960Loop over the input stream. Currently it works only for image
961streams. This option is used for automatic FFserver testing.
962This option is deprecated, use -loop 1.
963@item -loop_output @var{number_of_times}
964Repeatedly loop output for formats that support looping such as animated GIF
965(0 will loop the output infinitely).
966This option is deprecated, use -loop.
967@item -vsync @var{parameter}
968Video sync method.
For compatibility reasons, old values can be specified as numbers.
Newly added values always have to be specified as strings.
971
972@table @option
973@item 0, passthrough
974Each frame is passed with its timestamp from the demuxer to the muxer.
975@item 1, cfr
976Frames will be duplicated and dropped to achieve exactly the requested
977constant frame rate.
978@item 2, vfr
979Frames are passed through with their timestamp or dropped so as to
980prevent 2 frames from having the same timestamp.
981@item drop
982As passthrough but destroys all timestamps, making the muxer generate
fresh timestamps based on the frame rate.
984@item -1, auto
985Chooses between 1 and 2 depending on muxer capabilities. This is the
986default method.
987@end table
988
Note that the timestamps may be further modified by the muxer after this,
for example when the format option @option{avoid_negative_ts}
is enabled.
992
993With -map you can select from which stream the timestamps should be
994taken. You can leave either video or audio unchanged and sync the
995remaining stream(s) to the unchanged one.
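
For example, the following sketch (file names are illustrative) forces a
constant 25 fps output by duplicating or dropping frames as needed:
@example
ffmpeg -i input.mkv -vsync cfr -r 25 output.mp4
@end example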
996
997@item -async @var{samples_per_second}
998Audio sync method. "Stretches/squeezes" the audio stream to match the timestamps,
999the parameter is the maximum samples per second by which the audio is changed.
1000-async 1 is a special case where only the start of the audio stream is corrected
1001without any later correction.
1002
Note that the timestamps may be further modified by the muxer after this,
for example when the format option @option{avoid_negative_ts}
is enabled.
1006
1007This option has been deprecated. Use the @code{aresample} audio filter instead.
1008
1009@item -copyts
1010Do not process input timestamps, but keep their values without trying
1011to sanitize them. In particular, do not remove the initial start time
1012offset value.
1013
1014Note that, depending on the @option{vsync} option or on specific muxer
1015processing (e.g. in case the format option @option{avoid_negative_ts}
1016is enabled) the output timestamps may mismatch with the input
1017timestamps even when this option is selected.
1018
1019@item -start_at_zero
1020When used with @option{copyts}, shift input timestamps so they start at zero.
1021
1022This means that using e.g. @code{-ss 50} will make output timestamps start at
102350 seconds, regardless of what timestamp the input file started at.
1024
1025@item -copytb @var{mode}
1026Specify how to set the encoder timebase when stream copying. @var{mode} is an
1027integer numeric value, and can assume one of the following values:
1028
1029@table @option
1030@item 1
1031Use the demuxer timebase.
1032
1033The time base is copied to the output encoder from the corresponding input
1034demuxer. This is sometimes required to avoid non monotonically increasing
1035timestamps when copying video streams with variable frame rate.
1036
1037@item 0
1038Use the decoder timebase.
1039
1040The time base is copied to the output encoder from the corresponding input
1041decoder.
1042
1043@item -1
1044Try to make the choice automatically, in order to generate a sane output.
1045@end table
1046
1047Default value is -1.
1048
1049@item -shortest (@emph{output})
1050Finish encoding when the shortest input stream ends.
1051@item -dts_delta_threshold
1052Timestamp discontinuity delta threshold.
1053@item -muxdelay @var{seconds} (@emph{input})
1054Set the maximum demux-decode delay.
1055@item -muxpreload @var{seconds} (@emph{input})
1056Set the initial demux-decode delay.
1057@item -streamid @var{output-stream-index}:@var{new-value} (@emph{output})
1058Assign a new stream-id value to an output stream. This option should be
1059specified prior to the output filename to which it applies.
1060For the situation where multiple output files exist, a streamid
1061may be reassigned to a different value.
1062
1063For example, to set the stream 0 PID to 33 and the stream 1 PID to 36 for
1064an output mpegts file:
1065@example
1066ffmpeg -i infile -streamid 0:33 -streamid 1:36 out.ts
1067@end example
1068
1069@item -bsf[:@var{stream_specifier}] @var{bitstream_filters} (@emph{output,per-stream})
1070Set bitstream filters for matching streams. @var{bitstream_filters} is
1071a comma-separated list of bitstream filters. Use the @code{-bsfs} option
1072to get the list of bitstream filters.
1073@example
1074ffmpeg -i h264.mp4 -c:v copy -bsf:v h264_mp4toannexb -an out.h264
1075@end example
1076@example
1077ffmpeg -i file.mov -an -vn -bsf:s mov2textsub -c:s copy -f rawvideo sub.txt
1078@end example
1079
1080@item -tag[:@var{stream_specifier}] @var{codec_tag} (@emph{input/output,per-stream})
1081Force a tag/fourcc for matching streams.
1082
1083@item -timecode @var{hh}:@var{mm}:@var{ss}SEP@var{ff}
Specify Timecode for writing. @var{SEP} is ':' for non-drop timecode and ';'
1085(or '.') for drop.
1086@example
1087ffmpeg -i input.mpg -timecode 01:02:03.04 -r 30000/1001 -s ntsc output.mpg
1088@end example
1089
1090@anchor{filter_complex_option}
1091@item -filter_complex @var{filtergraph} (@emph{global})
1092Define a complex filtergraph, i.e. one with arbitrary number of inputs and/or
1093outputs. For simple graphs -- those with one input and one output of the same
1094type -- see the @option{-filter} options. @var{filtergraph} is a description of
1095the filtergraph, as described in the ``Filtergraph syntax'' section of the
1096ffmpeg-filters manual.
1097
1098Input link labels must refer to input streams using the
1099@code{[file_index:stream_specifier]} syntax (i.e. the same as @option{-map}
1100uses). If @var{stream_specifier} matches multiple streams, the first one will be
1101used. An unlabeled input will be connected to the first unused input stream of
1102the matching type.
1103
1104Output link labels are referred to with @option{-map}. Unlabeled outputs are
1105added to the first output file.
1106
1107Note that with this option it is possible to use only lavfi sources without
1108normal input files.
1109
1110For example, to overlay an image over video
1111@example
1112ffmpeg -i video.mkv -i image.png -filter_complex '[0:v][1:v]overlay[out]' -map
1113'[out]' out.mkv
1114@end example
1115Here @code{[0:v]} refers to the first video stream in the first input file,
1116which is linked to the first (main) input of the overlay filter. Similarly the
1117first video stream in the second input is linked to the second (overlay) input
1118of overlay.
1119
1120Assuming there is only one video stream in each input file, we can omit input
1121labels, so the above is equivalent to
1122@example
1123ffmpeg -i video.mkv -i image.png -filter_complex 'overlay[out]' -map
1124'[out]' out.mkv
1125@end example
1126
1127Furthermore we can omit the output label and the single output from the filter
1128graph will be added to the output file automatically, so we can simply write
1129@example
1130ffmpeg -i video.mkv -i image.png -filter_complex 'overlay' out.mkv
1131@end example
1132
1133To generate 5 seconds of pure red video using lavfi @code{color} source:
1134@example
1135ffmpeg -filter_complex 'color=c=red' -t 5 out.mkv
1136@end example
1137
1138@item -lavfi @var{filtergraph} (@emph{global})
1139Define a complex filtergraph, i.e. one with arbitrary number of inputs and/or
1140outputs. Equivalent to @option{-filter_complex}.
1141
1142@item -filter_complex_script @var{filename} (@emph{global})
This option is similar to @option{-filter_complex}, the only difference being that
1144its argument is the name of the file from which a complex filtergraph
1145description is to be read.
1146
1147@item -accurate_seek (@emph{input})
1148This option enables or disables accurate seeking in input files with the
1149@option{-ss} option. It is enabled by default, so seeking is accurate when
1150transcoding. Use @option{-noaccurate_seek} to disable it, which may be useful
1151e.g. when copying some streams and transcoding the others.
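
For example, the following sketch (file names are illustrative) disables
accurate seeking, so decoding starts at the seek point actually reached in the
input rather than at the exact requested position:
@example
ffmpeg -ss 00:01:00 -noaccurate_seek -i input.mp4 output.mp4
@end example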
1152
1153@item -override_ffserver (@emph{global})
1154Overrides the input specifications from @command{ffserver}. Using this
1155option you can map any input stream to @command{ffserver} and control
1156many aspects of the encoding from @command{ffmpeg}. Without this
1157option @command{ffmpeg} will transmit to @command{ffserver} what is
1158requested by @command{ffserver}.
1159
1160The option is intended for cases where features are needed that cannot be
1161specified to @command{ffserver} but can be to @command{ffmpeg}.
1162
1163@item -discard (@emph{input})
1164Allows discarding specific streams or frames of streams at the demuxer.
1165Not all demuxers support this.
1166
1167@table @option
1168@item none
1169Discard no frame.
1170
1171@item default
1172Default, which discards no frames.
1173
1174@item noref
1175Discard all non-reference frames.
1176
1177@item bidir
1178Discard all bidirectional frames.
1179
1180@item nokey
Discard all frames except keyframes.
1182
1183@item all
1184Discard all frames.
1185@end table
1186
1187@end table
1188
1189As a special exception, you can use a bitmap subtitle stream as input: it
1190will be converted into a video with the same size as the largest video in
1191the file, or 720x576 if no video is present. Note that this is an
1192experimental and temporary solution. It will be removed once libavfilter has
1193proper support for subtitles.
1194
1195For example, to hardcode subtitles on top of a DVB-T recording stored in
1196MPEG-TS format, delaying the subtitles by 1 second:
1197@example
1198ffmpeg -i input.ts -filter_complex \
1199 '[#0x2ef] setpts=PTS+1/TB [sub] ; [#0x2d0] [sub] overlay' \
1200 -sn -map '#0x2dc' output.mkv
1201@end example
1202(0x2d0, 0x2dc and 0x2ef are the MPEG-TS PIDs of respectively the video,
1203audio and subtitles streams; 0:0, 0:3 and 0:7 would have worked too)
1204
1205@section Preset files
1206A preset file contains a sequence of @var{option}=@var{value} pairs,
1207one for each line, specifying a sequence of options which would be
1208awkward to specify on the command line. Lines starting with the hash
1209('#') character are ignored and are used to provide comments. Check
1210the @file{presets} directory in the FFmpeg source tree for examples.
1211
1212Preset files are specified with the @code{vpre}, @code{apre},
1213@code{spre}, and @code{fpre} options. The @code{fpre} option takes the
1214filename of the preset instead of a preset name as input and can be
1215used for any kind of codec. For the @code{vpre}, @code{apre}, and
1216@code{spre} options, the options specified in a preset file are
1217applied to the currently selected codec of the same type as the preset
1218option.
1219
1220The argument passed to the @code{vpre}, @code{apre}, and @code{spre}
1221preset options identifies the preset file to use according to the
1222following rules:
1223
1224First ffmpeg searches for a file named @var{arg}.ffpreset in the
1225directories @file{$FFMPEG_DATADIR} (if set), and @file{$HOME/.ffmpeg}, and in
1226the datadir defined at configuration time (usually @file{PREFIX/share/ffmpeg})
or in an @file{ffpresets} folder alongside the executable on win32,
1228in that order. For example, if the argument is @code{libvpx-1080p}, it will
1229search for the file @file{libvpx-1080p.ffpreset}.
1230
1231If no such file is found, then ffmpeg will search for a file named
1232@var{codec_name}-@var{arg}.ffpreset in the above-mentioned
1233directories, where @var{codec_name} is the name of the codec to which
1234the preset file options will be applied. For example, if you select
1235the video codec with @code{-vcodec libvpx} and use @code{-vpre 1080p},
1236then it will search for the file @file{libvpx-1080p.ffpreset}.
1237@c man end OPTIONS
1238
1239@chapter Tips
1240@c man begin TIPS
1241
1242@itemize
1243@item
1244For streaming at very low bitrates, use a low frame rate
1245and a small GOP size. This is especially true for RealVideo where
1246the Linux player does not seem to be very fast, so it can miss
1247frames. An example is:
1248
1249@example
1250ffmpeg -g 3 -r 3 -t 10 -b:v 50k -s qcif -f rv10 /tmp/b.rm
1251@end example
1252
1253@item
1254The parameter 'q' which is displayed while encoding is the current
1255quantizer. The value 1 indicates that a very good quality could
1256be achieved. The value 31 indicates the worst quality. If q=31 appears
1257too often, it means that the encoder cannot compress enough to meet
1258your bitrate. You must either increase the bitrate, decrease the
1259frame rate or decrease the frame size.
1260
1261@item
1262If your computer is not fast enough, you can speed up the
1263compression at the expense of the compression ratio. You can use
1264'-me zero' to speed up motion estimation, and '-g 0' to disable
1265motion estimation completely (you have only I-frames, which means it
1266is about as good as JPEG compression).
1267
1268@item
1269To have very low audio bitrates, reduce the sampling frequency
1270(down to 22050 Hz for MPEG audio, 22050 or 11025 for AC-3).
1271
1272@item
1273To have a constant quality (but a variable bitrate), use the option
'-qscale n' where 'n' is between 1 (excellent quality) and 31 (worst
1275quality).
1276
1277@end itemize
1278@c man end TIPS
1279
1280@chapter Examples
1281@c man begin EXAMPLES
1282
1283@section Preset files
1284
1285A preset file contains a sequence of @var{option=value} pairs, one for
1286each line, specifying a sequence of options which can be specified also on
1287the command line. Lines starting with the hash ('#') character are ignored and
1288are used to provide comments. Empty lines are also ignored. Check the
1289@file{presets} directory in the FFmpeg source tree for examples.
1290
Preset files are specified with the @code{pre} option; this option takes a
1292preset name as input. FFmpeg searches for a file named @var{preset_name}.avpreset in
1293the directories @file{$AVCONV_DATADIR} (if set), and @file{$HOME/.ffmpeg}, and in
1294the data directory defined at configuration time (usually @file{$PREFIX/share/ffmpeg})
1295in that order. For example, if the argument is @code{libx264-max}, it will
1296search for the file @file{libx264-max.avpreset}.
1297
1298@section Video and Audio grabbing
1299
1300If you specify the input format and device then ffmpeg can grab video
1301and audio directly.
1302
1303@example
1304ffmpeg -f oss -i /dev/dsp -f video4linux2 -i /dev/video0 /tmp/out.mpg
1305@end example
1306
1307Or with an ALSA audio source (mono input, card id 1) instead of OSS:
1308@example
1309ffmpeg -f alsa -ac 1 -i hw:1 -f video4linux2 -i /dev/video0 /tmp/out.mpg
1310@end example
1311
1312Note that you must activate the right video source and channel before
1313launching ffmpeg with any TV viewer such as
1314@uref{http://linux.bytesex.org/xawtv/, xawtv} by Gerd Knorr. You also
1315have to set the audio recording levels correctly with a
1316standard mixer.
1317
1318@section X11 grabbing
1319
1320Grab the X11 display with ffmpeg via
1321
1322@example
1323ffmpeg -f x11grab -video_size cif -framerate 25 -i :0.0 /tmp/out.mpg
1324@end example
1325
0.0 is the display.screen number of your X11 server, the same as
1327the DISPLAY environment variable.
1328
1329@example
1330ffmpeg -f x11grab -video_size cif -framerate 25 -i :0.0+10,20 /tmp/out.mpg
1331@end example
1332
0.0 is the display.screen number of your X11 server, the same as the DISPLAY environment
1334variable. 10 is the x-offset and 20 the y-offset for the grabbing.

@section Video and Audio file format conversion

Any supported file format and protocol can serve as input to ffmpeg:

Examples:
@itemize
@item
You can use YUV files as input:

@example
ffmpeg -i /tmp/test%d.Y /tmp/out.mpg
@end example

It will use the files:
@example
/tmp/test0.Y, /tmp/test0.U, /tmp/test0.V,
/tmp/test1.Y, /tmp/test1.U, /tmp/test1.V, etc...
@end example

The Y files use twice the resolution of the U and V files. They are
raw files, without a header. They can be generated by all decent video
decoders. You must specify the size of the image with the @option{-s} option
if ffmpeg cannot guess it.

@item
You can input from a raw YUV420P file:

@example
ffmpeg -i /tmp/test.yuv /tmp/out.avi
@end example

test.yuv is a file containing raw YUV planar data. Each frame is composed
of the Y plane followed by the U and V planes at half vertical and
horizontal resolution.

@item
You can output to a raw YUV420P file:

@example
ffmpeg -i mydivx.avi hugefile.yuv
@end example

@item
You can set several input files and output files:

@example
ffmpeg -i /tmp/a.wav -s 640x480 -i /tmp/a.yuv /tmp/a.mpg
@end example

Converts the audio file a.wav and the raw YUV video file a.yuv
to the MPEG file a.mpg.

@item
You can also do audio and video conversions at the same time:

@example
ffmpeg -i /tmp/a.wav -ar 22050 /tmp/a.mp2
@end example

Converts a.wav to MPEG audio at a 22050 Hz sample rate.

@item
You can encode to several formats at the same time and define a
mapping from input stream to output streams:

@example
ffmpeg -i /tmp/a.wav -map 0:a -b:a 64k /tmp/a.mp2 -map 0:a -b:a 128k /tmp/b.mp2
@end example

Converts a.wav to a.mp2 at 64 kbit/s and to b.mp2 at 128 kbit/s. '-map
file:index' specifies which input stream is used for each output
stream, in the order of the definition of output streams.

@item
You can transcode decrypted VOBs:

@example
ffmpeg -i snatch_1.vob -f avi -c:v mpeg4 -b:v 800k -g 300 -bf 2 -c:a libmp3lame -b:a 128k snatch.avi
@end example

This is a typical DVD ripping example; the input is a VOB file, the
output an AVI file with MPEG-4 video and MP3 audio. Note that in this
command we use B-frames so the MPEG-4 stream is DivX5 compatible, and the
GOP size is 300, which means one intra frame every 10 seconds for 29.97 fps
input video. Furthermore, the audio stream is MP3-encoded, so you need
to enable LAME support by passing @code{--enable-libmp3lame} to configure.
The mapping is particularly useful for DVD transcoding
to get the desired audio language.
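
As a hedged illustration of such a mapping (the stream indices below are
assumptions; the desired language is taken here to be the second audio stream
of the VOB), the command could become:
@example
ffmpeg -i snatch_1.vob -map 0:v:0 -map 0:a:1 -f avi -c:v mpeg4 -b:v 800k -g 300 -bf 2 -c:a libmp3lame -b:a 128k snatch.avi
@end example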

NOTE: To see the supported input formats, use @code{ffmpeg -formats}.

@item
You can extract images from a video, or create a video from many images:

For extracting images from a video:
@example
ffmpeg -i foo.avi -r 1 -s WxH -f image2 foo-%03d.jpeg
@end example

This will extract one video frame per second from the video and will
output them in files named @file{foo-001.jpeg}, @file{foo-002.jpeg},
etc. Images will be rescaled to fit the new WxH values.

If you want to extract just a limited number of frames, you can use the
above command in combination with the -vframes or -t option, or in
combination with -ss to start extracting from a certain point in time.
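
As a sketch (the seek point and duration are arbitrary placeholder values), the
following starts 30 seconds into the input and extracts one frame per second
for 5 seconds:
@example
ffmpeg -ss 30 -i foo.avi -t 5 -r 1 -s WxH -f image2 foo-%03d.jpeg
@end example

Since the output rate is one frame per second, @code{-vframes 5} could be used
instead of @code{-t 5} to stop after five images.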

For creating a video from many images:
@example
ffmpeg -f image2 -i foo-%03d.jpeg -r 12 -s WxH foo.avi
@end example

The syntax @code{foo-%03d.jpeg} specifies to use a decimal number
composed of three digits padded with zeroes to express the sequence
number. It is the same syntax supported by the C printf function, but
only formats accepting a normal integer are suitable.

When importing an image sequence, -i also supports expanding
shell-like wildcard patterns (globbing) internally, by selecting the
image2-specific @code{-pattern_type glob} option.

For example, for creating a video from filenames matching the glob pattern
@code{foo-*.jpeg}:
@example
ffmpeg -f image2 -pattern_type glob -i 'foo-*.jpeg' -r 12 -s WxH foo.avi
@end example

@item
You can put many streams of the same type in the output:

@example
ffmpeg -i test1.avi -i test2.avi -map 1:1 -map 1:0 -map 0:1 -map 0:0 -c copy -y test12.nut
@end example

The resulting output file @file{test12.nut} will contain the first four streams
from the input files in reverse order.

@item
To force CBR video output:
@example
ffmpeg -i myfile.avi -b 4000k -minrate 4000k -maxrate 4000k -bufsize 1835k out.m2v
@end example

@item
The four options lmin, lmax, mblmin and mblmax use 'lambda' units,
but you may use the QP2LAMBDA constant to easily convert from 'q' units:
@example
ffmpeg -i src.ext -lmax 21*QP2LAMBDA dst.ext
@end example

@end itemize
@c man end EXAMPLES

@include config.texi
@ifset config-all
@ifset config-avutil
@include utils.texi
@end ifset
@ifset config-avcodec
@include codecs.texi
@include bitstream_filters.texi
@end ifset
@ifset config-avformat
@include formats.texi
@include protocols.texi
@end ifset
@ifset config-avdevice
@include devices.texi
@end ifset
@ifset config-swresample
@include resampler.texi
@end ifset
@ifset config-swscale
@include scaler.texi
@end ifset
@ifset config-avfilter
@include filters.texi
@end ifset
@end ifset

@chapter See Also

@ifhtml
@ifset config-all
@url{ffmpeg.html,ffmpeg}
@end ifset
@ifset config-not-all
@url{ffmpeg-all.html,ffmpeg-all},
@end ifset
@url{ffplay.html,ffplay}, @url{ffprobe.html,ffprobe}, @url{ffserver.html,ffserver},
@url{ffmpeg-utils.html,ffmpeg-utils},
@url{ffmpeg-scaler.html,ffmpeg-scaler},
@url{ffmpeg-resampler.html,ffmpeg-resampler},
@url{ffmpeg-codecs.html,ffmpeg-codecs},
@url{ffmpeg-bitstream-filters.html,ffmpeg-bitstream-filters},
@url{ffmpeg-formats.html,ffmpeg-formats},
@url{ffmpeg-devices.html,ffmpeg-devices},
@url{ffmpeg-protocols.html,ffmpeg-protocols},
@url{ffmpeg-filters.html,ffmpeg-filters}
@end ifhtml

@ifnothtml
@ifset config-all
ffmpeg(1),
@end ifset
@ifset config-not-all
ffmpeg-all(1),
@end ifset
ffplay(1), ffprobe(1), ffserver(1),
ffmpeg-utils(1), ffmpeg-scaler(1), ffmpeg-resampler(1),
ffmpeg-codecs(1), ffmpeg-bitstream-filters(1), ffmpeg-formats(1),
ffmpeg-devices(1), ffmpeg-protocols(1), ffmpeg-filters(1)
@end ifnothtml

@include authors.texi

@ignore

@setfilename ffmpeg
@settitle ffmpeg video converter

@end ignore

@bye