1 \input texinfo @c -*- texinfo -*-
2
3 @settitle ffmpeg Documentation
4 @titlepage
5 @center @titlefont{ffmpeg Documentation}
6 @end titlepage
7
8 @top
9
10 @contents
11
12 @chapter Synopsis
13
14 ffmpeg [@var{global_options}] @{[@var{input_file_options}] -i @file{input_file}@} ... @{[@var{output_file_options}] @file{output_file}@} ...
15
16 @chapter Description
17 @c man begin DESCRIPTION
18
19 @command{ffmpeg} is a very fast video and audio converter that can also grab from
20 a live audio/video source. It can also convert between arbitrary sample
21 rates and resize video on the fly with a high quality polyphase filter.
22
23 @command{ffmpeg} reads from an arbitrary number of input "files" (which can be regular
24 files, pipes, network streams, grabbing devices, etc.), specified by the
25 @code{-i} option, and writes to an arbitrary number of output "files", which are
26 specified by a plain output filename. Anything found on the command line which
27 cannot be interpreted as an option is considered to be an output filename.
28
29 Each input or output file can, in principle, contain any number of streams of
30 different types (video/audio/subtitle/attachment/data). The allowed number and/or
31 types of streams may be limited by the container format. Selecting which
32 streams from which inputs will go into which output is either done automatically
33 or with the @code{-map} option (see the Stream selection chapter).
34
35 To refer to input files in options, you must use their indices (0-based). E.g.
36 the first input file is @code{0}, the second is @code{1}, etc. Similarly, streams
37 within a file are referred to by their indices. E.g. @code{2:3} refers to the
38 fourth stream in the third input file. Also see the Stream specifiers chapter.
39
40 As a general rule, options are applied to the next specified
41 file. Therefore, order is important, and you can have the same
42 option on the command line multiple times. Each occurrence is
43 then applied to the next input or output file.
44 Exceptions from this rule are the global options (e.g. verbosity level),
45 which should be specified first.
46
47 Do not mix input and output files -- first specify all input files, then all
48 output files. Also do not mix options which belong to different files. All
49 options apply ONLY to the next input or output file and are reset between files.
50
51 @itemize
52 @item
53 To set the video bitrate of the output file to 64 kbit/s:
54 @example
55 ffmpeg -i input.avi -b:v 64k -bufsize 64k output.avi
56 @end example
57
58 @item
59 To force the frame rate of the output file to 24 fps:
60 @example
61 ffmpeg -i input.avi -r 24 output.avi
62 @end example
63
64 @item
65 To force the frame rate of the input file (valid for raw formats only)
66 to 1 fps and the frame rate of the output file to 24 fps:
67 @example
68 ffmpeg -r 1 -i input.m2v -r 24 output.avi
69 @end example
70 @end itemize
71
72 The format option may be needed for raw input files.
73
74 @c man end DESCRIPTION
75
76 @chapter Detailed description
77 @c man begin DETAILED DESCRIPTION
78
79 The transcoding process in @command{ffmpeg} for each output can be described by
80 the following diagram:
81
82 @example
83 _______ ______________
84 | | | |
85 | input | demuxer | encoded data | decoder
86 | file | ---------> | packets | -----+
87 |_______| |______________| |
88 v
89 _________
90 | |
91 | decoded |
92 | frames |
93 |_________|
94 ________ ______________ |
95 | | | | |
96 | output | <-------- | encoded data | <----+
97 | file | muxer | packets | encoder
98 |________| |______________|
99
100
101 @end example
102
103 @command{ffmpeg} calls the libavformat library (containing demuxers) to read
104 input files and get packets containing encoded data from them. When there are
105 multiple input files, @command{ffmpeg} tries to keep them synchronized by
106 tracking the lowest timestamp on any active input stream.
107
108 Encoded packets are then passed to the decoder (unless streamcopy is selected
109 for the stream, see further for a description). The decoder produces
110 uncompressed frames (raw video/PCM audio/...) which can be processed further by
111 filtering (see next section). After filtering, the frames are passed to the
112 encoder, which encodes them and outputs encoded packets. Finally those are
113 passed to the muxer, which writes the encoded packets to the output file.
114
115 @section Filtering
116 Before encoding, @command{ffmpeg} can process raw audio and video frames using
117 filters from the libavfilter library. Several chained filters form a filter
118 graph. @command{ffmpeg} distinguishes between two types of filtergraphs:
119 simple and complex.
120
121 @subsection Simple filtergraphs
122 Simple filtergraphs are those that have exactly one input and output, both of
123 the same type. In the above diagram they can be represented by simply inserting
124 an additional step between decoding and encoding:
125
126 @example
127 _________ ______________
128 | | | |
129 | decoded | | encoded data |
130 | frames |\ _ | packets |
131 |_________| \ /||______________|
132 \ __________ /
133 simple _\|| | / encoder
134 filtergraph | filtered |/
135 | frames |
136 |__________|
137
138 @end example
139
140 Simple filtergraphs are configured with the per-stream @option{-filter} option
141 (with @option{-vf} and @option{-af} aliases for video and audio respectively).
142 A simple filtergraph for video can look for example like this:
143
144 @example
145 _______ _____________ _______ ________
146 | | | | | | | |
147 | input | ---> | deinterlace | ---> | scale | ---> | output |
148 |_______| |_____________| |_______| |________|
149
150 @end example
151
152 Note that some filters change frame properties but not frame contents. E.g. the
153 @code{fps} filter changes the number of frames, but does not touch the
154 frame contents. Another example is the @code{setpts} filter, which only
155 sets timestamps and otherwise passes the frames unchanged.
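
As an illustration (file names here are placeholders), such a chain could be
requested with:
@example
ffmpeg -i input.avi -vf "yadif,scale=640:360" output.avi
@end example
where @code{yadif} deinterlaces and @code{scale} resizes the decoded frames
before they reach the encoder.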
156
157 @subsection Complex filtergraphs
158 Complex filtergraphs are those which cannot be described as simply a linear
159 processing chain applied to one stream. This is the case, for example, when the graph has
160 more than one input and/or output, or when output stream type is different from
161 input. They can be represented with the following diagram:
162
163 @example
164 _________
165 | |
166 | input 0 |\ __________
167 |_________| \ | |
168 \ _________ /| output 0 |
169 \ | | / |__________|
170 _________ \| complex | /
171 | | | |/
172 | input 1 |---->| filter |\
173 |_________| | | \ __________
174 /| graph | \ | |
175 / | | \| output 1 |
176 _________ / |_________| |__________|
177 | | /
178 | input 2 |/
179 |_________|
180
181 @end example
182
183 Complex filtergraphs are configured with the @option{-filter_complex} option.
184 Note that this option is global, since a complex filtergraph, by its nature,
185 cannot be unambiguously associated with a single stream or file.
186
187 The @option{-lavfi} option is equivalent to @option{-filter_complex}.
188
189 A trivial example of a complex filtergraph is the @code{overlay} filter, which
190 has two video inputs and one video output, containing one video overlaid on top
191 of the other. Its audio counterpart is the @code{amix} filter.
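
For instance (input names are placeholders), two audio inputs could be mixed
into a single stream with:
@example
ffmpeg -i first.wav -i second.wav -filter_complex amix=inputs=2 mixed.wav
@end example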
192
193 @section Stream copy
194 Stream copy is a mode selected by supplying the @code{copy} parameter to the
195 @option{-codec} option. It makes @command{ffmpeg} omit the decoding and encoding
196 step for the specified stream, so it does only demuxing and muxing. It is useful
197 for changing the container format or modifying container-level metadata. The
198 diagram above will, in this case, simplify to this:
199
200 @example
201 _______ ______________ ________
202 | | | | | |
203 | input | demuxer | encoded data | muxer | output |
204 | file | ---------> | packets | -------> | file |
205 |_______| |______________| |________|
206
207 @end example
208
209 Since there is no decoding or encoding, it is very fast and there is no quality
210 loss. However, it might not work in some cases because of many factors. Applying
211 filters is obviously also impossible, since filters work on uncompressed data.
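
For example (file names are placeholders), the following changes the container
from MP4 to Matroska without re-encoding any stream:
@example
ffmpeg -i input.mp4 -c copy output.mkv
@end example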
212
213 @c man end DETAILED DESCRIPTION
214
215 @chapter Stream selection
216 @c man begin STREAM SELECTION
217
218 By default, @command{ffmpeg} includes only one stream of each type (video, audio, subtitle)
219 present in the input files and adds them to each output file. It picks the
220 "best" of each based upon the following criteria: for video, it is the stream
221 with the highest resolution, for audio, it is the stream with the most channels, for
222 subtitles, it is the first subtitle stream. In the case where several streams of
223 the same type rate equally, the stream with the lowest index is chosen.
224
225 You can disable some of those defaults by using the @code{-vn/-an/-sn} options. For
226 full manual control, use the @code{-map} option, which disables the defaults just
227 described.
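
For example (file names are placeholders), to keep the automatically selected
video stream while dropping all audio and subtitle streams:
@example
ffmpeg -i input.mkv -an -sn -c:v copy video_only.mkv
@end example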
228
229 @c man end STREAM SELECTION
230
231 @chapter Options
232 @c man begin OPTIONS
233
234 @include fftools-common-opts.texi
235
236 @section Main options
237
238 @table @option
239
240 @item -f @var{fmt} (@emph{input/output})
241 Force input or output file format. The format is normally auto detected for input
242 files and guessed from the file extension for output files, so this option is not
243 needed in most cases.
244
245 @item -i @var{filename} (@emph{input})
246 input file name
247
248 @item -y (@emph{global})
249 Overwrite output files without asking.
250
251 @item -n (@emph{global})
252 Do not overwrite output files, and exit immediately if a specified
253 output file already exists.
254
255 @item -c[:@var{stream_specifier}] @var{codec} (@emph{input/output,per-stream})
256 @itemx -codec[:@var{stream_specifier}] @var{codec} (@emph{input/output,per-stream})
257 Select an encoder (when used before an output file) or a decoder (when used
258 before an input file) for one or more streams. @var{codec} is the name of a
259 decoder/encoder or a special value @code{copy} (output only) to indicate that
260 the stream is not to be re-encoded.
261
262 For example
263 @example
264 ffmpeg -i INPUT -map 0 -c:v libx264 -c:a copy OUTPUT
265 @end example
266 encodes all video streams with libx264 and copies all audio streams.
267
268 For each stream, the last matching @code{c} option is applied, so
269 @example
270 ffmpeg -i INPUT -map 0 -c copy -c:v:1 libx264 -c:a:137 libvorbis OUTPUT
271 @end example
272 will copy all the streams except the second video, which will be encoded with
273 libx264, and the 138th audio, which will be encoded with libvorbis.
274
275 @item -t @var{duration} (@emph{input/output})
276 When used as an input option (before @code{-i}), limit the @var{duration} of
277 data read from the input file.
278
279 When used as an output option (before an output filename), stop writing the
280 output after its duration reaches @var{duration}.
281
282 @var{duration} may be a number in seconds, or in @code{hh:mm:ss[.xxx]} form.
283
284 -to and -t are mutually exclusive and -t has priority.
285
286 @item -to @var{position} (@emph{output})
287 Stop writing the output at @var{position}.
288 @var{position} may be a number in seconds, or in @code{hh:mm:ss[.xxx]} form.
289
290 -to and -t are mutually exclusive and -t has priority.
291
292 @item -fs @var{limit_size} (@emph{output})
293 Set the file size limit, expressed in bytes.
294
295 @item -ss @var{position} (@emph{input/output})
296 When used as an input option (before @code{-i}), seeks in this input file to
297 @var{position}. Note that in most formats it is not possible to seek exactly, so
298 @command{ffmpeg} will seek to the closest seek point before @var{position}.
299 When transcoding and @option{-accurate_seek} is enabled (the default), this
300 extra segment between the seek point and @var{position} will be decoded and
301 discarded. When doing stream copy or when @option{-noaccurate_seek} is used, it
302 will be preserved.
303
304 When used as an output option (before an output filename), decodes but discards
305 input until the timestamps reach @var{position}.
306
307 @var{position} may be either in seconds or in @code{hh:mm:ss[.xxx]} form.
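
For example (file names are placeholders), to extract roughly one minute of
material starting 30 seconds into the input, combining input seeking with
@option{-t}:
@example
ffmpeg -ss 30 -i input.mp4 -t 60 -c copy clip.mp4
@end example
With @code{-c copy} the cut falls on the nearest seek point, as described
above.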
308
309 @item -itsoffset @var{offset} (@emph{input})
310 Set the input time offset.
311
312 @var{offset} must be a time duration specification,
313 see @ref{time duration syntax,,the Time duration section in the ffmpeg-utils(1) manual,ffmpeg-utils}.
314
315 The offset is added to the timestamps of the input files. Specifying
316 a positive offset means that the corresponding streams are delayed by
317 the time duration specified in @var{offset}.
318
319 @item -timestamp @var{date} (@emph{output})
320 Set the recording timestamp in the container.
321
322 @var{date} must be a time duration specification,
323 see @ref{date syntax,,the Date section in the ffmpeg-utils(1) manual,ffmpeg-utils}.
324
325 @item -metadata[:metadata_specifier] @var{key}=@var{value} (@emph{output,per-metadata})
326 Set a metadata key/value pair.
327
328 An optional @var{metadata_specifier} may be given to set metadata
329 on streams or chapters. See @code{-map_metadata} documentation for
330 details.
331
332 This option overrides metadata set with @code{-map_metadata}. It is
333 also possible to delete metadata by using an empty value.
334
335 For example, for setting the title in the output file:
336 @example
337 ffmpeg -i in.avi -metadata title="my title" out.flv
338 @end example
339
340 To set the language of the first audio stream:
341 @example
342 ffmpeg -i INPUT -metadata:s:a:0 language=eng OUTPUT
343 @end example
344
345 @item -target @var{type} (@emph{output})
346 Specify target file type (@code{vcd}, @code{svcd}, @code{dvd}, @code{dv},
347 @code{dv50}). @var{type} may be prefixed with @code{pal-}, @code{ntsc-} or
348 @code{film-} to use the corresponding standard. All the format options
349 (bitrate, codecs, buffer sizes) are then set automatically. You can just type:
350
351 @example
352 ffmpeg -i myfile.avi -target vcd /tmp/vcd.mpg
353 @end example
354
355 Nevertheless you can specify additional options as long as you know
356 they do not conflict with the standard, as in:
357
358 @example
359 ffmpeg -i myfile.avi -target vcd -bf 2 /tmp/vcd.mpg
360 @end example
361
362 @item -dframes @var{number} (@emph{output})
363 Set the number of data frames to output. This is an alias for @code{-frames:d}.
364
365 @item -frames[:@var{stream_specifier}] @var{framecount} (@emph{output,per-stream})
366 Stop writing to the stream after @var{framecount} frames.
367
368 @item -q[:@var{stream_specifier}] @var{q} (@emph{output,per-stream})
369 @itemx -qscale[:@var{stream_specifier}] @var{q} (@emph{output,per-stream})
370 Use fixed quality scale (VBR). The meaning of @var{q}/@var{qscale} is
371 codec-dependent.
372 If @var{qscale} is used without a @var{stream_specifier} then it applies only
373 to the video stream. This is to maintain compatibility with previous behavior,
374 and because specifying the same codec-specific value for two different codecs
375 (that is, audio and video) is generally not what is intended when no
376 stream_specifier is used.
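
For example (file names are placeholders), to encode the video stream with a
fixed quality scale of 3:
@example
ffmpeg -i input.avi -q:v 3 output.mpg
@end example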
377
378 @anchor{filter_option}
379 @item -filter[:@var{stream_specifier}] @var{filtergraph} (@emph{output,per-stream})
380 Create the filtergraph specified by @var{filtergraph} and use it to
381 filter the stream.
382
383 @var{filtergraph} is a description of the filtergraph to apply to
384 the stream, and must have a single input and a single output of the
385 same type of the stream. In the filtergraph, the input is associated
386 to the label @code{in}, and the output to the label @code{out}. See
387 the ffmpeg-filters manual for more information about the filtergraph
388 syntax.
389
390 See the @ref{filter_complex_option,,-filter_complex option} if you
391 want to create filtergraphs with multiple inputs and/or outputs.
392
393 @item -filter_script[:@var{stream_specifier}] @var{filename} (@emph{output,per-stream})
394 This option is similar to @option{-filter}, the only difference is that its
395 argument is the name of the file from which a filtergraph description is to be
396 read.
397
398 @item -pre[:@var{stream_specifier}] @var{preset_name} (@emph{output,per-stream})
399 Specify the preset for matching stream(s).
400
401 @item -stats (@emph{global})
402 Print encoding progress/statistics. It is on by default; to explicitly
403 disable it you need to specify @code{-nostats}.
404
405 @item -progress @var{url} (@emph{global})
406 Send program-friendly progress information to @var{url}.
407
408 Progress information is written approximately every second and at the end of
409 the encoding process. It is made of "@var{key}=@var{value}" lines. @var{key}
410 consists of only alphanumeric characters. The last key of a sequence of
411 progress information is always "progress".
412
413 @item -stdin
414 Enable interaction on standard input. On by default unless standard input is
415 used as an input. To explicitly disable interaction you need to specify
416 @code{-nostdin}.
417
418 Disabling interaction on standard input is useful, for example, if
419 ffmpeg is in the background process group. Roughly the same result can
420 be achieved with @code{ffmpeg ... < /dev/null} but it requires a
421 shell.
422
423 @item -debug_ts (@emph{global})
424 Print timestamp information. It is off by default. This option is
425 mostly useful for testing and debugging purposes, and the output
426 format may change from one version to another, so it should not be
427 employed by portable scripts.
428
429 See also the option @code{-fdebug ts}.
430
431 @item -attach @var{filename} (@emph{output})
432 Add an attachment to the output file. This is supported by a few formats
433 like Matroska for e.g. fonts used in rendering subtitles. Attachments
434 are implemented as a specific type of stream, so this option will add
435 a new stream to the file. It is then possible to use per-stream options
436 on this stream in the usual way. Attachment streams created with this
437 option will be created after all the other streams (i.e. those created
438 with @code{-map} or automatic mappings).
439
440 Note that for Matroska you also have to set the mimetype metadata tag:
441 @example
442 ffmpeg -i INPUT -attach DejaVuSans.ttf -metadata:s:2 mimetype=application/x-truetype-font out.mkv
443 @end example
444 (assuming that the attachment stream will be third in the output file).
445
446 @item -dump_attachment[:@var{stream_specifier}] @var{filename} (@emph{input,per-stream})
447 Extract the matching attachment stream into a file named @var{filename}. If
448 @var{filename} is empty, then the value of the @code{filename} metadata tag
449 will be used.
450
451 E.g. to extract the first attachment to a file named 'out.ttf':
452 @example
453 ffmpeg -dump_attachment:t:0 out.ttf -i INPUT
454 @end example
455 To extract all attachments to files determined by the @code{filename} tag:
456 @example
457 ffmpeg -dump_attachment:t "" -i INPUT
458 @end example
459
460 Technical note -- attachments are implemented as codec extradata, so this
461 option can actually be used to extract extradata from any stream, not just
462 attachments.
463
464 @end table
465
466 @section Video Options
467
468 @table @option
469 @item -vframes @var{number} (@emph{output})
470 Set the number of video frames to output. This is an alias for @code{-frames:v}.
471 @item -r[:@var{stream_specifier}] @var{fps} (@emph{input/output,per-stream})
472 Set frame rate (Hz value, fraction or abbreviation).
473
474 As an input option, ignore any timestamps stored in the file and instead
475 generate timestamps assuming constant frame rate @var{fps}.
476 This is not the same as the @option{-framerate} option used for some input formats
477 like image2 or v4l2 (it used to be the same in older versions of FFmpeg).
478 If in doubt use @option{-framerate} instead of the input option @option{-r}.
479
480 As an output option, duplicate or drop input frames to achieve constant output
481 frame rate @var{fps}.
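
For example (the file pattern is a placeholder), an image sequence read at 10
images per second can be turned into a constant 30 fps output by combining the
@option{-framerate} input option with @option{-r} as an output option:
@example
ffmpeg -framerate 10 -i img%03d.png -r 30 output.mp4
@end example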
482
483 @item -s[:@var{stream_specifier}] @var{size} (@emph{input/output,per-stream})
484 Set frame size.
485
486 As an input option, this is a shortcut for the @option{video_size} private
487 option, recognized by some demuxers for which the frame size is either not
488 stored in the file or is configurable -- e.g. raw video or video grabbers.
489
490 As an output option, this inserts the @code{scale} video filter to the
491 @emph{end} of the corresponding filtergraph. Please use the @code{scale} filter
492 directly to insert it at the beginning or some other place.
493
494 The format is @samp{wxh} (default - same as source).
495
496 @item -aspect[:@var{stream_specifier}] @var{aspect} (@emph{output,per-stream})
497 Set the video display aspect ratio specified by @var{aspect}.
498
499 @var{aspect} can be a floating point number string, or a string of the
500 form @var{num}:@var{den}, where @var{num} and @var{den} are the
501 numerator and denominator of the aspect ratio. For example "4:3",
502 "16:9", "1.3333", and "1.7777" are valid argument values.
503
504 If used together with @option{-vcodec copy}, it will affect the aspect ratio
505 stored at container level, but not the aspect ratio stored in encoded
506 frames, if it exists.
507
508 @item -vn (@emph{output})
509 Disable video recording.
510
511 @item -vcodec @var{codec} (@emph{output})
512 Set the video codec. This is an alias for @code{-codec:v}.
513
514 @item -pass[:@var{stream_specifier}] @var{n} (@emph{output,per-stream})
515 Select the pass number (1 or 2). It is used to do two-pass
516 video encoding. The statistics of the video are recorded in the first
517 pass into a log file (see also the option -passlogfile),
518 and in the second pass that log file is used to generate the video
519 at the exact requested bitrate.
520 On pass 1, you may just deactivate audio and set output to null,
521 examples for Windows and Unix:
522 @example
523 ffmpeg -i foo.mov -c:v libxvid -pass 1 -an -f rawvideo -y NUL
524 ffmpeg -i foo.mov -c:v libxvid -pass 1 -an -f rawvideo -y /dev/null
525 @end example
526
527 @item -passlogfile[:@var{stream_specifier}] @var{prefix} (@emph{output,per-stream})
528 Set the two-pass log file name prefix to @var{prefix}; the default file name
529 prefix is ``ffmpeg2pass''. The complete file name will be
530 @file{PREFIX-N.log}, where N is a number specific to the output
531 stream.
532
533 @item -vf @var{filtergraph} (@emph{output})
534 Create the filtergraph specified by @var{filtergraph} and use it to
535 filter the stream.
536
537 This is an alias for @code{-filter:v}, see the @ref{filter_option,,-filter option}.
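
For example (file names are placeholders), to scale the video to a width of
1280 pixels while keeping the aspect ratio and copying the audio:
@example
ffmpeg -i input.mp4 -vf scale=1280:-2 -c:a copy output.mp4
@end example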
538 @end table
539
540 @section Advanced Video options
541
542 @table @option
543 @item -pix_fmt[:@var{stream_specifier}] @var{format} (@emph{input/output,per-stream})
544 Set pixel format. Use @code{-pix_fmts} to show all the supported
545 pixel formats.
546 If the selected pixel format cannot be used, ffmpeg will print a
547 warning and select the best pixel format supported by the encoder.
548 If @var{pix_fmt} is prefixed by a @code{+}, ffmpeg will exit with an error
549 if the requested pixel format can not be selected, and automatic conversions
550 inside filtergraphs are disabled.
551 If @var{pix_fmt} is a single @code{+}, ffmpeg selects the same pixel format
552 as the input (or graph output) and automatic conversions are disabled.
553
554 @item -sws_flags @var{flags} (@emph{input/output})
555 Set SwScaler flags.
556 @item -vdt @var{n}
557 Discard threshold.
558
559 @item -rc_override[:@var{stream_specifier}] @var{override} (@emph{output,per-stream})
560 Rate control override for specific intervals, formatted as a list of
561 "int,int,int" triplets separated with slashes. The first two values are the
562 beginning and end frame numbers, the last one is the quantizer to use if
563 positive, or the quality factor if negative.
564
565 @item -ilme
566 Force interlacing support in encoder (MPEG-2 and MPEG-4 only).
567 Use this option if your input file is interlaced and you want
568 to keep the interlaced format for minimum losses.
569 The alternative is to deinterlace the input stream with
570 @option{-deinterlace}, but deinterlacing introduces losses.
571 @item -psnr
572 Calculate PSNR of compressed frames.
573 @item -vstats
574 Dump video coding statistics to @file{vstats_HHMMSS.log}.
575 @item -vstats_file @var{file}
576 Dump video coding statistics to @var{file}.
577 @item -top[:@var{stream_specifier}] @var{n} (@emph{output,per-stream})
578 top=1/bottom=0/auto=-1 field first
579 @item -dc @var{precision}
580 Intra_dc_precision.
581 @item -vtag @var{fourcc/tag} (@emph{output})
582 Force video tag/fourcc. This is an alias for @code{-tag:v}.
583 @item -qphist (@emph{global})
584 Show QP histogram
585 @item -vbsf @var{bitstream_filter}
586 Deprecated, see -bsf
587
588 @item -force_key_frames[:@var{stream_specifier}] @var{time}[,@var{time}...] (@emph{output,per-stream})
589 @item -force_key_frames[:@var{stream_specifier}] expr:@var{expr} (@emph{output,per-stream})
590 Force key frames at the specified timestamps, more precisely at the first
591 frames after each specified time.
592
593 If the argument is prefixed with @code{expr:}, the string @var{expr}
594 is interpreted as an expression and is evaluated for each frame. A
595 key frame is forced if the evaluation is non-zero.
596
597 If one of the times is "@code{chapters}[@var{delta}]", it is expanded into
598 the time of the beginning of all chapters in the file, shifted by
599 @var{delta}, expressed as a time in seconds.
600 This option can be useful to ensure that a seek point is present at a
601 chapter mark or any other designated place in the output file.
602
603 For example, to insert a key frame at 5 minutes, plus key frames 0.1 second
604 before the beginning of every chapter:
605 @example
606 -force_key_frames 0:05:00,chapters-0.1
607 @end example
608
609 The expression in @var{expr} can contain the following constants:
610 @table @option
611 @item n
612 the number of the currently processed frame, starting from 0
613 @item n_forced
614 the number of forced frames
615 @item prev_forced_n
616 the number of the previous forced frame; it is @code{NAN} when no
617 keyframe has been forced yet
618 @item prev_forced_t
619 the time of the previous forced frame; it is @code{NAN} when no
620 keyframe has been forced yet
621 @item t
622 the time of the currently processed frame
623 @end table
624
625 For example to force a key frame every 5 seconds, you can specify:
626 @example
627 -force_key_frames expr:gte(t,n_forced*5)
628 @end example
629
630 To force a key frame 5 seconds after the time of the last forced one,
631 starting from second 13:
632 @example
633 -force_key_frames expr:if(isnan(prev_forced_t),gte(t,13),gte(t,prev_forced_t+5))
634 @end example
635
636 Note that forcing too many keyframes is very harmful for the lookahead
637 algorithms of certain encoders: using fixed-GOP options or similar
638 would be more efficient.
639
640 @item -copyinkf[:@var{stream_specifier}] (@emph{output,per-stream})
641 When doing stream copy, copy also non-key frames found at the
642 beginning.
643
644 @item -hwaccel[:@var{stream_specifier}] @var{hwaccel} (@emph{input,per-stream})
645 Use hardware acceleration to decode the matching stream(s). The allowed values
646 of @var{hwaccel} are:
647 @table @option
648 @item none
649 Do not use any hardware acceleration (the default).
650
651 @item auto
652 Automatically select the hardware acceleration method.
653
654 @item vda
655 Use Apple VDA hardware acceleration.
656
657 @item vdpau
658 Use VDPAU (Video Decode and Presentation API for Unix) hardware acceleration.
659
660 @item dxva2
661 Use DXVA2 (DirectX Video Acceleration) hardware acceleration.
662 @end table
663
664 This option has no effect if the selected hwaccel is not available or not
665 supported by the chosen decoder.
666
667 Note that most acceleration methods are intended for playback and will not be
668 faster than software decoding on modern CPUs. Additionally, @command{ffmpeg}
669 will usually need to copy the decoded frames from the GPU memory into the system
670 memory, resulting in further performance loss. This option is thus mainly
671 useful for testing.
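
For example (the file name is a placeholder), to test VDPAU-accelerated
decoding without producing an output file:
@example
ffmpeg -hwaccel vdpau -i input.mkv -f null -
@end example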
672
673 @item -hwaccel_device[:@var{stream_specifier}] @var{hwaccel_device} (@emph{input,per-stream})
674 Select a device to use for hardware acceleration.
675
676 This option only makes sense when the @option{-hwaccel} option is also
677 specified. Its exact meaning depends on the specific hardware acceleration
678 method chosen.
679
680 @table @option
681 @item vdpau
682 For VDPAU, this option specifies the X11 display/screen to use. If this option
683 is not specified, the value of the @var{DISPLAY} environment variable is used.
684
685 @item dxva2
686 For DXVA2, this option should contain the number of the display adapter to use.
687 If this option is not specified, the default adapter is used.
688 @end table
689 @end table
690
691 @section Audio Options
692
693 @table @option
694 @item -aframes @var{number} (@emph{output})
695 Set the number of audio frames to output. This is an alias for @code{-frames:a}.
696 @item -ar[:@var{stream_specifier}] @var{freq} (@emph{input/output,per-stream})
697 Set the audio sampling frequency. For output streams it is set by
698 default to the frequency of the corresponding input stream. For input
699 streams this option only makes sense for audio grabbing devices and raw
700 demuxers and is mapped to the corresponding demuxer options.
701 @item -aq @var{q} (@emph{output})
702 Set the audio quality (codec-specific, VBR). This is an alias for -q:a.
703 @item -ac[:@var{stream_specifier}] @var{channels} (@emph{input/output,per-stream})
704 Set the number of audio channels. For output streams it is set by
705 default to the number of input audio channels. For input streams
706 this option only makes sense for audio grabbing devices and raw demuxers
707 and is mapped to the corresponding demuxer options.
708 @item -an (@emph{output})
709 Disable audio recording.
710 @item -acodec @var{codec} (@emph{input/output})
711 Set the audio codec. This is an alias for @code{-codec:a}.
712 @item -sample_fmt[:@var{stream_specifier}] @var{sample_fmt} (@emph{output,per-stream})
713 Set the audio sample format. Use @code{-sample_fmts} to get a list
714 of supported sample formats.
715
716 @item -af @var{filtergraph} (@emph{output})
717 Create the filtergraph specified by @var{filtergraph} and use it to
718 filter the stream.
719
720 This is an alias for @code{-filter:a}, see the @ref{filter_option,,-filter option}.
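
For example (file names are placeholders), to halve the audio volume while
copying the video stream:
@example
ffmpeg -i input.mkv -c:v copy -af volume=0.5 output.mkv
@end example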
721 @end table
722
723 @section Advanced Audio options
724
725 @table @option
726 @item -atag @var{fourcc/tag} (@emph{output})
727 Force audio tag/fourcc. This is an alias for @code{-tag:a}.
728 @item -absf @var{bitstream_filter}
729 Deprecated, see -bsf
730 @item -guess_layout_max @var{channels} (@emph{input,per-stream})
731 If some input channel layout is not known, try to guess only if it
732 corresponds to at most the specified number of channels. For example, 2
733 tells to @command{ffmpeg} to recognize 1 channel as mono and 2 channels as
734 stereo but not 6 channels as 5.1. The default is to always try to guess. Use
735 0 to disable all guessing.
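
For example (file names are placeholders), to guess layouts only for inputs
with at most two channels:
@example
ffmpeg -guess_layout_max 2 -i input.wav output.wav
@end example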
736 @end table
737
738 @section Subtitle options
739
740 @table @option
741 @item -scodec @var{codec} (@emph{input/output})
742 Set the subtitle codec. This is an alias for @code{-codec:s}.
743 @item -sn (@emph{output})
744 Disable subtitle recording.
745 @item -sbsf @var{bitstream_filter}
746 Deprecated, see -bsf
747 @end table
748
749 @section Advanced Subtitle options
750
751 @table @option
752
753 @item -fix_sub_duration
754 Fix subtitles durations. For each subtitle, wait for the next packet in the
755 same stream and adjust the duration of the first to avoid overlap. This is
756 necessary with some subtitles codecs, especially DVB subtitles, because the
757 duration in the original packet is only a rough estimate and the end is
758 actually marked by an empty subtitle frame. Failing to use this option when
759 necessary can result in exaggerated durations or muxing failures due to
760 non-monotonic timestamps.
761
762 Note that this option will delay the output of all data until the next
763 subtitle packet is decoded: it may increase memory consumption and latency a
764 lot.
765
766 @item -canvas_size @var{size}
767 Set the size of the canvas used to render subtitles.
768
769 @end table
770
771 @section Advanced options
772
773 @table @option
774 @item -map [-]@var{input_file_id}[:@var{stream_specifier}][,@var{sync_file_id}[:@var{stream_specifier}]] | @var{[linklabel]} (@emph{output})
775
776 Designate one or more input streams as a source for the output file. Each input
777 stream is identified by the input file index @var{input_file_id} and
778 the input stream index @var{input_stream_id} within the input
779 file. Both indices start at 0. If specified,
780 @var{sync_file_id}:@var{stream_specifier} sets which input stream
781 is used as a presentation sync reference.
782
783 The first @code{-map} option on the command line specifies the
784 source for output stream 0, the second @code{-map} option specifies
785 the source for output stream 1, etc.
786
787 A @code{-} character before the stream identifier creates a "negative" mapping.
788 It disables matching streams from already created mappings.
789
790 An alternative @var{[linklabel]} form will map outputs from complex filter
791 graphs (see the @option{-filter_complex} option) to the output file.
792 @var{linklabel} must correspond to a defined output link label in the graph.
793
794 For example, to map ALL streams from the first input file to output
795 @example
796 ffmpeg -i INPUT -map 0 output
797 @end example
798
799 For example, if you have two audio streams in the first input file,
800 these streams are identified by "0:0" and "0:1". You can use
801 @code{-map} to select which streams to place in an output file. For
802 example:
803 @example
804 ffmpeg -i INPUT -map 0:1 out.wav
805 @end example
806 will map the input stream in @file{INPUT} identified by "0:1" to
807 the (single) output stream in @file{out.wav}.
808
809 For example, to select the stream with index 2 from input file
810 @file{a.mov} (specified by the identifier "0:2"), and stream with
811 index 6 from input @file{b.mov} (specified by the identifier "1:6"),
812 and copy them to the output file @file{out.mov}:
813 @example
814 ffmpeg -i a.mov -i b.mov -c copy -map 0:2 -map 1:6 out.mov
815 @end example
816
817 To select all video and the third audio stream from an input file:
818 @example
819 ffmpeg -i INPUT -map 0:v -map 0:a:2 OUTPUT
820 @end example
821
822 To map all the streams except the second audio, use negative mappings
823 @example
824 ffmpeg -i INPUT -map 0 -map -0:a:1 OUTPUT
825 @end example
826
827 To pick the English audio stream:
828 @example
829 ffmpeg -i INPUT -map 0:m:language:eng OUTPUT
830 @end example
831
832 Note that using this option disables the default mappings for this output file.
833
834 @item -map_channel [@var{input_file_id}.@var{stream_specifier}.@var{channel_id}|-1][:@var{output_file_id}.@var{stream_specifier}]
835 Map an audio channel from a given input to an output. If
836 @var{output_file_id}.@var{stream_specifier} is not set, the audio channel will
837 be mapped on all the audio streams.
838
839 Using "-1" instead of
840 @var{input_file_id}.@var{stream_specifier}.@var{channel_id} will map a muted
841 channel.
842
843 For example, assuming @var{INPUT} is a stereo audio file, you can switch the
844 two audio channels with the following command:
845 @example
846 ffmpeg -i INPUT -map_channel 0.0.1 -map_channel 0.0.0 OUTPUT
847 @end example
848
849 If you want to mute the first channel and keep the second:
850 @example
851 ffmpeg -i INPUT -map_channel -1 -map_channel 0.0.1 OUTPUT
852 @end example
853
854 The order of the "-map_channel" option specifies the order of the channels in
855 the output stream. The output channel layout is guessed from the number of
856 channels mapped (mono if one "-map_channel", stereo if two, etc.). Using "-ac"
857 in combination with "-map_channel" causes the channel gain levels to be updated if
858 input and output channel layouts don't match (for instance two "-map_channel"
859 options and "-ac 6").
860
861 You can also extract each channel of an input to specific outputs; the following
862 command extracts two channels of the @var{INPUT} audio stream (file 0, stream 0)
863 to the respective @var{OUTPUT_CH0} and @var{OUTPUT_CH1} outputs:
864 @example
865 ffmpeg -i INPUT -map_channel 0.0.0 OUTPUT_CH0 -map_channel 0.0.1 OUTPUT_CH1
866 @end example
867
868 The following example splits the channels of a stereo input into two separate
869 streams, which are put into the same output file:
870 @example
871 ffmpeg -i stereo.wav -map 0:0 -map 0:0 -map_channel 0.0.0:0.0 -map_channel 0.0.1:0.1 -y out.ogg
872 @end example
873
874 Note that currently each output stream can only contain channels from a single
875 input stream; you can't for example use "-map_channel" to pick multiple input
876 audio channels contained in different streams (from the same or different files)
877 and merge them into a single output stream. It is therefore not currently
878 possible, for example, to turn two separate mono streams into a single stereo
879 stream. However splitting a stereo stream into two single channel mono streams
880 is possible.
881
882 If you need this feature, a possible workaround is to use the @emph{amerge}
883 filter. For example, if you need to merge a media (here @file{input.mkv}) with 2
884 mono audio streams into one single stereo channel audio stream (and keep the
885 video stream), you can use the following command:
886 @example
887 ffmpeg -i input.mkv -filter_complex "[0:1] [0:2] amerge" -c:a pcm_s16le -c:v copy output.mkv
888 @end example
889
890 @item -map_metadata[:@var{metadata_spec_out}] @var{infile}[:@var{metadata_spec_in}] (@emph{output,per-metadata})
891 Set metadata information of the next output file from @var{infile}. Note that
892 those are file indices (zero-based), not filenames.
893 Optional @var{metadata_spec_in/out} parameters specify which metadata to copy.
894 A metadata specifier can have the following forms:
895 @table @option
896 @item @var{g}
897 global metadata, i.e. metadata that applies to the whole file
898
899 @item @var{s}[:@var{stream_spec}]
900 per-stream metadata. @var{stream_spec} is a stream specifier as described
901 in the @ref{Stream specifiers} chapter. In an input metadata specifier, the first
902 matching stream is copied from. In an output metadata specifier, all matching
903 streams are copied to.
904
905 @item @var{c}:@var{chapter_index}
906 per-chapter metadata. @var{chapter_index} is the zero-based chapter index.
907
908 @item @var{p}:@var{program_index}
909 per-program metadata. @var{program_index} is the zero-based program index.
910 @end table
911 If metadata specifier is omitted, it defaults to global.
912
913 By default, global metadata is copied from the first input file,
914 per-stream and per-chapter metadata is copied along with streams/chapters. These
915 default mappings are disabled by creating any mapping of the relevant type. A negative
916 file index can be used to create a dummy mapping that just disables automatic copying.
917
918 For example to copy metadata from the first stream of the input file to global metadata
919 of the output file:
920 @example
921 ffmpeg -i in.ogg -map_metadata 0:s:0 out.mp3
922 @end example
923
924 To do the reverse, i.e. copy global metadata to all audio streams:
925 @example
926 ffmpeg -i in.mkv -map_metadata:s:a 0:g out.mkv
927 @end example
928 Note that simple @code{0} would work as well in this example, since global
929 metadata is assumed by default.
930
931 @item -map_chapters @var{input_file_index} (@emph{output})
932 Copy chapters from input file with index @var{input_file_index} to the next
933 output file. If no chapter mapping is specified, then chapters are copied from
934 the first input file with at least one chapter. Use a negative file index to
935 disable any chapter copying.
936
937 @item -benchmark (@emph{global})
938 Show benchmarking information at the end of an encode.
939 Shows CPU time used and maximum memory consumption.
940 Maximum memory consumption is not supported on all systems,
941 it will usually display as 0 if not supported.
942 @item -benchmark_all (@emph{global})
943 Show benchmarking information during the encode.
944 Shows CPU time used in various steps (audio/video encode/decode).
945 @item -timelimit @var{duration} (@emph{global})
946 Exit after ffmpeg has been running for @var{duration} seconds.
947 @item -dump (@emph{global})
948 Dump each input packet to stderr.
949 @item -hex (@emph{global})
950 When dumping packets, also dump the payload.
951 @item -re (@emph{input})
952 Read input at native frame rate. Mainly used to simulate a grab device,
953 or live input stream (e.g. when reading from a file). Should not be used
954 with actual grab devices or live input streams (where it can cause packet
955 loss).
956 By default @command{ffmpeg} attempts to read the input(s) as fast as possible.
957 This option will slow down the reading of the input(s) to the native frame rate
958 of the input(s). It is useful for real-time output (e.g. live streaming).
959 @item -loop_input
960 Loop over the input stream. Currently it works only for image
961 streams. This option is used for automatic FFserver testing.
962 This option is deprecated, use -loop 1.
963 @item -loop_output @var{number_of_times}
964 Repeatedly loop output for formats that support looping such as animated GIF
965 (0 will loop the output infinitely).
966 This option is deprecated, use -loop.
967 @item -vsync @var{parameter}
968 Video sync method.
969 For compatibility reasons old values can be specified as numbers.
970 Newly added values will have to be specified as strings always.
971
972 @table @option
973 @item 0, passthrough
974 Each frame is passed with its timestamp from the demuxer to the muxer.
975 @item 1, cfr
976 Frames will be duplicated and dropped to achieve exactly the requested
977 constant frame rate.
978 @item 2, vfr
979 Frames are passed through with their timestamp or dropped so as to
980 prevent 2 frames from having the same timestamp.
981 @item drop
982 As passthrough but destroys all timestamps, making the muxer generate
983 fresh timestamps based on frame-rate.
984 @item -1, auto
985 Chooses between 1 and 2 depending on muxer capabilities. This is the
986 default method.
987 @end table
988
989 Note that the timestamps may be further modified by the muxer, after this.
990 For example, in the case that the format option @option{avoid_negative_ts}
991 is enabled.
992
993 With -map you can select from which stream the timestamps should be
994 taken. You can leave either video or audio unchanged and sync the
995 remaining stream(s) to the unchanged one.
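
For example (file names are placeholders), to produce a strictly constant
30 fps output:
@example
ffmpeg -i input.mkv -vsync cfr -r 30 output.mp4
@end example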
996
997 @item -async @var{samples_per_second}
998 Audio sync method. "Stretches/squeezes" the audio stream to match the timestamps,
999 the parameter is the maximum samples per second by which the audio is changed.
1000 -async 1 is a special case where only the start of the audio stream is corrected
1001 without any later correction.
1002
1003 Note that the timestamps may be further modified by the muxer, after this.
1004 For example, in the case that the format option @option{avoid_negative_ts}
1005 is enabled.
1006
1007 This option has been deprecated. Use the @code{aresample} audio filter instead.
1008
1009 @item -copyts
1010 Do not process input timestamps, but keep their values without trying
1011 to sanitize them. In particular, do not remove the initial start time
1012 offset value.
1013
1014 Note that, depending on the @option{vsync} option or on specific muxer
1015 processing (e.g. in case the format option @option{avoid_negative_ts}
1016 is enabled) the output timestamps may mismatch with the input
1017 timestamps even when this option is selected.
1018
1019 @item -start_at_zero
1020 When used with @option{copyts}, shift input timestamps so they start at zero.
1021
1022 This means that using e.g. @code{-ss 50} will make output timestamps start at
1023 50 seconds, regardless of what timestamp the input file started at.
1024
1025 @item -copytb @var{mode}
1026 Specify how to set the encoder timebase when stream copying. @var{mode} is an
1027 integer numeric value, and can assume one of the following values:
1028
1029 @table @option
1030 @item 1
1031 Use the demuxer timebase.
1032
1033 The time base is copied to the output encoder from the corresponding input
1034 demuxer. This is sometimes required to avoid non monotonically increasing
1035 timestamps when copying video streams with variable frame rate.
1036
1037 @item 0
1038 Use the decoder timebase.
1039
1040 The time base is copied to the output encoder from the corresponding input
1041 decoder.
1042
1043 @item -1
1044 Try to make the choice automatically, in order to generate a sane output.
1045 @end table
1046
1047 Default value is -1.
1048
1049 @item -shortest (@emph{output})
1050 Finish encoding when the shortest input stream ends.
1051 @item -dts_delta_threshold
1052 Timestamp discontinuity delta threshold.
1053 @item -muxdelay @var{seconds} (@emph{input})
1054 Set the maximum demux-decode delay.
1055 @item -muxpreload @var{seconds} (@emph{input})
1056 Set the initial demux-decode delay.
1057 @item -streamid @var{output-stream-index}:@var{new-value} (@emph{output})
1058 Assign a new stream-id value to an output stream. This option should be
1059 specified prior to the output filename to which it applies.
1060 For the situation where multiple output files exist, a streamid
1061 may be reassigned to a different value.
1062
1063 For example, to set the stream 0 PID to 33 and the stream 1 PID to 36 for
1064 an output mpegts file:
1065 @example
1066 ffmpeg -i infile -streamid 0:33 -streamid 1:36 out.ts
1067 @end example
1068
1069 @item -bsf[:@var{stream_specifier}] @var{bitstream_filters} (@emph{output,per-stream})
1070 Set bitstream filters for matching streams. @var{bitstream_filters} is
1071 a comma-separated list of bitstream filters. Use the @code{-bsfs} option
1072 to get the list of bitstream filters.
1073 @example
1074 ffmpeg -i h264.mp4 -c:v copy -bsf:v h264_mp4toannexb -an out.h264
1075 @end example
1076 @example
1077 ffmpeg -i file.mov -an -vn -bsf:s mov2textsub -c:s copy -f rawvideo sub.txt
1078 @end example
1079
1080 @item -tag[:@var{stream_specifier}] @var{codec_tag} (@emph{input/output,per-stream})
1081 Force a tag/fourcc for matching streams.
1082
1083 @item -timecode @var{hh}:@var{mm}:@var{ss}SEP@var{ff}
1084 Specify Timecode for writing. @var{SEP} is ':' for non drop timecode and ';'
1085 (or '.') for drop.
1086 @example
1087 ffmpeg -i input.mpg -timecode 01:02:03.04 -r 30000/1001 -s ntsc output.mpg
1088 @end example
1089
1090 @anchor{filter_complex_option}
1091 @item -filter_complex @var{filtergraph} (@emph{global})
1092 Define a complex filtergraph, i.e. one with arbitrary number of inputs and/or
1093 outputs. For simple graphs -- those with one input and one output of the same
1094 type -- see the @option{-filter} options. @var{filtergraph} is a description of
1095 the filtergraph, as described in the ``Filtergraph syntax'' section of the
1096 ffmpeg-filters manual.
1097
1098 Input link labels must refer to input streams using the
1099 @code{[file_index:stream_specifier]} syntax (i.e. the same as @option{-map}
1100 uses). If @var{stream_specifier} matches multiple streams, the first one will be
1101 used. An unlabeled input will be connected to the first unused input stream of
1102 the matching type.
1103
1104 Output link labels are referred to with @option{-map}. Unlabeled outputs are
1105 added to the first output file.
1106
1107 Note that with this option it is possible to use only lavfi sources without
1108 normal input files.
1109
1110 For example, to overlay an image over video
1111 @example
1112 ffmpeg -i video.mkv -i image.png -filter_complex '[0:v][1:v]overlay[out]' -map
1113 '[out]' out.mkv
1114 @end example
1115 Here @code{[0:v]} refers to the first video stream in the first input file,
1116 which is linked to the first (main) input of the overlay filter. Similarly the
1117 first video stream in the second input is linked to the second (overlay) input
1118 of overlay.
1119
1120 Assuming there is only one video stream in each input file, we can omit input
1121 labels, so the above is equivalent to
1122 @example
1123 ffmpeg -i video.mkv -i image.png -filter_complex 'overlay[out]' -map
1124 '[out]' out.mkv
1125 @end example
1126
1127 Furthermore we can omit the output label and the single output from the filter
1128 graph will be added to the output file automatically, so we can simply write
1129 @example
1130 ffmpeg -i video.mkv -i image.png -filter_complex 'overlay' out.mkv
1131 @end example
1132
1133 To generate 5 seconds of pure red video using lavfi @code{color} source:
1134 @example
1135 ffmpeg -filter_complex 'color=c=red' -t 5 out.mkv
1136 @end example
1137
1138 @item -lavfi @var{filtergraph} (@emph{global})
1139 Define a complex filtergraph, i.e. one with arbitrary number of inputs and/or
1140 outputs. Equivalent to @option{-filter_complex}.
1141
1142 @item -filter_complex_script @var{filename} (@emph{global})
1143 This option is similar to @option{-filter_complex}, the only difference is that
1144 its argument is the name of the file from which a complex filtergraph
1145 description is to be read.
1146
1147 @item -accurate_seek (@emph{input})
1148 This option enables or disables accurate seeking in input files with the
1149 @option{-ss} option. It is enabled by default, so seeking is accurate when
1150 transcoding. Use @option{-noaccurate_seek} to disable it, which may be useful
1151 e.g. when copying some streams and transcoding the others.
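
For example (file names are placeholders), to start at the seek point nearest
to the requested position while copying the audio and transcoding the video:
@example
ffmpeg -noaccurate_seek -ss 120 -i input.mkv -c:v libx264 -c:a copy cut.mkv
@end example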
1152
1153 @item -override_ffserver (@emph{global})
1154 Overrides the input specifications from @command{ffserver}. Using this
1155 option you can map any input stream to @command{ffserver} and control
1156 many aspects of the encoding from @command{ffmpeg}. Without this
1157 option @command{ffmpeg} will transmit to @command{ffserver} what is
1158 requested by @command{ffserver}.
1159
1160 The option is intended for cases where features are needed that cannot be
1161 specified to @command{ffserver} but can be to @command{ffmpeg}.
1162
1163 @item -discard (@emph{input})
1164 Allows discarding specific streams or frames of streams at the demuxer.
1165 Not all demuxers support this.
1166
1167 @table @option
1168 @item none
1169 Discard no frame.
1170
1171 @item default
1172 Default, which discards no frames.
1173
1174 @item noref
1175 Discard all non-reference frames.
1176
1177 @item bidir
1178 Discard all bidirectional frames.
1179
1180 @item nokey
1181 Discard all frames except keyframes.
1182
1183 @item all
1184 Discard all frames.
1185 @end table
1186
1187 @end table
1188
1189 As a special exception, you can use a bitmap subtitle stream as input: it
1190 will be converted into a video with the same size as the largest video in
1191 the file, or 720x576 if no video is present. Note that this is an
1192 experimental and temporary solution. It will be removed once libavfilter has
1193 proper support for subtitles.
1194
1195 For example, to hardcode subtitles on top of a DVB-T recording stored in
1196 MPEG-TS format, delaying the subtitles by 1 second:
1197 @example
1198 ffmpeg -i input.ts -filter_complex \
1199 '[#0x2ef] setpts=PTS+1/TB [sub] ; [#0x2d0] [sub] overlay' \
1200 -sn -map '#0x2dc' output.mkv
1201 @end example
1202 (0x2d0, 0x2dc and 0x2ef are the MPEG-TS PIDs of respectively the video,
1203 audio and subtitles streams; 0:0, 0:3 and 0:7 would have worked too)
1204
1205 @section Preset files
1206 A preset file contains a sequence of @var{option}=@var{value} pairs,
1207 one for each line, specifying a sequence of options which would be
1208 awkward to specify on the command line. Lines starting with the hash
1209 ('#') character are ignored and are used to provide comments. Check
1210 the @file{presets} directory in the FFmpeg source tree for examples.
1211
1212 Preset files are specified with the @code{vpre}, @code{apre},
1213 @code{spre}, and @code{fpre} options. The @code{fpre} option takes the
1214 filename of the preset instead of a preset name as input and can be
1215 used for any kind of codec. For the @code{vpre}, @code{apre}, and
1216 @code{spre} options, the options specified in a preset file are
1217 applied to the currently selected codec of the same type as the preset
1218 option.
1219
1220 The argument passed to the @code{vpre}, @code{apre}, and @code{spre}
1221 preset options identifies the preset file to use according to the
1222 following rules:
1223
1224 First ffmpeg searches for a file named @var{arg}.ffpreset in the
1225 directories @file{$FFMPEG_DATADIR} (if set), and @file{$HOME/.ffmpeg}, and in
1226 the datadir defined at configuration time (usually @file{PREFIX/share/ffmpeg})
1227 or in a @file{ffpresets} folder along the executable on win32,
1228 in that order. For example, if the argument is @code{libvpx-1080p}, it will
1229 search for the file @file{libvpx-1080p.ffpreset}.
1230
1231 If no such file is found, then ffmpeg will search for a file named
1232 @var{codec_name}-@var{arg}.ffpreset in the above-mentioned
1233 directories, where @var{codec_name} is the name of the codec to which
1234 the preset file options will be applied. For example, if you select
1235 the video codec with @code{-vcodec libvpx} and use @code{-vpre 1080p},
1236 then it will search for the file @file{libvpx-1080p.ffpreset}.
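
As an illustration only (the preset name and values below are hypothetical,
not a tuned preset shipped with FFmpeg), a file named
@file{libx264-myprofile.ffpreset} could contain plain option/value pairs:
@example
# hypothetical values
b=5000k
g=250
bf=2
@end example
It would then be picked up by @code{-vcodec libx264 -vpre myprofile}.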
1237 @c man end OPTIONS
1238
1239 @chapter Tips
1240 @c man begin TIPS
1241
1242 @itemize
1243 @item
1244 For streaming at very low bitrates, use a low frame rate
1245 and a small GOP size. This is especially true for RealVideo where
1246 the Linux player does not seem to be very fast, so it can miss
1247 frames. An example is:
1248
1249 @example
1250 ffmpeg -g 3 -r 3 -t 10 -b:v 50k -s qcif -f rv10 /tmp/b.rm
1251 @end example
1252
1253 @item
1254 The parameter 'q' which is displayed while encoding is the current
1255 quantizer. The value 1 indicates that a very good quality could
1256 be achieved. The value 31 indicates the worst quality. If q=31 appears
1257 too often, it means that the encoder cannot compress enough to meet
1258 your bitrate. You must either increase the bitrate, decrease the
1259 frame rate or decrease the frame size.
1260
1261 @item
1262 If your computer is not fast enough, you can speed up the
1263 compression at the expense of the compression ratio. You can use
1264 '-me zero' to speed up motion estimation, and '-g 0' to disable
1265 motion estimation completely (you have only I-frames, which means it
1266 is about as good as JPEG compression).
1267
1268 @item
1269 To have very low audio bitrates, reduce the sampling frequency
1270 (down to 22050 Hz for MPEG audio, 22050 or 11025 for AC-3).
1271
1272 @item
1273 To have a constant quality (but a variable bitrate), use the option
1274 '-qscale n' where 'n' is between 1 (excellent quality) and 31 (worst
1275 quality).
1276
1277 @end itemize
1278 @c man end TIPS
1279
1280 @chapter Examples
1281 @c man begin EXAMPLES
1282
1283 @section Preset files
1284
1285 A preset file contains a sequence of @var{option=value} pairs, one for
1286 each line, specifying a sequence of options which can be specified also on
1287 the command line. Lines starting with the hash ('#') character are ignored and
1288 are used to provide comments. Empty lines are also ignored. Check the
1289 @file{presets} directory in the FFmpeg source tree for examples.
1290
1291 Preset files are specified with the @code{pre} option; this option takes a
1292 preset name as input. FFmpeg searches for a file named @var{preset_name}.avpreset in
1293 the directories @file{$AVCONV_DATADIR} (if set), and @file{$HOME/.ffmpeg}, and in
1294 the data directory defined at configuration time (usually @file{$PREFIX/share/ffmpeg})
1295 in that order. For example, if the argument is @code{libx264-max}, it will
1296 search for the file @file{libx264-max.avpreset}.
1297
1298 @section Video and Audio grabbing
1299
1300 If you specify the input format and device then ffmpeg can grab video
1301 and audio directly.
1302
1303 @example
1304 ffmpeg -f oss -i /dev/dsp -f video4linux2 -i /dev/video0 /tmp/out.mpg
1305 @end example
1306
1307 Or with an ALSA audio source (mono input, card id 1) instead of OSS:
1308 @example
1309 ffmpeg -f alsa -ac 1 -i hw:1 -f video4linux2 -i /dev/video0 /tmp/out.mpg
1310 @end example
1311
1312 Note that you must activate the right video source and channel before
1313 launching ffmpeg, using any TV viewer such as
1314 @uref{http://linux.bytesex.org/xawtv/, xawtv} by Gerd Knorr. You also
1315 have to set the audio recording levels correctly with a
1316 standard mixer.
1317
1318 @section X11 grabbing
1319
1320 Grab the X11 display with ffmpeg via
1321
1322 @example
1323 ffmpeg -f x11grab -video_size cif -framerate 25 -i :0.0 /tmp/out.mpg
1324 @end example
1325
1326 0.0 is the display.screen number of your X11 server, the same as
1327 the DISPLAY environment variable.
1328
1329 @example
1330 ffmpeg -f x11grab -video_size cif -framerate 25 -i :0.0+10,20 /tmp/out.mpg
1331 @end example
1332
1333 0.0 is the display.screen number of your X11 server, the same as the DISPLAY
1334 environment variable; 10 is the x-offset and 20 the y-offset for the grabbing.
1335
1336 @section Video and Audio file format conversion
1337
1338 Any supported file format and protocol can serve as input to ffmpeg:
1339
1340 Examples:
1341 @itemize
1342 @item
1343 You can use YUV files as input:
1344
1345 @example
1346 ffmpeg -i /tmp/test%d.Y /tmp/out.mpg
1347 @end example
1348
1349 It will use the files:
1350 @example
1351 /tmp/test0.Y, /tmp/test0.U, /tmp/test0.V,
1352 /tmp/test1.Y, /tmp/test1.U, /tmp/test1.V, etc...
1353 @end example
1354
1355 The Y files use twice the resolution of the U and V files. They are
1356 raw files, without a header. They can be generated by all decent video
1357 decoders. You must specify the size of the image with the @option{-s} option
1358 if ffmpeg cannot guess it.
1359
1360 @item
1361 You can input from a raw YUV420P file:
1362
1363 @example
1364 ffmpeg -i /tmp/test.yuv /tmp/out.avi
1365 @end example
1366
1367 test.yuv is a file containing raw YUV planar data. Each frame is composed
1368 of the Y plane followed by the U and V planes at half vertical and
1369 horizontal resolution.
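
If ffmpeg cannot work out the frame size and pixel format on its own,
they may have to be given explicitly, along these lines (the dimensions
are illustrative):
@example
# illustrative frame size
ffmpeg -f rawvideo -pixel_format yuv420p -video_size 352x288 -i /tmp/test.yuv /tmp/out.avi
@end example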
1370
1371 @item
1372 You can output to a raw YUV420P file:
1373
1374 @example
1375 ffmpeg -i mydivx.avi hugefile.yuv
1376 @end example
1377
1378 @item
1379 You can set several input files and output files:
1380
1381 @example
1382 ffmpeg -i /tmp/a.wav -s 640x480 -i /tmp/a.yuv /tmp/a.mpg
1383 @end example
1384
1385 Converts the audio file a.wav and the raw YUV video file a.yuv
1386 to the MPEG file a.mpg.
1387
1388 @item
1389 You can also do audio and video conversions at the same time:
1390
1391 @example
1392 ffmpeg -i /tmp/a.wav -ar 22050 /tmp/a.mp2
1393 @end example
1394
1395 Converts a.wav to MPEG audio at 22050 Hz sample rate.
1396
1397 @item
1398 You can encode to several formats at the same time and define a
1399 mapping from input stream to output streams:
1400
1401 @example
1402 ffmpeg -i /tmp/a.wav -map 0:a -b:a 64k /tmp/a.mp2 -map 0:a -b:a 128k /tmp/b.mp2
1403 @end example
1404
1405 Converts a.wav to a.mp2 at 64 kbit/s and to b.mp2 at 128 kbit/s. '-map
1406 file:index' specifies which input stream is used for each output
1407 stream, in the order in which the output streams are defined.
1408
1409 @item
1410 You can transcode decrypted VOBs:
1411
1412 @example
1413 ffmpeg -i snatch_1.vob -f avi -c:v mpeg4 -b:v 800k -g 300 -bf 2 -c:a libmp3lame -b:a 128k snatch.avi
1414 @end example
1415
1416 This is a typical DVD ripping example; the input is a VOB file, the
1417 output an AVI file with MPEG-4 video and MP3 audio. Note that in this
1418 command we use B-frames so the MPEG-4 stream is DivX5 compatible, and the
1419 GOP size is 300, which means one intra frame every 10 seconds for 29.97 fps
1420 input video. Furthermore, the audio stream is MP3-encoded, so you need
1421 to enable LAME support by passing @code{--enable-libmp3lame} to configure.
1422 The mapping is particularly useful for DVD transcoding
1423 to get the desired audio language.
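
For instance, to keep the second audio stream of the VOB instead of the
first (the stream index is illustrative and depends on the actual disc),
the mapping can be written out explicitly:
@example
# the audio stream index 1 is illustrative
ffmpeg -i snatch_1.vob -map 0:v:0 -map 0:a:1 -f avi -c:v mpeg4 -b:v 800k -g 300 -bf 2 -c:a libmp3lame -b:a 128k snatch.avi
@end example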
1424
1425 NOTE: To see the supported input formats, use @code{ffmpeg -formats}.
1426
1427 @item
1428 You can extract images from a video, or create a video from many images:
1429
1430 For extracting images from a video:
1431 @example
1432 ffmpeg -i foo.avi -r 1 -s WxH -f image2 foo-%03d.jpeg
1433 @end example
1434
1435 This will extract one video frame per second from the video and will
1436 output them in files named @file{foo-001.jpeg}, @file{foo-002.jpeg},
1437 etc. Images will be rescaled to fit the new WxH values.
1438
1439 If you want to extract just a limited number of frames, you can use the
1440 above command in combination with the -vframes or -t option, or in
1441 combination with -ss to start extracting from a certain point in time.
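
For example, to start 30 seconds into the video and extract only five
frames (the timestamp and frame count are illustrative):
@example
# illustrative values
ffmpeg -ss 00:00:30 -i foo.avi -r 1 -vframes 5 -s WxH -f image2 foo-%03d.jpeg
@end example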
1442
1443 For creating a video from many images:
1444 @example
1445 ffmpeg -f image2 -i foo-%03d.jpeg -r 12 -s WxH foo.avi
1446 @end example
1447
1448 The syntax @code{foo-%03d.jpeg} specifies that a decimal number
1449 composed of three digits padded with zeroes is used to express the sequence
1450 number. It is the same syntax supported by the C printf function, but
1451 only formats accepting a normal integer are suitable.
1452
1453 When importing an image sequence, -i also supports expanding
1454 shell-like wildcard patterns (globbing) internally, by selecting the
1455 image2-specific @code{-pattern_type glob} option.
1456
1457 For example, for creating a video from filenames matching the glob pattern
1458 @code{foo-*.jpeg}:
1459 @example
1460 ffmpeg -f image2 -pattern_type glob -i 'foo-*.jpeg' -r 12 -s WxH foo.avi
1461 @end example
1462
1463 @item
1464 You can put many streams of the same type in the output:
1465
1466 @example
1467 ffmpeg -i test1.avi -i test2.avi -map 1:1 -map 1:0 -map 0:1 -map 0:0 -c copy -y test12.nut
1468 @end example
1469
1470 The resulting output file @file{test12.nut} will contain the first four streams
1471 from the input files in reverse order.
1472
1473 @item
1474 To force CBR video output:
1475 @example
1476 ffmpeg -i myfile.avi -b:v 4000k -minrate 4000k -maxrate 4000k -bufsize 1835k out.m2v
1477 @end example
1478
1479 @item
1480 The four options lmin, lmax, mblmin and mblmax use 'lambda' units,
1481 but you may use the QP2LAMBDA constant to easily convert from 'q' units:
1482 @example
1483 ffmpeg -i src.ext -lmax 21*QP2LAMBDA dst.ext
1484 @end example
1485
1486 @end itemize
1487 @c man end EXAMPLES
1488
1489 @include config.texi
1490 @ifset config-all
1491 @ifset config-avutil
1492 @include utils.texi
1493 @end ifset
1494 @ifset config-avcodec
1495 @include codecs.texi
1496 @include bitstream_filters.texi
1497 @end ifset
1498 @ifset config-avformat
1499 @include formats.texi
1500 @include protocols.texi
1501 @end ifset
1502 @ifset config-avdevice
1503 @include devices.texi
1504 @end ifset
1505 @ifset config-swresample
1506 @include resampler.texi
1507 @end ifset
1508 @ifset config-swscale
1509 @include scaler.texi
1510 @end ifset
1511 @ifset config-avfilter
1512 @include filters.texi
1513 @end ifset
1514 @end ifset
1515
1516 @chapter See Also
1517
1518 @ifhtml
1519 @ifset config-all
1520 @url{ffmpeg.html,ffmpeg}
1521 @end ifset
1522 @ifset config-not-all
1523 @url{ffmpeg-all.html,ffmpeg-all},
1524 @end ifset
1525 @url{ffplay.html,ffplay}, @url{ffprobe.html,ffprobe}, @url{ffserver.html,ffserver},
1526 @url{ffmpeg-utils.html,ffmpeg-utils},
1527 @url{ffmpeg-scaler.html,ffmpeg-scaler},
1528 @url{ffmpeg-resampler.html,ffmpeg-resampler},
1529 @url{ffmpeg-codecs.html,ffmpeg-codecs},
1530 @url{ffmpeg-bitstream-filters.html,ffmpeg-bitstream-filters},
1531 @url{ffmpeg-formats.html,ffmpeg-formats},
1532 @url{ffmpeg-devices.html,ffmpeg-devices},
1533 @url{ffmpeg-protocols.html,ffmpeg-protocols},
1534 @url{ffmpeg-filters.html,ffmpeg-filters}
1535 @end ifhtml
1536
1537 @ifnothtml
1538 @ifset config-all
1539 ffmpeg(1),
1540 @end ifset
1541 @ifset config-not-all
1542 ffmpeg-all(1),
1543 @end ifset
1544 ffplay(1), ffprobe(1), ffserver(1),
1545 ffmpeg-utils(1), ffmpeg-scaler(1), ffmpeg-resampler(1),
1546 ffmpeg-codecs(1), ffmpeg-bitstream-filters(1), ffmpeg-formats(1),
1547 ffmpeg-devices(1), ffmpeg-protocols(1), ffmpeg-filters(1)
1548 @end ifnothtml
1549
1550 @include authors.texi
1551
1552 @ignore
1553
1554 @setfilename ffmpeg
1555 @settitle ffmpeg video converter
1556
1557 @end ignore
1558
1559 @bye