\input texinfo @c -*- texinfo -*-

@settitle FFmpeg FAQ
@titlepage
@center @titlefont{FFmpeg FAQ}
@end titlepage

@top

@contents

@chapter General Questions

@section Why doesn't FFmpeg support feature [xyz]?

Because no one has taken on that task yet. FFmpeg development is
driven by the tasks that are important to the individual developers.
If there is a feature that is important to you, the best way to get
it implemented is to undertake the task yourself or sponsor a developer.

@section FFmpeg does not support codec XXX. Can you include a Windows DLL loader to support it?

No. Windows DLLs are not portable, and they are bloated and often slow.
Moreover, FFmpeg strives to support all codecs natively.
A DLL loader is not conducive to that goal.

@section I cannot read this file although this format seems to be supported by ffmpeg.

Even if ffmpeg can read the container format, it may not support all its
codecs. Please consult the supported codec list in the ffmpeg
documentation.
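
For example, you can check which container formats and decoders your
particular build supports with:

@example
ffmpeg -formats
ffmpeg -decoders
@end example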
32 | ||
33 | @section Which codecs are supported by Windows? | |
34 | ||
35 | Windows does not support standard formats like MPEG very well, unless you | |
36 | install some additional codecs. | |
37 | ||
38 | The following list of video codecs should work on most Windows systems: | |
39 | @table @option | |
40 | @item msmpeg4v2 | |
41 | .avi/.asf | |
42 | @item msmpeg4 | |
43 | .asf only | |
44 | @item wmv1 | |
45 | .asf only | |
46 | @item wmv2 | |
47 | .asf only | |
48 | @item mpeg4 | |
49 | Only if you have some MPEG-4 codec like ffdshow or Xvid installed. | |
50 | @item mpeg1video | |
51 | .mpg only | |
52 | @end table | |
53 | Note, ASF files often have .wmv or .wma extensions in Windows. It should also | |
54 | be mentioned that Microsoft claims a patent on the ASF format, and may sue | |
55 | or threaten users who create ASF files with non-Microsoft software. It is | |
56 | strongly advised to avoid ASF where possible. | |
57 | ||
58 | The following list of audio codecs should work on most Windows systems: | |
59 | @table @option | |
60 | @item adpcm_ima_wav | |
61 | @item adpcm_ms | |
62 | @item pcm_s16le | |
63 | always | |
64 | @item libmp3lame | |
65 | If some MP3 codec like LAME is installed. | |
66 | @end table | |
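
As a sketch (file names are placeholders), combining one video and one audio
codec from the lists above to produce an AVI that should play on a stock
Windows system:

@example
ffmpeg -i input.mp4 -c:v msmpeg4v2 -c:a adpcm_ima_wav output.avi
@end example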
67 | ||
68 | ||
69 | @chapter Compilation | |
70 | ||
71 | @section @code{error: can't find a register in class 'GENERAL_REGS' while reloading 'asm'} | |
72 | ||
73 | This is a bug in gcc. Do not report it to us. Instead, please report it to | |
74 | the gcc developers. Note that we will not add workarounds for gcc bugs. | |
75 | ||
76 | Also note that (some of) the gcc developers believe this is not a bug or | |
77 | not a bug they should fix: | |
78 | @url{http://gcc.gnu.org/bugzilla/show_bug.cgi?id=11203}. | |
79 | Then again, some of them do not know the difference between an undecidable | |
80 | problem and an NP-hard problem... | |
81 | ||
82 | @section I have installed this library with my distro's package manager. Why does @command{configure} not see it? | |
83 | ||
84 | Distributions usually split libraries in several packages. The main package | |
85 | contains the files necessary to run programs using the library. The | |
86 | development package contains the files necessary to build programs using the | |
87 | library. Sometimes, docs and/or data are in a separate package too. | |
88 | ||
89 | To build FFmpeg, you need to install the development package. It is usually | |
90 | called @file{libfoo-dev} or @file{libfoo-devel}. You can remove it after the | |
91 | build is finished, but be sure to keep the main package. | |
92 | ||
93 | @chapter Usage | |
94 | ||
95 | @section ffmpeg does not work; what is wrong? | |
96 | ||
97 | Try a @code{make distclean} in the ffmpeg source directory before the build. | |
98 | If this does not help see | |
99 | (@url{http://ffmpeg.org/bugreports.html}). | |
100 | ||
101 | @section How do I encode single pictures into movies? | |
102 | ||
103 | First, rename your pictures to follow a numerical sequence. | |
104 | For example, img1.jpg, img2.jpg, img3.jpg,... | |
105 | Then you may run: | |
106 | ||
107 | @example | |
108 | ffmpeg -f image2 -i img%d.jpg /tmp/a.mpg | |
109 | @end example | |
110 | ||
111 | Notice that @samp{%d} is replaced by the image number. | |
112 | ||
113 | @file{img%03d.jpg} means the sequence @file{img001.jpg}, @file{img002.jpg}, etc. | |
114 | ||
115 | Use the @option{-start_number} option to declare a starting number for | |
116 | the sequence. This is useful if your sequence does not start with | |
117 | @file{img001.jpg} but is still in a numerical order. The following | |
118 | example will start with @file{img100.jpg}: | |
119 | ||
120 | @example | |
121 | ffmpeg -f image2 -start_number 100 -i img%d.jpg /tmp/a.mpg | |
122 | @end example | |
123 | ||
124 | If you have large number of pictures to rename, you can use the | |
125 | following command to ease the burden. The command, using the bourne | |
126 | shell syntax, symbolically links all files in the current directory | |
127 | that match @code{*jpg} to the @file{/tmp} directory in the sequence of | |
128 | @file{img001.jpg}, @file{img002.jpg} and so on. | |
129 | ||
130 | @example | |
131 | x=1; for i in *jpg; do counter=$(printf %03d $x); ln -s "$i" /tmp/img"$counter".jpg; x=$(($x+1)); done | |
132 | @end example | |
133 | ||
134 | If you want to sequence them by oldest modified first, substitute | |
135 | @code{$(ls -r -t *jpg)} in place of @code{*jpg}. | |
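
Spelled out, the modified command (the same loop as above, just iterating over
@code{$(ls -r -t *jpg)}) would be:

@example
x=1; for i in $(ls -r -t *jpg); do counter=$(printf %03d $x); ln -s "$i" /tmp/img"$counter".jpg; x=$(($x+1)); done
@end example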
136 | ||
137 | Then run: | |
138 | ||
139 | @example | |
140 | ffmpeg -f image2 -i /tmp/img%03d.jpg /tmp/a.mpg | |
141 | @end example | |
142 | ||
143 | The same logic is used for any image format that ffmpeg reads. | |
144 | ||
145 | You can also use @command{cat} to pipe images to ffmpeg: | |
146 | ||
147 | @example | |
148 | cat *.jpg | ffmpeg -f image2pipe -c:v mjpeg -i - output.mpg | |
149 | @end example | |
150 | ||
151 | @section How do I encode movie to single pictures? | |
152 | ||
153 | Use: | |
154 | ||
155 | @example | |
156 | ffmpeg -i movie.mpg movie%d.jpg | |
157 | @end example | |
158 | ||
159 | The @file{movie.mpg} used as input will be converted to | |
160 | @file{movie1.jpg}, @file{movie2.jpg}, etc... | |
161 | ||
162 | Instead of relying on file format self-recognition, you may also use | |
163 | @table @option | |
164 | @item -c:v ppm | |
165 | @item -c:v png | |
166 | @item -c:v mjpeg | |
167 | @end table | |
168 | to force the encoding. | |
169 | ||
170 | Applying that to the previous example: | |
171 | @example | |
172 | ffmpeg -i movie.mpg -f image2 -c:v mjpeg menu%d.jpg | |
173 | @end example | |
174 | ||
175 | Beware that there is no "jpeg" codec. Use "mjpeg" instead. | |
176 | ||
177 | @section Why do I see a slight quality degradation with multithreaded MPEG* encoding? | |
178 | ||
179 | For multithreaded MPEG* encoding, the encoded slices must be independent, | |
180 | otherwise thread n would practically have to wait for n-1 to finish, so it's | |
181 | quite logical that there is a small reduction of quality. This is not a bug. | |
182 | ||
183 | @section How can I read from the standard input or write to the standard output? | |
184 | ||
185 | Use @file{-} as file name. | |
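
For example (a sketch; the file names are placeholders), the following writes a
Matroska stream to standard output and plays it with @command{ffplay} reading
from standard input:

@example
ffmpeg -i input.avi -f matroska - | ffplay -
@end example

Note that when writing to a pipe you usually need @option{-f} to select the
output format, since it cannot be guessed from a file name extension.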
186 | ||
187 | @section -f jpeg doesn't work. | |
188 | ||
189 | Try '-f image2 test%d.jpg'. | |
190 | ||
191 | @section Why can I not change the frame rate? | |
192 | ||
193 | Some codecs, like MPEG-1/2, only allow a small number of fixed frame rates. | |
194 | Choose a different codec with the -c:v command line option. | |
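
For example (file names are placeholders), re-encoding with a codec that
allows arbitrary frame rates, such as MPEG-4, lets you pick the rate freely:

@example
ffmpeg -i input.avi -r 24 -c:v mpeg4 output.avi
@end example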
195 | ||
196 | @section How do I encode Xvid or DivX video with ffmpeg? | |
197 | ||
198 | Both Xvid and DivX (version 4+) are implementations of the ISO MPEG-4 | |
199 | standard (note that there are many other coding formats that use this | |
200 | same standard). Thus, use '-c:v mpeg4' to encode in these formats. The | |
201 | default fourcc stored in an MPEG-4-coded file will be 'FMP4'. If you want | |
202 | a different fourcc, use the '-vtag' option. E.g., '-vtag xvid' will | |
203 | force the fourcc 'xvid' to be stored as the video fourcc rather than the | |
204 | default. | |
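
Putting that together (the input and output names are only illustrative):

@example
ffmpeg -i input.avi -c:v mpeg4 -vtag xvid output.avi
@end example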
205 | ||
206 | @section Which are good parameters for encoding high quality MPEG-4? | |
207 | ||
208 | '-mbd rd -flags +mv4+aic -trellis 2 -cmp 2 -subcmp 2 -g 300 -pass 1/2', | |
209 | things to try: '-bf 2', '-flags qprd', '-flags mv0', '-flags skiprd'. | |
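
As a sketch of a two-pass run using those options (the first pass discards its
output; file names are placeholders):

@example
ffmpeg -i input.avi -c:v mpeg4 -mbd rd -flags +mv4+aic -trellis 2 -cmp 2 -subcmp 2 \
  -g 300 -pass 1 -an -f null /dev/null
ffmpeg -i input.avi -c:v mpeg4 -mbd rd -flags +mv4+aic -trellis 2 -cmp 2 -subcmp 2 \
  -g 300 -pass 2 output.avi
@end example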
210 | ||
211 | @section Which are good parameters for encoding high quality MPEG-1/MPEG-2? | |
212 | ||
213 | '-mbd rd -trellis 2 -cmp 2 -subcmp 2 -g 100 -pass 1/2' | |
214 | but beware the '-g 100' might cause problems with some decoders. | |
215 | Things to try: '-bf 2', '-flags qprd', '-flags mv0', '-flags skiprd. | |
216 | ||
217 | @section Interlaced video looks very bad when encoded with ffmpeg, what is wrong? | |
218 | ||
219 | You should use '-flags +ilme+ildct' and maybe '-flags +alt' for interlaced | |
220 | material, and try '-top 0/1' if the result looks really messed-up. | |
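
For example (a sketch; pick @option{-top 0} or @option{-top 1} depending on the
field order of your source material):

@example
ffmpeg -i interlaced.mpg -c:v mpeg2video -flags +ilme+ildct+alt -top 1 output.mpg
@end example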
221 | ||
222 | @section How can I read DirectShow files? | |
223 | ||
224 | If you have built FFmpeg with @code{./configure --enable-avisynth} | |
225 | (only possible on MinGW/Cygwin platforms), | |
226 | then you may use any file that DirectShow can read as input. | |
227 | ||
228 | Just create an "input.avs" text file with this single line ... | |
229 | @example | |
230 | DirectShowSource("C:\path to your file\yourfile.asf") | |
231 | @end example | |
232 | ... and then feed that text file to ffmpeg: | |
233 | @example | |
234 | ffmpeg -i input.avs | |
235 | @end example | |
236 | ||
237 | For ANY other help on AviSynth, please visit the | |
238 | @uref{http://www.avisynth.org/, AviSynth homepage}. | |
239 | ||
240 | @section How can I join video files? | |
241 | ||
242 | To "join" video files is quite ambiguous. The following list explains the | |
243 | different kinds of "joining" and points out how those are addressed in | |
244 | FFmpeg. To join video files may mean: | |
245 | ||
246 | @itemize | |
247 | ||
248 | @item | |
249 | To put them one after the other: this is called to @emph{concatenate} them | |
250 | (in short: concat) and is addressed | |
251 | @ref{How can I concatenate video files, in this very faq}. | |
252 | ||
@item
To put them together in the same file, to let the user choose between the
different versions (example: different audio languages): this is called to
@emph{multiplex} them together (in short: mux), and is done by simply
invoking ffmpeg with several @option{-i} options (see the example after
this list).
258 | ||
259 | @item | |
260 | For audio, to put all channels together in a single stream (example: two | |
261 | mono streams into one stereo stream): this is sometimes called to | |
262 | @emph{merge} them, and can be done using the | |
263 | @url{http://ffmpeg.org/ffmpeg-filters.html#amerge, @code{amerge}} filter. | |
264 | ||
265 | @item | |
266 | For audio, to play one on top of the other: this is called to @emph{mix} | |
267 | them, and can be done by first merging them into a single stream and then | |
268 | using the @url{http://ffmpeg.org/ffmpeg-filters.html#pan, @code{pan}} filter to mix | |
269 | the channels at will. | |
270 | ||
271 | @item | |
272 | For video, to display both together, side by side or one on top of a part of | |
273 | the other; it can be done using the | |
274 | @url{http://ffmpeg.org/ffmpeg-filters.html#overlay, @code{overlay}} video filter. | |
275 | ||
276 | @end itemize | |
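
For the multiplexing case, a sketch (the file names and stream indices are
only illustrative; adjust the @option{-map} options to your inputs):

@example
ffmpeg -i video.mkv -i audio_en.mka -i audio_fr.mka \
  -map 0:v -map 1:a -map 2:a -c copy output.mkv
@end example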
277 | ||
278 | @anchor{How can I concatenate video files} | |
279 | @section How can I concatenate video files? | |
280 | ||
281 | There are several solutions, depending on the exact circumstances. | |
282 | ||
283 | @subsection Concatenating using the concat @emph{filter} | |
284 | ||
285 | FFmpeg has a @url{http://ffmpeg.org/ffmpeg-filters.html#concat, | |
286 | @code{concat}} filter designed specifically for that, with examples in the | |
287 | documentation. This operation is recommended if you need to re-encode. | |
288 | ||
289 | @subsection Concatenating using the concat @emph{demuxer} | |
290 | ||
291 | FFmpeg has a @url{http://www.ffmpeg.org/ffmpeg-formats.html#concat, | |
292 | @code{concat}} demuxer which you can use when you want to avoid a re-encode and | |
293 | your format doesn't support file level concatenation. | |
294 | ||
295 | @subsection Concatenating using the concat @emph{protocol} (file level) | |
296 | ||
297 | FFmpeg has a @url{http://ffmpeg.org/ffmpeg-protocols.html#concat, | |
298 | @code{concat}} protocol designed specifically for that, with examples in the | |
299 | documentation. | |
300 | ||
A few multimedia containers (MPEG-1, MPEG-2 PS, DV) allow concatenating
video by merely concatenating the files containing it.

Hence you may concatenate your multimedia files by first transcoding them to
these privileged formats, then using the humble @code{cat} command (or the
equally humble @code{copy} under Windows), and finally transcoding back to your
format of choice.
308 | ||
309 | @example | |
310 | ffmpeg -i input1.avi -qscale:v 1 intermediate1.mpg | |
311 | ffmpeg -i input2.avi -qscale:v 1 intermediate2.mpg | |
312 | cat intermediate1.mpg intermediate2.mpg > intermediate_all.mpg | |
313 | ffmpeg -i intermediate_all.mpg -qscale:v 2 output.avi | |
314 | @end example | |
315 | ||
316 | Additionally, you can use the @code{concat} protocol instead of @code{cat} or | |
317 | @code{copy} which will avoid creation of a potentially huge intermediate file. | |
318 | ||
319 | @example | |
320 | ffmpeg -i input1.avi -qscale:v 1 intermediate1.mpg | |
321 | ffmpeg -i input2.avi -qscale:v 1 intermediate2.mpg | |
322 | ffmpeg -i concat:"intermediate1.mpg|intermediate2.mpg" -c copy intermediate_all.mpg | |
323 | ffmpeg -i intermediate_all.mpg -qscale:v 2 output.avi | |
324 | @end example | |
325 | ||
326 | Note that you may need to escape the character "|" which is special for many | |
327 | shells. | |
328 | ||
Another option is to use named pipes, should your platform support it:

@example
mkfifo intermediate1.mpg
mkfifo intermediate2.mpg
ffmpeg -i input1.avi -qscale:v 1 -y intermediate1.mpg < /dev/null &
ffmpeg -i input2.avi -qscale:v 1 -y intermediate2.mpg < /dev/null &
cat intermediate1.mpg intermediate2.mpg |\
ffmpeg -f mpeg -i - -c:v mpeg4 -acodec libmp3lame output.avi
@end example

@subsection Concatenating using raw audio and video

Similarly, the yuv4mpegpipe format and the raw video and raw audio codecs also
allow concatenation, and the transcoding step is almost lossless.
When using multiple yuv4mpegpipe(s), the first line needs to be discarded
from all but the first stream. This can be accomplished by piping through
@code{tail} as seen below. Note that when piping through @code{tail} you
must use command grouping, @code{@{ ;@}}, to background properly.

For example, let's say we want to concatenate two FLV files into an
output.flv file:

@example
mkfifo temp1.a
mkfifo temp1.v
mkfifo temp2.a
mkfifo temp2.v
mkfifo all.a
mkfifo all.v
ffmpeg -i input1.flv -vn -f u16le -acodec pcm_s16le -ac 2 -ar 44100 - > temp1.a < /dev/null &
ffmpeg -i input2.flv -vn -f u16le -acodec pcm_s16le -ac 2 -ar 44100 - > temp2.a < /dev/null &
ffmpeg -i input1.flv -an -f yuv4mpegpipe - > temp1.v < /dev/null &
@{ ffmpeg -i input2.flv -an -f yuv4mpegpipe - < /dev/null | tail -n +2 > temp2.v ; @} &
cat temp1.a temp2.a > all.a &
cat temp1.v temp2.v > all.v &
ffmpeg -f u16le -acodec pcm_s16le -ac 2 -ar 44100 -i all.a \
       -f yuv4mpegpipe -i all.v \
       -y output.flv
rm temp[12].[av] all.[av]
@end example

@section Using @option{-f lavfi}, audio becomes mono for no apparent reason.

Use @option{-dumpgraph -} to find out exactly where the channel layout is
lost.

Most likely, it is through @code{auto-inserted aresample}. Try to understand
why the converting filter was needed at that place.

Just before the output is a likely place, as @option{-f lavfi} currently
only supports packed S16.

Then insert the correct @code{aformat} explicitly in the filtergraph,
specifying the exact format.

@example
aformat=sample_fmts=s16:channel_layouts=stereo
@end example

@section Why does FFmpeg not see the subtitles in my VOB file?

VOB and a few other formats do not have a global header that describes
everything present in the file. Instead, applications are supposed to scan
the file to see what it contains. Since VOB files are frequently large, only
the beginning is scanned. If the subtitles only appear later in the file,
they will not be initially detected.

Some applications, including the @code{ffmpeg} command-line tool, can only
work with streams that were detected during the initial scan; streams that
are detected later are ignored.
400 | ||
401 | The size of the initial scan is controlled by two options: @code{probesize} | |
402 | (default ~5 Mo) and @code{analyzeduration} (default 5,000,000 µs = 5 s). For | |
403 | the subtitle stream to be detected, both values must be large enough. | |
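
For example (the values below are only illustrative; they simply need to be
large enough to cover the point where the subtitle stream starts), you can
raise both limits on the command line:

@example
ffmpeg -probesize 100000000 -analyzeduration 100000000 -i input.vob output.mkv
@end example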
404 | ||
405 | @section Why was the @command{ffmpeg} @option{-sameq} option removed? What to use instead? | |
406 | ||
407 | The @option{-sameq} option meant "same quantizer", and made sense only in a | |
408 | very limited set of cases. Unfortunately, a lot of people mistook it for | |
409 | "same quality" and used it in places where it did not make sense: it had | |
410 | roughly the expected visible effect, but achieved it in a very inefficient | |
411 | way. | |
412 | ||
413 | Each encoder has its own set of options to set the quality-vs-size balance, | |
414 | use the options for the encoder you are using to set the quality level to a | |
415 | point acceptable for your tastes. The most common options to do that are | |
416 | @option{-qscale} and @option{-qmax}, but you should peruse the documentation | |
417 | of the encoder you chose. | |
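
For example (a sketch; the value 2 is just one reasonable high-quality setting
for this particular encoder), with the native MPEG-4 encoder you might use:

@example
ffmpeg -i input.avi -c:v mpeg4 -qscale:v 2 output.avi
@end example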
418 | ||
419 | @chapter Development | |
420 | ||
421 | @section Are there examples illustrating how to use the FFmpeg libraries, particularly libavcodec and libavformat? | |
422 | ||
423 | Yes. Check the @file{doc/examples} directory in the source | |
424 | repository, also available online at: | |
425 | @url{https://github.com/FFmpeg/FFmpeg/tree/master/doc/examples}. | |
426 | ||
427 | Examples are also installed by default, usually in | |
428 | @code{$PREFIX/share/ffmpeg/examples}. | |
429 | ||
430 | Also you may read the Developers Guide of the FFmpeg documentation. Alternatively, | |
431 | examine the source code for one of the many open source projects that | |
432 | already incorporate FFmpeg at (@url{projects.html}). | |
433 | ||
@section Can you support my C compiler XXX?

It depends. If your compiler is C99-compliant, then patches to support
it are likely to be welcome if they do not pollute the source code
with @code{#ifdef}s related to the compiler.

@section Is Microsoft Visual C++ supported?

Yes. Please see the @uref{platform.html, Microsoft Visual C++}
section in the FFmpeg documentation.

@section Can you add automake, libtool or autoconf support?

No. These tools are too bloated and they complicate the build.

@section Why not rewrite FFmpeg in object-oriented C++?

FFmpeg is already organized in a highly modular manner and does not need to
be rewritten in a formal object language. Further, many of the developers
favor straight C; it works for them. For more arguments on this matter,
read @uref{http://www.tux.org/lkml/#s15, "Programming Religion"}.

@section Why are the ffmpeg programs devoid of debugging symbols?

The build process creates @command{ffmpeg_g}, @command{ffplay_g}, etc. which
contain full debug information. Those binaries are stripped to create
@command{ffmpeg}, @command{ffplay}, etc. If you need the debug information, use
the *_g versions.

@section I do not like the LGPL, can I contribute code under the GPL instead?

Yes, as long as the code is optional and can easily and cleanly be placed
under #if CONFIG_GPL without breaking anything. So, for example, a new codec
or filter would be OK under GPL while a bug fix to LGPL code would not.

@section I'm using FFmpeg from within my C application but the linker complains about missing symbols from the libraries themselves.

FFmpeg builds static libraries by default. In static libraries, dependencies
are not handled. That has two consequences. First, you must specify the
libraries in dependency order: @code{-lavdevice} must come before
@code{-lavformat}, @code{-lavutil} must come after everything else, etc.
Second, external libraries that are used in FFmpeg have to be specified too.

An easy way to get the full list of required libraries in dependency order
is to use @code{pkg-config}.

@example
c99 -o program program.c $(pkg-config --cflags --libs libavformat libavcodec)
@end example

See @file{doc/examples/Makefile} and @file{doc/examples/pc-uninstalled} for
more details.

@section I'm using FFmpeg from within my C++ application but the linker complains about missing symbols which seem to be available.

FFmpeg is a pure C project, so to use the libraries within your C++ application
you need to explicitly state that you are using a C library. You can do this by
encompassing your FFmpeg includes using @code{extern "C"}.
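
For example, in your C++ source (the headers shown are just the ones you happen
to use):

@example
extern "C" @{
#include <libavformat/avformat.h>
#include <libavcodec/avcodec.h>
@}
@end example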
492 | ||
493 | See @url{http://www.parashift.com/c++-faq-lite/mixing-c-and-cpp.html#faq-32.3} | |
494 | ||
@section I'm using libavutil from within my C++ application but the compiler complains that 'UINT64_C' was not declared in this scope

FFmpeg is a pure C project using C99 math features; in order to enable C++
to use them you have to append @code{-D__STDC_CONSTANT_MACROS} to your CXXFLAGS.
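
For example (the file names and the libraries passed to @command{pkg-config}
are only illustrative):

@example
g++ -D__STDC_CONSTANT_MACROS -o program program.cpp $(pkg-config --cflags --libs libavformat libavcodec)
@end example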
499 | ||
500 | @section I have a file in memory / a API different from *open/*read/ libc how do I use it with libavformat? | |
501 | ||
502 | You have to create a custom AVIOContext using @code{avio_alloc_context}, | |
503 | see @file{libavformat/aviobuf.c} in FFmpeg and @file{libmpdemux/demux_lavf.c} in MPlayer or MPlayer2 sources. | |
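
As a rough sketch (not a complete program; the @code{buffer_data} struct and
the surrounding setup are made up for this example, see @file{doc/examples}
for a full one), a read-only context over an in-memory buffer could look like:

@example
#include <string.h>
#include <libavformat/avformat.h>

/* Hypothetical in-memory input. */
struct buffer_data @{
    const uint8_t *ptr;
    size_t size;           /* bytes left in the buffer */
@};

static int read_packet(void *opaque, uint8_t *buf, int buf_size)
@{
    struct buffer_data *bd = opaque;
    buf_size = FFMIN(buf_size, bd->size);
    if (!buf_size)
        return AVERROR_EOF;
    memcpy(buf, bd->ptr, buf_size);
    bd->ptr  += buf_size;
    bd->size -= buf_size;
    return buf_size;
@}

/* ... inside your setup code ... */
struct buffer_data bd = @{ your_data, your_data_size @};   /* hypothetical buffer */
unsigned char *avio_buf = av_malloc(4096);
AVIOContext *avio = avio_alloc_context(avio_buf, 4096, 0, &bd,
                                       read_packet, NULL, NULL);
AVFormatContext *fmt = avformat_alloc_context();
fmt->pb = avio;
avformat_open_input(&fmt, NULL, NULL, NULL);
@end example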
504 | ||
505 | @section Where is the documentation about ffv1, msmpeg4, asv1, 4xm? | |
506 | ||
507 | see @url{http://www.ffmpeg.org/~michael/} | |
508 | ||
509 | @section How do I feed H.263-RTP (and other codecs in RTP) to libavcodec? | |
510 | ||
511 | Even if peculiar since it is network oriented, RTP is a container like any | |
512 | other. You have to @emph{demux} RTP before feeding the payload to libavcodec. | |
513 | In this specific case please look at RFC 4629 to see how it should be done. | |
514 | ||
515 | @section AVStream.r_frame_rate is wrong, it is much larger than the frame rate. | |
516 | ||
517 | @code{r_frame_rate} is NOT the average frame rate, it is the smallest frame rate | |
518 | that can accurately represent all timestamps. So no, it is not | |
519 | wrong if it is larger than the average! | |
520 | For example, if you have mixed 25 and 30 fps content, then @code{r_frame_rate} | |
521 | will be 150 (it is the least common multiple). | |
522 | If you are looking for the average frame rate, see @code{AVStream.avg_frame_rate}. | |
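
A minimal snippet (assuming @code{fmt_ctx} and @code{stream_index} have already
been set up by your demuxing code) to read the average frame rate:

@example
AVStream *st = fmt_ctx->streams[stream_index];
double fps = av_q2d(st->avg_frame_rate);  /* average frame rate as a double */
@end example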
523 | ||
524 | @section Why is @code{make fate} not running all tests? | |
525 | ||
526 | Make sure you have the fate-suite samples and the @code{SAMPLES} Make variable | |
527 | or @code{FATE_SAMPLES} environment variable or the @code{--samples} | |
528 | @command{configure} option is set to the right path. | |
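
For example (the path is a placeholder for wherever you keep the fate-suite
samples):

@example
make fate SAMPLES=/path/to/fate-suite
@end example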
529 | ||
530 | @section Why is @code{make fate} not finding the samples? | |
531 | ||
532 | Do you happen to have a @code{~} character in the samples path to indicate a | |
533 | home directory? The value is used in ways where the shell cannot expand it, | |
534 | causing FATE to not find files. Just replace @code{~} by the full path. | |
535 | ||
536 | @bye |