yua encodes your runs to sda standards.
downloads: windows / mac / linux / source
system requirements: windows / mac / linux
sections: the good / the bad / the ugly
Code:
--input="/path/to/your/input.avi"
--append="/path/to/additional/input.avi" (can be used multiple times)
--output-basename e.g.: --output-basename="MyRun" (_HQ, _IQ labels and .mp4 extension will be appended automatically)
--qualities possible values: x i h m l e.g.: --qualities=mh
--interlaced
--progressive
--f e.g.: --f1, --f2, etc.
--d e.g.: --d4, --d1
--2d
--3d
--mono corresponds to "downmix to mono" option in the gui
--statid enables the statid checkbox
--statid# e.g.: --statid1="My Name" --statid2="Metroid Prime" --statid3="Single-segment [0:49]"
--bff bottom field first (for interlaced input)
--tff top field first (for interlaced input)
--standard force 4:3 aspect ratio
--widescreen force 16:9 aspect ratio
interlaced d4 choices:
--1-pixel-shift
--alternate-1-pixel-shift
--de-deflicker
--alternate-de-deflicker
--shutdown shut down the computer when finished encoding
--trim e.g. --trim=123,1099 trims based on frame number (not including any statid)
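to make the reference concrete, here's roughly what a full invocation might look like. this is just a sketch: i'm assuming the executable is called as yua, the file paths and statid text are made up, and the flag combination is one plausible example built only from the list above.

Code:
yua --input="/path/to/your/input.avi" \
    --append="/path/to/additional/input.avi" \
    --output-basename="MyRun" \
    --qualities=mh \
    --interlaced --tff --d4 --f2 \
    --statid --statid1="My Name" --statid2="Metroid Prime" --statid3="Single-segment [0:49]" \
    --trim=123,1099

this would encode the two inputs spliced together, producing MyRun_HQ.mp4 and the medium-quality file, with the statid prepended and the trim applied to the frame range after any statid.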
after much consideration, i began this project on 8 january of this year. in late 2008 and early 2009 i tried to rewrite anri in perl so that it would run under operating systems other than windows. i also needed to avoid windows-specific video processing tools, so anri.pl was calling mencoder, avidemux, afconvert (under os x), and so on. thanks to the help of several dedicated volunteers, i was able to get anri 4 nearly ready to release. there are actually a few runs on the site right now that were encoded using anri 4. but i had an "all or nothing" mentality - i was frustrated when i couldn't find a way to concatenate lossy input without the audio getting out of sync. i switched back and forth between mencoder and avidemux, rewriting a ton of code every time, until eventually i was exhausted and put the whole thing on hold.
today, among other things, i'm a qt c++ programmer for my day job. qt c++ is really interesting because it lets you write platform-independent code sort of like java, but without relying on a virtual machine, so you don't have nearly as many performance or memory problems. you can even compile qt c++ using microsoft's c++ compiler under windows if you want, and it's just as fast as native code - because it is, in fact, native code. anyway, qt c++ seemed like a good language for an anri replacement - and not just because i already knew it. i knew that i could use it to nail the cross-platform requirement. my primary concern going in was ffmpeg, which i would be forced to use to decode the input video and audio. for a while i toyed with writing my own decoder, but i realized that i would never be able to replace anri and the windows-based free video ecosystem it's built on without tapping the power of ffmpeg.
unfortunately, that power comes with a high price. i know people who work on ffmpeg may read this, so i want to be reasonable. i don't want to say ffmpeg is a piece of crap, because it's not - people have worked hard on it. the problem with ffmpeg is that it is written as though it's 1985. software has come a long way since then. it isn't about politics or "not invented here" - there are simply better ways of doing things now when you're writing code in a higher-level language. c++, all other things equal, is better than c for large projects, and *especially* for large projects intended for reuse as a library. that's what experience has taught me.
let me stop here and say that i'm not one of those guys who thinks the linux kernel should be rewritten in c++. that would be totally inappropriate. there is absolutely nothing wrong with using c to implement unix - in fact, that is its original purpose. i myself write c when it's appropriate. but ffmpeg is not the linux kernel. it's not even an operating system. it's just userspace code for decoding video streams, converting video data between various colorspaces, and so on. it's a library that should let you say "decode a video frame and convert it to this other format," but instead it takes over your own program with its unique memory management and data structure requirements. you may eventually do what you want with it, but you have reinvented the wheel ten times in the process.
when you use a library in c++, you should never have to know how it's managing memory or how it's storing data. well, it turns out there is no such thing as a c++ program that uses ffmpeg as a library. there is always going to be ffmpeg code in your program. even if you write an abstraction layer - as has been attempted in the past - the ffmpeg developers will just change the api so that it no longer works. it's not their intent to break your code - it's just that ffmpeg belongs to the ffmpeg authors and not to you. ffmpeg is not meant to be useful to you. it's just a hobbyist project.
and yet i had to use it. there was no better way. and here we are. i can't synchronize the video and the audio, because when i decode an aac stream, ffmpeg silences the right channel. the standalone ffmpeg command-line tool, whose main source file is some 3,219 lines long, somehow manages to avoid silencing the right channel when it decodes an aac stream. that means there is an "error" in my ffmpeg code somewhere. asking for help yields nothing. so i just bundle the standalone ffmpeg binary and use it to convert the audio separately, resulting in desync in the final output unless the planets are perfectly aligned and your input is flawless. it's also impossible to concatenate input streams this way without desync, which leaves yua with around the same level of functionality as anri 4 had in 2009. thanks to ffmpeg, i can't do what typing "++" does in the avisynth language.
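for reference, this is the avisynth idiom i mean - a minimal sketch with hypothetical file names. "++" is alignedsplice, which concatenates two clips while keeping the audio locked to the video across the join:

Code:
a = AVISource("segment1.avi")
b = AVISource("segment2.avi")
# "++" is AlignedSplice: audio stays in sync with video across the cut
return a ++ b

two lines of script, and the audio just stays in sync. that's the bar ffmpeg-as-a-library currently keeps me from clearing.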
aside from the synchronization problem, i believe i actually have ffmpeg mostly under control in yua at present - knock on wood. the other major challenge i face now is the d1 deinterlacing situation. anri 3.3 uses an avisynth plug-in by scharfis_brain called mvbob. several years ago, its output quality was unparalleled. now, there are better alternatives. nnedi3, an avisynth plug-in i ported to work under yua, equals the quality of mvbob for f2 streams because its characteristic fluttering on text is canceled out by the decimation to f2. f1 results are usually passable unless watched in slow motion, when fluttering becomes apparent.
i chose nnedi3 over mvbob because the latter is written in the avisynth language. sometimes, avisynth plug-ins are actually c or c++ programs, and those are much easier (relatively speaking) for me to repurpose for yua. this was the case with nnedi3. also, all other things equal, nnedi3 is much faster than mvbob - it can run in realtime on a four-year-old machine. unfortunately, the assembly-optimized parts of nnedi3 are written assuming the 32-bit intel calling convention, and i haven't yet found the time to port all that code to amd64, so nnedi3 in yua is currently just as slow as mvbob.
overall, this situation is very difficult. qtgmc is the current state of the art in the avisynth ecosystem. ballofsnow has already stated that anri 3.4 will use it instead of mvbob. but qtgmc is written in avisynth, and it calls many avisynth plugins to do its work (one of them is nnedi3!). making it work in yua would involve porting the entire mvtools suite as well as rewriting the qtgmc core. at this point it almost makes more sense to write an avisynth abstraction layer - to attempt to parse the avisynth language, and so on. but that reeks of scope creep. avisynth is irrelevant to yua's purpose. i just want it to look good when i deinterlace d1 material at double the framerate.
i would appreciate any constructive input folks might have in these areas. thanks.