Edit history:
nate: 2014-09-29 01:42:31 pm
nate: 2013-10-26 10:23:49 am
nate: 2013-08-16 07:41:08 pm
nate: 2013-07-31 05:12:18 pm
nate: 2013-07-05 06:14:18 pm
nate: 2013-06-25 06:39:15 pm
nate: 2013-06-13 05:54:17 pm
nate: 2013-06-12 02:36:45 pm
nate: 2013-06-09 02:56:05 pm
nate: 2013-06-08 04:16:20 pm
nate: 2013-06-07 07:44:17 pm
nate: 2013-05-03 09:39:20 pm
nate: 2013-05-03 09:38:45 pm
nate: 2013-04-21 05:32:56 pm
nate: 2013-04-13 09:51:15 pm
nate: 2013-03-24 05:27:47 pm
nate: 2013-03-17 09:58:11 pm
nate: 2013-02-25 03:17:46 pm
nate: 2013-02-17 05:07:24 pm
nate: 2013-02-09 11:11:51 pm





yua encodes your runs to sda standards.

windows
mac
linux
source


system requirements

windows
  • pentium 4/athlon 64 or better processor
  • windows xp service pack 3 or better (32- or 64-bit doesn't matter)

mac
  • core 2 duo or better processor (sorry, no powerpc or core 1 duo support from me)
  • mac os x 10.7 lion or better

linux
  • unknown, but hardware requirements are probably similar to those listed under mac, since this is a 64-bit build - i've only tested it under ubuntu 12.10



    the good
  • identical under windows, linux, and os x. it's your choice.
  • no installation required. not even under windows.
  • open source. gpl v2 or later. yua is free and always will be.
  • with the possible exception of interlaced d1 f1 content (see below for more), output quality equals or exceeds anri's.
  • no size limits of any kind.

    the bad
  • no dvd ripping functionality right now. you can use anri for that, then drop the .vob or whatever into yua.

    the ugly
  • the build system is a little strange. you may need to change the .pro file to point to your .qrc files pointing to your helper binaries. in my experience the .qrc system is not reliable with larger (>1MB total) files. the compiler runs out of memory and bails. therefore you may need to split the binaries into 1MB chunks called e.g. x264.1, x264.2, x264.3, etc. the source distribution contains a unix shell script called make_qrc.sh that should help document this process. alternatively, you can just hardcode paths to the helper binaries on your system, dispensing with the resource system entirely. as for building the helper binaries, my recipes are included in the source distribution in build_helper_binaries.txt.

    Code:
    --input="/path/to/your/input.avi"
    --append="/path/to/additional/input.avi"
        can be used multiple times
    --output-basename
        e.g.: --output-basename="MyRun"
        _HQ, _IQ labels and .mp4 extension will be appended automatically
    --qualities
        possible values: x i h m l
        e.g.: --qualities=mh
    --interlaced
    --progressive
    --f
        e.g.: --f1, --f2, etc.
    --d
        e.g.: --d4, --d1
    --2d
    --3d
    --mono
        corresponds to "downmix to mono" option in the gui
    --statid
        enables the statid checkbox
    --statid#
        e.g.: --statid1="My Name" --statid2="Metroid Prime" --statid3="Single-segment [0:49]"
    --bff
        bottom field first (for interlaced input)
    --tff
        top field first (for interlaced input)
    --standard
        force 4:3 aspect ratio
    --widescreen
        force 16:9 aspect ratio
        
    interlaced d4 choices:
    --1-pixel-shift
    --alternate-1-pixel-shift
    --de-deflicker
    --alternate-de-deflicker
    
    --shutdown
        shut down the computer when finished encoding
    --trim
        e.g. --trim=123,1099
        trims based on frame number (not including any statid)

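    as a sanity check on the --trim syntax above, here's how a value like "123,1099" could be split into start/end frame numbers. this is an illustrative sketch only - parse_trim is my own hypothetical helper, not a function from yua's source:

```cpp
#include <string>

// hypothetical helper (not from yua's source): parse a --trim value
// like "123,1099" into start/end frame numbers. returns false if the
// value is malformed or the range is empty or backwards.
bool parse_trim(const std::string &value, int &start, int &end) {
    std::size_t comma = value.find(',');
    if (comma == std::string::npos) return false;
    try {
        start = std::stoi(value.substr(0, comma));
        end = std::stoi(value.substr(comma + 1));
    } catch (...) {
        return false;  // std::stoi throws on non-numeric input
    }
    return start >= 0 && end > start;
}
```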

    i began this project after much consideration on 8 january of this year. in late 2008 and early 2009 i tried to rewrite anri in perl so that it would run under operating systems other than windows. i also needed to avoid windows-specific video processing tools, so anri.pl was calling mencoder, avidemux, afconvert (under os x), and so on. thanks to the help of several dedicated volunteers, i was able to get anri 4 nearly ready to release. there are actually a few runs on the site right now that were encoded using anri 4. but i had an "all or nothing" mentality - i was frustrated when i couldn't find a way to concatenate lossy input without the audio getting out of sync. i switched back and forth between mencoder and avidemux, rewriting a ton of code every time, and eventually i was exhausted, and i put the whole thing on hold.

    today, among other things, i'm a qt c++ programmer for my day job. qt c++ is really interesting because it lets you write platform-independent code sort of like java, but it doesn't rely on a virtual machine like java, so you don't have nearly as many performance or memory problems. you can even compile qt c++ using microsoft's c++ compiler under windows if you want, and it's just as fast as native code, because it is, in fact, native code. anyway, qt c++ seemed like a good language for an anri replacement - and not just because i already knew it. i knew that i could use it to nail the cross-platform requirement. my primary concern going in was ffmpeg, which i would be forced to use to decode the input video and audio. for a while i toyed with writing my own decoder, but i realized that i would never be able to replace anri and the windows-based free video ecosystem it's built on without tapping the power of ffmpeg.

    unfortunately, that power comes with a high price. i know people who work on ffmpeg may read this, so i want to be reasonable. i don't want to say ffmpeg is a piece of crap, because it's not - people have worked hard on it. the problem with ffmpeg is that it is written as though it's 1985. software has come a long way since then. it isn't about politics or "not invented here" - there are simply better ways of doing things now when you're writing code in a higher-level language. c++, all other things equal, is better than c for large projects, and *especially* for large projects intended for reuse as a library. that's what experience has taught me.

    let me stop here and say that i'm not one of those guys who thinks the linux kernel should be rewritten in c++. that would be totally inappropriate. there is absolutely nothing wrong with using c to implement unix - in fact, that is its original purpose. i myself write c when it's appropriate. but ffmpeg is not the linux kernel. it's not even an operating system. it's just userspace code for decoding video streams, converting video data between various colorspaces, and so on. it's a library that should let you say "decode a video frame and convert it to this other format," but instead it takes over your own program with its unique memory management and data structure requirements. you may eventually do what you want with it, but you have reinvented the wheel ten times in the process.

    when you use a library in c++, you should never have to know how it's managing memory or how it's storing data. well, it turns out there is no such thing as a c++ program that uses ffmpeg as a library. there is always going to be ffmpeg code in your program. even if you write an abstraction layer - as has been attempted in the past - the ffmpeg developers will just change the api so that it no longer works. it's not their intent to break your code - it's just that ffmpeg belongs to the ffmpeg authors and not to you. ffmpeg is not meant to be useful to you. it's just a hobbyist project.

    and yet i had to use it. there was no better way. and here we are. i can't synchronize the video and the audio, because when i decode an aac stream, ffmpeg silences the right channel. the standalone ffmpeg binary, some 3,219 lines long, somehow manages to avoid silencing the right channel when it decodes an aac stream. that means there is an "error" in my ffmpeg code somewhere. asking for help yields nothing. so i just bundle the standalone ffmpeg binary and use it to convert the audio separately, resulting in desync in the final output unless the planets are perfectly aligned and your input is flawless. it's also impossible to concatenate input streams this way without desync, which leaves yua with around the same level of functionality as anri 4 from 2009. thanks to ffmpeg, i can't do what typing "++" does in the avisynth language.

    aside from the synchronization problem, i believe i actually have ffmpeg mostly under control in yua at present - knock on wood. the other major challenge i face now is the d1 deinterlacing situation. anri 3.3 uses an avisynth plug-in by scharfis_brain called mvbob. several years ago, its output quality was unparalleled. now, there are better alternatives. nnedi3, an avisynth plug-in i ported to work under yua, equals the quality of mvbob for f2 streams because its characteristic fluttering on text is canceled out by the decimation to f2. f1 results are usually passable unless watched in slow motion, when fluttering becomes apparent.

    i chose nnedi3 over mvbob because the latter is written in the avisynth language. sometimes, avisynth plug-ins are actually c or c++ programs, and these are much easier (relatively speaking) for me to repurpose for yua. this was the case with nnedi3. also, all other things equal, nnedi3 is much faster than mvbob - it can run in realtime on a four-year-old machine. unfortunately, the assembly-optimized parts of nnedi3 are written assuming the 32-bit intel calling convention, and i haven't found the time to port all that code to amd64 as yet, so nnedi3 in yua is currently just as slow as mvbob.

    overall, this situation is very difficult. qtgmc is the current state of the art in the avisynth ecosystem. ballofsnow has already stated that anri 3.4 will use it instead of mvbob. but qtgmc is written in avisynth, and it calls many avisynth plugins to do its work (one of them is nnedi3!). making it work in yua would involve porting the entire mvtools suite as well as rewriting the qtgmc core. at this point it almost makes more sense to write an avisynth abstraction layer - to attempt to parse the avisynth language, and so on. but that reeks of scope creep. avisynth is irrelevant to yua's purpose. i just want it to look good when i deinterlace d1 material at double the framerate.

    i would appreciate any constructive input folks might have in these areas. thanks.
    Edit history:
    Heidrage: 2013-02-10 12:14:26 am
    Willing to teach you the impossible
    "you - a"? How did you name this program?

    Edit: I really enjoy the interface. Super easy to understand. Is it possible to have Yua be able to play the video file in the window so you can easily see the different effects? I dropped in a file and it gives me the 5 options once I pick interlaced. But I have to move the slide bar and it is rather hard to tell what option is best because of how fast everything moves. I had to find a spot in the video where the screen was stationary for a long period of time (really hard to find in a real run imo).

    Have not tried encoding with it yet, but I will soon enough.
    torch slug since 2006
    metroid prime ss in 0:49. nice job

    i feel like yua is really slow at encoding - it took me close to 1 hour to encode a 17-second f-zero gx clip in mq (attached). on the other hand i haven't compared it to anri, so maybe it's supposed to be that slow, but idk.

    other than that, i love it. its UI is very nice, the installation is non-existent, etc. great job. i've been testing on windows 7 x64 for the record...
    Fucking Weeaboo
    Overall the program does seem very simple and easy to use. About the only thing I've noticed that I miss from anri-chan is the lack of the "alternate" statID. While I know it was only there for creating a statID that didn't use the SDA logo, I often used it to create one for content I wasn't putting on SDA but wanted my own personal logo on (see pretty much anything on my YouTube channel).

    From a CPU standpoint, I think there should maybe be an option to not display the video on request, kind of like what you can do in VirtualDub while encoding, which seems to make the process take a little less time. I haven't done any benchmarks on this though.
    thethrillness.blogspot.com
    I actually thought when I first looked at the screenshots that this was going to be a GUI front-end for anri that automatically takes you to the encoding step once you put in all the values. I just hope all the challenges don't put you off, as this looks really cool.
    Quote from Heidrage:
    "you - a"? How did you name this program?

    my three requirements were:

    1) cute, a real name people actually want to use and be called (not an acronym or jargon)
    2) pronounced nearly identically in every language i'm familiar with
    3) short, easy to type

    as a bonus, this thread will eventually be the #1 result on google for yua even though it's a name already used elsewhere.

    Quote from Heidrage:
    Edit: I really enjoy the interface.

    thank you. 072 helped me lay it out and line it up.

    Quote from Heidrage:
    Is it possible to have Yua be able to play the video file in the window so you can easily see the different effects?

    this is something i forgot to address in my initial post. i think it may be possible for me to eventually put in playback. it's a lot of work, unfortunately. now that i know there's interest i can put it in the back of my mind to work on. i will have to fix audio decoding first though, since the video playing without audio would definitely confuse people.

    Quote from DJS:
    metroid prime ss in 0:49. nice job

    thanks. was way easier than programming with ffmpeg.

    Quote from DJS:
    i feel like yua is really slow at encoding - it took me close to 1 hour to encode a 17-second f-zero gx clip in mq (attached). on the other hand i haven't compared it to anri, so maybe it's supposed to be that slow, but idk.

    yeah, this is because yua does high quality deinterlacing with nnedi3 even for d4 output qualities of d1 input (obviously it's still just a field split for d4 input).

    i didn't make it clear enough in my initial post what is going on with this right now. basically nnedi3 is a realtime deinterlacer, which means it's about 5-10x faster than what you saw on a system like yours. the problem is i had to rip out all the assembly optimizations to get it off 32-bit windows. so that's where the terrible slowdown comes from. and you can't just recompile assembly code like you can c++ under 64-bit or another platform. you have to actually rewrite it. actually, by commenting on this, you just ensured that this is something i work on, because i'm using people's feedback as a way of prioritizing work that still needs to be done for beta 1. it's maybe 10 or 20 assembly routines that use the 32-bit c calling convention, and i need to learn x264asm, which is a macro language that helps you abstract away stuff like 32- vs. 64-bit, so i don't have to maintain two versions if tritical (the original author) makes enhancements. of course that doesn't solve the fact that nnedi3 isn't the highest quality software deinterlacer anymore, but i'm determined not to stop this time.

    Quote from Sir VG:
    Overall the program does seem very simple and easy to use. About the only thing I've noticed that I miss from anri-chan is the lack of the "alternate" statID. While I know it was only there for creating a statID that didn't use the SDA logo, I often used it to create one for content I wasn't putting on SDA but wanted my own personal logo on (see pretty much anything on my YouTube channel).

    you'll have to refresh my memory on this. i thought i already had this functionality in yua, if you uncheck the "sda logo" checkbox. it just gives you the exact same thing but without the sda logo.

    Quote from Sir VG:
    From a CPU standpoint, I think there should maybe be an option to not display the video on request, kind of like what you can do in VirtualDub while encoding, which seems to make the process take a little less time. I haven't done any benchmarks on this though.

    yeah, this is on my todo list. one thing to keep in mind is that i force hardware-accelerated display, so the cpu isn't doing any work to display the previews except to say "hay gpu, come 'n' geddit." that's why i'm pretty sure the overhead is very small.

    Quote from TheThrillness:
    I actually thought when I first looked at the screenshots that this was going to be a GUI front-end for anri that automatically takes you to the encoding step once you put in all the values. I just hope all the challenges don't put you off, as this looks really cool.

    thank you. no worries. i am playing to win.
    Fucking Weeaboo
    Quote:
    you'll have to refresh my memory on this. i thought i already had this functionality in yua, if you uncheck the "sda logo" checkbox. it just gives you the exact same thing but without the sda logo.


    Basically I replaced the statid_alternative.png file that came with anri-chan (an entirely black PNG file) with a custom one that has my own logo rather than the SDA logo, like the one I attached.

    torch slug since 2006
    Quote from nate:
    Quote from DJS:
    metroid prime ss in 0:49. nice job

    thanks. was way easier than programming with ffmpeg.
    lol
    Quote from nate:
    Quote from DJS:
    i feel like yua is really slow at encoding - it took me close to 1 hour to encode a 17-second f-zero gx clip in mq (attached). on the other hand i haven't compared it to anri, so maybe it's supposed to be that slow, but idk.

    yeah, this is because yua does high quality deinterlacing with nnedi3 even for d4 output qualities of d1 input (obviously it's still just a field split for d4 input).
    i didn't make it clear enough in my initial post what is going on with this right now. basically nnedi3 is a realtime deinterlacer, which means it's about 5-10x faster than what you saw on a system like yours. the problem is i had to rip out all the assembly optimizations to get it off 32-bit windows. so that's where the terrible slowdown comes from. and you can't just recompile assembly code like you can c++ under 64-bit or another platform. you have to actually rewrite it. actually, by commenting on this, you just ensured that this is something i work on, because i'm using people's feedback as a way of prioritizing work that still needs to be done for beta 1. it's maybe 10 or 20 assembly routines that use the 32-bit c calling convention, and i need to learn x264asm, which is a macro language that helps you abstract away stuff like 32- vs. 64-bit, so i don't have to maintain two versions if tritical (the original author) makes enhancements. of course that doesn't solve the fact that nnedi3 isn't the highest quality software deinterlacer anymore, but i'm determined not to stop this time.
    i didn't really understand any of that, apart from the part about the system i have. I am no longer on my amd phenom rig, i'm on my mac mini, as the phenom rig finally decided to die, if that makes any difference... (i'm running via bootcamp btw, osx doesn't have everything I need yet.)
    Very nice, looks pretty sleek so far. A couple observations:

    1) Is the "downmix to mono" anri's "NES?" question? Looks like it is, but I don't have any vobs with NES footage at the moment, so I can't test it.
    2) Per the statID discussion, something I had envisioned when I was doing my own little project was to just have the program let you select any file to use as a statID.
    3) When I set my vob to D1 F1, slid the bar around a bit, the program had a hard time catching up, and when I closed it crashed. I'm guessing this was more on my crappy desktop which is a 1.8GHz with 1 gig of ram. :p

    Overall, I like.
    Cool stuff nate. This qt c++ stuff is beyond me so I doubt I'll be of any help there.

    I'll be testing this out. Just one immediate thing: don't assume 4:3 and 16:9 are the only possibilities. Anri first tries to guess your output aspect ratio based on video dimensions or a dg log, with 4:3 as a fallback. Maybe have a third option as an input box for a user-specified ratio.
    Quote from Sir VG:
    Basically I replaced the statid_alternative.png file that came with anri-chan (an entirely black PNG file) with a custom one that has my own logo rather than the SDA logo, like the one I attached.

    gotcha. it's on the list.

    Quote from DJS:
    i didn't really understand any of that, apart from the part about the system i have. I am no longer on my amd phenom rig, i'm on my mac mini, as the phenom rig finally decided to die, if that makes any difference... (i'm running via bootcamp btw, osx doesn't have everything I need yet.)

    shouldn't make much difference. ultimately you will see a 5-10x speedup for d1 input. at that point yua should be faster than anri even with the nmf question answered yes. there will still be the quality tradeoff for f1 material though.

    Quote from Terribleno:
    1) Is the "downmix to mono" anri's "NES?" question?

    it is. wonder how many people will discover the tooltips.

    Quote from Terribleno:
    2) Per the statID discussion, something I had envisioned when I was doing my own little project was to just have the program let you select any file to use as a statID.

    it'll be in alpha 2.

    Quote from Terribleno:
    3) When I set my vob to D1 F1, slid the bar around a bit, the program had a hard time catching up, and when I closed it crashed. I'm guessing this was more on my crappy desktop which is a 1.8GHz with 1 gig of ram. :p

    nah. see the above discussion about d1 input. basically i decided i didn't want to wait any longer since it does work how it is now if you wait long enough lol.
    Quote from bos:
    Cool stuff nate. This qt c++ stuff is beyond me so I doubt I'll be of any help there.

    I'll be testing this out. Just one immediate thing: don't assume 4:3 and 16:9 are the only possibilities. Anri first tries to guess your output aspect ratio based on video dimensions or a dg log, with 4:3 as a fallback. Maybe have a third option as an input box for a user-specified ratio.

    good to see you here. yeah, aspect ratio is kind of a fuckfest right now. don't hesitate to look at the source. it should be readable, check out the set_sizes() implementation in yua.cpp. i need your help getting the algorithm figured out. basically right now the radio buttons don't actually do anything most of the time.
    Not a walrus
    So does this support weird input codecs that only have Windows versions, such as Lagarith, Amarec, or Fraps? I think that sort of thing is possible through some sort of Wine trick, but I can't think of any concrete examples.
    the decoder is ffmpeg. lagarith and fraps are supported. dunno what amarec is.
    Not a walrus
    Ah, didn't realize ffmpeg could support those codecs. Interesting.
    torch slug since 2006
    i think this is what UA meant: http://www.amarectv.com/english/amv2_e.html

    amarecTV can record in whatever codec one chooses though, i use lagarith personally.
    doesn't look like it's supported. good thing you don't have to use it, lol.
    torch slug since 2006
    uhh yeah

    Attachment:
    unsupported: with a vengeance.
    Not a walrus
    Quote from DJS:
    amarecTV can record in whatever codec one chooses though, i use lagarith personally.


    Yeah, but everybody I've heard just calls it Amarec.

    Not that it really matters anyway since you have to pay for the AMV codec or it sprays a watermark on your videos, and it doesn't offer any significant advantages over Lagarith.

    Pretty funny that it crashes trying to load it though.
    Edit history:
    Heidrage: 2013-02-10 12:21:04 pm
    Willing to teach you the impossible
    Side note: Grats nate on +10,000 posts
    Edit history:
    IsraeliRD: 2013-02-10 10:03:20 pm
    IsraeliRD: 2013-02-10 10:01:54 pm
    Dragon Power Supreme
    Tested it but I'm having a problem:
    Run details: 2:56 minutes (4.98GB) from lagarith. Original video source was avi from camtasia but re-encoded via VDub.
    Anri-Chan had no problem doing LQ/MQ/HQ
    Yua... failed? Not sure, but it stopped working. It took roughly 9-13 minutes to make the HQ encode but then it stopped. Wouldn't move to LQ/MQ. I checked the temp directory - no movement there. Waited for over an hour, nada.
    Basically it got to the end of the video source, and then it just stopped.

    I use Windows XP SP3, Core 2 Duo 2.4GHz, 4GB RAM, GeForce 8800GTS.

    Few requests for later versions:
    - Ability to pause/resume
    - Ability to decide output directory location
    - Rather than use Temp directory, can you use the same directory as the output one just like Anri-Chan?
    - I'm not sure if it does this but if not: can you encode the output AAC/WAV once and then just re-use it for every quality? That way, rather than creating the audio file for all five qualities (which in Anri-Chan can take ages), it just re-uses it and therefore saves encoding time at the end?
    Quote:
    wonder how many people will discover the tooltips.

    If you hadn't mentioned them, I never would have bothered to check to see if they existed. <_<

    Quote:
    yeah, aspect ratio is kind of a fuckfest right now. don't hesitate to look at the source. it should be readable, check out the set_sizes() implementation in yua.cpp. i need your help getting the algorithm figured out. basically right now the radio buttons don't actually do anything most of the time.

    While I'm not ballofsnow, I looked at it and wrote comments based on how I believe I'm reading it. I'm probably wrong, though, since I haven't done C++ in about 2 years:
    Code:
    void Yua::set_sizes() {
            native_width = original_native_width;  // set width to the source file's width?
            native_height = original_native_height;  // set height to the source file's height?
            native_aspect_ratio = (double)native_height / (double)native_width;  // set our aspect ratio now, in case of a progressive source, but...
    
            if (interlaced_button->isChecked() && native_height <= 576 + 8) { //analog tv (20130123)
                    native_width = 640;
                    native_height = 480;
                    native_aspect_ratio = 3.0/4.0;  // ...we overwrite what aspect ratio was based on computation earlier, but ONLY if it's interlaced
                    if (widescreen_button->isChecked()) {
                            native_height = 360;
                            native_aspect_ratio = 9.0/16.0;  // ...we overwrite what aspect ratio was based on computation earlier, but ONLY if it's interlaced
                    }
                    if (d4_button->isChecked()) {  // D4 is half resolution, so...
                            native_width /= 2;
                            native_height /= 2;  // cut the resolution in half
                    }
            } else if (d4_button->isChecked()) {  // at first I misread this and thought this would always execute if the previous D4 did, but luckily I reread the code and recognized "else if"
                    native_width = 320;
                    native_height = 240;
                    if (widescreen_button->isChecked()) {
                            native_height = 180;
                    }
            }
    
            lq_width = 320;
            mq_width = 320;
            hq_width = 640;
            if (hq_width > native_width) hq_width = native_width;  // 640 if D1, 320 if D4
            iq_width = 1280;
            if (iq_width > native_width) iq_width = native_width;  // > 1280 if D1, N/A if D4
    
        // take the width values we just set, and round them up; however, it seems as though only high quality and insane quality will possibly be affected.
            lq_width = round_up_to_mod_4(lq_width);  // this should always be 320?
            mq_width = round_up_to_mod_4(mq_width);  // this should always be 320?
            hq_width = round_up_to_mod_4(hq_width);
            iq_width = round_up_to_mod_4(iq_width);
    
            lq_height = round_up_to_mod_4(lq_width * native_aspect_ratio);
            mq_height = round_up_to_mod_4(mq_width * native_aspect_ratio);
            hq_height = round_up_to_mod_4(hq_width * native_aspect_ratio);
            iq_height = round_up_to_mod_4(iq_width * native_aspect_ratio);
    
            xq_width = round_up_to_mod_4(native_width);
            xq_height = round_up_to_mod_4(xq_width * native_aspect_ratio);
    
        // force set low, medium, and high qualities, since they are defaults
            lq_button->setChecked(true);
            mq_button->setChecked(true);
            hq_button->setChecked(true);
            iq_button->setChecked(false);  // uncheck insane quality for now
            xq_button->setChecked(false);  // uncheck extreme quality for now
            if (native_width > 640) {
                    iq_button->setChecked(true);  // set insane quality only if the width is greater than 640, do most DVD recorders at highest quality do 720?
                    if (native_width > 1280) {
                            xq_button->setChecked(true);  // set extreme quality only if the width is greater than 1280, should this be greater than or equal to?
                    }
            }
    
    
            preview_size = QSize(native_width, native_height);
            emit set_preview_size(preview_size.width(), preview_size.height());
            return process_current_image();
    }

    Feel free to laugh if I got anything wrong. Cheesy
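    One thing not shown above is round_up_to_mod_4() itself. Assuming it just rounds up to the next multiple of 4 (a guess on my part - I haven't checked it against yua.cpp), a minimal sketch that reproduces the numbers in the comments would be:

```cpp
// Hypothetical implementation (a guess, not checked against yua.cpp):
// round up to the next multiple of 4, since codecs generally want
// mod-4 (often mod-16) frame dimensions.
int round_up_to_mod_4(int n) {
    return (n + 3) & ~3;
}

// Overload for the width * native_aspect_ratio calls above, which
// pass a double; truncate to int before rounding up.
int round_up_to_mod_4(double n) {
    return round_up_to_mod_4((int)n);
}
```

    With these, the 4:3 D1 case gives hq_height = round_up_to_mod_4(640 * 0.75) = 480, and the widescreen D4 case gives round_up_to_mod_4(320 * 0.5625) = 180, matching the hardcoded values in the branches above.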

    Quote:
    - I'm not sure if it does this but if not: can you encode the output AAC/WAV once and then just re-use it for every quality? That way, rather than creating the audio file for all five qualities (which in Anri-Chan can take ages), it just re-uses it and therefore saves encoding time at the end?

    I'm guessing anri has to do that because each quality has a different bitrate, and re-encoding an already encoded file would take a further dive in quality, IIRC.
    Edit history:
    UraniumAnchor: 2013-02-11 12:59:35 am
    Not a walrus
    Well, you could still optionally dump a WAV and use that as the source for the subsequent AAC files, which can potentially be quite a bit faster. I've noticed that encoding an AAC directly from a DVD source can take a surprisingly long time. And it's not a CPU thing either, because the processes involved will be using <20% of a core. It's always the same files, but it doesn't happen with all of them.
    ird, would it be possible for you to either upload that video (not likely, i know) or to describe it more, i.e. the dimensions, framerate, game the run is on? i'm pretty sure i know what happened but i'd prefer to be more sure before i make a change. basically right now there are two numbers, the number of frames encoded and the number of frames to stop decoding at. when yua hears x264 say "i just encoded frame 200," it sets the number of frames to stop decoding at to 350 (or something), 150 higher in this case. the problem is that sometimes x264 gobbles down more than 150 frames before it bothers to tell yua that it encoded anything, and in this case yua will hang forever waiting for x264 rather than decoding more frames and passing them to x264, which would break the deadlock.
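    to make that bookkeeping concrete, here's a toy simulation of the interaction (the names and structure are mine, just for illustration - this isn't yua code):

```cpp
// toy model of the decode/encode bookkeeping described above:
// yua stops decoding at stop_at, and bumps stop_at to
// frames_encoded + margin whenever x264 reports progress. if x264
// buffers more frames than margin before reporting anything, the
// decoder is capped, the encoder stays silent, and both sides hang.
struct Pipeline {
    int frames_decoded = 0;
    int frames_encoded = 0;
    int stop_at;          // decode limit, updated on encoder progress
    int margin;           // the arbitrary "150" in the current build
    int report_interval;  // frames x264 buffers before reporting

    Pipeline(int margin, int report_interval)
        : stop_at(margin), margin(margin), report_interval(report_interval) {}

    // returns false when neither side can make progress (deadlock).
    bool step() {
        if (frames_decoded < stop_at) {
            ++frames_decoded;  // feed x264 one more frame
            return true;
        }
        if (frames_decoded - frames_encoded >= report_interval) {
            frames_encoded += report_interval;  // x264 finally reports
            stop_at = frames_encoded + margin;  // decoder may run ahead again
            return true;
        }
        return false;
    }
};
```

    with margin = 150 and x264 reporting every 100 frames, this runs forever; bump the report interval past 150 and it deadlocks after decoding exactly 150 frames, which is the hang described above.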

    the good news is that if switching over to using the x264 c api works, then i will be in control of what x264 does, and that should solve this permanently. to be honest i'm not that surprised it happened to someone since it was happening all the time in my tests before i bumped that arbitrary (150) number up.

    you can already choose the output directory by clicking the "change" button in the ui. were you expecting it to be called something else? or is this not how you were expecting to set the output directory?

    noted the other requests. one word about temporary files for audio though. if it's not cpu bound then i don't see what the point is. the temporary file will probably be larger than the original file (at least it probably won't be any smaller) which would make it even slower. unlike anri, yua encodes the video and audio simultaneously (during the video first pass). so any "wasted" cpu is soaked up either by decoding the input or by x264.

    have to head to work now, will check more when i get back.