Quote from tate:
Some things did confuse me about how you did stuff, but I guess I don't understand what you mean when you say infinite number of languages. Unless you mean different ways to go about programming in the language because of its flexibility.

yep. for example going with package variables was a mistake. should have used lexicals (my) like every other modern perl programmer.

Quote from tate:
So would you mind if I started to rip and tear at it slowly and make it object oriented. I've been tempted to do this ever since I first looked at the code and constantly got lost in it.

i hate oo and never use it unless modules i'm using use it (which is very often with perl modules). probably if you're going to write oo perl today you'll want to use moose. i don't have any experience with it but i know that writing oo perl without something like moose is not fun and even less fun to maintain.

but no i don't mind.
It turns out ffmpegsource doesn't even recognize lagarith. I must have been doing my previous testing on a different codec. I'm not sure I want to completely throw out this idea just yet, since I can't think of any alternative method that will allow loading many avi files.
you can check the fourcc though right?
We can... you have an idea?
yeah i mean just check it and if it's lagarith then don't use ffmpegsource() lol. can throw up a warning too about don't use lagarith if you want to append hella avis.
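A crude way to do the fourcc sniff, as a sketch. The function name is made up, and the assumption that Lagarith's "LAGS" fourcc shows up in the first couple of kilobytes of the AVI's stream headers is mine:

```shell
# Hypothetical fourcc check: look for Lagarith's "LAGS" fourcc in the
# AVI header area (stream headers normally sit near the start of the file).
is_lagarith() {
  head -c 2048 "$1" | grep -q 'LAGS'
}

# usage (hypothetical):
#   is_lagarith capture.avi && echo "warning: Lagarith detected, skipping FFmpegSource" >&2
```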
Still working on the avi thing. Interesting thing about AviSynth is the try..catch control structure. It can be used to test multiple things in a single script while outputting the errors to a text file. Something like:

Code:
try{
	# append the two sources; this fails if either avi can't be opened
	avisource("AN088.avi")++avisource("AN089.avi")
}
catch(err_msg){
	# write the error message to a log file instead of aborting the script
	WriteFileStart(blankclip,"verifysource.log","err_msg",append=true)
}
never seen that before that i recall. nice.
FFmpegsource won't be used, going with the NMF method.

The video verification step is being updated to do 2 passes on avi files. The first pass uses try..catch to test pairs of avi files (1+2, 2+3, 3+4, etc.), writing any errors to a file. The second pass tests all videos joined together. If the first pass succeeds but the second pass fails, that should be a clear indicator that the problem is too many files being joined.
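A sketch of how the first pass could be generated, mirroring the try..catch example earlier in the thread. The function name, file names, and script/log paths are all placeholders:

```shell
# Sketch: emit one AviSynth try..catch block per adjacent pair of AVI files.
make_pair_checks() {
  prev=""
  for f in "$@"; do
    if [ -n "$prev" ]; then
      printf 'try{\n\tavisource("%s")++avisource("%s")\n}\n' "$prev" "$f"
      printf 'catch(err_msg){\n\tWriteFileStart(blankclip,"verifysource.log","err_msg",append=true)\n}\n'
    fi
    prev="$f"
  done
}

# placeholder file names
make_pair_checks AN088.avi AN089.avi AN090.avi > verifypairs.avs
```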
heh, cool.
I did a test with x264 by encoding Sonic 3 & Knuckles and FF9 (part 1).
I found that, with the same settings for both encodes, the bitrate allocated for S3K was roughly 2200 kbps, while the bitrate for FF9 was roughly 3.7x less than that.
I took the size of the encodes and divided by the number of frames to get a rough calculation on how much bitrate was spent on each frame, and it turned out S3K had about 3.7x more than FF9.
But this leads to the question: wouldn't it simply be better to use CRF and set the quality high enough to properly encode a game such as S3K? It could significantly cut the size of less demanding encodes, and cut the time needed to encode in roughly half while still providing excellent quality.

More tests are probably needed, but is there an off-hand reason that this is a bad thing to do?
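The per-frame arithmetic described above can be sketched like this (the function name is invented and the numbers are placeholders, not the actual test figures):

```shell
# bits spent per frame = file size in bytes * 8 / frame count
bits_per_frame() {
  awk -v bytes="$1" -v frames="$2" 'BEGIN { printf "%d\n", bytes * 8 / frames }'
}

# comparing two encodes would then be:
#   ratio = bits_per_frame(size_A, frames_A) / bits_per_frame(size_B, frames_B)
```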
Quote from Mystery:
I did a test with x264 by encoding Sonic 3 & Knuckles and FF9 (part 1).
I found that, with the same settings for both encodes, the bitrate allocated for S3K was roughly 2200 kbps, while the bitrate for FF9 was roughly 3.7x less than that.
I took the size of the encodes and divided by the number of frames to get a rough calculation on how much bitrate was spent on each frame, and it turned out S3K had about 3.7x more than FF9.
But this leads to the question: wouldn't it simply be better to use CRF and set the quality high enough to properly encode a game such as S3K? It could significantly cut the size of less demanding encodes, and cut the time needed to encode in roughly half while still providing excellent quality.

More tests are probably needed, but is there an off-hand reason that this is a bad thing to do?

Again: it's too unpredictable.

What CRF value did you use anyway? It must have been pretty high to get a presumably high-motion game such as Sonic down to only 2200 kbps.

We already use a raised minimum quantizer that potentially reduces the final average bitrate of less complex video.
Yes, size is unpredictable, but does it matter when you get smaller sizes? I can understand not wanting inflated videos, but smaller ones...
I used 21 IIRC.

Basically, the command line (IIRC) was something like
x264_64 --crf 21 --tune film "input" --output "output"
CRF, 1 pass: unpredictable, potentially lower average bitrate on less complex video.

2 pass with minimum quantizer: predictable (won't go past set bitrate), potentially lower average bitrate on less complex video.

If we're arguing 2 passes vs 1, I still prefer two passes since it is more predictable.
Quote from ballofsnow:
CRF, 1 pass: unpredictable, potentially lower average bitrate on less complex video.

Isn't that a good thing? If the bitrate is lower with the same quality, what's the harm?

Quote from ballofsnow:
2 pass with minimum quantizer: predictable (won't go past set bitrate), potentially lower average bitrate on less complex video.

It's also possible to limit quantizer in crf mode.
I wouldn't really say it lowers bitrate much on less complex video, since you're telling it to hit a target filesize, after all. It can vary by something like +/- 10%, but that's it.

Quote from ballofsnow:
If we're arguing 2 passes vs 1, I still prefer two passes since it is more predictable.

I don't quite get your "predictable".
Do you expect to be able to take a formula, put in values, and get the final size, regardless of whether the bitrate is needed or not? And if so, why? What's the harm in smaller sizes?
Or is it predictable in a quality sense, as in we cannot determine whether the video will have good quality?

Seeing the benefits of crf mode, I honestly think it's good to experiment with it. If SDA could go over to crf mode sometime in the future, it would be great.
Quote from Mystery:
Quote from ballofsnow:
2 pass with minimum quantizer: predictable (won't go past set bitrate), potentially lower average bitrate on less complex video.


I wouldn't really say it lowers bitrate much on less complex video, since you're telling it to hit a target filesize, after all. It can vary by something like +/- 10%, but that's it.

This is wrong.

Before we discuss any further, do you understand the concept of the minimum quantizer? Also, have you looked at the x264 settings that SDA is currently using?
i think snow is having trouble coming across here. i don't know much about x264 but i do know that the way we do it now (and anri does it) allows any appropriate bitrate at a given quantizer up to a bitrate cap (i've seen ~200 kbit in hq which is 2048 kbit cap). bitrate cap comes from the 2-pass and keeps the file from being huge which is what he means by predictable (not too small which no one cares about). we can't have for example an hq that goes above 2048 kbit. now maybe 2049 kbit is ok but 20048 kbit is not which is what would happen if someone encoded something really crazy with no bitrate cap. that's where iq, xq etc come in and pick up the slack. anyway moore's law has pretty much destroyed x264 as the encoding bottleneck so discussions on saving time by cutting it down are increasingly academic.

i guess i'm in a documenting mood lately or something because now i feel like we need a "tech support board faq" where there are links to past discussions on things like this because i'm sure there are several ...

edit: it would also be a fun place to look at things people used to argue about all the time but have since stopped like why don't you support mkv or why do i have to make lq. then again it seems like i just got someone recently through email saying he wasn't going to submit lq because it looked low quality so who knows.
Technology changes so it's good to have new discussions every once in a while. We established the whole 2 pass with min quant process like... three years ago. But it would be nice to be able to point to old discussions simply for the fact that we wouldn't have to explain the concepts all over again.
Quote from ballofsnow:
Quote from Mystery:
Quote from ballofsnow:
2 pass with minimum quantizer: predictable (won't go past set bitrate), potentially lower average bitrate on less complex video.


I wouldn't really say it lowers bitrate much on less complex video, since you're telling it to hit a target filesize, after all. It can vary by something like +/- 10%, but that's it.

This is wrong.

Before we discuss any further, do you understand the concept of the minimum quantizer? Also, have you looked at the x264 settings that SDA is currently using?

OK yeah, maybe I ignored that minimum quantizer part.
Anyway, the minimum quantizer basically puts a limit on how low a quantizer x264 may use. Lower quantizer = higher quality = bigger filesize. Of course, excessively low quantizers are usually useless, since there is no visual difference and they bloat the size.
And by predictable you mean that you don't want your encodes to be bloated -- i.e., too big.

OK, that's understandable. That's what I'm after, too.
But then, theoretically, what would happen if we took a crf constant that gives "perfect" quality on difficult content (or, if you will, allocates around 2000 kbps for High Quality) and then used that everywhere? It would seem, then, that the difficult content gets enough bits to make it good quality at a predictable size, and all other content gets fewer bits and a smaller filesize, but still good quality.
You can still enforce a minimum quantizer in crf mode. Or a constant rate factor.
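For concreteness, the two approaches being compared might look something like this. All numbers here are illustrative, not SDA's actual settings:

```shell
# 2-pass with a raised minimum quantizer (the current approach):
x264_64 --pass 1 --bitrate 2048 --qpmin 17 "input.avs" --output NUL
x264_64 --pass 2 --bitrate 2048 --qpmin 17 "input.avs" --output "output"

# single-pass CRF, optionally with the same quantizer floor:
x264_64 --crf 19 --qpmin 17 "input.avs" --output "output"
```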
what's difficult content though?
Whatever gives x264 the most trouble, I suppose. If we had to guess, that would be something like Mario Kart Wii. I think you mentioned that somewhere back in the thread.
What if we encoded MKW with a crf factor that gave equal bitrate to, say, High Quality and tried that on other sources? Especially troublesome ones like Sonic 3 & Knuckles.
problem is what happens when something more difficult than mkw comes along? now the settings in 83289328989 downloaded copies of anri are wrong and we can't count on people updating. i mean if we were talking about realtime software deinterlacing with the quality of mvbob then i would be very willing to listen but as it is the opportunity cost is just too high. it's trading endless uncertainty for an incremental improvement in encoding time that grows smaller by the day.
I don't see the issue. We're talking about allocating X bitrate for movie Y, right?
What if movie Z comes along and requires 2X bitrate?
If we're using 2-pass, then we're, well, screwed, because we're not going to get more bitrate. So quality varies.
If we're using crf, then bitrate will scale. And even if it does not, it's in the same boat as 2-pass.

So either filesize will suffer (crf) or quality will suffer (2-pass). Which is best of these two? I'm voting crf, but your mileage may vary.
Besides that, by putting limits on quantizers, we should be able to actually minimize the damage should such a situation occur.
And then there's the question of how many bytes crf will save in the end. Probably a lot more than 2-pass, so the bytes saved versus some oversized game would probably weigh in crf's favor.
Forcing people to upgrade versions is always a good idea, as well.
yeah you see it from the perspective of the audience who wants consistent quality and has disk space/bandwidth to spare and i see it from the perspective of the system administrator. i want many things - i love everyone always all of the time but i know how little disk space and bandwidth we have so i'm my own worst enemy. if someone makes an hq that is 8238923893 gig then it's possible it's not a problem for them and not a problem for people downloading it but it's certain that it will be a problem for me since i have to host everything. eventually it becomes a problem for people downloading other runs too because now dl is out of disk space and even if i pay for the new disk the bandwidth is the same so everything crawls even more than it does now.

so then i have to come in and say sorry but that video that you encoded i can't actually host so download this new version and try again. but that hardcoded value change is also going to lower the quality of everything else everyone who updates anri encodes which seems kind of wacky. so back in 2006 we went through something very similar to what you're proposing and figured out that 17 was great for d4 and 19 was great for d1 and have used those with bitrate caps ever since with no problems.

but having said all that i don't think sda is losing any quality in the videos by capping the bitrate since we require iq and above when you get to d1 f1. h.264 is just really good especially when you talk about game video which is often pretty different from hollywood stuff.
Edit history:
Mystery: 2010-06-07 08:06:22 am
I also see it from your perspective. Hard disk space and bandwidth are a problem - so why make a video bigger than necessary?
It's been what - almost 4 years since it was last discussed? I think it is a good idea to bring it up again.
I see how loaded the SDA servers are, so reducing the bandwidth is certain to be a good idea.
I think we agree that we just need a good way of reducing the possibility of a 0xFFFF GB video being produced, yes?

EDIT:
I think I may have found a solution for both our problems.
Basically we do a hybrid 2-pass approach. In the first pass, we use crf mode, e.g.: x264_64 --slow-firstpass --pass 1 --crf n "input" --output "output".
If and only if the bitrate exceeds our cap for the specified quality, we do a second pass, e.g.: x264_64 --pass 2 --bitrate n "input" --output "output".
x264 is also nice enough to let us know the average bitrate it used for a given encode in its output. It's easy to base our decision on this.
I have a working prototype in code that enables this scheme, and it works from what I'm told. I could help integrate such a system, or at the very least donate some code for it, although I don't know what anri is coded in.
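A sketch of the decision step. It assumes x264's usual "encoded N frames, F fps, B kb/s" summary line on stderr; the function names and the cap value are invented:

```shell
CAP_KBPS=2048   # hypothetical bitrate cap for the quality level

# pull the average bitrate (as an integer kbps) out of an x264 log file
avg_kbps() {
  awk '/kb\/s/ { rate = $(NF-1) } END { printf "%d\n", rate }' "$1"
}

# succeeds (exit 0) when the CRF first pass overshot the cap and a
# bitrate-targeted second pass is needed
needs_second_pass() {
  [ "$(avg_kbps "$1")" -gt "$CAP_KBPS" ]
}

# usage (hypothetical):
#   x264_64 --slow-firstpass --pass 1 --crf 19 "in.avs" --output "out" 2> x264.log
#   needs_second_pass x264.log && x264_64 --pass 2 --bitrate $CAP_KBPS "in.avs" --output "out"
```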
snow, what do you think of this? if i understand it correctly it would save people from encoding the second pass if 17 for d4 and 19 for d1 doesn't cause x264 to write something bigger on average than the bitrate cap for the quality being encoded.

also mystery - dunno how we keep missing each other on this, basically i have seen anri make very low bitrate files due to our min quants ... so your proposal wouldn't make things any smaller, would just save a little time by skipping the second pass sometimes.