Again, I dunno if this has been suggested before, but how about an "add statID to existing movie" option on the main menu? MP4Box supports appending files with the -cat option, so it'd basically consist of making the 5-second statID into its own movie file (at the right res/frame rate; dunno how hard that part would be), then running

MP4Box -cat statid.mp4 -cat originalmovie.mp4 -cat statid.mp4 newmovie.mp4

Quote:
$ "/cygdrive/c/Program Files (x86)/anrichan3.0/MP4Box.exe" -cat EmpireCity_Day1/EmpireCity_Day1_XQ.mp4 -cat Spagonia_Day1/Spagonia_Day1_XQ.mp4 ph33r.mp4
Appending file EmpireCity_Day1/EmpireCity_Day1_XQ.mp4
No suitable destination track found - creating new one (type vide)
No suitable destination track found - creating new one (type soun)
Appending file Spagonia_Day1/Spagonia_Day1_XQ.mp4
Saving to ph33r.mp4: 0.500 secs Interleaving


Seeing as there are times when the completion time is decided on after the movies have been encoded, I think this'd be a good feature.
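Just to sketch what that menu option might do under the hood (file names here are hypothetical; the only real piece is MP4Box's -cat flag, used exactly as in the command above):

```python
# Sketch: assemble the MP4Box concatenation command for wrapping a movie
# in statIDs. File names are placeholders, not anri-chan's actual ones.
def build_cat_command(statid, movie, output, mp4box="MP4Box"):
    """Return the argv list for: MP4Box -cat statid -cat movie -cat statid output"""
    return [mp4box,
            "-cat", statid,
            "-cat", movie,
            "-cat", statid,
            output]

cmd = build_cat_command("statid.mp4", "originalmovie.mp4", "newmovie.mp4")
print(cmd)
# subprocess.run(cmd) would then perform the actual append.
```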
Edit history:
ballofsnow: 2010-03-09 04:56:12 pm
The idea has come up in the past; the problem is audio desync, since most (all?) audio codecs can't be accurately cut to the millisecond. I might not be phrasing it correctly since I'm no audio expert. It could be for the same reason that a YV12 video has to have a mod 4 width and mod 2 height. Pad that 399 width to a mod 4 400 width... pad that 5.000 second audio to 5.010 and you now have a gap or desync.
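The padding analogy with concrete numbers (the 1024-sample frame size is AAC's; the exact figures vary by codec, but the principle is the same):

```python
import math

def padded_duration(seconds, sample_rate=48000, frame_samples=1024):
    """Audio encoders emit whole codec frames, so a stream gets padded up
    to the next frame boundary -- like padding a 399-wide video to 400."""
    samples = seconds * sample_rate
    frames = math.ceil(samples / frame_samples)
    return frames * frame_samples / sample_rate

# A 5.000 s clip at 48 kHz is 240000 samples = 234.375 AAC frames,
# so the encoder pads it out to 235 whole frames:
print(round(padded_duration(5.0), 4))  # 5.0133 -- that extra ~13 ms is the gap/desync
```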

This problem goes a long way back for me personally, when I was trying to seamlessly loop an mp3 in some flash video and there would always be a stutter. Had to resort to using PCM.

Maybe one day, when super high video bitrates are the norm, we won't mind using PCM audio at a paltry 48 kHz / 1536 kbps.
Edit history:
bmn: 2010-03-09 06:04:39 pm
I thought that audio stutter thing was exclusive to MP3? I remember a time when I had that issue with looping* and I switched to Vorbis to avoid it.

But yeah, I see what you're getting at. I'm thinking though, there'd be no issues with the last sid as it's only silence, so the problems are all with putting the first sid together with the movie. What's the possibility of creating a statID with audio that's accurate to... well, not the millisecond, it'd be to the frame, so ~16.7ms? I know you just pulled the numbers out, but wouldn't 10ms of desync be acceptable anyway?

It's not like we're dealing with an unknown quantity like with most of the movies anri sees; we know the video's 300 frames, and the audio's going to be practically the same. I thought most of the desyncing in captured video was from other causes like dropped frames and the like.

Is there a tool that shows the length of the channels in ms?

* Not in Flash, though I actually used a technique in Actionscript to loop MP3s properly when that one came up. In Flash though, only the actual loop setting in the Sound object is capable of doing a seamless loop - if you use the on (Sound.complete) { Sound.play } method, it falls foul of Flash's audio buffer which delays the Sound.complete trigger.
Edit history:
bmn: 2010-03-09 07:34:02 pm
Right, yeah it's a bigger issue than I made out (shouldn't have doubted you ;p). But it was interesting:

I hacked the AVS scripts to create LQ/MQ/HQ versions of a statID. In each case the video was exactly 5 seconds as you'd expect. However, in all three cases, the audio was 5.056 seconds. Obviously 56ms is a hefty desync, but it is a definite number that was the same in all three cases.

Edit: Made another set with different lines and again the audio length was exactly 5.056s. So that likely means the difference between the video and audio lengths will be the same for every statID. How that knowledge could be used to deal with desync... I dunno.
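For what it's worth, 5.056 s happens to line up with a whole number of AAC-sized codec frames at 48 kHz. That rate and frame size are assumptions on my part (I don't know what the statID audio actually is), so this is just a sketch of the arithmetic, not a confirmed explanation:

```python
SAMPLE_RATE = 48000   # assumed statID audio sample rate
FRAME_SAMPLES = 1024  # AAC frame size in samples (codec-dependent)

samples = 5.056 * SAMPLE_RATE      # 242688 samples
frames = samples / FRAME_SAMPLES   # a whole number of codec frames
print(round(frames, 6))  # 237.0 -- consistent with the length being
                         # quantized to frame boundaries (plus encoder padding)
```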
Edit history:
bmn: 2010-03-09 08:01:02 pm
That reminds me actually. A hag at TASvideos did some research on the YV12 conversion and how it screws with the chroma. Obviously this is more suited to what they do having an RGB source off the bat, but might have some use here.

Basically they found that it was the direct conversion from YUY2/RGB straight to YV12 that did the damage, and that converting to YV24 (only available in the beta versions of AviSynth 2 unfortunately) then YV12 produces a marked improvement.
Code:
ConvertToYV24(chromaresample="point")
ConvertToYV12(chromaresample="point")

The argument is necessary to have AviSynth use the new resampling in the beta, rather than the ballz resampling that's already in the stable version.

On top of that, apparently having D4 videos at D1 dimensions when the colourspace conversion is done, and only then doing the resize, causes an improvement as well. Even if the source video is already D4, it's best (in terms of final quality at least) to double the dimensions with PointResize, do the conversion, then put it back down. I can't actually vouch for this one though.


I had the opportunity to test this info (as you do) on emulator footage with a video project for the Sonic Stadium. I know they're meant to be D4 for SDA (they're PointResized to 2x dimensions), but can't really be bothered to go back and capture more screens. Anyway!

The original, RGB at 2x dimensions:


PointResize & Simple RGB > YV12 conversion:


PointResize & RGB > YV24 > YV12 conversion:
this is really interesting. i guess it's like that upconv=1 argument to mpeg2source() that magically does a better colorspace conversion, like you wouldn't want that for some reason normally. wonder when this will hit stable. i know they're still releasing new versions of avisynth so it seems like it should happen eventually. won't matter for much stuff except digital capture (e.g. fraps) obviously but it's still a big deal for people like kibumbi.

about the appending mp4s thing, if you can get mp4box to generate an mp4 that quacktime 7 can open after concatenation, let me know. i've never been able to do it. never been able to split either. as far as i'm concerned the audio sync stuff is all moot unless quacktime opens the results.
A thought about deinterlacing. When a D1 F2 source is encoded, mvbob is used, as it is with F1 sources. Unless I'm mistaken, you can recreate a D1 F2 image by merging the two fields together in the same way that the capturing apps we love to hate do it, so should the procedure be changed to do this instead?
two reasons we don't do this. first the minor one and then the major one.

minor one is it causes slightly more artifacts especially if the signal isn't clean.
http://nate.metroid2002.com/blog/ (ctrl-f for 2006-06-16 02:19:21)

major one is it turns out very few games are constant f2. basically only first party nintendo stuff that i've seen. if it strays from f2 even by one frame in either direction then bam, interlacing artifacts.
sux

Does make sense that mvbob would do a sexual job having twice the picture to work with. The major issue you mentioned is... actually something I saw about an hour ago o_O I figured it was a "source has dropped a frame" moment, but was the first time I saw it.

Might the significantly quicker leakkernelbob be an option though, seeing as the task is less complex than a normal deinterlace?
i guess it would depend on how variable the framerate is. it gets too weird and now it's just as bad as f1. i don't know how to decide that and that's when i gave up on relieving f2 people of mvbob ...
Looking at leakkernelbob a bit closer, looks like it requires a stable field order anyway.
http://speeddemosarchive.com/forum/index.php?topic=11454.msg319460

seems like it should be trivial to put in a yes/no for sda statid. non sda statid can just be a black 640x480 image. no need to change where the text is drawn imo.
Edit history:
ballofsnow: 2010-04-11 01:52:26 pm
Nate, do you have some PAL VOBs or MPEG2s I can use for testing? I have.. one.. only.. Tongue

I'm experimenting with a new (to Anri) method of processing NTSC and PAL. Basically Anri will see no distinction and rely on PAR to set the DAR to 4:3 or 16:9. edit- GBA 3:2.

Also see Does this video play correctly in your media player?
sorry, didn't see this until just now. i think smf is glitching even more than usual and losing my unread flags. anyway:

http://nate.quandra.org/colosseum16.vob
http://nate.quandra.org/ceres.vob
Cool. I'm getting a 403 forbidden on the colosseum link though.
crap. try now.
You are the man. Thanks.
Edit history:
ballofsnow: 2010-04-17 01:29:15 pm
Having problems with PAR. The syntax in mp4box is -par trackid=numerator:denominator. It seems the numerator and denominator are unsigned integers (0 - 65535); if you go over, it takes the remainder. For example, if you specify a value of 70000, it'll result in a value of 4464 (70000 - 65536), giving really odd aspect ratios. You can end up with such large numerators and denominators when you try to avoid division or floats.

Blargh.

edit- Euclidean algorithm ftw!
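A sketch of that fix: compute the PAR from the desired DAR and the storage dimensions as an integer ratio, then reduce it with the Euclidean algorithm (gcd) so both numbers fit MP4Box's unsigned 16-bit fields. The function name is mine, not Anri's:

```python
from math import gcd

def par_for_dar(dar_w, dar_h, width, height):
    """PAR = DAR / SAR, kept as an integer ratio and reduced with gcd
    (Euclid's algorithm) so it fits MP4Box's -par num:den uint16 range."""
    num = dar_w * height
    den = dar_h * width
    g = gcd(num, den)
    return num // g, den // g

# PAL 720x576 displayed at 4:3 -> the classic 16:15 pixel aspect ratio
print(par_for_dar(4, 3, 720, 576))  # (16, 15)
# NTSC 720x480 at 4:3 -> 8:9
print(par_for_dar(4, 3, 720, 480))  # (8, 9)
```

Without the reduction, 2304:2160 style values would still fit, but chained multiplications can easily blow past 65535 and wrap around, which is exactly the odd-aspect-ratio symptom above.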
I've been wondering this. I don't know how it is in anri-chan, but at least in the knowledge base you recommend using 2-pass bitrate mode. Why?
It seems to me it would be better to choose CRF instead. It cuts encoding time roughly in half while providing the same quality.
With 2-pass you have control of the bitrate. CRF mode is basically encoding at a certain quality level but with unpredictable results in bitrate, and we don't really like that at SDA. Try encoding a fairly static game, like one of those RPGs, versus a game that has lots of motion like a Sonic game, and you'll see a big difference in bitrate at the same quality.
You want a strict control over the final filesize, I take it? Otherwise it's usually just to experiment with different crf values.
Yes, control the bitrate to control the file size.

Not sure what you mean by your second statement. We did initially want to use crf and we experimented with different values, however we came to the conclusion that with such varying content the file size would be too unpredictable.
Alright, yeah, that's basically what I meant.
if someone wants to test this for some reason, the least complex game i can think of off the top of my head would be one of the early silent hills. you wouldn't think so at first but the clean video (impossible on nes and very hard on the 16-bit systems) combined with usually at least 90% of the screen being total darkness in a given frame means there is very little for the encoder to store.

most complex i can think of right now would be something i just saw - mario kart wii. other d1 f1 racing games may take the lead, not sure, but mario kart wii is pretty bright and detailed. of course once you go beyond d1 all bets are off - that's cheating.