The Dork Knight himself.
Yeah I see your point. The main reason I wanted to test this was to make the final HQ video a bit clearer. I'm attaching my two test files. One is encoded in the proper D4 resolution, and the other in D1 resolution. Aside from the obvious bobbing that gets introduced, I prefer the D1 encode because it has a good balance between clarity (especially with the word Magic and other text) and enough softness so that it doesn't look super pixelated.

The only solution that would give proper D1 resolution in this scenario would be to shift the non-dominant field (for my capture card it's always top field first) to eliminate the bobbing, although I'm pretty sure that wouldn't be a 100% fix. I tried using Anri and adding in every bob fix that is available (nate_retard_bob is such a great name btw) and none of them worked out. Applying post-processing filters in MPC to the D4 video still didn't really clear things up.



if you're interpolating lines like what yua does with nnedi3 for interlaced d1 then i believe you want top field quarter line shift down, bottom field quarter line shift up (before interpolating). after interpolating i guess it would be half lines. not sure what it would do to the clarity since it isn't whole lines. may be easier to just make sure there is no 1-pixel shift on field split, then treat every field as a top field for input to nnedi3. as always this stuff is dodgy for me to visualize.
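A rough way to picture the field split being described (an illustrative Python sketch only, with frames as lists of rows - not anything Yua or Anri actually runs): the top field is the even-numbered scanlines, the bottom field the odd-numbered ones, and "treating every field as a top field" just means handing both lists to the interpolator with the same field flag.

```python
# Toy field split: a frame is a list of rows. Illustration only,
# not real Yua/Anri code.

def separate_fields(frame):
    """Return (top, bottom): even-numbered rows and odd-numbered rows."""
    return frame[0::2], frame[1::2]

# A 4-line "frame"; each row is tagged with the scanline it came from.
frame = [["line0"], ["line1"], ["line2"], ["line3"]]
top, bottom = separate_fields(frame)
print(top)     # [['line0'], ['line2']]
print(bottom)  # [['line1'], ['line3']]
```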
The Dork Knight himself.
Hmmmm, assuming every field is a top field after separation sounds like it could work. Is there any way for me to do this now in Anri just to test? The only other thing I could think of would be to shift every bottom field up since those are the frames where the image shifts down.
hmmmmmmmmmmm. hard to say honestly because when i said that i was thinking of modifying where yua passes the image to nnedi3 (it specifies top or bottom field). i don't think you can do that with the original nnedi3 (for avisynth) but i'm not 100% sure.
Edit history:
ballofsnow: 2013-04-13 09:39:57 pm
ballofsnow: 2013-04-13 09:38:52 pm
ballofsnow: 2013-04-13 09:34:12 pm
Jay's vid just needs a 1-pixel bob. I tried it on the d1 mp4.

Code:
loadplugin("ffms2.dll")
import("FFMS2.avsi")

input="s-vid-d1_HQ.mp4"
FFIndex(input)
AudioDub(FFVideoSource(input), FFAudioSource(input))

converttoyuy2
interleave(selecteven,selectodd.addborders(0,1,0,0).crop(0,0,0,-1))


So I guess run it through anri, choose 1 pixel bob, then modify the avisynth file(s) manually with the resize stuff.

edit- Actually, it's not clear what the goal is. A simple pointresize, but not really D1 - or - do some kind of interpolating for pseudo-D1?
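For anyone trying to follow what that interleave line does to the odd fields, here's a Python sketch (illustration only, rows as lists - not the AviSynth internals): addborders(0,1,0,0) pads one blank row on top and crop(0,0,0,-1) drops the last row, so every odd field shifts down one line before being interleaved back with the untouched even fields.

```python
# Illustrative Python only: the 1-pixel bob shifts each odd field down
# one row (pad the top, drop the bottom), mirroring
# addborders(0,1,0,0).crop(0,0,0,-1) in the AviSynth snippet above.

def shift_down_one(field, blank=0):
    """Shift an image (list of rows) down by one row."""
    width = len(field[0])
    return [[blank] * width] + field[:-1]

field = [[1, 1], [2, 2], [3, 3]]
print(shift_down_one(field))  # [[0, 0], [1, 1], [2, 2]]
```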
released alpha 7. quite a few bug fixes. --trim=firstframe,lastframe is available via the cli. additionally:

- --1-pixel-shift, --alternate-1-pixel-shift, --de-deflicker, and --alternate-de-deflicker cli options
- screenshot functionality
- new-style statid image
- swap e.g. _HQ and _part01 if e.g. name_part01 is specified as the output basename to conform with sda file naming convention
Interesting tidbit: when switching the field dominance with StatID checked, the StatID preview vanishes when it switches the field order. It returns if you type anything in the three lines (name, game, time/category etc).
thanks. noted.
Edit history:
honorableJay: 2013-04-13 11:46:55 pm
The Dork Knight himself.
Quote from ballofsnow:
Jay's vid just needs a 1-pixel bob. I tried it on the d1 mp4.

Code:
loadplugin("ffms2.dll")
import("FFMS2.avsi")

input="s-vid-d1_HQ.mp4"
FFIndex(input)
AudioDub(FFVideoSource(input), FFAudioSource(input))

converttoyuy2
interleave(selecteven,selectodd.addborders(0,1,0,0).crop(0,0,0,-1))


So I guess run it through anri, choose 1 pixel bob, then modify the avisynth file(s) manually with the resize stuff.

edit- Actually, it's not clear what the goal is. A simple pointresize, but not really D1 - or - do some kind of interpolating for pseudo-D1?


Well the point (for me anyway) was to see if taking footage from a D4 source (this example is Golden Axe II played on a Wii with s-video) could be encoded in D1 resolution to keep text clarity. If you look at the videos I linked, the one encoded at the proper D4 resolution has really blurry text. But when encoded at D1 resolution the text is easily readable and the video is just soft enough to not look super pixelated. Check out the word 'Magic' and the numbers showing the magic level to see what I'm looking for.

I'm attaching the HQ avs script (along with the batch file to run it) used to make the D1 encode. If you can tell me how to edit it to get a proper 1 pixel bob so I can make another test encode, I'd appreciate it. None of the included bob fixes in Anri take this scenario into account, so I can't get rid of the bob on my own. The problem with getting the bob in there is that for D1 encodes you're not given the option to choose a bob method. I'm not sure why, but when I manually put the bob fixes in, the final video came out the same (I used a D4 avs script as a reference for where the bob fixes are supposed to go, but I could've been wrong).
Edit history:
ballofsnow: 2013-04-14 12:10:55 am
ballofsnow: 2013-04-14 12:07:50 am
ballofsnow: 2013-04-14 12:07:00 am
Oh, didn't realize you were choosing D1 in Anri. Just stick with D4, then modify the avs files afterwards.

Here's one from an input of 720x576, D4 F1 2D, 1 pixel bob. Anri will set an aspect ratio of 4:3 in the mp4, so the 720x576 will display as 768x576.

This is HQ avs.

import("C:\Program Files (x86)\anrichan3.3\plugins\plugins.avs")
import("AC21804_source.avs")
converttoyuy2
AssumeBFF
sourcewidth=last.width
sourceheight=last.height
DAR=4./3 * float(last.width)/last.height * float(sourceheight)/sourcewidth
separatefields
prenmfrate=last.framerate
Try{import("AC21804_source_nmf.avs")
nmf=true}catch(err_msg){nmf=false}

#last.height > 700 ? (last.width > 480 * DAR ? lanczos4resize(sda_even(round(480 * DAR)),480) : lanczos4resize(last.width,480)) : (last.width > last.height * DAR ? lanczos4resize(sda_even(round(last.height * DAR)),last.height) : NOP)
#last.height % 2 == 1 ? AddBorders(0,0,0,1) : NOP
#last.width % 4 <> 0 ? AddBorders(floor((4 - last.width % 4) / 2.), 0, ceil((4 - last.width % 4) / 2.), 0) : NOP
nate_alternate_1_pixel_bob_fix
pointresize(last.width, last.height*2)

changefps(prenmfrate/1)
statid=nate_statid(last,"\n\n\n","","")
statid=statid.AddBorders(int((statid.width * float(4*statid.height)/float(3*statid.width) - statid.width) / 2.), 0, int((statid.width * float(4*statid.height)/float(3*statid.width) - statid.width) / 2.), 0).Lanczos4Resize(statid.width,statid.height)
changefps(last.framerate)
assumeframebased




Here's one from an input of 720x480, D4 F1 2D, no bob though. This time I resized the width to 640, then did a pointresize for the height. You could argue to keep it at 720x480 (while still displaying at 640x480) but you probably won't notice a difference. Also less bitrate used up at 640x480.


import("C:\Program Files (x86)\anrichan3.3\plugins\plugins.avs")
import("AC2688_source.avs")
converttoyuy2
AssumeTFF
sourcewidth=last.width
sourceheight=last.height
DAR=4./3 * float(last.width)/last.height * float(sourceheight)/sourcewidth
separatefields
prenmfrate=last.framerate
Try{import("AC2688_source_nmf.avs")
nmf=true}catch(err_msg){nmf=false}

#last.height > 700 ? (last.width > 480 * DAR ? lanczos4resize(sda_even(round(480 * DAR)),480) : lanczos4resize(last.width,480)) : (last.width > last.height * DAR ? lanczos4resize(sda_even(round(last.height * DAR)),last.height) : NOP)
#last.height % 2 == 1 ? AddBorders(0,0,0,1) : NOP
#last.width % 4 <> 0 ? AddBorders(floor((4 - last.width % 4) / 2.), 0, ceil((4 - last.width % 4) / 2.), 0) : NOP
lanczos4resize(640,last.height)
pointresize(last.width, last.height*2)


changefps(prenmfrate/1)
statid=nate_statid(last,"\n\n\n","","")
statid=statid.AddBorders(int((statid.width * float(4*statid.height)/float(3*statid.width) - statid.width) / 2.), 0, int((statid.width * float(4*statid.height)/float(3*statid.width) - statid.width) / 2.), 0).Lanczos4Resize(statid.width,statid.height)
changefps(last.framerate)
assumeframebased
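The pointresize(last.width, last.height*2) step in these scripts amounts to a scanline double. In Python terms (a sketch assuming rows as lists, not Anri's actual implementation), it's a nearest-neighbor 2x vertical resize that simply repeats every field line:

```python
# Sketch of pointresize(width, height*2): each field line is repeated
# once, turning e.g. a 640x240 field back into a 640x480 frame with
# hard scanlines and no interpolation.

def double_scanlines(field):
    """Nearest-neighbor 2x vertical resize: repeat every row once."""
    out = []
    for row in field:
        out.append(row)
        out.append(list(row))  # duplicate (copied so rows stay independent)
    return out

field = [[1, 1], [2, 2]]
print(double_scanlines(field))  # [[1, 1], [1, 1], [2, 2], [2, 2]]
```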
Edit history:
honorableJay: 2013-04-14 12:51:21 am
The Dork Knight himself.
With a little editing, it actually worked. I definitely like the results. I haven't tested the first script yet (from a true D1 encode) but the second script (from a D4 encode resized) worked like a charm. Here's some stats:

Encoding D4 as D1 with Anri 3.3 normally (with no bob fix): fps ranged from 1 to 5
Encoding D4 as D1 with Snow's script (also with Snow's bob fix): fps ranged from 100 to 160
Encoding D4 as D4 with Anri 3.3 normally (with no bob fix): fps ranged from 250 to 350

Final encode file size: D4 is 8.84MB, D1 is 15.2MB

The file size doesn't surprise me since it's double the resolution, but the encoding framerate really surprises me. I'm not sure what Snow did compared to the normal D1 encode, but I'm assuming it has to do with the encoder trying to reconstruct the full D1 resolution from an interlaced D1 source. In my case though since it's just a resize of the raw interlaced capture there isn't as much work involved.

I'm probably just a voice in the minority, but I'd like to see this implemented as an IQ or XQ style encode for D4 sources. Of course this would be up to the runner to do the encoding themselves to get this type of encode, but the picture clarity to me is worth it. It's as close to the source quality as possible.
The Dork Knight himself.
For those interested, I'm attaching the two encoded videos to show the difference.



I prefer the D4 since it isn't so pixelated. IMO, if you are going to up the resolution, you are going to have to do some anti-aliasing to make it look good instead of just duplicating the pixels.
is this not something that could be done after the fact to the hq? it seems mostly just like a pixel resize to d1 instead of whatever interpolation people's players normally use (probably bicubic or bilinear).
Edit history:
ballofsnow: 2013-04-14 09:37:35 am
ballofsnow: 2013-04-14 09:35:42 am
Media Player Classic only has nearest neighbor besides bicubic and bilinear. It looks bad - well, not BAD, but not the best it can be.

For VLC, you can go to Tools -> Preferences -> Show settings ALL -> Video -> Filters -> Swscale. I didn't find anything good.

Loading a 320x240 video into avisynth with ffmpegsource and doing a pointresize still doesn't look good. So basically pointresize won't look good in any video player, assuming they even offer pointresize at all.

What might be an interesting test is producing a final encode in RGB or yuv444, then doing the above 3 tests.
Edit history:
ballofsnow: 2013-04-14 02:12:38 pm
Testing Yua A7. Also, A5 for reference.

VFR messed up on a VOB file. See MPEG2_AC3_720x480i_4x3_2997_YUV420_D1F13D---DMC3.VOB
The Dork Knight himself.
Snow: if you want to test out your theory with RGB/yuv444, here's the raw file I used for the pointresize.
Edit history:
ballofsnow: 2013-04-15 08:17:55 pm
ballofsnow: 2013-04-15 08:17:32 pm
ballofsnow: 2013-04-15 08:14:53 pm
Thanks, that's a keeper. Will be used in Yua testing too.

I think it's the resize. There's no easy way to resize a 640x480 capture down to 320x240 and expect to blow it back up to 640x480 with a perfect result. You'd need a 640x480 capture where each game pixel is a 2x2 square, such that a pointresize by a factor of 2 in each dimension brings it back down perfectly to 1x1 game pixels without throwing away or making up information.

For example imagine you have a perfect 640x480 capture of a 320x240 game. Each game pixel is 2x2 in the capture:
########
########
########
########


Pointresize factor 2 should be easy here, right?:
####
####


OK, but what do you do when the capture is shifted over a pixel, so the game pixels straddle the 2x2 blocks:
 ########
 ########
 ########
 ########

Yeah...uh....

I'd need to do more testing though to be sure.. generate myself a perfect grid and then see how avisynth pointresize behaves.

Maybe you could try capturing at 320x480i@30 if the end result needs to be 320x240p@60.
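The aligned-grid point above is easy to check in a few lines of Python (a sketch of the idea, not avisynth's pointresize itself): a 2x nearest-neighbor downscale/upscale round trip is only lossless when every game pixel sits on a clean 2x2 block.

```python
# Nearest-neighbor round trip: lossless on an aligned 2x2 grid,
# lossy once the grid is shifted by a pixel.

def point_halve(img):
    """Nearest-neighbor downscale by 2: keep every other row and column."""
    return [row[0::2] for row in img[0::2]]

def point_double(img):
    """Nearest-neighbor upscale by 2: repeat each pixel into a 2x2 block."""
    out = []
    for row in img:
        wide = [p for p in row for _ in (0, 1)]
        out.append(wide)
        out.append(list(wide))
    return out

# Aligned capture: each game pixel is a clean 2x2 block.
aligned = point_double([[1, 2], [3, 4]])
print(point_double(point_halve(aligned)) == aligned)  # True: exact round trip

# Misaligned capture: the same image shifted right one pixel. The round
# trip now samples across game-pixel boundaries and loses information.
shifted = [[0] + row[:-1] for row in aligned]
print(point_double(point_halve(shifted)) == shifted)  # False
```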
Edit history:
honorableJay: 2013-04-15 10:26:06 pm
The Dork Knight himself.
Ya know, I would do that if the Dazzle supported that resolution. It is a shame that a simple scanline doubling doesn't give better results. Hopefully the RGB/yuv444 tests come out good. Do you think the encoder is introducing some sort of anti-aliasing into the final video, effectively breaking any type of pointresize?

One thing that caught my eye when checking out the avs scripts is why do a color conversion to yuy2 if the source material is already in yuy2 or RGB? Wouldn't it be easier to not do the color conversion for yuy2 or use RGB for the entire process until the end?
Edit history:
ballofsnow: 2013-04-15 11:06:01 pm
ballofsnow: 2013-04-15 11:05:30 pm
I did some yuv444 on x264, but like I said above, it's really the resizing that causes the problem. You don't even need to encode, really - just resize and then pointresize in avisynth and see what it looks like. If you want to try on that raw:
Code:
avisource("amarec20130411-2020.avi")
assumetff
separatefields
trim(0,100)
assumeframebased

# Resize to 320x240.
pointresize(last.width /2, last.height   )

# Bring it back to 640x480.
pointresize(last.width *2, last.height *2)

I doubt there's a solution other than not resizing down in the first place. Anyway, no harm in making your own pseudo-d1 videos for your own use.

yuy2 -> yuy2, no effect.
RGB - I can't remember which off the top of my head, but some avisynth filters only work in yuy2/yv12 color space. With anri I didn't see many worthwhile reasons to work in RGB. Also, the avisynth documentation writes: "Conversion back and forth is not lossless, so use as few conversions as possible. If multiple conversions are necessary, use ConvertBackToYUY2() to convert back to YUY2, when you applied a YUY2->RGB conversion prior to that in your script. This will reduce color-blurring, but there are still some precision lost. In most cases, the ConvertToRGB filter should not be necessary."
Working in yuy2 seems to work pretty well, so that's what Anri does. I believe Yua works differently since it's not dependent on avisynth and its filters.
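That lossiness is easy to demonstrate. Here's a sketch using full-range BT.601-style constants (an assumption for illustration - AviSynth's converters use their own matrices and precision): converting RGB to YCbCr and back with 8-bit rounding at each step does not always return the original pixel.

```python
# RGB <-> YCbCr round trip with full-range BT.601-style constants
# (illustrative; not AviSynth's exact internals). Rounding to 8-bit
# integers at each stage loses a little precision for some colors.

def rgb_to_ycbcr(r, g, b):
    y  = round(0.299 * r + 0.587 * g + 0.114 * b)
    cb = round(128 - 0.168736 * r - 0.331264 * g + 0.5 * b)
    cr = round(128 + 0.5 * r - 0.418688 * g - 0.081312 * b)
    return y, cb, cr

def ycbcr_to_rgb(y, cb, cr):
    clamp = lambda v: max(0, min(255, round(v)))
    r = clamp(y + 1.402 * (cr - 128))
    g = clamp(y - 0.344136 * (cb - 128) - 0.714136 * (cr - 128))
    b = clamp(y + 1.772 * (cb - 128))
    return r, g, b

# Neutral gray survives the round trip; pure green does not.
print(ycbcr_to_rgb(*rgb_to_ycbcr(128, 128, 128)))  # (128, 128, 128)
print(ycbcr_to_rgb(*rgb_to_ycbcr(0, 255, 0)))      # slightly off from (0, 255, 0)
```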
Edit history:
ballofsnow: 2013-04-15 11:46:08 pm
ballofsnow: 2013-04-15 11:37:10 pm
Well I don't know why I doubted it, but pointresize works as expected in avisynth.

Also, a before and after with pointresize of a non-perfect grid, like what I posted above.
Attachment:
The Dork Knight himself.
Not resizing down is what I was thinking would be the best solution. Since the base capture is already at 720x240@60fps (after field splitting), it's better to just pointresize the 240 up to 480 and not touch the width at all before actually encoding (effectively a scanline double). I guess the only way to minimize the amount of encoding artifacts in this case would be to capture at 640x480 interlaced, split the fields, and hope that the pointresize to 480 doesn't mess with the image too much.

It's too bad we can't get more clarity out of the 320x240 encodes during playback resizing.
Edit history:
nate: 2013-04-16 10:52:40 am
would a playback filter that lanczos resizes a 320x240 original to 640x240, then point resizes to 640x480 be equivalent?
Edit history:
ballofsnow: 2013-04-16 04:27:20 pm
ballofsnow: 2013-04-16 04:22:08 pm
ballofsnow: 2013-04-16 04:21:53 pm
Nah, once you resize down there's no way back, unless you start off with a perfect capture which probably won't happen. You'll only be zooming in to imperfections in the source+resizer.

I guess if you're concerned about both crispness (can't think of better word now) AND file size maybe you can encode at 640x240, set aspect ratio in the H.264 stream to 4:3, and let your video player do the resize. Only thing is the video player will probably try to play it as 320x240 (resizing width instead of height) but I think that'll be offset as soon as you zoom in 200%+. Also, you'd have to set your video player to pointresize (or nearest neighbor?) instead of the default of bicubic. Just an idea, probably impractical.