Edit history:
tate: 2010-08-29 04:39:55 pm
Hi all, a few of you may remember my attempt to write a frame server for anri-chan around this time last year. Well, it has changed a lot and actually does what it should this time: serve frames. It still has a long way to go before it will be usable, but I believe this is a good step. I attached my C code as an Eclipse project export. I also added some Perl scripts in the run directory to help show how to use it. Currently its only useful abilities are trimming and resizing. My current audio trimming implementation could still use some work, but it does a decent job at the moment.

amemiya depends on ffmpeg 0.6 to read and decode video files.
It depends on x264, faac, and mp4box to write and mux videos together.
If you have ffmpeg, x264, and faac installed and place mp4box in the run/exe directory, you can start encoding videos using the runl.pl script inside the run/exe directory.

This is not intended to replace avisynth on Windows. This is mostly for the Mac and Linux users out there, so someday we can use anri-chan like all the Windows people to encode our speed runs. It still has a long way to go, and hopefully it will evolve into something much better than what I have started here.

To do list:
Improve audio trimming to be more accurate.
Deinterlacing
Multiple input file support
Cropping/padding?
Image reading for statids
Avisynth-like file reading
And last, a viewer
it kills me because this is exactly what needs to happen for anri for unix to finally be workable.

a little background.

in late summer 2008 i had just started learning perl and realized that i could rewrite anri in perl to be platform-independent. over the next few months i accomplished this as "anri 4" or "anrix". i was helped in this task by several other programmers who are much more talented than me, including dex and selbymd. unfortunately the backend toolchain (e.g. for deinterlacing and especially appending) was never solid enough for me to formally release it. it would work with certain input but not with other input, and i wanted it to behave identically on all platforms.

i actually started out with mplayer, then went to avidemux, then went back to mplayer, then dumped it for avidemux again before finally giving up on it entirely last year. it was always something - in mplayer you could append anything just by dumping raw frames in rgb, but then the audio sync would drift with each appended video. on the other hand, avidemux's scripting was completely out of hand - to give you an idea, it's javascript. a lot of things didn't work as documented, and a lot of things just didn't work at all on certain input - the avidemux executable would just sit there, apparently waiting for input, and you'd have to kill it manually. so i spent day after day trying to find a way around every reason anrix was failing simple encoding tests, and i never did. it was heartbreaking, as i'm sure you can tell from reading this, because i would get it stable encoding e.g. d4 f1 2d from dvd and then it would fail miserably doing the same from avi.

i know a lot more about perl today so i would probably want to rewrite my parts of anri.pl if we get to the point where we can do appending and deinterlacing with amemiya. i dunno how many people remember this but anrix also has sasami which is like a toy avisynth interpreter for mplayer or avidemux or the flavor-of-the-day backend. i did this so that the part of anri that writes .avs could be the same on all platforms (then on unix sasami would translate the .avs anrix just printed to whatever the backend is expecting).

the audio i think is going to be the hardest part. ideally someone would bring in big grant money and then i could just pay grenola to do it, because i know he has the right background. i know almost nothing about audio unfortunately. when i was trying to keep it in sync with these tools in anrix, it was like a caveman hitting a keyboard with his club.
All the hard work to bring anrix alive is why I started this whole thing. And I know exactly how you feel - it was like a caveman hitting a keyboard with a club when I was trying to do anything with audio for amemiya before it died the first time. I still feel that way.

Anyway, maybe we can get audio trimming right now; I just need some help figuring out where to trim. If someone can help with the math, I should be able to implement it. The audio comes in what are called frames. The number of audio frames is almost always different than the number of video frames, sometimes by quite a lot. Right now I am just doing simple math to figure out which audio frame is closest to the chosen video frame and cutting there. In all my tests so far the audio doesn't go out of sync, but I really think I have just gotten lucky. I think we can cut the audio in a much more byte-accurate way. If someone knows a good deal about audio, could you help me with the algorithm? The information I have to use for trimming is: total audio frames, total video frames, audio sample rate, audio Hz, audio channels, and the number of bytes in an audio frame. I just don't know enough about how audio works to be doing the math; I am trying to learn, but it is going to take some time.