

rip (encodes) vs original
Started by
Chaossaturn
, Jul 15 2012 03:37 PM
#21
Posted 17 July 2012 - 03:35 PM

Of course there will be a difference in compression if you change the compression settings (i.e., the preset), but assuming a constant preset, there is no real-world difference between encodes from two computers of differing CPU power beyond the time they take.
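To illustrate that point, here is a minimal sketch (mine, not from the post): with the same x264 build, the same preset/CRF, and a single thread, the bitstream should come out bit-identical on a slow machine and a fast one, and only the encode time differs. The file names are hypothetical.

# Sketch (not from the post): identical settings + a single thread should give
# a bit-identical bitstream on any CPU; only wall-clock time changes.
# Assumes the same x264 build on both machines and a hypothetical input.y4m.
import hashlib
import subprocess

def encode_and_hash(src="input.y4m", out="out.264"):
    subprocess.run(
        ["x264", "--preset", "slow", "--crf", "18",
         "--threads", "1", "-o", out, src],
        check=True,
    )
    with open(out, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

print(encode_and_hash())  # compare this hash across machines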

#22
Posted 17 July 2012 - 03:44 PM

Oh, I meant to note there that better CPUs can run the slower presets while maintaining the same fps...
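A rough way to see this on your own box (my sketch, not from the post): time a short test clip at a few presets and check where the fps lands. It assumes x264 is on your PATH and a hypothetical sample.y4m test clip; the frame count is illustrative.

# Sketch: benchmark a few presets on a short clip and report fps, to see
# which presets a given CPU can sustain at a target frame rate.
# Assumes x264 on PATH and a hypothetical sample.y4m clip.
import subprocess
import time

FRAMES = 500  # illustrative clip length

for preset in ["veryfast", "medium", "slow", "veryslow"]:
    start = time.time()
    subprocess.run(
        ["x264", "--preset", preset, "--crf", "18",
         "--frames", str(FRAMES), "-o", "bench.264", "sample.y4m"],
        check=True,
    )
    fps = FRAMES / (time.time() - start)
    print(f"{preset:>9}: {fps:5.1f} fps")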
#23
Posted 17 July 2012 - 04:24 PM

Oh, I meant to note there that better CPUs can run the slower presets while maintaining the same fps...
Ah, gotcha.

#24
Posted 19 July 2012 - 03:46 AM

Excellent thread is excellent; more of these insight-giving threads (into encoding, TVs, ODs, etc.), please!
#25
Posted 21 July 2012 - 03:40 PM

I would also like to note that this is only strictly true for a single thread (either threads=1 or a single-core CPU): a faster CPU (higher clock speed, more overclocking, mainly) will process more frames at the same settings, or the same number of frames at slower, higher-compression settings.
But a better rig will also have a CPU with a higher core count (like a quad-core, hyper-threaded i7), which increases encoding speed at identical settings because x264 divvies up the work across threads (it automatically sets threads=12 on a system with 4 real and 4 hyper-threaded cores; the default is 1.5 * the number of logical processors it sees). When you divvy up the work, quality drops slightly at the same settings, roughly in proportion to the number of threads used. This quality loss usually becomes visible around threads=16, which is pretty close to what I have (in fact, you can buy an i7 out there that reaches that count now).
You can fool around with the settings a little more and get better quality: a non-deterministic x264 encode will usually give slightly better quality for the same settings, but then you can't predict exactly what's going to come out of the encoder (because the encode now depends on what's happening inside the CPU).
Anyways, that's my rant.
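For reference, a quick sketch of the auto-thread rule described in the post above (threads = 1.5 * the logical processors the OS reports); the threads=16 threshold is the poster's observation, not a hard limit, and the suggested flags are just the standard x264 options mentioned here.

# Sketch of the auto-thread rule quoted above: x264 picks roughly
# 1.5 * the logical processors it sees (4 cores + Hyper-Threading
# = 8 logical = threads=12).
import os

logical = os.cpu_count() or 1      # logical processors visible to the OS
threads = int(logical * 1.5)       # x264's auto value, per the post above

print(f"{logical} logical processors -> x264 auto threads = {threads}")
if threads >= 16:
    print("Per-thread quality loss may start to show; consider capping "
          "--threads, or trying --non-deterministic if repeatable output "
          "is not required.")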