3D Realtime Rendering?
Nuverian | Date: Monday, 19.10.2009, 19:18 | Message # 1
It's been a long time since rendering was introduced, and in my opinion not much has changed there. Realtime capabilities have grown so much that I just can't understand why software doesn't implement them! I really can't see anything that can't be done in realtime engines anymore. MachStudio Pro is maybe the closest software to what I'm talking about, but even then not exactly, and it's really overpriced. Rendering for hours and hours really drives me crazy, and it's one of the reasons I'm rethinking my job... :-/ Any thoughts or pipelines are welcome.
Guest | Date: Monday, 19.10.2009, 19:23 | Message # 2
You can upgrade your computer, learn rendering optimisation, and/or change the render engine.
Guest | Date: Monday, 19.10.2009, 19:24 | Message # 3
Hacks will surely improve rendering speed, and I use them quite a lot. Even then you don't get instant feedback for tweaking. And when I said that I can't see anything that can't be done, that's from what I've seen in presentation videos on the net... those things are mostly open to developers and not to artists, taking into account that you need to know programming. By the way, I use XSI... not many options for renderers in there... Maybe I should move on... dunno
Guest | Date: Monday, 19.10.2009, 19:25 | Message # 4
A demo of an isolated feature doesn't constitute something you can implement in a rendering engine alongside the other 500 things it has to do and expect it to work or perform on par. You're simply grossly oversimplifying the problem, I'm afraid, and those game technologies don't do a fraction of what you need for the average commercial shot, not to mention that they require a considerable front-loaded investment in assets and procedures that software rendering seldom gets.
Guest | Date: Monday, 19.10.2009, 19:25 | Message # 5
You have 3Delight pretty tightly integrated, Affogato as a free bridge to all RenderMan renderers if you don't want to pay for 3Delight, Arnold currently in beta and accepting newcomers (and apparently doing pretty well), and some other solutions like Maxwell and Turtle (I believe; haven't followed those in ages).
robcat | Date: Monday, 19.10.2009, 19:27 | Message # 6
I went to a CG lecture in about 1991 and the speaker revealed to us that in 10 years, for all practical purposes, storage would be free, CPU power would be free, and render time would be instantaneous. Well, storage is pretty cheap. And the sort of rendering quality we did in 1991 IS just about instantaneous now. But every increase in processing power seems to be consumed by extensions and refinements in the rendering software.
Guest | Date: Monday, 19.10.2009, 19:27 | Message # 7
That was step two on its way to where it is now. In more recent years it was used extensively at Sony Imageworks/Animation, and as of a few months ago they started a closed beta of the engine plus an XSI plugin, along with the first drafts of the licensing model (which apparently might be geared towards larger studios at the beginning) for a release that, afaik, doesn't have a precise date yet. Given the use at Sony, I imagine they also have a Maya bridge already in place, but I don't know if that's up for beta testing too.

The stuff coming out of the beta seems very interesting so far, and it's good to see a decent brute-force ray tracer with very cheap rays that already had a track record of production use before it even entered beta (Monster House and Cloudy with a Chance of Meatballs, according to the developers).

This is pretty much all I know of it, and it's second-hand from friends or beta leaks. I'm not testing it and I don't have any ties to the developers themselves, so if you want to know more you'll have to get in touch with them somehow, but I can't be arsed checking what the name of the dev team/company is now, tbh.
Guest | Date: Monday, 19.10.2009, 19:28 | Message # 8
With the ongoing increase, within 10 years we will definitely see enough available horsepower to make working with lighting easier and more productive. Sure, you can refine the existing techniques and introduce new ones, but we have pretty much what is needed for the usual tasks, and it's far better than what we had 20 years ago. So even though we work with what we have, what we have is so much better than what we had, and it will only get easier and faster every year.

Almost any computer under 1000 dollars can run Word, play HD video, and still make quite a good workstation for many users. Today's 8 gigs of RAM for $100 is an unbelievable amount of memory that only really heavy work can exhaust. Imagine that within 10 years this number increases, let's say, 5 times: 40 gigs of RAM is hard to exhaust unless you work with HD data and lots of heavy textures and models. The jump from 8 to 40 gigs is drastically bigger than the one from 64 MB to 512 MB. At some point it can handle almost anything you throw at it.

We have GI, motion blur, DOF, soft shadows, raytraced refractions/reflections, IES, HDR support, SSS, physical cameras, fur, caustics, displacement, and volumetric effects like fog and volume caustics. What else can be invented? I can only think of measured BRDF data and something like Maxwell.
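To put the back-of-envelope memory numbers above in one place, here is a tiny sketch; the 5x-in-10-years growth factor and the $100-for-8-GB figure are the post's own assumptions, not measurements.

#include <cstdio>

int main() {
    // Back-of-envelope numbers from the post above (2009 sizes/prices as stated there).
    // The 5x-in-10-years growth factor is an assumption, not a measurement.
    const double old_from_mb = 64.0;           // a typical late-90s upgrade, in MB
    const double old_to_mb   = 512.0;
    const double now_gb      = 8.0;            // ~$100 worth of RAM in 2009, per the post
    const double future_gb   = now_gb * 5.0;   // assumed 5x growth -> 40 GB

    std::printf("64 MB -> 512 MB: +%.0f MB headroom (%.0fx)\n",
                old_to_mb - old_from_mb, old_to_mb / old_from_mb);
    std::printf("8 GB  -> %.0f GB: +%.0f GB headroom (%.0fx)\n",
                future_gb, future_gb - now_gb, future_gb / now_gb);
    return 0;
}

In relative terms the old jump was actually the larger one (8x versus 5x), but the absolute headroom added is in a different league, which is the point the post is making.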
Guest | Date: Monday, 19.10.2009, 19:29 | Message # 9
Real-time is subjective... There are some tools that do great real-time rendering right now, like Showcase, Bunkspeed, RTT, etc., but these are very specific, tailored applications. GPU-accelerated rendering, on the other hand, is I think pretty much here: there are already examples of RenderMan running fully on the GPU with considerable performance increases, and who can forget what the V-Ray guys demoed at SIGGRAPH. Exciting times!
Czoss | Date: Monday, 19.10.2009, 19:29 | Message # 10
When are more people going to realize that RT (and yes, I realize that is a subjective term) is already possible on today's hardware... it's the software that is lagging woefully behind. There are no standards to support GPUs. There is no application-independent way to use RT technology. It seems that every company is trying to re-invent the wheel. We need the RT equivalent of OpenGL or D3D: something that is video-card and CPU independent and can be used by any app maker. It needs to be scalable to support faster and faster hardware, and also upgradable for when new rendering engines come out.

There is really nothing more frustrating in 3D than using some of the semi-RT technology built into a few of these 3D apps today, which lets you move lights around, move objects, and play with materials, all in near RT, and then, when you need a final image saved, having to wait minutes or even hours for a slightly more detailed version of the same scene. Why is it that the near-RT image (which might be 90+% accurate) can be computed in seconds, but the one that is 100% accurate takes an order of magnitude longer? It is absolutely asinine. Right now 3D app makers treat viewport rendering and final-image rendering as two separate things, when in reality they should be considered the same thing.
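As a rough illustration of that last point, here is a minimal sketch, with entirely hypothetical names (this is not any existing API), of what a vendor-neutral render interface could look like if the viewport preview and the final frame shared one code path and differed only in quality settings:

// A minimal sketch, hypothetical names throughout: one renderer sits behind both
// the viewport and the final frame, and "preview vs. final" is just a quality
// setting rather than two separate code paths.
#include <string>
#include <vector>

struct RenderSettings {
    int  width  = 1920;
    int  height = 1080;
    int  samples_per_pixel = 16;   // the main knob separating preview from final
    bool progressive = true;       // refine in place while the artist keeps working
};

struct Image {
    int width = 0, height = 0;
    std::vector<float> rgba;       // width * height * 4 floats
};

// Backend-agnostic interface: a GPU path tracer, a CPU renderer, or a hybrid could
// all live behind it, the way different drivers live behind OpenGL/D3D.
class IRenderer {
public:
    virtual ~IRenderer() = default;
    virtual void loadScene(const std::string& scenePath) = 0;
    virtual Image render(const RenderSettings& settings) = 0;
};

// The host app uses the same path for both cases; only the settings differ.
Image previewFrame(IRenderer& renderer) {
    RenderSettings s;
    s.samples_per_pixel = 1;       // near-instant, the "90% accurate" image
    return renderer.render(s);
}

Image finalFrame(IRenderer& renderer) {
    RenderSettings s;
    s.samples_per_pixel = 256;     // same renderer, just allowed to refine further
    s.progressive = false;
    return renderer.render(s);
}

The design choice being sketched is exactly the one the post argues for: the "final" render is the preview left to refine longer, not a different renderer with a different result.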