INTERVIEW: Redshift, Nicolas Burtnyk
Submitted by Administrator on Wednesday, 12/11/2013

Jeanneau / Westimages


“The public beta is just around the corner and we’re aiming for a 1.0 launch in the first quarter of 2014. In terms of features, we’re about to release some very significant optimizations, mostly related to blended (layered) materials.”

http://redshift3d.com/



Q1. Redshift is a GPU-accelerated renderer. Why did you develop a new renderer?

GPUs have a massive amount of raw computational power and high memory bandwidth compared to CPUs, and rendering is computationally expensive, so faster rendering can often translate directly into money saved. Faster rendering also means shorter iteration times, which lead directly to better results. We developed Redshift because we’re big believers in the power of the GPU, and because we felt that existing GPU renderers did not cover enough of the features and flexibility that users want or need.

Q2. Rendering-engine algorithms are often relatively simple, but integrating a renderer into a DCC application (3ds Max, Maya...) takes much longer. Can you confirm this analysis?

The basic rendering algorithms are indeed pretty simple. However, once you enable enough flexibility and customization in the rendering pipeline, things can quickly become more complicated. And once you bring the GPU into the equation, many of these simple algorithms can actually be quite difficult to implement in a way that uses the GPU efficiently.
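
To give a sense of just how simple the core can be, here is a minimal brute-force path tracing loop. This is a generic sketch, not anything from Redshift: `trace()`, `sample_bsdf()`, `sky()` and the `Hit`/`Ray` types are hypothetical stand-ins for a real intersection and material system, and the float3 operators assume CUDA’s helper_math.h.

```cuda
// Minimal unidirectional path tracer core -- a generic illustration,
// not Redshift code. trace(), sample_bsdf(), sky(), Hit and Ray are
// hypothetical helpers; float3 math assumes CUDA's helper_math.h.
__device__ float3 radiance(Ray ray, curandState* rng, int max_bounces)
{
    float3 L          = make_float3(0.f, 0.f, 0.f);  // accumulated radiance
    float3 throughput = make_float3(1.f, 1.f, 1.f);  // product of BSDF weights

    for (int bounce = 0; bounce < max_bounces; ++bounce) {
        Hit hit;
        if (!trace(ray, &hit))                       // ray escaped the scene:
            return L + throughput * sky(ray);        // pick up environment light

        L += throughput * hit.emission;              // hit a light source directly

        float3 wi;                                   // sampled bounce direction
        float  pdf;
        float3 f = sample_bsdf(hit, ray.dir, &wi, &pdf, rng);
        if (pdf <= 0.f) break;                       // absorbed / invalid sample

        // Standard Monte Carlo weight: BSDF * cosine / sample probability.
        throughput *= f * fabsf(dot(wi, hit.normal)) / pdf;
        ray = Ray{ hit.position + 1e-4f * hit.normal, wi };  // avoid self-intersection
    }
    return L;
}
```

Almost all of the real difficulty lives inside `trace()` and `sample_bsdf()`, and in making a loop like this run coherently across thousands of GPU threads, which is exactly the point being made here.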

To achieve a high level of DCC integration, you need a reasonably complete feature set on the rendering side, otherwise all the knobs and switches in the DCC aren’t going to do much! Light linking, per-object visibility settings, flexible shading networks with texturable inputs, and so on. Sometimes a less-than-stellar DCC integration is simply due to the renderer lacking support for these kinds of mini-features.

Now of course, building a solid DCC integration is very challenging. Properly and thoroughly translating and handling a broad range of DCC settings requires a lot of work and testing. One of the most challenging aspects is that you’re dealing with DCC APIs, which can be buggy, slow, incomplete, poorly documented, or any combination of the above. You almost always have to resort to dirty hacks and workarounds, which can themselves introduce bugs or hurt performance. On top of that, DCC plugins can be more challenging to debug than full applications. If your code is causing the DCC to crash, you often don’t know much more than “when I do this, it crashes”, so tracking down the source of the bug can be a bit of a wild goose chase. In many cases you aren’t even sure whether your code is doing something wrong or the crash is caused by a bug in the DCC itself.

So is a solid DCC integration harder or longer to build than the renderer itself? In our experience, I would say no, but the DCC integration is certainly not to be underestimated in terms of difficulty and complexity.

Q3. What about the benefits of the GPU in terms of speed? In the end, what speed gains can users expect? CPU engines such as Mental Ray or Arnold do not seem to be suffering much from their GPU competitors.

It gets very messy very quickly when you try to do fair, empirical performance comparisons of GPU and CPU renderers. There are simply too many variables! Even comparing CPU renderers is murky. Is Vray faster than Arnold? I’m pretty sure the answer is “it depends”. Scene complexity, lighting, textures and rendering techniques all play very significant roles in the frame time. And when you try to compare GPU renderers to CPU renderers, you’re throwing hardware into the mix. If you compare Redshift to Arnold on a top-of-the-line 12-core Xeon workstation with a low-end Quadro card, you probably won’t be too impressed :)

That being said, given the computational power and memory bandwidth of today’s GPUs, the performance advantage of a GPU renderer can be very significant. Depending on the hardware, we’re hearing from our users that Redshift is anywhere from 4-5 times to 30-40 times faster than CPU renderers. But again, “your mileage may vary”, so we don’t like making direct comparisons to other renderers. Funnily enough, there was a thread on a popular 3D forum recently where the author of a GPU renderer (not Redshift!) published very direct comparisons between his renderer and other popular renderers. He got some pretty strong responses, and not in a good way. We obviously do our own internal testing against most renderers out there and know exactly how many times faster we are in a variety of shading/lighting situations on specific hardware configurations, but we really want our users, or potential users, to find out for themselves what kind of performance gains they can expect from Redshift by running their own tests.

As for why we’re not seeing GPU renderers take any significant market share away from CPU renderers… When you’re talking about Mental Ray, Vray or even Arnold, these are mature products that have had pretty complex pipelines built around them over time. Plus, as CPU renderers, they’re easily programmable, which is likely a must-have for any large-ish studio. Also, at least in the case of Mental Ray and Vray, there are a ton of resources online to help get the most out of these renderers, from tutorials to shaders to tools. We believe, however, that the main reason GPU renderers haven’t been widely adopted is that they’re typically pretty thin on so-called “production-ready” features, which is of course very vague, but true. The GPU renderers we’ve seen so far feel like toys beside the big boys. We’re trying to change that with Redshift.

Q4. Redshift uses CUDA; is it also based on OptiX? Can you explain the reasons for those choices? Does this definitively close the door on AMD GPUs?

Redshift is indeed running CUDA, but it does not use OptiX. We’re using our own GPU code for ray tracing, shading, etc. We considered and rejected OptiX very early on because we did not want to rely on a third-party library for such a fundamental part of our technology. We didn’t want to get into a situation where we were waiting on NVidia to fix something or add a feature. Also, at least at the time we were looking at it, OptiX felt a bit ‘rigid’ and we found ourselves wondering whether certain things we wanted to achieve would simply not be possible with it.
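
As a deliberately trivial illustration of what “our own GPU code for ray tracing” can mean in practice, plain CUDA is enough to intersect rays with geometry without OptiX. The self-contained kernel below is my own example, unrelated to Redshift’s internals: one thread per pixel, intersecting an orthographic ray with a unit sphere.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// One thread per pixel: intersect an orthographic primary ray with a unit
// sphere at the origin. Plain CUDA, no OptiX -- "rolling your own" ray
// tracing starts with ordinary kernels like this.
__global__ void intersect_sphere(float* t_hit, int width, int height)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;

    // Ray origin on the z = 2 plane, direction d = (0, 0, -1).
    float ox = (x + 0.5f) / width  * 2.f - 1.f;
    float oy = (y + 0.5f) / height * 2.f - 1.f;
    float oz = 2.f, dz = -1.f;

    // Solve |o + t*d|^2 = 1, i.e. t^2 + 2bt + c = 0 with b = dot(o,d).
    float b = oz * dz;
    float c = ox * ox + oy * oy + oz * oz - 1.f;
    float disc = b * b - c;

    t_hit[y * width + x] = (disc >= 0.f) ? (-b - sqrtf(disc)) : -1.f;  // -1 = miss
}

int main()
{
    const int w = 8, h = 8;
    float* d_t = nullptr;
    cudaMalloc(&d_t, w * h * sizeof(float));
    intersect_sphere<<<dim3(1, 1), dim3(w, h)>>>(d_t, w, h);

    float t[w * h];
    cudaMemcpy(t, d_t, sizeof(t), cudaMemcpyDeviceToHost);
    printf("center pixel hit at t = %f\n", t[(h / 2) * w + w / 2]);
    cudaFree(d_t);
    return 0;
}
```

The hard part, of course, is scaling from one sphere to millions of triangles behind an acceleration structure, which is precisely the kind of code a renderer developer may want full control over rather than delegating to a third-party library.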

Now, just because Redshift makes heavy use of CUDA (and likely always will), the door is by no means closed on AMD GPUs! Developing an OpenCL version of Redshift is on our long-term roadmap, though we haven’t put much actual dev time into it yet, as we’ve been focusing on features and stability.

Q5. An important issue is GPU memory management. How does Redshift solve this problem?

Redshift does out-of-core texturing and ray tracing. This means that textures and geometry are split into ‘tiles’ which are sent to the GPU as required. If GPU memory is full, tiles are swapped out to make space. It’s conceptually very simple, and similar to paging data from RAM to your hard disk. Of course, if you are constantly swapping tiles out to make room for the tiles you need, you take a performance hit, but this mostly affects geometry. Texturing, on the other hand, works very well: we’ve done tests with many GBs worth of texture data and never ran into excessive swapping. For geometry, going out of core can mean a bigger performance hit, but we’re working on several new ways to deal with this, some of which should be ready in the next few months. Generally speaking, Redshift tries to be really efficient when it comes to memory management.
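
To make the paging analogy concrete, here is a rough sketch of the general out-of-core technique: an LRU cache that uploads tiles into a fixed GPU memory budget on demand and evicts the least-recently-used tiles to make room. This is my own illustration of the idea, not Redshift’s implementation.

```cuda
#include <cstdint>
#include <list>
#include <unordered_map>
#include <cuda_runtime.h>

// Generic out-of-core tile cache sketch: tiles live in host RAM and are
// uploaded to a fixed GPU budget on demand, evicting least-recently-used
// tiles when the budget is exhausted. Illustrative only.
class TileCache {
public:
    TileCache(size_t gpu_budget_bytes, size_t tile_bytes)
        : budget_(gpu_budget_bytes), tile_bytes_(tile_bytes), used_(0) {}

    // Return a device pointer for a tile, uploading (and evicting) as needed.
    void* fetch(uint64_t tile_id, const void* host_tile) {
        auto it = resident_.find(tile_id);
        if (it != resident_.end()) {                 // cache hit: mark most recent
            lru_.splice(lru_.begin(), lru_, it->second.lru_pos);
            return it->second.dev_ptr;
        }
        while (used_ + tile_bytes_ > budget_ && !lru_.empty())
            evict(lru_.back());                      // make room, LRU tile first

        void* dev = nullptr;
        cudaMalloc(&dev, tile_bytes_);
        cudaMemcpy(dev, host_tile, tile_bytes_, cudaMemcpyHostToDevice);
        lru_.push_front(tile_id);
        resident_[tile_id] = { dev, lru_.begin() };
        used_ += tile_bytes_;
        return dev;
    }

private:
    struct Entry { void* dev_ptr; std::list<uint64_t>::iterator lru_pos; };

    void evict(uint64_t tile_id) {
        auto& e = resident_[tile_id];
        cudaFree(e.dev_ptr);
        lru_.erase(e.lru_pos);
        resident_.erase(tile_id);
        used_ -= tile_bytes_;
    }

    size_t budget_, tile_bytes_, used_;
    std::list<uint64_t> lru_;                        // front = most recently used
    std::unordered_map<uint64_t, Entry> resident_;
};
```

A production renderer would batch uploads and pin host memory rather than calling cudaMalloc/cudaMemcpy per tile, but the swap-to-make-room behaviour described above is essentially this loop, and the performance hit from constant re-fetching is the thrashing case mentioned in the answer.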

Q6. From a qualitative point of view, how do images from the Redshift GPU renderer compare to images produced by a CPU engine?

In terms of precision and quality, I’d say there are no differences. We get asked about this a lot, probably because people associate GPU rendering with video game graphics, which obviously look nothing like high-quality pre-rendered content. I’m not trying to put down games - they look absolutely amazing and are getting better all the time, and in fact we come from a video game development background - but it’s just not the same sport. Nothing about Redshift is like a game renderer. Redshift implements ‘offline’ rendering techniques accurately, without any sort of shortcuts.

Q7. Is Redshift ready for large projects: multi-GPU rendering, network rendering, render farms...?

Multi-GPU rendering is supported. Network rendering is not natively supported, but many of our (very resourceful!) users have successfully set up Redshift render farms using Royal Render. Anyone interested can find the information they need in our forums. In the future we’ll have native support for network and distributed rendering. We were originally planning to implement this earlier in development but, given that solutions already exist, we decided to focus on features that have no obvious workarounds.

Q8. Today, many unbiased engines are emerging (Octane, Cycles, Arion, iRay...). Why develop a biased engine? Can Redshift evolve into a physically correct renderer?

I am going to use the term ‘physically plausible’ instead of ‘physically correct’ because it’s less absolute (you can easily argue that no renderer is physically correct) and more appropriate for what renderers are trying to do, which is to make pretty and (sometimes) believable pictures rather than run physics simulations. Anyway, Redshift is certainly already capable of physically plausible rendering. For example, Redshift supports IES lights, physically plausible materials and even ‘brute-force’ GI. Biased rendering and physically plausible rendering are not mutually exclusive. ‘Biased’ means giving users the choice of rendering techniques that might be less precise (compared to unbiased rendering) but are faster and less noisy. ‘Biased’ can sometimes also mean ‘flexible’. While some 3D artists need to emulate reality as closely as possible (which is where an unbiased renderer excels), many care more about flexibility and speed than about how closely the image matches reality in a theoretical sense.
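
In estimator terms (my gloss on the distinction, not the interviewee’s wording): an unbiased renderer’s Monte Carlo estimate of each pixel integral is correct in expectation, so its only error is noise, while a biased technique accepts a small systematic offset in exchange for much less noise at equal render time.

```latex
% Pixel value as an integral, I = \int f(x)\,dx, estimated from N samples:
\[
  \hat{I}_N = \frac{1}{N}\sum_{k=1}^{N} \frac{f(x_k)}{p(x_k)},
  \qquad
  \underbrace{\mathbb{E}\big[\hat{I}_N\big] = I}_{\text{unbiased: error is pure noise}}
  \quad\text{vs.}\quad
  \underbrace{\mathbb{E}\big[\hat{I}_N\big] = I + b}_{\text{biased: systematic offset } b}
\]
% Example: irradiance caching interpolates GI between sparse samples,
% introducing a small bias b but drastically reducing variance (noise).
```
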
Q9. We enjoyed the integration of Redshift into Softimage. From the render region to the render passes, everything seems perfectly integrated. Do the Maya and 3ds Max versions benefit from equally good integration?

Thanks! As I said before, solid DCC integration can be tricky to get right, but we strongly believe it’s critical. Maya has a similar level of integration to Softimage. The 3ds Max version is currently in development, and we expect a similar level of integration there too.

Q10. Redshift is currently in beta but already seems to be used in production. Did you expect such fast adoption?

I think Redshift is the renderer many people expected when they first heard about GPU-accelerated rendering… but never quite received! To be honest, we weren’t 100% sure how users would react to Redshift, but we weren’t entirely surprised by the excitement.

Like you said, Redshift has already been used in several productions. If you’ve been on our forums, you know that we (the Redshift team) are very active answering questions, offering advice, fixing bugs as quickly as possible - often the same day - and adding the odd missing feature.  We want users to feel confident that, if they use Redshift and get ‘stuck’ somewhere, they won’t be left out in the cold.

Q11. What are the next milestones for Redshift (RC, new features...)?

The public beta is just around the corner and we’re aiming for a 1.0 launch in the first quarter of 2014. In terms of features, we’re about to release some very significant optimizations, mostly related to blended (layered) materials. We have hair, ICE strands and support for ICE attributes in the render tree coming up next, some of which are already at least partially implemented. And of course, we’re fixing bugs and clearing up small annoyances on an ongoing basis.

Q12. You come from the game industry. It’s strange that you are now focused on precalculated rendering, isn’t it?

It might seem strange, but I would say that our experience in games is exactly what allowed us to make Redshift. We’ve been working intimately with the GPU for many, many years, and that gave us a really solid technical foundation on the hardware side. Also, we arrived with perhaps a more open mind, which allows us to think outside the box. The other thing to remember is that a significant amount of precalculated rendering happens in the videogame industry: all those “Call of Duty” lightmaps are pre-rendered! Our technical lead, Panos, had already developed a full-featured lightmap generator from scratch well before we started throwing around ideas about Redshift.

Q13. Cloud services are becoming more popular; do you have any plans in this area?

‘The cloud’ is a popular topic these days!  We are indeed thinking about a cloud version of Redshift but I’d say this is at least a year out.  So there’s nothing to announce just yet.

Q14. What do you think of the realtime/precalculated convergence? Do you have any other projects?

I’m not totally convinced it’s converging, in the sense that I doubt we’ll be rendering final frames for Hollywood in real time any time soon. Certainly, realtime rendering is making huge strides in quality, with new solutions for GI, fuzzy reflections, more accurate DOF, motion blur, etc. But let’s not kid ourselves: it’s not really even close to pre-rendered quality. And yes, as hardware and software improve, ‘offline’ rendering is getting faster. But despite some exaggerated marketing claims, it’s nowhere near realtime. And let’s not forget that audience expectations are actually outpacing the improvements. Last time I checked, Shrek’s law still holds up: CPU render hours for feature-length animated films double every 3 years. Try watching something you remember as looking awesome 10 years ago. You might not be so impressed today.
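
Taking ‘Shrek’s law’ at face value, the arithmetic behind that last point is worth spelling out (my calculation, using only the doubling period quoted above):

```latex
% CPU render hours double every 3 years:
\[
  H(t) = H_0 \cdot 2^{\,t/3}
  \quad\Rightarrow\quad
  H(10) = H_0 \cdot 2^{\,10/3} \approx 10\,H_0 .
\]
% A film made today spends roughly ten times the render hours of a comparable
% film from a decade ago -- rising expectations absorb most of the hardware
% and software gains rather than shrinking render times.
```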

Of course, that is not to say that the advances in realtime rendering are not going to be useful to film and television production and other areas that make use of ‘offline’ rendering. Previz is seeing a huge benefit from realtime tech, and I’m sure realtime will make its way deeper into the pipeline. Given our video game experience, we are obviously thinking about all these things all the time ;)

