Interview with FRYRENDER

FRYRENDER is a new unbiased renderer that simulates the real behavior of light to produce true-to-life renders with a minimal set of light and material parameters.

Why did you choose such a name for your new product, FRYRENDER?

The answer is easy: the letters F-R-Y are an acronym of “Feversoft RaYtracer”. It’s a short name that summarizes what FRYRENDER is: Feversoft’s proprietary rendering technology.

How many people work in your team?

I am the only programmer behind FRYRENDER’s core, RC4’s core and their SDKs. Aside from that, we would prefer not to disclose too much detailed information regarding our internal structure, sorry. All I can tell you is that our idea is to keep a rather small group of highly talented, young and passionate people. :)

When did you found your company? Why did you start developing software for CGI?

My partner Pedro Osma and I met at University when we were studying Computer Engineering. We became best friends and started developing software together. Our first commercial product was SMS management software that sold nicely for a year. That gave us enough money to start our company as soon as we finished University. On the other hand, I had already been working on the first version of Random Control (our real-time engine) for a few years, and we shared a passion for Computer Graphics and interactivity. So “Feversoft – Virtual Reality Technology” was born with the purpose of creating high-end interaction and visualization technology, and harnessing it behind powerful and easy-to-use tools.

How did the idea of creating a new renderer come about? Why did you choose unbiased rendering technology? Which principles of light behavior lie at the core of FRYRENDER?

During our first 4 years as a company, we worked extensively on RC2, RC3 and RC4, the successive versions of our Virtual Reality engine, which is a quite mature piece of software at this point. Our experience with real-time technology has always told us that there are no easy and practical ways to produce data for high-quality interactive environments. There are several very good VR engines and several very good render engines out there, but there is no global solution for both problems simultaneously. Our (very ambitious) goal since the beginning has been to fuse both worlds and try to make something remarkable in both fields.

RC2, RC3 and RC4 had their own light compilers. For a long time I worked heavily on the cores and their associated light compilers, first using radiosity, later Photon Mapping, and even GPGPU techniques. By the beginning of 2006 RC4 was already very mature and we decided that we needed a lighting tool that would bring our real-time technology up to par with (or very close to) the look of a classical render. So, putting together all the experience, conclusions and ideas, I started working on FRYRENDER, to which 100% of my time has been devoted during the last 16 months.

We chose unbiased technology because, after writing our own radiosity and photon mapping compilers, we decided that we wanted to do better. The better the render engine, the better the real-time quality that we could offer, so physically-based unbiased algorithms looked like the best way to go.

Regarding the principles of light behavior that lie at the core of FRYRENDER: except for some minor effects, it handles 99% of the visual phenomena that you see with your eyes during a normal day: full GI, transmission, Fresnel, custom ND, real camera optics, reflective and refractive caustics, SSS, SSS caustics, lens diffraction… pretty much all of the standard features for such a render engine.

What makes FRY unique? What are the advantages of your renderer compared to others?

I think that it can be explained from two different points of view.

From a technical point of view:

First, fry will be the first engine capable of mixing traditional rendering and Virtual Reality data production seamlessly, in a single software tool. Second, it offers the highest-end output in the market, wrapped with an ever-growing list of features that is already huge. We are also working hard on code and algorithm optimization, so we expect significant improvements in the not-so-distant future.

From an emotional point of view:

Computer Graphics have been my life since I was a kid, and this passion is shared by all of us at Feversoft. We really love what we do and still feel thrilled when people come back to us with positive feedback. I think that this passion shows in our customer support through the forum, emails, etc… It lets people know that we are close to them. And, as a matter of fact, we are.

What are the ways of using HDRI in FRY? Tell us a little bit about MPDM.

HDRI is one of the formats supported by fry for both input (maps) and output (for post-processing or external usage). HDR images can be used seamlessly in all the map slots in the engine; for example, for High Dynamic Range Image-Based Lighting, both in emitters and environments.

MPDM (Micro-Polygon Displacement Mapping) is a technique by which you define a highly refined surface with a base mesh and a map. The base mesh is a coarse representation of the final mesh (say, a plane) and the map describes all the details and rugosity of the surface (say, the shape of the rocks in a wall). MPDM is very interesting mainly because it is strikingly good-looking, but also because it allows for an extremely compact representation of the final (highly-tessellated) mesh, since only the base mesh and the map need to be stored in memory. The final (billions of) micro-triangles never exist in memory and are only ‘unrolled’ when the light paths are about to hit the surface. MPDM is fully finished and available in the upcoming vBeta1.7.
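The memory savings described above are easy to quantify. Here is a back-of-the-envelope sketch (my own illustrative numbers, not fry’s internal data layout), assuming a 2048×2048 displacement map driving two micro-triangles per texel over a simple base quad:

```python
# Why on-demand tessellation (as in MPDM) is so memory-friendly:
# compare an explicitly stored micro-triangle mesh against keeping
# only the base mesh plus the displacement map.

map_res = 2048                       # displacement map resolution (texels)
micro_tris = 2 * map_res * map_res   # ~8.4 million micro-triangles

# Explicit mesh: ~3 vertices per triangle, 12 bytes per float3 position
explicit_bytes = micro_tris * 3 * 12

# Compact form: 4 base-quad vertices plus one 8-bit height per texel
compact_bytes = 4 * 12 + map_res * map_res

print(f"explicit micro-mesh: {explicit_bytes / 2**20:.0f} MiB")
print(f"base mesh + map    : {compact_bytes / 2**20:.0f} MiB")
```

Even at this modest map resolution the explicit mesh needs hundreds of MiB while the compact representation fits in a few, which is why the micro-triangles are only generated when a light path is about to hit them.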

Is it possible to adjust ISO, shutter speed, light intensity before and after rendering?

Yes, in several ways. You can configure the camera optics, the film properties and the power of the emitters before you hit render. Then, while fry is rendering (or once it has finished), you can fine-tune all these parameters with our tonemapping controls. These include the most basic ones (exposure control, gamma, etc…) and also a full Layer Blending control with which you can selectively modify the powers, colors and temperatures of all the light sources in the scene.
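The reason these controls can be changed after rendering is that they operate on the stored HDR pixel values rather than on the light simulation itself. A minimal sketch of the idea (an illustration only, not fry’s actual pipeline):

```python
def tonemap(hdr_value, exposure=1.0, gamma=2.2):
    """Map a linear HDR radiance value to a displayable [0, 1] value."""
    exposed = hdr_value * exposure          # simulate film sensitivity
    clipped = min(max(exposed, 0.0), 1.0)   # clamp to displayable range
    return clipped ** (1.0 / gamma)         # gamma-encode for the display

# The same rendered data yields different images under different settings,
# so exposure and gamma can be re-tuned without re-rendering:
print(tonemap(0.5, exposure=1.0))   # mid-grey pixel
print(tonemap(0.5, exposure=2.0))   # same pixel, one stop brighter
```

Layer-blending controls work on the same principle, except that each light source’s contribution is accumulated in its own HDR layer so the layers can be re-weighted and re-colored before the final tonemap.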

What are the ways of adjusting the final image directly in FRY?

As explained above, the Tonemapping and Layer Blending controls. Aside from color perception parameters (exposure, gamma, key, …), some useful HDR RGB controls are present (temperature, R/G/B levels, histogram preview, contrast…).

What features of the material editor could you note? Is it possible to create absolutely physically correct materials using IOR files?

It’s not only possible; that’s the idea that rules all the architectural decisions behind the MTLED. The main feature of the Material Editor that I would highlight is its flexibility. A single parameterizable shader models all the possible behaviors, from the very basic to complex effects such as SSS or thin-film interference.

Regarding IOR files, fry used to support them in the very early betas, but I decided to drop them. There are two reasons for this. The first is that the difference between a hand-tuned material and a complex-IOR one is not so easy to spot, while the difference in speed is huge. The second is that there is only one set of IOR materials out there, and 99% of the measured materials in it are pretty useless. That said, it’s not ruled out that complex IORs will come back to fry if our customers ask us for them.

What was your main aim: to create a fast and easy-to-use renderer for archviz, or for product design visualization?

Archviz is probably the biggest market for a render engine such as fryrender. But fry was never created with a constrained focus on a particular market. The reason is that unbiased technology is very, very generic; that is its main beauty, in fact. You set up the environment, the geometry and the materials, hit render, and voila. The ease of use and the realism obtained once the render is ready are something that all the markets involved with CG can benefit from, and our list of features is quite large as well, not only archviz-related.

You mentioned that it would be possible to create real-time presentations using your own unique technology. Tell us a little bit about your Virtual Reality engine.

As I mentioned before, during the first years of Feversoft’s existence we worked on all kinds of interactive projects. All of it revolved around our proprietary Virtual Reality engine, which is called Random Control (version 4 as of today). Fry will provide a straight and seamless connection with RC4, so converting a fry scene into an RC4 file will be one mouse click away. This means that, for the same effort as creating a traditional render of a scene, you will be able to walk through it.

FRYRENDER, as people are already preordering it, will include an RC4 player and the export capabilities will be enabled. So users will be able to export into RC4 and visit their scenes interactively.

The most remarkable features of this fry+RC4 combo are:

– They both form a seamless combo. They have been developed by the same company and have been designed to co-operate and feed each other. You won’t have to rework a fry scene to be RC4-ready. All you will have to do is render for RC4 instead of for traditional render.

– Since fry is a physically-based engine that has been designed specifically (among other things) to export data into RC4, we can promise that the final quality RC4 will be able to display will be beyond what people are used to expecting from interactive imaging.

Are there any limitations at the moment? Is there any polygon or number of emitters limit? What is the max resolution it’s possible to render with FRY?

The existing limitations are due to the maximum RAM available to the application. Fry is (as of now) a 32-bit application, so the limit is set by the operating system at 2 GB. That caps the maximum resolution at 5000×3500 without render cropping, and the maximum number of polygons at a few million.
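A quick back-of-the-envelope calculation shows why the 2 GB address space is the binding constraint (my own illustrative numbers, not fry’s internal layout), assuming an HDR accumulation buffer of 4 floats (RGB plus a sample weight) per pixel:

```python
# Estimate the framebuffer cost of the stated maximum resolution
# under a 32-bit (2 GB) address-space limit.

width, height = 5000, 3500
bytes_per_pixel = 4 * 4                        # 4 x 32-bit floats
framebuffer = width * height * bytes_per_pixel

print(f"framebuffer alone: {framebuffer / 2**30:.2f} GiB")
# Everything else (geometry, maps, acceleration structures, the host
# application) must fit in the remainder of the 2 GB address space,
# hence the "few million polygons" ceiling.
```

Per-light layers for layer blending, or any additional accumulation buffers, multiply this cost, which is why features like render cropping and geometry instancing matter so much in the 32-bit build.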

In any case, there are plans for a 64-bit port in the future. There are also features that reduce memory consumption, such as geometry instancing, MPDM, render cropping, low supersampling, etc…

What hardware would you recommend for quite fast FRYing?

There is a very straightforward answer to this question: “the best beast that you can afford”. There is a very significant boost when using dual CPUs, dual or quad cores, and the latest architectures available. So, if possible, a machine with as much CPU power as you can get is the best choice. Then, 4+ GB of RAM is also a good thing. Our partner Azken Muga gives us the chance to benchmark fry on the latest architectures, and the conclusion is clear: the newer and beefier the CPU, the better.

Are you planning to provide an expanded support for 3D platforms – support for procedural textures plug-ins, tree and plants generators, hair&fur plug-ins and so on?

Fry exports everything that generates geometry in the host app. Most plant generators do, so fry works with them without us needing to care about their inner workings. As for hair & fur, fry will integrate its own solution. But, at least for the time being, it’s not in our plans to write specific code for third-party plugins. That would make our engine dependent on technology not under our direct control and would require very specific code, while our aim has been to keep the code as general as possible to ensure maintainability and extensibility. In fact, pretty much everything is deliberately kept inside the core and the SDK so that all the plugins share the core’s features no matter what platform you’re working from.

Is it possible to create different atmospheric effects using an unbiased renderer, such as fire, smoke, etc.? Are you planning to create some kind of volumetric shaders for FRY?

It is possible, of course, to render participating-media effects such as fog, smoke, fire, … using unbiased techniques. But multiple scattering in unbiased rendering becomes very, very heavy, computationally speaking. We do have plans for some volumetric lighting solutions in fry, yes.

Are there any new features that we should expect in the next version?

Some of them have been mentioned in the forum already, the most remarkable being Micro-Poly Displacement Mapping and Geometry Instancing. The overall memory footprint has been greatly improved as well. There has been extensive work on the tonemapping routines, and we have performed serious calibration against real photography. The file formats have been consolidated to ensure future compatibility. As usual, some other surprises will pop up when vBeta1.7 comes out, which we will keep to ourselves for the moment. :)

Are you going to make a demo version and an educational license for students and training centers?

Definitely, yes. There will be a downloadable demo to help people decide whether or not fryrender is worth the investment. This demo will be free, with some sensible limitations. With regards to educational licenses, they will be available as soon as we exit our current Beta status. The prices and conditions for them will be the usual ones for this kind of software.

Tell us a little bit about yourself. What education have you got? What is your job experience? Have you got a hobby?

I studied Computer Engineering at the U.P.M. in Madrid. When I got my degree I started pursuing a PhD in Applied Math, although I quit to devote 100% of my time to Feversoft. Aside from that, I would say that most of my REAL background in CG programming comes from the fact that I started coding at the age of six. Since then there have been very few days on which I have not spent hours and hours programming. When I was a kid I used to code my own small video games. Later, as I grew up, real-time and render engines became my passion, and eventually my job.

Regarding my job experience, I worked during the years when I was studying Computer Engineering. For example, I worked at a company that was creating a visualization and analysis tool for dentists; I was in charge of the 2D and 3D visualization modules there. Then, as soon as Pedro and I got our degrees, we founded Feversoft to work for ourselves.

Regarding my hobbies, I feel quite fortunate since my main hobby is, and has always been, what I now do as my everyday job: computer programming, CG, etc… As for my ‘real-world’ hobbies, the main one is long-distance running (marathon training and such), plus playing music (guitar and keyboards). But, to be honest, during the last few months I have had little or no time for my hobbies due to FRYRENDER. :)

Thanks for taking the time to answer our questions, best of luck to your team, and thanks to all FRY beta testers for their awesome renders!

Interviewed by DesignerV
