This “Intro to Rendering” video series is a high-level overview of the elements of creating a rendering from your 3D model. This is a topic you could spend years and thousands of dollars learning about over your life. So I’m going to try to make this as concise as possible, and at the end of this 7-part series there will be a section devoted to pointing you to resources so you can learn more in greater depth.
To keep this in the format of Polyplane philosophy, we need to ask the question, “Why render?”
To answer this, let’s take a look at some of the possible uses of the final output of a rendering. Some of the more obvious include:
Entertainment – Motion pictures, video games, comic books
Media – Promotional materials, print marketing, web marketing, commercials
Internal communication – Concept development, client presentations, selling ideas internally (this goes back to “concept development”)
Personal hobby/education – Creating a portfolio of your 3D work, just messing around, using it for visuals of your next great invention
So let’s ask the question again, “Why render?” For our purposes: it is to visualize a simulated reality. Now before I paint myself into a corner, I also need to give some definition to the word “rendering,” because technically line drawings, Photoshop paintings, and screen captures are all renderings according to Merriam-Webster’s fourth definition of the word:
a (1) : to cause to be or become : make (2) : impart
b (1) : to reproduce or represent by artistic or verbal means : depict (2) : to give a performance of (3) : to produce a copy or version of (4) : to execute the motions of
So we need to define this via the elements of what we are using to create the rendering. The elements of rendering break down into five different categories. In this first video, I’m going to briefly go over these categories and how they relate to the overall rendering. These categories include:
Environment
The environment is the virtual space in which you are rendering your model. This can include:
- A lightbox
- A studio
- A simulated environment (other 3D models in the space to represent the scene)
- A composite scene (usually added after the components are rendered or, in some programs such as Keyshot, included in the rendering itself)
Lighting
Lighting usually refers to the source or sources of simulated illumination in the scene. The lighting can come in various forms, including but not limited to:
- Spot lights
- Point lights
- Ambient lights (including global illumination)
- Directional lights
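To make the distinctions between these light types concrete, here is a minimal sketch of how a renderer might represent them internally. The class and field names are illustrative assumptions, not the API of Keyshot, VRay, or any other engine:

```python
from dataclasses import dataclass

# Hypothetical light-type definitions for illustration only.
# Each type is distinguished by what it needs to know:
# a point light has a position, a directional light only a direction,
# and a spot light has both plus a cone that limits its spread.

@dataclass
class PointLight:
    position: tuple                  # (x, y, z) in world space
    intensity: float                 # emitted power
    color: tuple = (1.0, 1.0, 1.0)   # RGB, 0-1

@dataclass
class DirectionalLight:
    direction: tuple                 # constant direction, e.g. sunlight
    intensity: float

@dataclass
class SpotLight:
    position: tuple
    direction: tuple
    cone_angle_deg: float            # light falls only within this cone
    intensity: float

# Example: a sun-like light shining straight down.
sun = DirectionalLight(direction=(0.0, -1.0, 0.0), intensity=3.0)
```

The key idea is that each light type carries only the parameters that matter for how it illuminates the scene.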
Objects
Objects in the scene are what define form and cast shadows, and sometimes, depending on the material that is applied, they can also act as another source of lighting.
Materials
Materials supply four basic channels of information to the scene:
- Diffuse: The general surface color and visual texture qualities
- Emit: How much light the surface itself gives off
- Reflect: How much light, and what kind of light, is bounced off the surface back to the camera and the environment
- Refract: How much light passes through the surface, and how its position, color, shadows, etc. are altered on the way out the other side

On top of these four channels, additional surface and texture information can be applied to add detail to the scene. This can include bump maps, displacement maps, alpha maps, and all sorts of other visual indicators. We’ll get into this a little bit in the material video.
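The four channels above can be thought of as fields in a material description. Here is a minimal sketch under that assumption; the field names and defaults are invented for illustration, not taken from any particular rendering engine:

```python
from dataclasses import dataclass

# Illustrative material with the four basic channels described above.
# Values are fractions from 0.0 to 1.0 for simplicity.

@dataclass
class Material:
    diffuse: tuple = (0.8, 0.8, 0.8)  # base surface color (RGB)
    emit: float = 0.0                 # how strongly the surface glows
    reflect: float = 0.0              # fraction of light bounced back
    refract: float = 0.0              # fraction transmitted through

# Example: a glass-like material that mostly transmits light,
# reflects a little, and emits nothing.
glass = Material(diffuse=(1.0, 1.0, 1.0), reflect=0.1, refract=0.9)
```

A real engine would store far more (index of refraction, roughness, texture maps), but the diffuse/emit/reflect/refract split is the core of what a material tells the renderer.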
Camera
The camera is much like the modeling viewport in that it is a portal that lets you see what you are doing with the model. The largest difference, however, is that it is also your output window when the render is done. Most rendering engines, such as Keyshot and VRay, give you the opportunity to select physical camera attributes and apply them to your camera. These include depth of field, aperture, lens length, vignetting, etc. We’ll touch on a few of these in the camera video.
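One of these attributes, lens length (focal length), maps directly to how wide a view the camera captures. The sketch below uses the standard pinhole-camera relationship, assuming a full-frame 36 mm sensor width; it is a general formula, not any engine’s specific API:

```python
import math

def horizontal_fov_degrees(focal_length_mm, sensor_width_mm=36.0):
    """Return the horizontal field of view for a given lens length.

    Uses the pinhole relationship: fov = 2 * atan(sensor / (2 * focal)).
    A shorter lens (wide angle) gives a larger field of view;
    a longer lens (telephoto) gives a narrower one.
    """
    return math.degrees(
        2.0 * math.atan(sensor_width_mm / (2.0 * focal_length_mm))
    )

# A 50 mm lens on a full-frame sensor sees roughly a 40-degree view,
# while a 24 mm wide-angle lens sees well over 70 degrees.
fov_50 = horizontal_fov_degrees(50.0)
fov_24 = horizontal_fov_degrees(24.0)
```

This is why choosing a lens length in the render settings changes how much of the scene fits in the frame, just as it does on a physical camera.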