Monday, 1 May 2017

Leadwerks PBR #2

Crytek's Sponza atrium rendered in Leadwerks using PBR


Physically Based Rendering (PBR) is a workflow and rendering method. It couples a set of shaders that approximate physically accurate optical effects with inputs based on the properties of real surfaces.


PBR has become the standard method for rendering surfaces over the last few years. It is the default rendering method in game engines such as UE4, Unity 5, Frostbite, CryEngine and more. It is also used in film studios such as Disney and Pixar.

This project aimed to implement this rendering method in the Leadwerks game engine to bring the benefits of PBR to its users. The flexibility of the engine, coupled with recent updates to its graphics engine such as environment probes and HDRI, has allowed this to happen.


Features:


PBR is supported in both the C++ and Lua versions of the engine.

Physically based shading model

The system uses a physically based shading model based on EA's Frostbite engine and Unreal Engine 4's implementations. The shading model uses Disney's principled BRDF for diffuse lighting, with the Frostbite normalisation factors.

Specular highlights are generated using a BRDF consisting of GGX for the normal distribution function (NDF), Schlick's approximation for the Fresnel term, and Smith shadowing for the visibility function.
For more information on the details of physically based shading models I recommend the work done by Sébastien Lagarde for Remember Me, as well as the SIGGRAPH physically based rendering courses.
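
For reference, here's the specular term written out as a small Python sketch of the maths rather than the actual GLSL; the function and variable names are just for illustration, and the exact normalisation in the shaders follows the Frostbite/UE4 notes above, so treat this as a sketch rather than a copy of the shader code.

    import math

    def d_ggx(n_dot_h, roughness):
        # GGX / Trowbridge-Reitz normal distribution function, alpha = roughness^2.
        a = roughness * roughness
        a2 = a * a
        denom = n_dot_h * n_dot_h * (a2 - 1.0) + 1.0
        return a2 / (math.pi * denom * denom)

    def f_schlick(v_dot_h, f0):
        # Schlick's approximation of the Fresnel term (f0 kept scalar for simplicity).
        return f0 + (1.0 - f0) * (1.0 - v_dot_h) ** 5

    def v_smith_correlated(n_dot_v, n_dot_l, roughness):
        # Height-correlated Smith shadowing/masking as a visibility term,
        # i.e. it already folds in the 1 / (4 * NdotL * NdotV) denominator.
        a2 = (roughness * roughness) ** 2
        lambda_v = n_dot_l * math.sqrt(n_dot_v * n_dot_v * (1.0 - a2) + a2)
        lambda_l = n_dot_v * math.sqrt(n_dot_l * n_dot_l * (1.0 - a2) + a2)
        return 0.5 / (lambda_v + lambda_l)

    def specular_brdf(n_dot_l, n_dot_v, n_dot_h, v_dot_h, roughness, f0):
        # Cook-Torrance style specular: D * F * V.
        return (d_ggx(n_dot_h, roughness)
                * f_schlick(v_dot_h, f0)
                * v_smith_correlated(n_dot_v, n_dot_l, roughness))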

Metalness / roughness workflow

To create content for the PBR system a roughness/metalness workflow is used. The system has been designed to match the output from Substance Painter/Designer as closely as possible.

  • Roughness: determines how rough a surface is; black = shiny, white = rough.
  • Metalness: determines whether a surface is metal or not; white = metal, black = dielectric (non-metal). This should be either fully white or fully black.
  • Specular: specularity can also be modified. It affects the intensity of specular reflections and is tied to a material's index of refraction (IOR).
    • For shaders that don't specify specular, the value defaults to 0.04. This represents an IOR of 1.5, which most materials are close to. If a texture is used, its value is mapped to a 0.001 - 0.08 range; 99% of surfaces have specular values in this range. This precedent is taken from the UE4 system (see the sketch below).
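
Put together, the inputs end up feeding the shading model roughly like this (an illustrative sketch using the usual metalness-workflow convention, not the shader code itself; the numbers are the 0.04 default and 0.001 - 0.08 remap described above):

    def surface_inputs(albedo, metalness, roughness, spec_tex=None):
        # Dielectric reflectance: 0.04 by default, or the specular texture remapped to 0.001 - 0.08.
        if spec_tex is None:
            dielectric_f0 = 0.04
        else:
            dielectric_f0 = 0.001 + spec_tex * (0.08 - 0.001)

        # Metalness workflow: metals take their reflectance colour from the albedo
        # and have no diffuse contribution; dielectrics keep the albedo as diffuse.
        f0 = [dielectric_f0 * (1.0 - metalness) + c * metalness for c in albedo]
        diffuse = [c * (1.0 - metalness) for c in albedo]
        return f0, diffuse, roughness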

Decal support:

Decals allow extra detail to be added to specific areas of a scene.

Decals modify the underlying surface properties, including roughness and metalness.

Terrain / Vegetation support:

Vegetation and terrain are supported.
Image provided by Jen from the Leadwerks forums - showcasing the vegetation and terrain shaders

The vegetation shaders all work the same as the built-in defaults, with separate versions for static objects, leaves and ground foliage. For best results I advise enabling shadows on vegetation objects, if performance allows.

Terrain works in the same way as default Leadwerks, which means metalness, roughness and specular cannot be controlled by textures. Terrain is therefore assumed to be dielectric: roughness is derived from a combination of the albedo's value and a fixed value of 0.05, and specular is set to 0.04 (the same value used when there is no specular map).

Tonemapping

Also included in the PBR system is a filmic tonemapping post-process shader. This allows artists to take full advantage of the HDR system available inside Leadwerks.
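
The shader isn't tied to one specific curve, but as an example of what a filmic operator looks like, here is John Hable's widely used curve as a Python sketch (the exposure and white point values here are placeholders, not the shader's settings):

    def hable(x, A=0.15, B=0.50, C=0.10, D=0.20, E=0.02, F=0.30):
        # John Hable's filmic curve; gives a gentle toe and shoulder to HDR values.
        return (x * (A * x + C * B) + D * E) / (x * (A * x + B) + D * F) - E / F

    def tonemap(hdr_value, exposure=2.0, white_point=11.2):
        # Apply exposure, run the curve and normalise against the white point.
        # (Gamma correction is left to the separate post-process mentioned under limitations.)
        mapped = hable(hdr_value * exposure) / hable(white_point)
        return min(max(mapped, 0.0), 1.0)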



Download


The project is available from GitHub




Limitations / improvements

  • Requires a gamma-correction post-process shader. Not a huge issue, since adding a post-process shader is pretty easy, but it is still something to remember.
  • Currently the built-in environment probes are stored in low dynamic range. This leads to clamping and precision errors as HDR values move towards extremes, which limits the usefulness of HDR. This is an engine issue I can't fix.
  • Probes also use simple mipmapping for different roughness values; PBR implementations usually pre-convolve these mip levels to better match the reflection blurring caused by roughness. A fix may be possible for this, but it would require C++.





References:


Alamia, M. (-). Article - Physically Based Rendering - Cook–Torrance. http://www.codinglabs.net/article_physically_based_rendering_cook_torrance.aspx 
Alamia, M. (-). article_gamma_vs_linear.  http://www.codinglabs.net/article_gamma_vs_linear.aspx
Alamia, M. (-). Physically Based Rendering. http://www.codinglabs.net/article_physically_based_rendering.aspx
Burley, B. (2012). Physically-Based Shading at Disney.
Carmack, J. (2013). The Physics of Light and Rendering. QuakeCon 2013.  https://www.youtube.com/watch?v=P6UKhR0T6cs 
Chan, D. (2015). Real-World Measurements for Call of Duty: Advanced Warfare. Siggraph 2015. Sledgehammer Games/ Activision Blizzard.
Christian. (2011). The Blinn-Phong Normalization Zoo. Retrieved from The Tenth Planet: http://www.thetenthplanet.de/archives/255
Driscoll, R. (2009). ENERGY CONSERVATION IN GAMES.  http://www.rorydriscoll.com/2009/01/25/energy-conservation-in-games/
Gotanda, Y. (2012). Practical Physically Based Rendering in Real-time. Game Developers Conference. tri-Ace, Inc.
Gotanda, Y., Hoffman, N., Martinez, A., & Snow, B. (2010). Physically Based Shading Models for Film and Game Production. Siggraph 2010.  http://renderwonk.com/publications/s2010-shading-course/hoffman/s2010_physically_based_shading_hoffman_a.pdf
Hable, J. (2010, December 5). Everything has Fresnel. http://filmicgames.com/archives/557
Hable, J. (2010, November 7). Everything is shiny. http://filmicgames.com/archives/547
Hoffman, N. (2010). Crafting Physically Motivated Shading Models for Game Development. Siggraph 2010. Activision. http://renderwonk.com/publications/s2010-shading-course/hoffman/s2010_physically_based_shading_hoffman_b.pdf
Hoffman, N. (2013). Background: Physics and Math of Shading. Siggraph 2013. http://blog.selfshadow.com/publications/s2013-shading-course/hoffman/s2013_pbs_physics_math_slides.pdf
Iwanicki, M., & Pesce, A. (2015). Approximate models for physically based rendering. Siggraph 2015. Activision.
Judge, K. (2011, August 23). Shader Code for Physically Based Lighting. http://altdevblog.com/2011/08/23/shader-code-for-physically-based-lighting/
Karis, B. (2013). Real Shading in Unreal Engine 4. Siggraph 2013. Retrieved from http://blog.selfshadow.com/publications/s2013-shading-course/karis/s2013_pbs_epic_slides.pdf
Klint, J. (2006-2016). Leadwerks Engine. http://www.leadwerks.com
Lafortune, E. (n.d.). Mathematical Models and Monte Carlo Algorithms for Physically Based Rendering. Leuven: Department of Computer Science, Faculty of Engineering, Leuven University.
Lafortune, E. P., & Willems, Y. D. (1994). Using the Modified Phong Reflectance Model for Physically Based Rendering. Department of Computing Science, K.U. Leuven.
Lagarde, S. (2011). Adopting a physically based shading model. https://seblagarde.wordpress.com/2011/08/17/hello-world/
Lagarde, S. (2011). Feeding a physically based shading model. Retrieved from https://seblagarde.wordpress.com/2011/08/17/feeding-a-physical-based-lighting-mode/
Lagarde, S., & Hardui, L. (2013). The Art and Rendering of Remember Me. seblagarde.wordpress.com/2013/08/22/gdceurope-2013-talk-the-art-and-rendering-of-remember-me
Lagarde, S., & de Rousiers, C. (2014). Moving Frostbite to PBR. Siggraph 2014. Electronic Arts (EA), EA DICE.
Lazarov, D. (2011). Physically based Lighting in Call of Duty: Black-Ops. http://advances.realtimerendering.com/s2011/index.html
Lazarov, D. (2013). Getting More Physical in Call of Duty Black Ops 2. Siggraph 2013. Activision. http://blog.selfshadow.com/publications/s2013-shading-course/lazarov/s2013_pbs_black_ops_2_slides_v2.pdf
Meinl, F., & Dabrovic, M. The Atrium Sponza Palace. Crytek. Retrieved from http://www.crytek.com/cryengine/cryengine3/downloads
Neubelt, D., & Pettineo, M. (2013). Crafting a Next-Gen Material Pipeline for The Order:1886. Siggraph 2013. ReadyAtDawn Studios.  http://blog.selfshadow.com/publications/s2013-shading-course/rad/s2013_pbs_rad_slides.pdf
Pharr, M., & Humphreys, G. (2012). Physically based rendering: From Theory to Implementation. Morgan Kaufmann.
Seymour, M. (2013). Game environments – Part A: rendering Remember Me. https://www.fxguide.com/featured/game-environments-parta-remember-me-rendering/
Wilson, J. (2013). PBR Practice. http://www.marmoset.co/toolbag/learn/pbr-practice

Friday, 21 April 2017

Wip

A little project I've been working on. Still WIP; I'll probably do a proper video once it's come along a bit.



Using SDL for controller input
It's all C++

Wednesday, 17 February 2016

Physically Based Rendering in Leadwerks (C++)

As part of my final year university work I've been looking into Physically Based Rendering (PBR) and its implementation. PBR is a method of generating images that takes physical values into account, which means lighting behaves much more predictably and accurately.

This link https://www.fxguide.com/featured/game-environments-parta-remember-me-rendering/ gives a fantastic and in-depth look into how PBR works in Remember Me. Notably, PBR systems aren't all the same: they often use different algorithms to generate images, and there are many ways to generate reflections for them.

This implementation uses a cubemap rendered at runtime to generate reflections and ambient diffuse lighting on objects, which is nice for avoiding the flat-lighting-in-shadowed-areas problem. It then uses a physically based BRDF to generate specular highlights. Currently static models and animated models have shaders, and there is also preliminary decal support. All light types are supported, with shadowing. The materials use metalness/roughness maps.
There are a number of example assets in the zip file.

You'll need the C++ version of Leadwerks for this one; the reflection generation code is written in C++ and relies on a number of OpenGL calls to function properly. This does mean, however, that you can generate a new reflection map at any point.

I was aiming for a similar system to the one used in GTA V. In that game a low-poly, diffuse-only environment map is generated each frame in order to render reflections. For my work, however, performance isn't quite high enough to generate a reflection map per frame yet. Reflections are generated as cube maps; each map is then sampled with seamless cube-map filtering (GL_TEXTURE_CUBE_MAP_SEAMLESS) enabled, the lowest mip level is used for diffuse lighting, and higher mip levels are used for the specular reflections.
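
The roughness-to-mip mapping isn't spelled out above, so here's the rough idea (a simplified sketch, not the exact shader code):

    import math

    def reflection_mips(roughness, cubemap_size=128):
        # Number of mip levels in the reflection cube map (cubemap_size is a placeholder).
        mip_count = int(math.log2(cubemap_size)) + 1
        # Rough surfaces sample blurrier (higher-index) mips, mirror-like surfaces the sharpest.
        specular_mip = roughness * (mip_count - 1)
        # The ambient diffuse term reads the smallest, blurriest mip.
        diffuse_mip = mip_count - 1
        return specular_mip, diffuse_mip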

(If you are using the Lua version of Leadwerks, you may be able to use pre-blurred cube maps and get acceptable results.)

Here's an example of the shaders working on a complex PBR asset, built by Andrew Maximov: http://polycount.com/discussion/130641/cerberus-ffvii-gun-next-gen-asset-giveaway

Without-PBR

With-PBR

I'm looking for feedback on how robust the system is, so here is a download link; you're free to use this as you please.
https://dl.dropboxusercontent.com/u/23307723/Leadwerks_PBR.zip


Issues
I've had to use the alpha channel for gloss values, so transparency is a bit borked. Decals suffer from the alpha problem as well.
I've looked into binding reflection textures to the lighting shaders, which would avoid hijacking the alpha channel, but it doesn't seem to work if any post-processing or water is used. If anyone can point me in the right direction here it would be appreciated.

This implementation uses normalised Blinn-Phong for the distribution term, Schlick’s approximation for Fresnel, and Smith’s shadowing function for the geometry term.


Thursday, 3 July 2014

Custom GLSL shaders in BGE: View independent reflections

Reflections are an important part of creating realistic materials; unfortunately, they are difficult to render in real time. The specular component is a vague approximation of a light reflection, but for metals or glossy surfaces it isn't really good enough.
Ray tracing is an elegant solution; however, it is beyond the abilities of modern graphics cards to render accurate real-time reflections with this method while maintaining playable framerates.

Thus we get image-based techniques for rendering real-time reflections. There are three main techniques for displaying reflections: sphere mapping, dual paraboloid mapping and cube mapping.

Sphere mapping is the simplest method of generating reflections. It is also the easiest to use in the Blender game engine, as it is natively supported by the material editor.


A sphere mapped Suzanne
The main drawback of sphere mapping is that it is view dependent: whatever angle you view the object from, the reflection will always be the same.

This is where view independent reflections come in!

Two techniques exist to render view independent reflections: the current industry-standard method is cube mapping, but there is also a technique named dual paraboloid mapping.

Dual paraboloid mapped Suzannes
The benefit of using dual paraboloid mapping is that you only need two textures to achieve the effect, rather than the six required for cube mapping.


2 Textures representing the reflection on an upper and lower paraboloid
Dual paraboloids do have disadvantages compared to cube maps: they are more prone to distortion, and because they are less widely used, resources on them are limited. If you are interested in learning more, these are the resources I used when implementing them:
Dual paraboloid reflections - Kyle Hayward
The art of texturing using the opengl shading language - Environment Mapping - Jerome Guinot
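
The core of the technique is how a reflection vector gets squashed onto the two paraboloid textures. As a rough Python sketch of the lookup maths (sign conventions vary between implementations, so treat this as illustrative):

    def dual_paraboloid_uv(r):
        # r is a normalised reflection vector (x, y, z); +z faces the front paraboloid.
        x, y, z = r
        if z >= 0.0:
            # Front paraboloid texture.
            u = x / (2.0 * (1.0 + z)) + 0.5
            v = y / (2.0 * (1.0 + z)) + 0.5
            return "front", (u, v)
        # Back paraboloid texture.
        u = x / (2.0 * (1.0 - z)) + 0.5
        v = y / (2.0 * (1.0 - z)) + 0.5
        return "back", (u, v)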

Finally we have cube mapping. As previously mentioned, cube mapping is the most common technique you will see for displaying image-based reflections. Compared to dual paraboloid maps, cube maps have less distortion and a more consistent pixel density, and they are also easier to generate.

Cube mapping

Cube maps do, however, require six separate images, making them the most expensive to render in realtime. Blender has support for cubemaps (via custom GLSL shaders); it does however have a few issues with its implementation, the most notable being the inability to change the cube map with bge.texture (as far as I can tell).

A blender cube map
Information on cube maps can be found everywhere; here is a link to the site I used to create the shader. There are also many advanced techniques that can be used to enhance cube maps, notably box projected cube environment mapping. Remember Me and Ryse: Son of Rome also used a technique called parallax-corrected cube mapping, but the details of that went a little over my head!

===================== File Download ========================

This was quite a broad overview of these techniques; there is plenty more to discover if this interested you.

Again, there will be a post up in the coming weeks continuing on from this.







Thursday, 26 June 2014

Custom GLSL shaders in BGE

Has it been over a year since the last post!? Whoops.

As you may or may not be aware, the Blender game engine has support for custom GLSL shaders. These allow greater potential variation than the material panel by giving an artist access to how the game renders at a lower level.

I've been having a go at writing these recently:

First off we have a standard Phong shader. This uses Lambert diffuse and a stretched Phong for specular highlights. There's a version with clipped alpha support as well.



Next up, cube mapping! For non-planar reflections, cube maps have pretty much become the standard in real-time applications. Blender's support for them is unfortunately a bit limited: I was unable to get variable roughness with cube maps as there is no support for choosing the mip level, and I was also unable to swap them around with bge.texture, making it difficult to change the reflections based on the location in a level.



Finally we have something a little more interesting: normalized Phong! This is a simple tweak to the Phong shader that provides quite a pleasing result. It takes principles seen in PBR (physically based rendering) systems and applies them to a simpler model.



Essentially what this shader does is use one texture to control specular intensity and gloss values, but it does so in a physically correct manner. This makes for small, bright highlights on shinier objects and more matte, dimmer specular on rougher objects.
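
The "normalized" part is just an energy-conservation factor applied to the usual Phong specular. As a small Python sketch of the idea (not the shader code itself, and the gloss-to-exponent mapping here is only an example):

    import math

    def normalized_phong_specular(r_dot_v, gloss, spec_intensity, max_exponent=512.0):
        # Map the gloss texture value (0-1) to a Phong exponent (mapping is illustrative).
        n = 1.0 + gloss * max_exponent
        # (n + 2) / (2 * pi) keeps the reflected energy roughly constant, so high exponents
        # give small, bright highlights and low exponents give broad, dim ones.
        normalisation = (n + 2.0) / (2.0 * math.pi)
        return spec_intensity * normalisation * max(r_dot_v, 0.0) ** n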

All the shaders you see above are gamma corrected, so they should look fine on most monitors. I may modify them to accept an external value for the gamma; it would then be trivial to make a gamma slider option, which would be all professional and whatnot.
Unfortunately they don't have shadow map support and only seem to recognize up to 8 lights. That said, future versions of Blender have been pegged to get some updates in the shadows area (hopefully); I'm not sure about the light limit.
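
On the gamma point above: the correction step itself is tiny, so exposing the exponent would be straightforward (a sketch, not the shader code):

    def gamma_correct(linear_rgb, gamma=2.2):
        # Convert linear-light values to display values; gamma would come from the slider.
        return [max(c, 0.0) ** (1.0 / gamma) for c in linear_rgb]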

===================== File Download ========================

Hopefully someone will find them useful!

There will be another post up next week continuing on from this one.





Friday, 24 May 2013

Ody-C : Update 23/03/2013

It's been a while since the last update, as usual, but here are a few little updates:






I posted this on the blog a while ago, with a short video. The laser gun is a fast-firing but inaccurate and weaker rifle. This is one of the enemies' weapons, so you pick it up by beating them.





I originally removed jumping as it didn't fit in with what I wanted the game to feel like; however, people seemed to lament its removal, so it's been added back in, with a twist! When aiming down the sights of a weapon the player will still roll, but when firing from the hip, pressing space will cause the player to launch into the air via jet boots. There's double jumping too: double jumping that makes sense according to the laws of physics, again due to the jet boots. The direction of the jump depends on where the player was walking, so it is similar in feel to the roll.






I could have just used martinsh's SSAO, but I wanted to see if I could write my own. It still needs tweaking to improve the effect, as some of the artefacts it produces are a little obvious, but overall I'm happy with it. It is quite a subtle effect, and I still intend on baking most AO; this is just to give more "pop" to dynamic objects. The 2D filters have all been tweaked slightly as well to give the current look. The bloom could do with being a little more obvious, I feel.

Wednesday, 20 March 2013

Quick Ody-c update

I've finally got round to finishing the animations for the laser gun, so here's a short update video showing them off.
There'll be a decent dev-log soon (hopefully); this is just a quick preview for people reading the blog (and those on Twitter).

Monday, 11 March 2013

BGMC 7

This weekend saw the 7th Blender Game Making Challenge. The theme for the four-day event was sky!

My entry was a first person platformer called SKY. The objective is simple: you have to reach the end of each level. In order to do that you need to use a special wing-suit.

Here's a link to the bgame page for the game. You can play it if you wish!




Any problems with the game? e-mail me here: mattline1contact@gmail.com

Saturday, 16 February 2013

AI Breakdown - Part 2

The AI doesn't only have alternate states; there is also a Python module called AIFunctions, which holds a variety of external functions. Most are small functions, such as aligning the AI to a vector or searching for visible nodes, but some are needed to make sense of the AI code.

def Attacking():
The attacking function simply aligns the AI to the player and fires its weapon, in other words attacking the player.
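In BGE Python that boils down to something like this (a simplified sketch rather than the exact project code; the "fire_weapon" property name is just a placeholder):

    def Attacking(ai, player):
        # Turn the AI's forward (+Y) axis towards the player a little each frame...
        distance, global_vec, local_vec = ai.getVectTo(player)
        ai.alignAxisToVect(global_vec, 1, 0.2)
        # ...and flag the weapon logic to fire (placeholder property read elsewhere).
        ai["fire_weapon"] = True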
States - detailed

Constant
The constant state is just that: it runs constantly. The actions associated with it run all the functions that the AI requires to operate.
e.g.
The death action:
 if Ai["Ai_health"] < 1:
     Ai.state = 17
(Notably this is the only action in Constant which changes the AI's state. Because Constant runs at all times, it will always override any actions that change the AI's state from the other states below it.)
The Cycle action:
This relates to activating AIs. The actual function to activate an AI is run on the entire scene (AI independent); this action merely updates the AI when it does become active.
The cycle function's main purpose is to update the AI's awareness of its surroundings. It checks the following two points:
  • Can it see the player?
  • Is it defending?
The Aggro action:
This simply switches the AI from its initial Idle state to an aggressive state if it realises the player is near.
Idle
The Idle state is not often used; it is more of a transition state.
If the AI is aware of the player, the Idle state defines whether the AI should attack the player or whether it should defend the point it is at.
It also allows the AI to move to key points on the map, whereupon it will be set to the Defend state.
Turtle
The Turtle state is how the AI deals with close proximity to the player, or with needing to react aggressively at a certain point.
The AI is able to quickly dart from side to side; there are two triggers for this to happen.
The Shuffle action:
  • A random value (two values trigger this: one to shift left, one to shift right).
  • If the player hits the AI, it will shuffle (or die).
The AI is also able to attack the player whilst in this state; this is handled by the AIFunctions module.
The Attack action:
  • The Turtle action uses the Attack function from the AIFunctions file.
Attack
The Attack state is more focused on getting the AI into a position to attack the player than on the attacking itself.
The Follow action:
The follow action uses the path-finding functions defined in the AIFunctions file; it causes the AI to pursue the player.
The Turtle action:
Whilst in this state the AI can trigger the Turtle state; this happens when the player gets too close to the AI.
The Attack action:
The attack action runs simultaneously with the follow action so the AI can attack the player whilst moving. Bone constraints are used to maintain orientation and aim towards the player.
  • The Attack state calls the Turtle state when the player gets too close (see above).
  • The Turtle action uses the Attack function from the AIFunctions file.
Defend
The Defend state is nearly identical to the Attack state; the only difference is how the AI deals with attacking the player.
  • The AI will pursue the player within a certain radius of the node it is defending (see Part 1 for nodes).
  • If the player exits the radius, the AI will move back towards the node.

Dead
Exactly what it says: this state deals with cleaning up the AI when it dies.

Wednesday, 16 January 2013

AI Breakdown - Part 1

In this update I showed the AI I had been working on for the game. I felt it was important to get the AI to a reasonably intelligent level, as a substantial part of the game will be combat. This post is simply going to explain how I went about making the AI.

Disclaimer: This is simply how I made the AI for my game; it is by no means the only way, and it's definitely not the best. This post hopefully shows that an acceptable level of intelligence for AI can be based on a relatively simple system.

Research
Research is important for AI: look at how others have gone about it; there's no point re-inventing such a complicated wheel. With the videos, try to look for the specific actions the AIs perform.


Overview
The AI uses a state machine system (article 2) with a little bit of algorithmic AI thrown in. The main tasks are organised into states, but within those states are multiple potential actions. These actions are chosen based on the environment and the AI's interaction with other AIs and the player.

States - overview
Constant - this runs perpetually whilst the AI is "alive"; it defines actions that the AI must always be running.
Idle - when idling the AI simply waits for instructions.
Turtle - similar to idling, but occurs when the AI is targeting an enemy.
Attack - the AI acts aggressively towards its target.
Defend - the AI acts aggressively but stays within a certain area.
Stunned - the AI enters this state when it is "killed"; this state always results in the AI being destroyed.
An AI in the Turtle (left) and Attack (right) states.

Active?
In Article 1 you may have read how the Halo 2 AI only activates one enemy at a time; this not only saves processing time but helps to slow the AI down so it doesn't instantly react to everything. My AI uses this system as well, as it negates the need for artificial timers to simulate reaction times, on top of the aforementioned performance bonus.

Path-finding
The AI's path-finding is not as precise as implementations such as A*; if you put the AI in a maze it would get lost. But this leads to what I feel is more realistic movement: instead of instantly finding the shortest possible path to the player, the AIs "guess" their way to the player!

There are three functions used for path-finding.

  • closest node
  • path-find to node
  • furthest node (rarely used)
In order to understand how these functions work it's important to know how the node system works.



When the scene is first loaded, a script loops through all the verts on a nav-mesh and adds a node at each point. Each node then casts a ray to every other node in the scene; if the ray is unblocked, a "link" is made between the two nodes (each node has a dictionary of its visible nodes).

So you end up with a system of points with paths to other points. The AI then simply uses the three functions mentioned previously to select the appropriate node. When the AI reaches a node, that node is set as occupied by that AI.
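
A simplified version of the link-building step looks something like this (not the exact script; the "links" property name is just a placeholder):

    def build_node_links(nodes):
        # nodes: the node objects spawned on the nav-mesh verts.
        for node in nodes:
            node["links"] = {}  # dictionary of visible nodes and their distances
            for other in nodes:
                if other is node:
                    continue
                # Cast a ray from this node to the other one; if nothing solid is in
                # the way (we hit the node itself or nothing at all), record a link.
                hit_obj, hit_pos, hit_normal = node.rayCast(other, node)
                if hit_obj is other or hit_obj is None:
                    node["links"][other.name] = node.getDistanceTo(other)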


  • closest node - This looks for the closest visible node to the AI's target, and is often used when the target is visible.
  • furthest node (rarely used) - Works exactly like closest node, but the furthest visible node is picked.
  • path-find to node - This one is slightly more complicated. It is called when the target is out of sight; instead of looking only at visible nodes, it also looks at the nodes stored for those visible nodes (there's a rough sketch of this further down), e.g.




Here node A would be chosen: even though the player isn't visible, node A can see the player, so node A is selected. The actual algorithm uses up to three such links.

Here, even though B can't see the player, node A can, and as A is visible from B, B is chosen as the best node to take.

The player isn't stored as an object! It is actually stored as co-ordinates, so even if the player were hidden from the view of A, the AI would make its way towards the last known position.

(Currently the AI treats the last known location as the player and attacks it; this is a bug that needs fixing, either by switching to the Defend state or perhaps by searching for the player.)
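
For the curious, the path-find-to-node idea boils down to something like this (again a simplified sketch; can_see would be another ray-cast check, and nodes maps node names back to the node objects):

    def path_find_to_node(ai_node, target_pos, nodes, max_links=3):
        # Walk outwards through the link dictionaries, up to a few links deep,
        # looking for a node that can see the target's last known position.
        frontier = [(ai_node, [])]
        for _ in range(max_links):
            next_frontier = []
            for node, path in frontier:
                for name in node["links"]:
                    neighbour = nodes[name]
                    if can_see(neighbour, target_pos):  # hypothetical helper: another rayCast
                        # Head towards the first node on the chain that leads there.
                        return (path + [neighbour])[0]
                    next_frontier.append((neighbour, path + [neighbour]))
            frontier = next_frontier
        return None  # nothing found within range; fall back to another behaviour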

Part 2 will explain how each state works in more detail; it will also explain the actions associated with each state.