Monday, 9 December 2013

Horror Project

WORKFLOW

Initial Brief:
-3D level
-Use UDK
-About a single phobia

Pre-Production:
-Phobia Research
-Horror Research
-Art Mediums
-Initial Sketches
-Detailed Concepts
-Final Concepts
-Layouts

Production:
-Create Assets
-Layout Test
-Build Main Sections
-Import assets, define details, materials
-Light Level
-Kismet/Script Level

Post-Production:
-Feedback
-Evaluation

------------------------

Pre-Production
Phobia Research

Fear is a primal feeling of dislike, a desire to run, an instinct to protect oneself. This means that fear occurs under rational circumstances, where danger is present. A phobia is typically an irrational fear, where the same emotion is felt but there is no danger, simply something which the experiencer is scared of without reason.

Examples of common phobias:
-Spiders
-Clowns
-Heights
-The Dark
-Being Alone
-Small Spaces
-Large, Open Spaces
-Snakes

More esoteric phobias:
-Cotton Wool
-Being Touched
-Mouths
-Being Ridiculed
-Throwing Things Out (Hoarding)
-Sex

Evaluation of usefulness per phobia:
Spiders: Very typical, used often, subtlety is difficult. Personal fear of spiders is detrimental to working on this.
Clowns: Same as above, easier to be subtle... interesting designs, may require lots of face models
Heights: Same again, once more easier to be subtle, not much room for different aspects of horror/terror
The Dark: Unsurprisingly, common phobias are commonly used. Probably overused, new directions difficult.
Being Alone: Difficult to convey without much conventional narrative.
Small Spaces: Easy to demonstrate, but possibly difficult to create tension with unless combined with other elements.
Large, Open Spaces: Again, maybe use in conjunction with other phobias without explicitly naming it?
Snakes: See spiders.
Cotton Wool: Would probably require yet-to-be-invented peripherals that convey the texture of a surface to the touch.
Being Touched: Possible difficulties similar to cotton wool but maybe less so... also, not easy to convey or maintain for an extended period.
Mouths: Unusual, possible difficulties with subtlety, scope is mild, overall promising.
Being Ridiculed: Initial ideas require recording multiple sounds, though staging possibilities seem interesting. More narrative based.
Throwing Things Out (Hoarding): Probably too specific to sufferers. Discarded for the most part.
Sex: Requires a maturity and a great deal of understanding that I feel I lack. Also probably requires extensive modelling at a high level.

The most promising phobias appear to be Spiders, Clowns, Heights, Mouths or Being Ridiculed.

Pre-Production
Horror Research

Consider the difference between horror and terror. Horror is the feeling of revulsion or disgust after witnessing a frightful occurrence; terror is the anticipation, the feeling of unease before the actual act.

Typically, in a horror game, gore and blood and scary things are used to induce a feeling of "horror" in the player. However, horror enthusiasts most often claim terror to be the more effective of the two. They believe that horror is a cheap scare, and that really, the more satisfying of the two is the insidious paranoia brought on by a feeling of terror.

Horror Games:
-Silent Hill 2
-Amnesia: The Dark Descent
-Fatal Frame
-System Shock 2

It is also useful to examine films, the closest entertainment medium to games we currently have.

Horror Movies:
-Psycho
-Saw
-The Ring
-The Shining
-A Nightmare On Elm Street

Pre-Production
Art Mediums

During the production of the concept work for my level I experimented with various different mediums, in an effort to explore the strengths and weaknesses of each. The four mediums that I used were pencil, Copic markers, charcoal and digital (Photoshop).

Pencil
The standard, basic material that works decently well for almost every situation. It's good for creating clean and detailed work, has a wide range of values for creating shadows and is also great for giving surfaces texture. It's relatively forgiving, as it is easy to lay down light strokes and erase mistakes. A slight disadvantage is the relative difficulty in laying down silhouettes in comparison to other mediums, which makes it marginally less useful in the initial concept stages.

Copic Markers
A set of monotone markers ranging from a dark-as-night black to a barely-there pale grey, these each have two tips: a flat-headed marker at one end and a standard tip at the other. These are great for initial concepts, and the flat-headed ends are perfect for silhouetting out interesting shapes. I use the standard tip for detail and it works quite well. The one negative to using these markers is the lack of a sharp point, which makes fine details more difficult to achieve. Still, the ability to create good silhouettes, blend different values together and do some detailing makes these a pleasure to work with and situationally useful.

Charcoal
Of this, I am not a fan. I find it messy, difficult to control, inconsistent and not very effective in the way of creating details. On the more positive side it creates what I feel to be quite dank looking environments, with lots of soft, dark shadows, which create a sense of mystery on the page. For this horror project, that effect was ideal and as such, despite my personal distaste for the material, I did, on occasion (read: once), manage to make it work enough to create a piece that worked well.

Digital (Photoshop)
My personal favourite of the bunch, working digitally is the most forgiving and requires the least amount of artistic skill to produce something that one can be relatively happy with. Much like pencil, it requires very little effort to learn the medium itself. One can simply jump in with a brush and start creating. Furthermore, the nature of digital means I can work non-destructively, always tweaking and changing things without having to worry about mistakes. It allows me to experiment more openly and push further than I would feel comfortable doing in traditional media for fear of ruining a piece.

Photoshop is versatile and allows for creating many different styles of work. It does have some difficulty imitating real life mediums and so loses some of that texture and unevenness that can prevent an image from becoming too clinical. However, adding actual texture to a material in an image is much easier and less time consuming. Overall, an incredibly powerful and relatively easy to use tool.

TL;DR - It's amazing and I love it and it's great.

Pre-Production
Initial Sketches

Brainstorm about aspects of the different phobias

A quick set of digital paintings to explore the variety of environments that each phobia gave me.

A mood board to stimulate the brain and provide accurate, detailed information about forms.

A mood board to stimulate the brain and provide accurate, detailed information about forms. There is no mood board for spiders because, despite the fact I did look up reference images for some parts of the work, my phobia of them begged me not to have a file on my computer containing nothing but images of them. Unprofessional, maybe, but even still... not happening.

A quick greyscale set of thumbnails to determine a style and tone to try and keep consistent. Also worked well as a warm up to get the brain thinking in the right sort of way for a horror concept.

Initial set of concepts of a few aspects of each phobia to decide what was actually scary about each one. The detailed mouth image actually inspired one of the set pieces in the final level.


Pre-Production
Detailed Concepts



These are the more detailed variations on that mouth idea from my thumbnails. I tried a few different variations before deciding on the bottom left line-sketch design with a tongue and flatter teeth which is less realistic. I coloured it to get a better sense of the depth of the image and used it to base my final concept off of.


Pre-Production
Final Concepts


One of my final concepts was done in Photoshop so I can demonstrate it here. The other two were done in Copic markers and charcoal and are physical copies.


One of my final concepts, this was used as the design for the "ScaryFace" set piece.


Pre-Production
Layouts


After designing the set pieces I wanted to include in the level, I set about creating a flowing environment in order to piece them together.

Layout 1 : Basic Layout

Explanation
In this first layout I started simple, lining the set pieces up one after another, linearly. This, of course, created a very uninteresting map, but I decided I wanted to start with the basics so I could figure out which pieces should go in what order. The "TallRoom" is the spawn room, and from there the four corridors split off, three being blocked and one leading directly into the "SquelchRoom". From there, the top of the jumping puzzle leads directly to a normal corridor which ends with "ScaryFace". At the back of his throat is the "TableRoom", which then constitutes the end of the level.
Evaluation
The "TallRoom" as a beginning point has both positives and negatives. Firstly, it works because of the scale, which induces awe immediately. Also, the broken rubble blocking of more paths creates the sense of a larger area, a more real place than just a box room created for a game. It has problems, however, because the lack of a real build up. It doesn't provide any context and also feels slightly random, disconnected compared to the rest of the level. (Note, this could be a result of a poorly designed set-piece that doesn't fit with the rest of the level, rather than an actual problem with the order.)
The "SquelchRoom" also doesn't really fit, connecting straight from what is quite obviously a man-made structure to an organic environment. I feel as though it should go behind "ScaryFace" as this lead up makes more sense and provides greater context.
Next up is the corridor to "ScaryFace". I feel that going from inorganic, to organic, to inorganic, to organic once again is a poor order of things; without a clear build-up it chops and changes between two environments and doesn't make much sense.
Finally, the "TableRoom" makes up the end of the level. This, I feel, kind of fits, with the reveal of the words at the end.


Layout 2 : AppearingFace Layout

Explanation
The second time around I wanted to explore more interesting possibilities, as well as try and fix some of the issues with the first layout, especially the order of the set pieces. In a familiar set up, the level will start in the "TallRoom". Then into a generic corridor leading up to a trigger which replaces it with "ScaryFace". This then leads to the "SquelchRoom" and then to a final corridor which terminates in the "TableRoom".

Evaluation
The same problems exist as before, as well as the advantages.
From there, the available corridor leads to a few doors, some of which are locked, some lead into empty rooms and one to another corridor. This second corridor will have a door at the end. As the player works down the corridor, the lights will flicker off and during the darkness, "ScaryFace" will stream in, and the doors at either end will disappear, forcing the player in the desired direction. From his throat will lead into the "SquelchRoom" and then the table room once more.


Layout 3 : FallingDown Layout

Explanation
This time I wanted to try an idea I'd had where the level is essentially an infinite loop. The player would end up back at the beginning, everything would reset and there would be no end. I wanted this to increase the feeling of despair and of being lost. The player would start in a variation of the "TableRoom" that looked perfectly normal. Upon trying to leave via the only exit, the floor would disappear from underneath them and they would fall, landing in a copy of the "TableRoom" that was more disturbing. From there, a corridor leads into the "TallRoom", in which the only available exit would lead to "ScaryFace". From there to the "SquelchRoom", and then the exit of that would take the player back to the normal version of the table room for everything to reset.

Evaluation
Starting in a normal version of the "TableRoom" actually works really well to juxtapose the scarier version against, and I feel it would heighten the feeling of unease in the player. Also, I really liked the idea of the player falling downwards, a similar idea to "falling down the rabbit hole". Going from there to the "TallRoom" made sense thematically, and helped me link the two separate environments of a normal building and the innards of a person, introducing coherency in the level design and making a subtle narrative with the actual level design itself. From there a door opened to "ScaryFace", which set up for the "SquelchRoom". This, I felt, was the weakest part of the level as the reveal for "ScaryFace" wouldn't be that scary. The exit of "SquelchRoom" would lead to the player falling once again into the normal version of the "TableRoom" so the level could start again. Initially I thought this would increase the feeling of despair and of being lost, but in practice, because the level was so long and there was only one route, it became boring and repetitive without increasing the tension at all.


Layout 4 : Final Layout

The layout that I decided to use in the end was a version of "FallingDown" with a few small additions from "AppearingFace". The overall layout would be pretty much the same as "FallingDown", except the corridor to "ScaryFace" would be the same as in "AppearingFace" and it would make the face appear in the same way. This negated the problem I had with the face not being a particularly big reveal in Layout 3. Also, I had hoped to include the rooms alongside the corridors but due to time constraints had to cut them. Creating additional assets for optional areas would have taken extra time that I didn't want to spend, instead focusing on polishing the level up as much as possible. Finally, because I felt the never-ending level idea would actually detract from the level's scariness, I cut it, instead opting to end the level when the player got to the end of the "SquelchRoom".


------------------------

Production
Create Assets


For this level I created a lot of assets, all UV unwrapped (to varying degrees of quality), textured and some even have custom collision meshes. Here is a list of everything I created:

-Curved throat piece (with custom collision)
-Straight throat piece (with custom collision)
-Chair
-Door
-Planks to bar the door
-A fake wall
-A small floor transition to hide the seam of floor textures
-Picture frame
-Pillar
-3 different platforms (each with custom collision meshes)
-Ramp
-Rubble
-Scary face (with custom collision)
-Eyes
-A set of teeth (separate meshes, top and bottom)
-Tongue
-Uvula
-Squelch room (with custom collision)
-Table

This totals an overall count of 22 custom meshes created in Maya. I also experimented and took the Picture Frame model into zBrush to make a highpoly version and bake out a normal. It worked quite well, and although zBrush is a deep and complex tool, it was easy enough to use the basics of and gave me decent results even as a newcomer to the software. The actual textures were mostly created from images composited from cgTextures and altered in Photoshop, with the majority of normals created using the diffuse map in a piece of software called SmartNormal, which often doesn't give the best of results. Furthermore, I created 7 tileable textures (again using cgTextures), all with normals from SmartNormal.
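As an aside, the reason tools like SmartNormal can be hit and miss is that they essentially treat the brightness of whatever image you feed them as a height map and turn its slopes into a tangent-space normal map, which works much better with real height data than with a diffuse texture. Here is a rough sketch of that idea in Python (the filenames and strength value are invented for illustration, not taken from my actual project files):

```python
# Hedged sketch: derive a tangent-space normal map from a greyscale "height"
# image, roughly what diffuse-to-normal tools do. Filenames are placeholders.
import numpy as np
from PIL import Image

def normal_from_height(path_in, path_out, strength=2.0):
    # Treat brightness as height in the 0..1 range.
    height = np.asarray(Image.open(path_in).convert("L"), dtype=np.float32) / 255.0

    # Approximate the surface slope with finite differences.
    dy, dx = np.gradient(height)

    # Steeper slopes tilt the normal away from straight-up (+Z).
    nx, ny, nz = -dx * strength, -dy * strength, np.ones_like(height)
    length = np.sqrt(nx * nx + ny * ny + nz * nz)
    nx, ny, nz = nx / length, ny / length, nz / length

    # Pack the -1..1 vectors into 0..255 RGB, the usual normal map encoding.
    rgb = np.stack([nx, ny, nz], axis=-1) * 0.5 + 0.5
    Image.fromarray((rgb * 255).astype(np.uint8)).save(path_out)

normal_from_height("brick_diffuse.png", "brick_normal.png")
```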



Production
Layout Test


For the SquelchRoom I needed to prototype the jumping section so that I knew how well it worked, whether it was even possible, readjust the layout, then evaluate the difficulty, readjust again, over and over till I felt it worked well. The UDK is ideal for this because of the inbuilt BSP system that allows for rapid creation of simple environments that are easy to test and then adjust.

The layout didn't change dramatically from its first incarnation. I focused mostly on refining the difficulty of the jumping, which I felt at the time (and still do) was a necessary mistake due to time constraints.




Production
Build Main Sections



Defining the main areas was a simple task. Normally, I would endeavour to create the entire layout in BSP and then see how well it flows and how good the pacing is, and then be able to tweak or even re-imagine as needed. However, due to the scope of my level and all the detailed asset creation required, I ended up forgoing this and simply creating the base rooms out of BSP. Essentially all the walls and floors except in the "SquelchRoom" are BSP with materials, so all of that was done at this step.

Production
Import Assets
Place Meshes
Materials


Importing the assets was time consuming but, thankfully, due to my extensive, rigorous and consistent naming conventions and folder organization, I could do it efficiently.

Placing the meshes was a simple job since I had planned out the layout beforehand and knew where everything went. At this stage I put in some height fog to give greater depth and atmosphere, especially in the "TallRoom".

Materials were a simple enough job; much of it consisted of just plugging the right textures into the right slots, though for the decals it was a bit more involved.

Production
Light Level


Lighting went smoothly for the most part. There was a skylight to kill any pitch-black areas, and I tried to place lights only where I had put light sources, but in the end I needed a few extras to highlight doorways, and the "SquelchRoom" had no light source at all. Thankfully, it looked okay, since the walls looked as though they glowed enough to believably light up the room.

Production
Kismet/Script Level


The main sequence of Kismet in its entirety. It has four parts (clockwise from top left): StartOfLevel, EndOfLevel, FalseCorridor and FalseRoom.

The second sequence, stored in a streamed level, used a trigger to start a matinee which opened a door, whilst playing the correct sound at the right time.
StartOfLevel: When the level initially loads, I make sure that the FalseCorridor and FalseRoom levels are also loaded and visible. The wire exiting the bottom of the image initialises the normal music. Once the player is spawned, using console commands, I adjust the FoV, change the player's walking speed and turn on godmode. In the future I would probably also hide the player model at this point.
FalseRoom: When the player touches the trigger, the normal music is turned off, a sudden sound plays in an effort to make the player jump and creepy music starts playing, which feeds back into itself to ensure it loops. And most importantly, the FalseRoom level is hidden and then unloaded so that the player falls down as desired.
FalseCorridor: Touching this trigger turns off the lights around the player and plays a sudden sound. When the sound finishes, the lights are turned back on. To do this I used a delay, but I should have just used the finished 'trigger' from the Play Sound node. "Trigger_2" also hides the FalseCorridor and then unloads it, simultaneously loading in the ScaryFace level and making it visible.
EndOfLevel: Finally, this trigger is used for the end of the level. It disables input from the player, preventing them from moving, and then hides the player model (I should actually do this at the beginning of the level). It also activates a matinee sequence that uses a camera to fade to black. After that completes, I make sure the streamed levels are invisible so that the player's screen remains black, signifying the end of the level.
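Since Kismet is a visual, node-based system there's no code to paste here, but the logic of the FalseRoom sequence boils down to something like the pseudocode sketch below. The function names are my own and are not real UDK/Kismet API; they simply mirror the node graph:

```python
# Hedged pseudocode sketch of the FalseRoom Kismet sequence described above.
# trigger -> swap music -> play stinger -> hide and unload the streamed level.

def on_false_room_touched(player, world, audio):
    audio.stop("normal_music")
    audio.play_once("sudden_stinger")      # the jump-scare sound
    audio.play_looping("creepy_music")     # feeds back into itself to loop

    # Hiding then unloading the streamed level removes the floor,
    # so the player falls into the more disturbing copy of the TableRoom.
    world.set_level_visibility("FalseRoom", visible=False)
    world.unload_streamed_level("FalseRoom")
```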
 ------------------------

Post-Production
Feedback

An important part of the creative process is receiving and taking in feedback and applying it. As such, I created a small questionnaire designed to home in on feedback that would be useful to me. It had to be concise and specific, with no closed questions. This is what I came up with:

-What was your overall impression of the level?
-What do you think the theme of the level was?
-What do you think didn’t work very well/could have been improved in the level?
-What were your favourite moment(s) in the level, and why did you enjoy them?

Here is a link to a copy of my original set of feedback and also a corrected set, which retains the exact same feedback but is slightly easier to read.

Original

Post-Production
Evaluation

A horror theme isn't exactly my forte. I know very little about it, have very little interest in it and don't understand much of the nuance required to do it justice. However, researching it did lead to some interesting discoveries that enabled me to concentrate on certain aspects that, while I knew they were important to the genre, I previously wasn't too familiar with. Some of these things include introducing normality and then taking it away, or the criticality of sound design and music in creating tension.

Concept work for this project flowed relatively well, especially after acquiring a graphics tablet and Photoshop for working at home. I find the digital workflow much more intuitive, and it allowed me greater creative possibilities. Experimenting with different mediums, while interesting, didn't inspire me all that much.

The creation process itself was, as expected, lengthy. The creation of a face mesh was really difficult and the result really isn't great but at this time it is acceptable. Experimenting with level streaming went well and I think a lot of the level did work the way I wanted it to.

As for the level itself: I think it was creepy and it was tense, but I'm not totally confident that it is scary. As Josh S. pointed out, it's very easy, with no semblance of a failure state. As such, there is nothing in the level to really be afraid of. On further iterations I would correct this, though thinking of creative ways to punish the player that fit the phobia I chose would be difficult. The presentation of the phobia itself was subtle... so much so that not a single tester guessed it correctly. The closest was Georgia L., who guessed "Being eaten alive". I am unsure as to whether I am pleased or displeased with this result and what, if anything, I would change. I didn't want the phobia to be obvious; if it were too clear then the level would be dull. However, I may have simply been unsuccessful at communicating the fear that I had set out to create and so may have failed in that respect. A few small errors in the level, such as the Uvula not being static etc., would obviously need to be fixed.

Overall, I think the level worked well but had, as is usual, some room for improvement. The music I feel was a particular strong point, and the lighting worked well. I also feel as though the jump scare was good and unexpected. The falling-down portion of the level didn't work quite as intended, since the player had no sense of movement or direction during the fall. A few solutions come to mind, though testing would be required.

Saturday, 16 November 2013

Virtual Exhibition

Part of working within an artistic field is understanding where ideas come from. Knowing why things are the way they are and what came before is key to creating meaningful and interesting visuals. Furthermore, recognizing the importance of how information is imparted to an observer is a critical step to creating a subtextual or inferred narrative, which lends a great deal of grounding to a world or character.

During the production of our curated museum of a decade, my team and I considered these things, as well as prioritising information: what was necessary for the viewer to understand the significance of the decade we were exploring.

We chose the 1950s as our decade primarily due to the prevalence and emergence of Pop Art. For me, this was an interesting movement because I see it as a realisation of the direction commercialism was taking us, leading to the world as we know it today. Furthermore, the music of the time greatly interests me as it is the precursor to much of the music I listen to personally. My own prior investment in these things made it an easy decision, as I knew I would produce much higher quality results if working on something that enthused me.

As a team we worked together doing research since that enabled us to give immediate feedback to each other regarding what we felt was most important to go into the gallery. My contributions include the prominence of Elvis in the display as well as the idea of creating the entire area to be a diner in the style of 1950’s diners. This, I felt, would give a greater sense of immersion of the society at the time.

After deciding what we felt would be most important to display we separated to begin creating assets for use. I started work on bar stools to put into the diner. I also created a door, some walls and a table to go along with the bench that Kerry created and UV unwrapped all 5 and then textured them. After that I created a counter for the corner and a plate to go on the tables.

Once we had all the main parts of the diner produced I started work on putting them together to create the scene. After the bulk of that was done I created a few extra informational posters to give context to the objects. Then, with those placed in along with everyone else's hard work, I had finally finished and we had produced a 1950s diner with some information about the decade.



Considerations on the presentation of our work:
I feel as though the work was presented adequately but was not as detailed or thoroughly explained as it could have been. I also feel as though I failed to properly show off the hard work of our team and look at the diner itself, and finished up too quickly, a result of nerves. Unfortunately, I also started talking at one point, picking up from where a teammate had gotten stuck a little, which would have been fine if I had then handed over the reins afterwards. I zoned in, got carried away and didn't allow either Kerry or Tim to demonstrate their ability to present or their knowledge.

Monday, 4 November 2013

Bin Final Concept

https://drive.google.com/file/d/0B-T5e8rbvN7HT0owZXctRGZUVUk/edit?usp=sharing

SciFi Gun - First Model

So, first model down, here's most of the process in video form:

Part1: http://www.youtube.com/watch?v=UJDXu31PuA0
Part3: http://www.youtube.com/watch?v=wMs4HKqPwh0
Part6: http://www.youtube.com/watch?v=W7UT3JXeJWI
Part7: http://www.youtube.com/watch?v=RLiWln5i5Pc
Part8: http://www.youtube.com/watch?v=aCrpqYLvP50
Part9: http://www.youtube.com/watch?v=NclKn5ObAfg

A quick note, for some reason the UVs on my file have screwed up and I don't know why. Look at the final video to see the textures the way they're meant to be. Hopefully I'll be able to fix that.

As for my evaluation:

I feel as though the design did exactly what I wanted it to do. I was aiming for near future Sci Fi but I am a bit concerned that because of this, the actual design appears lacklustre and bland.

The model itself, I think, works well. I was without orthographic images due to being unable to scan anything in. That certainly made things a lot harder for me. However, despite that, I think it came out great, without too many polygons, and most of the topology is neat.

The UV mapping was difficult and I struggled getting the workflow down. Even now I am unsure if I was working in the correct way.

The textures, I feel, are the weakest part of the model. It is something I am hoping to be able to improve upon, perhaps attempting other styles than "realistic".

Science Fiction and its Subgenres

Categorizing and classifying works of fiction into groups and genres always seemed a futile endeavour to me. Like almost all attempts born of mankind's insane desire to make everything fit into its own little groups, it often fails to acknowledge the impossibility of such a task. The infinite shades and one-offs and similar-but-not-quites mean that either the system is too general and as such provides no insight whatsoever, or is too specific and therefore the system itself becomes meaningless. Of course, that's what makes fiction interesting: the little subtleties and details that change, that work towards creating a deep and compelling narrative, conveying ideas and ideals, or perhaps warnings, or even wishful thinking. Be that as it may, let's talk about Science Fiction and its subgenres.

One big line often drawn between Science Fiction stories is how "hard" or "soft" the science elements of the story are. Of course, this is no dividing line but rather a huge gradient. Hard SciFi means staying close to reality, coming up with plausible, if not possible, solutions to scientific questions that are not yet answered. Soft SciFi, on the other hand, simply waves away any responsibility to science and is often more akin to magic than anything.
Some examples of Hard SciFi include the Foundation series by Asimov and A Fall of Moondust by Arthur C. Clarke. On the other side of the spectrum are works like Star Wars which are actually very close to fantasy.

Other sub genres of Science Fiction include Apocalyptic, Cyberpunk, Space Opera, Space Western, Steampunk and many others.
All of these have their own storytelling elements and aesthetic. Apocalyptic stories talk about the end of the world, and tend to have a drab and dirty visual style.
Space Western is, as the name suggests, reminiscent of early days in America, tough, rough and rapid expansionism.
Steampunk has little specificity in regards to narrative, but its visual style is perhaps the most distinctive: many moving parts and elaborate machinery.

Of course, this is a minute snippet of the vast library of genres within the genre of Science Fiction, never mind the fact that most works cannot fit into just one, or sometimes even any, of these categories.

A few examples of such sub genres:

Apocalyptic
Fallout 3
War of the Worlds
System Shock 2
Planet of the Apes
Mad Max
The Matrix

Cyberpunk
Blade Runner
Deus Ex

Space Opera
Mass Effect

Space Western
Star Trek

Sunday, 3 November 2013

Geometric Theory

Video games in today's world require a multitude of artists working in tandem in order to create technically and aesthetically beautiful visuals. In order to create the worlds and the characters who inhabit them, these artists require tools.


But first, an explanation of what a character, prop or landscape is actually made of. Most virtual objects are made of "polygons". These are flat planes, generated by linking a series of points. Typically, these planes are triangular, created with three points, although most modelling programs use squares with four points (otherwise known as quads) as these are easier to manipulate into the desired shape.
The points that define a polygon contain its coordinates in three-dimensional space (XYZ) and also UV coordinates, points on a two-dimensional plane that instruct the model how to wrap textures around its surface.
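To make that concrete, here is a tiny illustrative sketch (not taken from any particular engine or file format) of the data a single vertex carries and how three of them form a triangle:

```python
# Illustrative sketch only: a vertex stores a 3D position (x, y, z) and a
# 2D texture coordinate (u, v); a triangle is just three such points.
from dataclasses import dataclass

@dataclass
class Vertex:
    x: float  # position in 3D space
    y: float
    z: float
    u: float  # where this point sits on the 2D texture
    v: float

triangle = [
    Vertex(0.0, 0.0, 0.0, u=0.0, v=0.0),
    Vertex(1.0, 0.0, 0.0, u=1.0, v=0.0),
    Vertex(0.0, 1.0, 0.0, u=0.0, v=1.0),
]
```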

Different 3D modelling software packages excel in different areas. Also, they tend to come in two different styles: sculpting and modelling. Some examples of modelling programs include Maya, 3DS Max, Softimage and Blender. The industry standard nowadays is Maya, since it encapsulates modelling, animation and basic rendering tools all in one software package and does them all well to a certain degree. On the sculpting side of things it's usually either zBrush or Mudbox, with zBrush being the clear winner at this point in time. Sculpting is a process used primarily for super high detail, whereas the modelling programs are used for lower detail work.


A typical workflow involves a multitude of programs. Once the concept is finalised, the modeller could begin in Maya and import that into zBrush, or just start in zBrush and build up a million-polygon model with immense amounts of detail. Then, if the model is being used for pre-rendered material, it's pretty much ready to go after texturing. However, if it is being used for a game that requires real-time rendering, this model (if it is a main character in a realistic game) will be reduced down to around fifteen to thirty thousand polygons and then the detail from the high poly model will be “baked” out into a normal map and an ambient occlusion map.


Example of a normal map
A normal map is an image file applied to a model that gives the lighting calculations inside the game engine more information, and so makes the object look more detailed without requiring millions of polygons. The files are stored as RGB images (an alpha channel is unnecessary and wastes precious memory) where the RGB values correspond to the X, Y, Z coordinates of the surface normal.

Ambient occlusion maps simply detail darker areas around the mesh that might be difficult for the lighting engine to react with properly. Another common texture type is the specular map, which describes the shininess of an object.

A quick point about normals: they describe the facing and direction of each face of the model. This helps with lighting an object and determining how light bounces off the surface. In a basic sense, normals can either be defined per face, pointing perpendicular from its centre, or per vertex, averaged from the directions of the surrounding faces.
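A minimal sketch of both ideas, assuming simple triangle geometry (the coordinates below are arbitrary example values):

```python
# Illustrative sketch: a face normal is the normalised cross product of two
# edges; a vertex normal is the average of the normals of the faces that
# share that vertex.
import numpy as np

def face_normal(a, b, c):
    # The cross product of two edges is perpendicular to the triangle.
    n = np.cross(b - a, c - a)
    return n / np.linalg.norm(n)

def vertex_normal(adjacent_face_normals):
    # Average the normals of every face touching the vertex, then normalise.
    n = np.mean(adjacent_face_normals, axis=0)
    return n / np.linalg.norm(n)

a, b, c = np.array([0.0, 0, 0]), np.array([1.0, 0, 0]), np.array([0.0, 1, 0])
print(face_normal(a, b, c))  # -> [0. 0. 1.]
```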


In games, because they are being rendered in real time, keeping scenes as optimised as possible is an absolute must in order to maximise potential spectacle and frame rate. As such, many little tricks have been developed over the years in order to present the illusion of a detailed scene without taking a hit on performance. One such trick developers have at their disposal is the use of LoD (Level of Detail) meshes for the same object. Essentially, multiple copies of the same model are created, each with varying levels of detail (using fewer and fewer polygons). As the object gets further away, lower detail meshes are swapped in. This allows for high detail when and where it's needed without sacrificing performance, and allows for more objects in a scene, especially in cases of things being far away. Examples used prevalently include trees in almost all games and guns in multiplayer first person shooters. When holding a gun from a first person point of view, the gun model is very close to the camera and so requires a great amount of detail. However, when seeing other player characters holding a gun, it will always be further away and so using the same model is actually quite wasteful. Therefore, an artist will create a second model of every gun that has fewer polygons and use that for anyone other than the player.
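The swap itself is usually nothing more than a distance check against artist-set thresholds, something like this toy sketch (the mesh names, triangle counts and distances are all invented for illustration):

```python
# Toy sketch of LoD switching: pick the mesh whose distance threshold the
# camera falls under. Names and numbers are invented for illustration.
LODS = [
    (10.0, "gun_lod0_3000_tris"),          # close up: full detail
    (40.0, "gun_lod1_800_tris"),           # mid range
    (float("inf"), "gun_lod2_150_tris"),   # far away / other players
]

def pick_lod(distance_to_camera):
    for max_distance, mesh in LODS:
        if distance_to_camera <= max_distance:
            return mesh

print(pick_lod(5.0))    # gun_lod0_3000_tris
print(pick_lod(120.0))  # gun_lod2_150_tris
```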
A second use of LoDs is when certain objects are seen in different contexts. While not common, it's useful for being able to compare examples of LoDs that will have been noticed by every player of the game. A great example of this is the Final Fantasy series, specifically Final Fantasy 7. Renowned for beautiful graphics, the series extensively uses pre-rendered videos for dramatic set pieces. In Final Fantasy 7, this is what the main character, Cloud, looks like in a pre-rendered scene.


LoD = Super high
Obviously, by today's standards this is unimpressive. But at the time, this amount of detail in a scene was extraordinary and only possible because it had been rendered beforehand. Note the number of polygons spent on getting the hair and folds of the trousers to look smooth and realistic. Compare this to Cloud's higher detail battle model. (Note: Cloud was unique in that he had two models for battle scenes, one higher detail than the other for use in a specific sequence.)
LoD = Quite High
Again, unimpressive by today's standards, but remember, this was rendered in real time on a processor running slower than 34 MHz. In comparison to the first image, however, the differences are large. The amount of detail in the arms and hair is much lower and the folds of his trousers look much more blocky. Now the real comparisons begin: comparing between objects that both need to be rendered in real time. This is the lower detail model for battle scenes.

LoD = Average
As we go further and further down the LoD chain, the blockiness and lack of detail become more and more apparent. But more importantly, consider why this is necessary. For an artist, having your work look as good as possible is a given. Therefore (most of the time) you want to use as many polygons as possible; you want to push as close to the line as you can. But of course, performance issues become a factor. In a normal battle there are usually around six characters on screen at any one time. But in one specific scene (near the end of the game) there is a fight which the designers knew would have fewer characters in, and so they knew they could push the limit a bit further. On the other side of the coin, of course, is when you have more rather than fewer things that you want to render. And that leaves us with this guy:

Barely recognisable, with cubes for hands, is the main world version of Cloud. This low detail model is used when running around the towns and cities of the game, which often contain many characters at once, requiring each one to use fewer polygons.

There are two different kinds of rendering: pre-rendered and real-time. As games require updating based on player input, they must be rendered in real time at a fast enough pace to look smooth. Movies, on the other hand, contain no interactive elements and so the medium is static. This allows the power-intensive computational work of drawing each frame to be done beforehand and then saved out to a simple video file. This allows for greater amounts of detail, since each frame can take minutes or hours to render rather than the milliseconds that games require.

Real-time rendering in games is typically handled by an API, either OpenGL or Direct3D. This negates the need for programmers to write their own low-level graphics routines, but there are pros and cons to both. OpenGL's greatest strength is being multiplatform: on the PC side it will work on Windows, Macs and Linux, whereas Direct3D is the 3D graphics API of Microsoft's DirectX and so is limited to Windows. However, DirectX was designed from the get-go to be used for video games, whereas OpenGL was initially developed with high performance hardware in mind for engineering and CAD software.

There are multiple techniques that 3D artists use to create 3D models. A beginner-level technique is box modelling: as the name implies, the artist starts with a box, adds in more geometry and gradually pulls out the desired shape, increasing the detail until they have the final model. Another technique is known as extrusion modelling. Other techniques involve starting with a sculpting program such as zBrush. Using Dynamesh, an artist can quickly create a high poly, high detail mesh which is then reduced down to a low poly mesh ready for games.

Well, anyway, that's a quick look at Geometric Theory, until next time!

Wednesday, 2 October 2013

Primary and Secondary Sources

Reference images are part and parcel of working in an artistic design role. They are a necessary part of creating a believable world. There are two kinds of sources: Primary and Secondary.

Secondary sources are images obtained second hand. Nowadays this is mostly finding pictures from the internet. 

Primary sources involve obtaining the references yourself, and so getting information first hand.

Obviously there are advantages and disadvantages both ways. Primary sources are limited to what you have available, but they do allow you to capture exactly what it is you want and let you examine it from any angle. Furthermore, the resolution of an image won’t be a worry, since you can ensure that the resolution is always adequate. Secondary sources have all these problems: sometimes it’s difficult to find exactly what you’re looking for, you only get the view angle that the image provides and quite often the resolution and/or quality of the image is sub-par. However, they do have the advantage of being more encompassing of… well, everything.
The final decision from these considerations differs depending on the person looking for reference images and their situation. Students, like me, are encouraged to source our own images for reference, but quite often the things we have available to us are limited, whether by money or circumstance. Therefore, whenever creating less mundane things I would most likely use secondary sources. Big Triple-A companies, however, have far more resources and access. They are much more likely to source their images themselves, although the artists still likely use the internet as a source frequently, just perhaps less so than someone in my situation.

For example, take a video game set in France. For someone in my circumstance, Google would contribute greatly to helping me get the visuals looking right. I could get an image of the Eiffel Tower easily enough and that would probably be more than adequate for me to use. But getting images of the little streets, the insides of the buildings, the signposts, bins, the general layouts of towns, the scale of everything... All of these things and more would be difficult to get good references for from Google. Secondary sources can be useful but aren't specific enough for me to get everything I’d want.
Easy enough to find this






Now, the same example but for a Triple-A studio. They could pay for a small team of people, maybe 2 or 3 to fly to France and stay for a while, taking extensive photographs of many different things from as many angles as possible. When they got home, they would have a veritable treasure trove of imagery from which to base the designs off of. Their designs can then be more detailed and more accurate and also more specific to the real place.

Sunday, 29 September 2013

3D Models and the Industries That Use Them

The development of the modern day computer has spurred along almost all aspects of our lives, including the now all-powerful entity that is the entertainment industry. The creation of digital content has radically altered the media that we consume and the processes that go into making it. If it weren't for the dawn of computer technology then, of course, video games would not exist. Furthermore, the digital age has dramatically affected movies and television, and the use of CGI has (for most people) improved the experience. The content for such visual experiences is often produced by creating 3D models. The technical aspects of those models, however, differ drastically between different forms of media.

A multitude of factors affect the final result of a 3D model. Time, budget, rendering speed and target audience all affect it.

Rendering Speed

First and foremost is how the model is going to be rendered; usually this means pre-rendered or real-time. Unsurprisingly, these terms mean what they say. Real-time rendering involves rendering frames at a fast enough rate that the motion is still fluid and can also be interacted with at any point. Pre-rendering involves very powerful computers (sometimes called render farms) rendering out each frame, one at a time, and can take as much time as there is available, depending on the quality desired. Of course, the number of polygons for something that will be pre-rendered can be much higher than the number used to make something that is being rendered in real time. Real-time work therefore comes at a sacrifice of quality, but abstraction-of-detail techniques such as normal mapping (which is sometimes used in films, but not regularly) can help even out the disparity.

Here is an example of a high poly and low poly mesh. As you can see, the fewer polygons means the surface of the model is more faceted compared to the smoothness of the high poly version.



Video Games

If this were to be used in a video game, the high poly version would be "baked" into a Normal map and sometimes an Ambient Occlusion map which means converting all the high frequency detail into 2D textures which are then applied to the low poly model. The Normal map then works within the engine and helps define how the light reacts against the surfaces and gives the appearance of much more detail without the use of more polygons.

Movies

In a movie, the high poly model would be used directly with a displacement map (for greater low frequency detail), with a different LOD (level of detail) model for the animators to actually work with before rendering time. Of course, because of the greater number of polygons, each frame can't be rendered within a 30th of a second. In fact, for the Transformers movie it took 38 hours to render just one frame of movement. In some extreme cases this can balloon to ridiculous degrees: in Transformers 3, a scene in which a skyscraper is destroyed by a robot took 288 hours per frame!
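To put those per-frame numbers in perspective, here is a quick back-of-the-envelope calculation, assuming a hypothetical five-second shot at 24 fps (render farms obviously spread these hours across many machines working in parallel):

```python
# Back-of-the-envelope: how long would a 5-second, 24 fps shot take at the
# quoted per-frame render times? The shot length is my assumption; the
# per-frame hours are the figures quoted above.
fps, seconds = 24, 5
frames = fps * seconds  # 120 frames

for hours_per_frame in (38, 288):
    total_hours = frames * hours_per_frame
    print(f"{hours_per_frame} h/frame -> {total_hours} machine-hours "
          f"(~{total_hours / 24:.0f} machine-days)")
# 38 h/frame  -> 4560 machine-hours (~190 machine-days)
# 288 h/frame -> 34560 machine-hours (~1440 machine-days)
```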

Overview

As can be seen, the difference in the time available to render something dramatically changes what is and is not possible for the quality of the final model. Smaller productions such as advertising and TV series follow similar guidelines to movies but allow for much shorter render times, meaning the geometry still can't be as high fidelity as that used in big blockbuster films, but also isn't constrained to real-time rendering like games are.

Budget & Time

The budget of a production frequently influences how much time it has before release. As such, these are both related and affect the same parameters in regards to the final model. 

Video Games & Movies

The budgets of modern video games and movies both tend to be in the tens of millions range. The time per model, however, will be drastically different, which means that 3D models used in movies can have more detail in comparison.

Children's TV Series

A TV program for children would have a much smaller budget and a greater time constraint than in video games or movies, requiring work to be done for weekly episodes. This greatly affects the final look of the product; typically they are simpler designs as shown here by The Octonauts. The simpler aesthetic enables the models to be less labor intensive and as such, cheaper and quicker to make.



Overview

Budgets typically affect the number of models as well as influencing their design. Larger amounts of money means more time and therefore more complicated and higher fidelity designs. Movies will usually have the biggest budgets and so their models can be complicated and high fidelity. Video games have similar but slightly smaller budgets and so their models are slightly less complicated and much lower fidelity and finally, TV series tend to have less time than either movies or games and so have less complicated models.

Target Audience


Whenever a piece of commercial entertainment first enters pre-production, one of the foremost decisions companies need to make is which demographic their product is aiming to sell to. These groups can be defined by a combination of gender, age and socio-economic status, amongst other things.

For Young Audiences

Young children's games, movies and TV programs (as well as advertising aimed at them) will use much simpler models, with rounder edges and smoother forms. This, combined with other factors, will decide the final model.
Overview

The final quality and fidelity of an asset for any part of a project is wholly dependent on the intention and constraints that the project itself is under. More money, time and rendering power allow for scenes of greater complexity, providing it's appropriate for the intended audience.

Tuesday, 10 September 2013

Contextual Influences

Dwarf Fortress, initially released in 2006 and in a constant state of development, is a game for PC created by two brothers, Tarn and Zach Adams. Describing the gameplay systems would be too complicated and, since it's really irrelevant to the contextual influences on its art, I shall leave the reader to learn more about the game themselves.
The beautiful ASCII in action

The game world is three dimensional but is displayed to the player in a 2D top-down view. For the graphics, the game uses a slightly modified set of code page 437 characters in 16 different colors. This is often referred to as "ASCII" art, since it uses characters rather than images to convey information. This style of visual representation traces back many years to games which ran on computers that couldn't display images. All of these games are influences to some extent, but perhaps the one that could be considered most influential is Rogue.

Rogue, the original Roguelike
In fact, in the "Adventure Mode" portion of the game (which could be described as an open world roguelike) the character is represented by an "@" symbol, just like the aforementioned Rogue. Also, as seen in the screenshots above, both games use full stops to denote the ground, although Dwarf Fortress has taken this and, in an attempt to create a more visually pleasing aesthetic, uses a mix of full stops and commas, an example of a tiny but noticeable evolution in this graphic style.

Bob Rafei - Video Game Artist

Looking into specific artists or visual designers is not something I usually spend a lot of time doing. I enjoy analyzing and appreciating visual stimuli, especially for video games, since I am looking at them quite often and because of the wide variety of art styles that can be and have been used. Researching the specific artists behind those styles, however, seems unimportant; why does it matter who did it when I can spend time reviewing the final product? Nevertheless, there are some artists that I do recognize by virtue of their work, and one of these is Bob Rafei.

Bob Rafei was the first employee of Naughty Dog, a video games company founded in 1984 as Jam Software. He joined in 1995 and worked on Crash Bandicoot, helping to establish the visual style of the entire series. Bob was involved in many areas of development, from background modelling, lighting and texturing to character rigging and animation. After Crash Bandicoot he worked on what has now become one of my favorite games of all time, Jak & Daxter: The Precursor Legacy.
The design of the main character as he matures through the series.

A colourful and vibrant 3D platformer from the early days of the PS2 era, Jak & Daxter is amazing, a technological marvel as well as an artistic delight. Even now, twelve years after release, the game looks gorgeous. Its combination of bright, highly saturated colors and stylized human-ish characters creates a veritable visual treat.
A screenshot of the HD version of the game.
Now, of course, a lot of different artists go into making an entire 3D scene or level, so the praise may not be entirely his. A look at his concept work gives greater insight into his personal art and may make it easier to explain my love of his work.

His website, http://www.bobrafei.com/, contains a wide selection of his art from many of the games he has worked on. A lot of the art is just concept design sketches, and so doesn't really lend itself to analysis of artistic merit, but nevertheless I shall attempt to do so.
A selection of poses from one of the main characters
This here is Keira. An ally non-player character from the game, she is very physical and expressive in her gesticulations, which lends to creating a sense of her character. I like the way Bob draws the poses using what are essentially overtly dramatic silhouettes, which instil a sense of movement and animation in the images. An example of this exaggeration of the pose is the third at the top: her legs are crossed impossibly far and her waist bent to an extreme, yet it doesn't look unnatural, it merely conveys a very specific emotion. Also, whilst a very common design choice, I still feel as though the addition of large eyes really does enhance the amount of emotion the character can display.

Another work of his that I enjoy is this environment that the game begins in:
The Green Sage's hut
This piece includes color and demonstrates a great use of bright, saturated colors that radiate a certain mood, namely one of safety. It is also great at showing the player the kind of world they are playing in. There is little technology, and the bits that exist border on magic. It is warm, as evidenced by the trees in the back, the bright sky and the warm palette, and although not overly comfortable, it is a place of relaxation and contentment.

Overall I think Rafei did an amazing job on Jak & Daxter. His art direction of bright colors and clear preference for aesthetic over graphical fidelity allows the game to remain great looking. His individual artworks, although perhaps not perfect on a technical, fundamental level, have plenty of character and zest and make for a believable and enjoyable world.