GLESGAE is a vehicle for a series of tutorials on building a game engine for the Pandora console from scratch.<br />
These will be written as I find time.. the uncreated ones below being an indicator of what's to come.<br />
Unwritten parts may be split up, moved, or otherwise rearranged before actually going live, so don't take the following as set in stone until the link turns blue!
== Table of Contents ==
Part One - Setup Stuff!
* [http://pandorawiki.org/index.php?title=GLESGAE#GLESGAE_Overview GLESGAE Overview]
* [http://pandorawiki.org/index.php?title=GLESGAE#Engine_Design_Overview Engine Design Overview]
* [[GLESGAE:Setting Up A Window and Context]]
* [[GLESGAE:The Event and Input Systems]]
Part Two - Show me Stuff!
* [[GLESGAE:Making a Mesh]]
* [[GLESGAE:Fixed Function Rendering Contexts]]
* [[GLESGAE:Shader Based Contexts]]
* [[GLESGAE:The Transform Stack]]
* [[GLESGAE:Fixed Function Transformations]]
* [[GLESGAE:Shader Based Transformations]]
* [[GLESGAE:Dealing with Textures]]
* [[GLESGAE:Making another Mesh with Vertex Buffers]]
Part Three - Manage my Stuff!
* [[GLESGAE:Managing Resources Overview]]
* [[GLESGAE:The Resource Manager]]
* [[GLESGAE:State Management Overview]]
* [[GLESGAE:Implementing The Game States]]
Part Four - The First Evaluation
* [[GLESGAE:First Evaluation Overview]]
* [[GLESGAE:First Evaluation Graphics]]
* [[GLESGAE:First Evaluation Resources]]
Part Five - Make it do Stuff!
* [[GLESGAE:Logic Processing Overview]]
* [[GLESGAE:Dealing with Entities]]
* [[GLESGAE:Playing with Scripts]]
Part Six - Push Stuff around!
* [[GLESGAE:Physics Processing Overview]]
* [[GLESGAE:Implementing Box2D Physics]]
* [[GLESGAE:Implementing Bullet Physics]]
Part Seven - Make Stuff squeak!
* [[GLESGAE:Sound Processing Overview]]
* [[GLESGAE:Implementing OpenAL]]
Part Eight - The Second Evaluation
* [[GLESGAE:Second Evaluation Overview]]
* [[GLESGAE:Second Evaluation Logic]]
* [[GLESGAE:Second Evaluation Physics]]
* [[GLESGAE:Second Evaluation Sound]]
Part Nine - Poke Stuff from afar!
* [[GLESGAE:Networking Overview]]
* [[GLESGAE:A Basic Networking System]]
Part Ten - Advanced Stuff!
* [[GLESGAE:Library Stubs]]
Part Eleven - Tool Stuff!
* [[GLESGAE:Tools Overview]]
* [[GLESGAE:Mesh Converter]]
= GLESGAE Overview =
= Environment Setup =

== I'm Lazy, Give Me A Pre-Configured Thing! ==
Here you go: http://www.stuckiegamez.co.uk/apps/pandora/SimpleDev/zaxxon-premade-dev.tar.bz2 (~250MB)

Extract it to an ext2/3 formatted SD card, and boot. Simple!

'''NOTE:''' ''This is a bit old now, and may have somewhat dodgy WiFi, but I'll get round to fixing this soon (hopefully by 13th May).''

== Tell Me What You Did ==
This weekend is essentially the overview and setup phase, so it's a bit boring, I'm afraid.<br />
To keep everyone on the same page, I'm going to assume you're using Angstrom from an SD card, that you've installed GCC et al on it, and that you'll be booting from it for development purposes.
'''''cd /media/mmcblk0p1'''''<br />
'''''sudo su''''' -- we'll need to be root for this, as we'll have no permission by default to touch this card.<br />
'''''wget -c http://openpandora.org/firmware/pandora-rootfs.tar.bz2''''' -- this grabs us the latest rootfs - though lately, these appear to be very out of sync between Pandora OE and Angstrom OE, so be careful!<br />
'''''tar -xjpf pandora-rootfs.tar.bz2''''' -- you could add v to the arguments if you like.. it'll let you see what it's extracting and is slightly more exciting than just waiting for it to finish! The p is for preserving permissions, x to extract, j for bz2 support and f for file.<br />
'''''rm pandora-rootfs.tar.bz2'''''
'''''nano autoboot.txt'''''<br />
Fill it with the following:<br />
 setenv bootargs root=/dev/mmcblk0p1 rw rootwait vram=6272K omapfb.vram=0:3000K mmc_core.removable=0
 ext2load mmc 0 0x80300000 /boot/uImage-2.6.27.46-omap1
 bootm 0x80300000
'''''./main'''''<br />
'''Interesting Gotcha''' - ''In the rootfs I downloaded (HF5 RC1), ncurses hadn't been installed... '''sudo opkg install libncurses5''' if you get "cannot open shared object file libncurses.so.5" when invoking nano.''
That's all for this week.. of course, you could go and install Geany, or whatever code editor you prefer.<br />
Next time, we shall be opening up a window via badgering X11 directly, and getting a GL ES context up and running.
= GLESGAE - Setting Up A Window and Context =

== Introduction ==
For the most part, opening up a Window and generating a rendering Context is a pretty simple task, and gets you on your way to pushing stuff to the screen.<br />
Of course, doing so in a manner that's open enough to add differing platforms at a later date can be a bit tricky - especially so when some platforms have widely different ideas on what a Window actually is, and how that Window gets created.

For the [[GLESGAE]] engine, we will be using C++ predominantly to abstract things out for us.<br />
That's not to say you couldn't do the same thing in C, but I find that even just the addition of classes makes things cleaner to work with - especially with multiple platforms.

With that out of the way, let's have a look at getting a window opened using X directly.

== Why X? Why not SDL? ==
SDL is good.. I do like SDL.. but SDL can also be a bit heavy - especially if all you're wanting to do is use it to open up a Window!<br />
Granted, generally you want to steal use of its event system as well for access to controllers.. but our Pandora has some rather custom controllers, so even then you're jumping out of SDL to use them.<br />
We're also going to be using GL ES rather than pushing pixels directly, so.. perhaps in this instance, why SDL?

I also like coding as close to the hardware as possible - as then if something goes wrong, it's my own fault, and not some black box library where I'm not sure what's going on inside.

There's also the real possibility of porting your work to a platform that SDL hasn't been ported to yet, or has a poor implementation on. You'd need to start adding a ton of custom classes just for that platform, whereas if you ignored it from the start, your custom class framework already exists, so you're not trying to jerry-rig it in!

Therefore, [[GLESGAE]] will be battering X directly, as that's what the base library is on Linux - and by extension, the Pandora.<br />
I'll also add WinAPI support into the SVN later on, but for now, dealing only with Linux/Pandora is best.

== Checking out the SVN ==
'''''svn co -r 2 http://svn3.xp-dev.com/svn/glesgae/trunk glesgae'''''
This tutorial uses SVN revision 2, so be sure to check that out for the full code.

= Opening A Window =
Actually opening a Window is fairly trivial.<br />
It sometimes looks like a crazy long piece of madness, but you get a lot of control over how your window appears.<br />
WinAPI is especially mad - a huge amount of code just to open a window, but you get quite fine-grained control over it.

As we're aiming to be cross-platform from the outset, we'll split some stuff up out of the way now, so let's go create some directory hierarchy.<br />
* Graphics
** Window
** Context

We'll deal with the Window bit first.
== The Window Class ==
Some systems allow us to create multiple windows, others use the entire screen as one window. In a game application, we're usually only interested in the latter, so we'll design primarily for that use case.
=== Window.h ===
 #ifndef _WINDOW_H_
 #define _WINDOW_H_
 
 namespace GLESGAE
 {
 
 class RenderContext;
 
 class Window
 {
 public:
     Window()
     : mContext(0)
     , mWidth(0U)
     , mHeight(0U)
     {
     }
 
     virtual ~Window() {}
 
     /// Open this window with the given width and height
     virtual void open(const unsigned int width, const unsigned int height) = 0;
 
     /// Set this Window's Render Context
     virtual void setContext(RenderContext* const context) = 0;
 
     /// Get this Window's Render Context - returning the specified type
     template<typename T_Context> const T_Context* getContext() const { return reinterpret_cast<T_Context*>(mContext); }
 
     /// Get this Window's Render Context
     const RenderContext* getContext() const { return mContext; }
 
     /// Get the Width of this Window
     const unsigned int getWidth() const { return mWidth; }
 
     /// Get the Height of this Window
     const unsigned int getHeight() const { return mHeight; }
 
     /// Refresh the Window
     virtual void refresh() = 0;
 
     /// Close this window
     virtual void close() = 0;
 
 protected:
     RenderContext* mContext;
     unsigned int mWidth;
     unsigned int mHeight;
 };
 
 }
 
 #endif
+ | |||
+ | This is our Window class.<br /> | ||
+ | We'll be using the Interface paradigm a lot, which is why these are all pure virtual functions ( means they must be overloaded in derived classes. )<br /> | ||
+ | There's also one funny thing we're doing here with getContext - and that's making it a templated function so we can directly cast to the type we want via '''''GLES1Context* myContext(myWindow->getContext<GLES1Context>());''''' rather than having to pull the pointer out, then waste another line to reinterpret_cast the pointer ourselves.<br /> | ||
+ | Of course, we also specify a standard getContext should we just need to use the standard interface - but grabbing the specific type in one line is particularly handy at times. | ||
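To make that concrete, here's a quick sketch of both flavours in use - assuming you already hold a Window pointer that's had a GLES1Context (which we build further down) set on it:
 // A wee sketch of the two getContext flavours - 'window' stands in for a
 // GLESGAE::Window* you already have, with a GLES1Context set on it.
 const GLESGAE::GLES1Context* typedContext(window->getContext<GLESGAE::GLES1Context>());
 const GLESGAE::RenderContext* genericContext(window->getContext());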
+ | |||
+ | So, let's implement the X11 window open-up-er! | ||
+ | == X11 Window Class == | ||
+ | === X11Window.h === | ||
+ | #ifndef _X11_WINDOW_H_ | ||
+ | #define _X11_WINDOW_H_ | ||
+ | |||
+ | #include "Window.h" | ||
+ | |||
+ | namespace X11 { | ||
+ | #include <X11/Xlib.h> | ||
+ | } | ||
+ | |||
+ | namespace GLESGAE | ||
+ | { | ||
+ | |||
+ | class X11Window : public Window | ||
+ | { | ||
+ | public: | ||
+ | X11Window(); | ||
+ | ~X11Window(); | ||
+ | |||
+ | /// Open this window with the given width and height | ||
+ | void open(const unsigned int width, const unsigned int height); | ||
+ | |||
+ | /// Set this Window's Render Context | ||
+ | void setContext(RenderContext* const context); | ||
+ | |||
+ | /// Refresh the Window | ||
+ | void refresh(); | ||
+ | |||
+ | /// Close this window | ||
+ | void close(); | ||
+ | |||
+ | /// Returns the Display for platform specific bits | ||
+ | X11::Display* const getDisplay() const { return mDisplay; } | ||
+ | |||
+ | /// Returns the Window for platform specific bits | ||
+ | X11::Window const getWindow() const { return mWindow; } | ||
+ | |||
+ | private: | ||
+ | X11::Display* mDisplay; | ||
+ | X11::Window mWindow; | ||
+ | }; | ||
+ | |||
+ | } | ||
+ | |||
+ | #endif | ||
+ | |||
This one does some mad stuff.<br />
We wrap the Xlib include in an X11 namespace. This might seem mad but.. we've just created our own Window class, and X11 has its own Window type, so there's a conflict there.<br />
Luckily with C++, you can use namespaces to separate chunks of code - and that's exactly what we're doing here!
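Just to make the clash concrete, here's a tiny sketch of why the wrapper helps - with Xlib tucked into its own namespace, both Window types can live in the same file, fully qualified:
 // Both Windows can now be named without ambiguity.
 X11::Window nativeWindow(0);    // the Xlib Window handle (really just an XID)
 GLESGAE::Window* ourWindow(0);  // our abstract Window interface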
+ | |||
+ | '''NOTE:''''' I've actually removed the namespace hacks and just renamed everything to stop conflicting... the namespaces for GL and X11 were starting to conflict themselves and it just proved to be a vicious nightmare when implementing shader support... so I redid it all! That said, as we're using SVN here, the SVN repository version and this guide still match up, so feel free to follow through, but remember that it changes later on! '' | ||
+ | |||
+ | === X11Window.cpp === | ||
+ | #include "X11Window.h" | ||
+ | #include "../Context/RenderContext.h" | ||
+ | |||
+ | namespace X11 { | ||
+ | #include <X11/Xlib.h> | ||
+ | } | ||
+ | |||
+ | using namespace GLESGAE; | ||
+ | using namespace X11; | ||
+ | |||
+ | X11Window::X11Window() | ||
+ | : mDisplay(XOpenDisplay(0)) | ||
+ | , mWindow(0) | ||
+ | { | ||
+ | } | ||
+ | |||
+ | X11Window::~X11Window() | ||
+ | { | ||
+ | if (0 != mWindow) | ||
+ | close(); | ||
+ | |||
+ | XDestroyWindow(mDisplay, mWindow); | ||
+ | XCloseDisplay(mDisplay); | ||
+ | } | ||
+ | |||
+ | void X11Window::open(const unsigned int width, const unsigned int height) | ||
+ | { | ||
+ | // Store the width and height. | ||
+ | mWidth = width; | ||
+ | mHeight = height; | ||
+ | |||
+ | // Create the actual window and store the pointer. | ||
+ | mWindow = XCreateWindow(mDisplay // Pointer to the Display | ||
+ | , DefaultRootWindow(mDisplay) // Parent Window | ||
+ | , 0 // X of top-left corner | ||
+ | , 0 // Y of top-left corner | ||
+ | , width // requested width | ||
+ | , height // requested height | ||
+ | , 0 // border width | ||
+ | , CopyFromParent // window depth | ||
+ | , CopyFromParent // window class - InputOutput / InputOnly / CopyFromParent | ||
+ | , CopyFromParent // visual type | ||
+ | , 0 // value mask | ||
+ | , 0); // attributes | ||
+ | |||
+ | // Map the window to the display. | ||
+ | XMapWindow(mDisplay, mWindow); | ||
+ | } | ||
+ | |||
+ | void X11Window::setContext(RenderContext* const context) | ||
+ | { | ||
+ | mContext = context; | ||
+ | } | ||
+ | |||
+ | void X11Window::refresh() | ||
+ | { | ||
+ | mContext->refresh(); | ||
+ | XFlush(mDisplay); // this is a crutch till we handle events.. ignore! | ||
+ | } | ||
+ | |||
+ | void X11Window::close() | ||
+ | { | ||
+ | XDestroyWindow(mDisplay, mWindow); | ||
+ | } | ||
+ | |||
There's our namespace hack again, and some actual code that does something!<br />
As stated in the comment, ignore that XFlush.. it's there because we haven't written anything to deal with events yet, so it flushes all events out.

The XCreateWindow call has all its parameters commented to let you know what each part is.. on a Linux machine, typing '''man XCreateWindow''' will give you the man page for what they all mean. Linux manpages are very useful when coding!

We now need a RenderContext to play with... and this is where the fun begins!
+ | |||
+ | = Render Contexts = | ||
+ | As we're striving for multi-platform goodness, we need to create a base Render Context implementation that they all conform to.<br /> | ||
+ | We also want to ensure that our Window and Render Context stuff is separate from one another - which is what we're doing here - as then we can do things like the X11Window Class, which'll run on standard Linux machines as well as the Pandora. It's not much, but the less duplicated code in an Engine, the easier it is to maintain! | ||
+ | |||
+ | For our purposes, we have two types of Render Contexts we can create on the Pandora - GLES1 and GLES2. | ||
+ | |||
+ | == Discussion - ES1 vs ES2 == | ||
+ | Choosing one over the other is not quite a trivial process as you'd think, as they each have their own little pros and cons.<br /> | ||
+ | ES1 is fixed function and is pretty much designed for low end hardware.<br /> | ||
+ | ES2 is shader based - you have full control over the vertex and fragment processing parts of the graphics pipeline. | ||
+ | |||
+ | If you're not wholly sure what this Graphics Pipeline malarkey is, I suggest you read up on it. Here's some handy links: | ||
+ | * http://en.wikipedia.org/wiki/Graphics_pipeline | ||
+ | * http://developer.nvidia.com/page/documentation.html - particularly the free books on CG and GPU Gems 1-3 - for theory if nothing else. | ||
+ | * http://www.khronos.org/opengles/2_X/ - specially the images at the bottom which shows what gets replaced! | ||
+ | |||
+ | However, as the old adage goes, with great power comes great responsibility. As the images on Khronos' site shows - ES2 replaces a rather large chunk of the fixed function pipeline. You therefore need to deal with the following yourself: | ||
+ | * Matrix Stacks for Model, View, Projection and Textures ( though Texture Stack isn't usually used much. ) | ||
+ | * Alpha Test ( this can be an absolute killer.. ) | ||
+ | * Transform and Lighting ( that's effectively the Matrix Stacks, and lighting is done in either Vertex or Fragment shaders now ) | ||
+ | |||
=== OpenGL 1.5 vs ES1 ===
Generally speaking, OpenGL 1.5 is essentially what ES1 is based on. Both are predominantly fixed function, but ES1 does rip out a number of things:
* No Immediate Mode (glBegin/glEnd).
* Tends to favour fixed point arithmetic.
* No quad rendering.
* Some Stack handling you need to do yourself.
* Display Lists, Accumulators, and a bunch of other stuff...

Generally however, if your app has been using VBOs or Vertex Arrays, there shouldn't be a great deal of change.<br />
One sneaky gotcha is that there are two main ES1 profiles - Common and Common Lite... Common Lite ONLY has fixed point, whereas Common has both.<br />
Additionally, ES1.0 does not support VBOs whereas ES1.1 does.
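To give a flavour of what losing Immediate Mode actually means, here's a minimal sketch - not GLESGAE code, just raw GL ES 1.1 - of drawing a triangle with Vertex Arrays, which is the ES1-friendly replacement for a glBegin/glEnd block:
 // Hedged sketch: plain GL ES 1.1 Vertex Array usage for a single triangle.
 const GLfloat vertices[] = {
       0.0f,  0.5f, 0.0f
     , -0.5f, -0.5f, 0.0f
     ,  0.5f, -0.5f, 0.0f
 };
 
 glEnableClientState(GL_VERTEX_ARRAY);      // we're supplying positions ourselves
 glVertexPointer(3, GL_FLOAT, 0, vertices); // 3 floats per vertex, tightly packed
 glDrawArrays(GL_TRIANGLES, 0, 3);          // draw one triangle
 glDisableClientState(GL_VERTEX_ARRAY);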
+ | |||
+ | === OpenGL 2.0 vs ES2 === | ||
+ | Again, OpenGL 2.0 is essentially what ES2 is based on... but specifically, the programmable pipeline part - all fixed function pipeline functions have now been removed. Therefore, as stated above, you need to deal with: | ||
+ | * Matrix Stacks for Model, View, Projection and Textures ( though Texture Stack isn't usually used much. ) | ||
+ | * Alpha Test ( this can be an absolute killer.. ) | ||
+ | * Transform and Lighting ( that's effectively the Matrix Stacks, and lighting is done in either Vertex or Fragment shaders now ) | ||
+ | Along with: | ||
+ | * Vertex Attributes | ||
+ | * Shader Management | ||
+ | * Headaches - especially when trying to balance load across the vertex and fragment processors! | ||
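As a taster of that "Shader Management" bullet, here's a rough sketch of what compiling and linking a shader pair looks like in GL ES 2 - error checking is omitted, vertexSource and fragmentSource are assumed to be your GLSL ES strings, and this isn't part of GLESGAE yet:
 // Hedged sketch: bare-bones GL ES 2 shader setup, no error checking.
 GLuint vertexShader(glCreateShader(GL_VERTEX_SHADER));
 glShaderSource(vertexShader, 1, &vertexSource, 0);      // vertexSource: your vertex GLSL ES string
 glCompileShader(vertexShader);
 
 GLuint fragmentShader(glCreateShader(GL_FRAGMENT_SHADER));
 glShaderSource(fragmentShader, 1, &fragmentSource, 0);  // fragmentSource: your fragment GLSL ES string
 glCompileShader(fragmentShader);
 
 GLuint program(glCreateProgram());
 glAttachShader(program, vertexShader);
 glAttachShader(program, fragmentShader);
 glLinkProgram(program);
 glUseProgram(program); // every draw call now runs through your shaders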
+ | |||
+ | == The RenderContext Class == | ||
+ | Like Window, we'll have a common interface to deal with things.<br /> | ||
+ | This is only a simple tutorial to instantiate a Window and Context without actually rendering anything as yet, so these classes are still pretty simple. | ||
+ | |||
+ | As we're wanting to support ES1 and ES2, we're going to do things a bit differently from our Window class as well... | ||
=== RenderContext.h ===
 #ifndef _RENDER_CONTEXT_H_
 #define _RENDER_CONTEXT_H_
 
 namespace GLESGAE
 {
 
 class Window;
 
 class RenderContext
 {
 public:
     RenderContext() {}
     virtual ~RenderContext() {}
 
     /// Initialise this Context
     virtual void initialise() = 0;
 
     /// Shutdown this Context
     virtual void shutdown() = 0;
 
     /// Refresh this Context
     virtual void refresh() = 0;
 
     /// Bind to a Window
     virtual void bindToWindow(Window* const window) = 0;
 };
 
 }
 
 #endif

Pretty simple.. no crazy tricks as in Window.
=== FixedFunctionContext.h ===
 #ifndef _FIXED_FUNCTION_CONTEXT_H_
 #define _FIXED_FUNCTION_CONTEXT_H_
 
 namespace GLESGAE
 {
 
 class FixedFunctionContext
 {
 public:
     FixedFunctionContext() {}
     virtual ~FixedFunctionContext() {}
 };
 
 }
 
 #endif

This is where things get a bit more interesting... this class is empty as we're not defining anything yet, so you may be wondering what the point of it actually is.<br />
As stated already, I'm aiming for GLESGAE to be multi-platform... some platforms have both Fixed Function and Shader Based contexts, and it's not to say you can't emulate Fixed Function on a Shader Based context either, so we shall separate the two styles and pull them in using multiple inheritance later on.
=== ShaderBasedContext.h ===
 #ifndef _SHADER_BASED_CONTEXT_H_
 #define _SHADER_BASED_CONTEXT_H_
 
 namespace GLESGAE
 {
 
 class ShaderBasedContext
 {
 public:
     ShaderBasedContext() {}
     virtual ~ShaderBasedContext() {}
 };
 
 }
 
 #endif

Again, empty.. but as stated above.. some platforms can support both.<br />
This'll come in handy when we want to do some development tests on our Desktop machines before pushing over to the Pandora, for instance; or on any other platform we write support for.

In the repository, there is a GLXContext class which'll run under Linux, which shows how easy it is to add support for other Contexts. We're interested in the Pandora here though, so we'll generate Contexts suitable for it now.
== GLES1 Context Class ==
Again, we want to be multi-platform.. we know where the Pandora keeps its EGL and GLES files, but that _can_ change depending on the platform.. so we need to protect this a bit.
=== GLES1Context.h ===
 #ifndef _GLES1_CONTEXT_H_
 #define _GLES1_CONTEXT_H_
 
 #if defined(PANDORA)
 #include <EGL/egl.h>
 #endif
 
 #include "RenderContext.h"
 #include "FixedFunctionContext.h"
 
 namespace GLESGAE
 {
 
 class Window;
 class X11Window;
 
 class GLES1Context : public RenderContext, public FixedFunctionContext
 {
 public:
     GLES1Context();
     ~GLES1Context();
 
     /// Initialise this Context
     void initialise();
 
     /// Shutdown this Context
     void shutdown();
 
     /// Refresh this Context
     void refresh();
 
     /// Bind us to a Window
     void bindToWindow(Window* const window);
 
 private:
     X11Window* mWindow;
     EGLDisplay mDisplay;
     EGLContext mContext;
     EGLSurface mSurface;
 };
 
 }
 
 #endif

GLES1 only supports fixed function, so we bring that in so that when we write the Renderer, we call the right functions on it.<br />
Technically, being dependent on X11Window is a bit naughty.. but we can deal with that later...<br />
Nothing crazy here, so we shall continue...
=== GLES1Context.cpp ===
 #if defined(GLES1)
 
 #include <cstdio>
 
 #include "GLES1Context.h"
 #include "../Window/X11Window.h"
 
 #if defined(PANDORA)
 #include <GLES/gl.h>
 #endif
 
 using namespace GLESGAE;
 
 GLES1Context::GLES1Context()
 : mWindow(0)
 , mDisplay(0)
 , mContext(0)
 , mSurface(0)
 {
 }
 
 GLES1Context::~GLES1Context()
 {
     shutdown();
 }
 
 void GLES1Context::initialise()
 {
     // Get the EGL Display..
     mDisplay = eglGetDisplay( (reinterpret_cast<EGLNativeDisplayType>(mWindow->getDisplay())) );
     if (EGL_NO_DISPLAY == mDisplay) {
         printf("failed to get egl display..\n");
     }
 
     // Initialise the EGL Display
     if (0 == eglInitialize(mDisplay, NULL, NULL)) {
         printf("failed to init egl..\n");
     }
 
     // Now we want to find an EGL Surface that will work for us...
     EGLint eglAttribs[] = {
         EGL_BUFFER_SIZE, 16 // 16bit Colour Buffer
         , EGL_NONE
     };
 
     EGLConfig eglConfig;
     EGLint numConfig;
     if (0 == eglChooseConfig(mDisplay, eglAttribs, &eglConfig, 1, &numConfig)) {
         printf("failed to get context..\n");
     }
 
     // Create the actual surface based upon the list of configs we've just gotten...
     mSurface = eglCreateWindowSurface(mDisplay, eglConfig, reinterpret_cast<EGLNativeWindowType>(mWindow->getWindow()), NULL);
     if (EGL_NO_SURFACE == mSurface) {
         printf("failed to get surface..\n");
     }
 
     // Setup the EGL Context
     EGLint contextAttribs[] = {
         EGL_CONTEXT_CLIENT_VERSION, 1
         , EGL_NONE
     };
 
     // Create our Context
     mContext = eglCreateContext(mDisplay, eglConfig, EGL_NO_CONTEXT, contextAttribs);
     if (EGL_NO_CONTEXT == mContext) {
         printf("failed to get context...\n");
     }
 
     // Bind the Display, Surface and Contexts together
     eglMakeCurrent(mDisplay, mSurface, mSurface, mContext);
 
     // Set up our viewport
     glViewport(0, 0, mWindow->getWidth(), mWindow->getHeight());
 }
 
 void GLES1Context::shutdown()
 {
     eglDestroyContext(mDisplay, mContext);
     eglDestroySurface(mDisplay, mSurface);
     eglTerminate(mDisplay);
 }
 
 void GLES1Context::refresh()
 {
     // Trigger a buffer swap
     eglSwapBuffers(mDisplay, mSurface);
 
     // Clear the buffers for the next frame.
     glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
 }
 
 void GLES1Context::bindToWindow(Window* const window)
 {
     // Remember the Window we're bound to
     mWindow = reinterpret_cast<X11Window*>(window);
 
     // Set the context as us.
     mWindow->setContext(this);
 }
 
 #endif

This one can catch you off guard, as we #ifdef the entire file to ensure it's not compiled if we've specified a GLX context.<br />
I'm also being lazy here in not actually dealing with the failure cases... technically you really should be...

The other thing to get your head around is that there are two Displays here.. there is the EGLDisplay and the X11 Display.<br />
This is just something you need to deal with, as on Android for instance, the "X11 Display" is dalvik's renderer window, but the EGLDisplay calls remain the same. It's for cross-platform compatibility more than anything else.
== GLES2 Context Class ==
ES2 is actually almost exactly the same as ES1 at this point.. just that the class names are different, it pulls in ShaderBasedContext, and what we pass in to the EGLConfig and Context is slightly different.<br />
For completeness' sake, here are the two files anyway.
=== GLES2Context.h ===
 #ifndef _GLES2_CONTEXT_H_
 #define _GLES2_CONTEXT_H_
 
 #if defined(PANDORA)
 #include <EGL/egl.h>
 #endif
 
 #include "RenderContext.h"
 #include "ShaderBasedContext.h"
 
 namespace GLESGAE
 {
 
 class Window;
 class X11Window;
 
 class GLES2Context : public RenderContext, public ShaderBasedContext
 {
 public:
     GLES2Context();
     ~GLES2Context();
 
     /// Initialise this Context
     void initialise();
 
     /// Shutdown this Context
     void shutdown();
 
     /// Refresh this Context
     void refresh();
 
     /// Bind us to a Window
     void bindToWindow(Window* const window);
 
 private:
     X11Window* mWindow;
     EGLDisplay mDisplay;
     EGLContext mContext;
     EGLSurface mSurface;
 };
 
 }
 
 #endif

=== GLES2Context.cpp ===
 #if defined(GLES2)
 
 #include <cstdio>
 
 #include "GLES2Context.h"
 #include "../Window/X11Window.h"
 
 #if defined(PANDORA)
 #include <GLES2/gl2.h>
 #endif
 
 using namespace GLESGAE;
 
 GLES2Context::GLES2Context()
 : mWindow(0)
 , mDisplay(0)
 , mContext(0)
 , mSurface(0)
 {
 }
 
 GLES2Context::~GLES2Context()
 {
     shutdown();
 }
 
 void GLES2Context::initialise()
 {
     // Get the EGL Display..
     mDisplay = eglGetDisplay( (reinterpret_cast<EGLNativeDisplayType>(mWindow->getDisplay())) );
     if (EGL_NO_DISPLAY == mDisplay) {
         printf("failed to get egl display..\n");
     }
 
     // Initialise the EGL Display
     if (0 == eglInitialize(mDisplay, NULL, NULL)) {
         printf("failed to init egl..\n");
     }
 
     // Now we want to find an EGL Surface that will work for us...
     EGLint eglAttribs[] = {
         EGL_BUFFER_SIZE, 16 // 16bit Colour Buffer
         , EGL_RENDERABLE_TYPE, EGL_OPENGL_ES2_BIT // We want an ES2 config
         , EGL_NONE
     };
 
     EGLConfig eglConfig;
     EGLint numConfig;
     if (0 == eglChooseConfig(mDisplay, eglAttribs, &eglConfig, 1, &numConfig)) {
         printf("failed to get context..\n");
     }
 
     // Create the actual surface based upon the list of configs we've just gotten...
     mSurface = eglCreateWindowSurface(mDisplay, eglConfig, reinterpret_cast<EGLNativeWindowType>(mWindow->getWindow()), NULL);
     if (EGL_NO_SURFACE == mSurface) {
         printf("failed to get surface..\n");
     }
 
     // Setup the EGL context
     EGLint contextAttribs[] = {
         EGL_CONTEXT_CLIENT_VERSION, 2
         , EGL_NONE
     };
 
     // Create our Context
     mContext = eglCreateContext(mDisplay, eglConfig, EGL_NO_CONTEXT, contextAttribs);
     if (EGL_NO_CONTEXT == mContext) {
         printf("failed to get context...\n");
     }
 
     // Bind the Display, Surface and Contexts together
     eglMakeCurrent(mDisplay, mSurface, mSurface, mContext);
 
     // Setup the viewport
     glViewport(0, 0, mWindow->getWidth(), mWindow->getHeight());
 }
 
 void GLES2Context::shutdown()
 {
     eglDestroyContext(mDisplay, mContext);
     eglDestroySurface(mDisplay, mSurface);
     eglTerminate(mDisplay);
 }
 
 void GLES2Context::refresh()
 {
     // Trigger a buffer swap
     eglSwapBuffers(mDisplay, mSurface);
 
     // Clear the buffers for the next frame.
     glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
 }
 
 void GLES2Context::bindToWindow(Window* const window)
 {
     // Remember the Window we're bound to
     mWindow = reinterpret_cast<X11Window*>(window);
 
     // Set the context as us.
     mWindow->setContext(this);
 }
 
 #endif

= A Simple Test =
Of course, you probably want to see this in action!<br />
It's a bit anti-climactic, but it works at least.. so let's create a little test program (this is WindowContextTest in the SVN).
== main.cpp ==
 #include <cstdlib>
 
 #if defined(LINUX)
 #include "../../Graphics/Window/X11Window.h"
 #endif
 
 #if defined(GLX)
 #include "../../Graphics/Context/GLXContext.h"
 #elif defined(GLES1)
 #include "../../Graphics/Context/GLES1Context.h"
 #elif defined(GLES2)
 #include "../../Graphics/Context/GLES2Context.h"
 #endif
 
 using namespace GLESGAE;
 
 int main(void)
 {
 #if defined(GLX)
     GLXContext* context(new GLXContext);
 #elif defined(GLES1)
     GLES1Context* context(new GLES1Context);
 #elif defined(GLES2)
     GLES2Context* context(new GLES2Context);
 #endif
 
 #if defined(LINUX)
     X11Window* window(new X11Window);
 #endif
 
     context->bindToWindow(window);
     window->open(800, 480);
     context->initialise();
     window->refresh();
 
     while (1)
         window->refresh();
 }
 
It's not that scary, honest! Most of it is just dealing with which version of Window and RenderContext to pull in.<br />
As said, the SVN has a GLXContext for Linux as well... WinAPI support will be added at some point too.

So, logic wise, we create a Window and a RenderContext.<br />
We then bind the Context to the Window, then open said Window with a specified Width and Height (800 x 480 in this case.)<br />
Then we initialise the Context.<br />
Finally, we go into an infinite loop, refreshing the Window (and in turn, the Context bound to it.)

Simple!

To stop it, just close the Window.. as we're not catching events yet, this will cause it to crash.
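If the crash-on-close bothers you, the eventual fix - which we'll wire up properly once the Event System arrives - is to ask X to send us a polite delete message instead of yanking the connection away. Here's a hedged sketch of the raw Xlib side of it, where display and window stand in for the X11Window's internal handles:
 // Hedged sketch: register interest in WM_DELETE_WINDOW so closing the window
 // delivers a ClientMessage event we can catch, rather than killing the connection.
 // Note: XSetWMProtocols is declared in <X11/Xutil.h>.
 X11::Atom deleteMessage(X11::XInternAtom(display, "WM_DELETE_WINDOW", True));
 X11::XSetWMProtocols(display, window, &deleteMessage, 1);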
+ | |||
+ | == Building the Example == | ||
+ | In the SVN there are Makefiles already setup for you.. just trigger '''''make -f MakefileES1.pandora''''' or whatever your chosen configuration is, and it'll happily build for you and spit out a '''GLESGAE.pandora''' binary for you to run. | ||
+ | |||
+ | Alternatively, if you use CodeLite, there's a Workspace/Project set for you preconfigured. | ||
+ | |||
+ | '''Gotcha''' ''I had an issue about not having -lgcc_s ... I did have /lib/libgcc_s.so.1 so all I had to do was '''sudo ln -s /lib/libgcc_s.so.1 /lib/libgcc_s.so''' and it was happy again.'' | ||
+ | |||
+ | = Next Time = | ||
+ | We'll be looking at an Event Queue next time.. dealing with the Window Events and some of those lovely Pandora controls!<br /> | ||
+ | Then after that, we shall setup the Renderers and get some stuff on screen. | ||
+ | |||
+ | = GLESGAE - Event and Input Systems = | ||
+ | |||
+ | == Fast Track == | ||
+ | We're on SVN revision 3 now so, while I finish writing this up, feel free to grab and mess about:<br /> | ||
+ | '''''svn co -r 3 http://svn3.xp-dev.com/svn/glesgae/trunk/ glesgae''''' | ||
+ | |||
+ | == Introduction == | ||
+ | Dealing with Input - and by extension Events - can be a right pain in the backside.<br /> | ||
+ | Each platform generally comes with it's own set of Events to deal with - even though most of them are pretty common.<br /> | ||
+ | Therefore, each platform should generally have it's own Event System to deal with them, and a global Engine Event System that the game deals with after the platform specifics have been filtered out. | ||
+ | |||
+ | So what game-side events would we be interested in? Input, for one... our application being closed for whatever reason... and for platform specifics; Android has that Activity Lifecycle you need to be aware of and handle.<br /> | ||
+ | In terms of our Pandora, there aren't many platform events we really need to deal with. Our application being closed being perhaps the only immediate one, as well as the lid being shut.<br /> | ||
+ | We also need to deal with the touchscreen ( mouse events ) and keyboard though - and that'll come through X along with the other platform events.<br /> | ||
+ | Our nubs, face buttons, shoulder buttons and dpad we need to deal with manually, however. | ||
+ | |||
+ | Beware though, just handling Input as standard Events is a bit iffy.. so we'll abstract out some Input interfaces so we can perform some more sensible logic such as '''''if(dpad->Left()) moveLeft();''''' rather than polling an event queue and pulling off the events manually. | ||
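To be clear about where that's heading, something along these lines - purely illustrative, the real GLESGAE input classes arrive in a later part - is the sort of interface we want the game to poll, rather than digging through raw events itself:
 // Hypothetical sketch of the kind of Pad interface we're aiming for.
 class Pad
 {
 public:
     virtual ~Pad() {}
     virtual bool Left() const = 0;   // is the dpad currently held left?
     virtual bool Right() const = 0;
     virtual bool Up() const = 0;
     virtual bool Down() const = 0;
 };
 
 // Game code then reads naturally:
 //   if (dpad->Left()) moveLeft();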
+ | |||
+ | = The Event System = | ||
+ | As said, our Event System will come in two parts - a generic version that a game will use, and a platform specific one that filters interesting events and such like.<br /> | ||
+ | We'll start with the generic one first, but we'll switch between them during the rest of the weekend. | ||
+ | |||
+ | == Generic Event System == | ||
+ | I'm going to model the event system sortof on the Android Activity life cycle - more that that's the weirdest setup you'll likely come across, and it's very painful to take something that doesn't conform in the slightest, and try get it working in this style. | ||
+ | |||
+ | So, we need to think about a generic set of Events that could happen that we'd be interested in. Android has the following in the Activity cycle:<br /> | ||
+ | * onCreate | ||
+ | * onStart | ||
+ | * onPause | ||
+ | * onResume | ||
+ | * onRestart | ||
+ | * onStop | ||
+ | * onDestroy | ||
+ | |||
+ | For our purposes, we can simplify this to:<br /> | ||
+ | * onStart | ||
+ | * onPause | ||
+ | * onResume | ||
+ | * onDestroy | ||
+ | |||
+ | '''Gotcha''' ''I've missed out onCreate and gone to onStart ... onCreate is not guaranteed to be the first one run! If dalvik has decided to cache your library for whatever reason, this can be skipped. But I digress, this is Android stuff, not Pandora!'' | ||
+ | |||
+ | You might be sitting there thinking "why bother with the Android stuff if we're doing Pandora related things?"<br /> | ||
+ | Well, think of it like this.. we have a lid that we can close. We'd want to trigger an onPause style event when that happens, and an onResume event when it's lifted.<br /> | ||
+ | It's also handy to fire an event round during creation and destruction of our app, as we'll be taking the Observer pattern for the Event System, so being able to prod things to start up and shut themselves down is particulaly handy. And obviously, an onDestroy event can be triggered via quitting the game via closing the window, or whatever other means. | ||
+ | |||
+ | === Event.h === | ||
+ | #ifndef _EVENT_H_ | ||
+ | #define _EVENT_H_ | ||
+ | |||
+ | #include "EventTypes.h" | ||
+ | |||
+ | namespace GLESGAE | ||
+ | { | ||
+ | class Event | ||
+ | { | ||
+ | public: | ||
+ | Event(const EventType& eventType) : mEventType(eventType) {} | ||
+ | virtual ~Event() {} | ||
+ | |||
+ | /// Get the type of event this is.. useful for classes which mark themselves as EventObservers for multiple Events | ||
+ | const EventType& getEventType() const { return mEventType; } | ||
+ | |||
+ | private: | ||
+ | EventType mEventType; | ||
+ | }; | ||
+ | } | ||
+ | |||
+ | #endif | ||
+ | |||
Event is fairly straightforward. The archetypal base class for polymorphic nightmares, with a bit of custom RTTI thrown in for good measure.<br />
EventType itself is defined in EventTypes.h, which you can find below:
=== EventTypes.h ===
 #ifndef _EVENT_TYPES_H_
 #define _EVENT_TYPES_H_
 
 #include <string>
 
 namespace GLESGAE
 {
 
 typedef std::string EventType;
 
 namespace SystemEvents
 {
     namespace App
     {
         extern EventType Started;
         extern EventType Paused;
         extern EventType Resumed;
         extern EventType Destroyed;
     }
 
     namespace Window
     {
         extern EventType Opened;
         extern EventType Resized;
         extern EventType Closed;
     }
 }
 
 }
 
 #endif

This is where things start to get a little strange.<br />
We're using namespaces here to separate out the Event Types. This stops us from polluting the main namespace with global spam; and although these are still effectively globals, they're at least organised spam!<br />
Another thing is that we've typedefined EventType to a string. We've typedefined it so we can change it to something a bit less heavy to check against later, but strings made the most sense at the time - code the obvious thing first, optimise later.. or K.I.S.S - Keep It Simple, Stupid ;)

The externs are, of course, defined in the cpp file. It's just the namespaces again with the strings filled in, so I won't bother printing up the file - it's in the SVN if you want it anyway!
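For the curious, the typedef is what makes the later optimisation painless - something along these lines (purely a sketch, it's not what's in the SVN) would keep every call site identical while comparing integers instead of strings:
 // Hedged sketch: a hashed EventType that compares as an integer.
 // Not in the repository - just showing the flexibility the typedef buys us.
 typedef unsigned int EventType;
 
 inline EventType hashEventType(const char* name)
 {
     unsigned int hash(5381U); // djb2-style string hash
     while (*name)
         hash = ((hash << 5) + hash) + static_cast<unsigned int>(*name++);
     return hash;
 }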
+ | |||
+ | == The Common Event System == | ||
+ | As stated, we have a generic Event System and then a platform specific one.<br /> | ||
+ | Our generic one is pretty powerful in itself though.. let's have a look at the interface: | ||
+ | |||
=== EventSystem.h ===
 #ifndef _EVENT_SYSTEM_H_
 #define _EVENT_SYSTEM_H_
 
 #include "EventTypes.h"
 // This defines general purpose Event System logic.
 // Each of the platform specific headers ( included below ) extend this.
 
 #include <map>
 #include <vector>
 
 namespace GLESGAE
 {
 
 class Event;
 class EventObserver;
 class EventTrigger;
 class Window;
 
 class CommonEventSystem
 {
 public:
     CommonEventSystem()
     : mEventObservers()
     , mEventTriggers()
     {
     }
 
     virtual ~CommonEventSystem() {}
 
     /// Update the Event System.
     virtual void update() = 0;
 
     /// Bind to specified Window.
     virtual void bindToWindow(Window* const window) = 0;
 
     /// Register an Event Type.
     void registerEventType(const EventType& eventType);
 
     /// Register an Event Observer with Event Type.
     void registerObserver(const EventType& eventType, EventObserver* const observer);
 
     /// Deregister an Event Observer from Event Type.
     void deregisterObserver(const EventType& eventType, EventObserver* const observer);
 
     /// Register a Custom Event Trigger.
     void registerEventTrigger(const EventType& eventType, EventTrigger* const trigger);
 
     /// Deregister a Custom Event Trigger.
     void deregisterEventTrigger(const EventType& eventType, EventTrigger* const trigger);
 
     /// Send an Event to all Observers of this type.
     /// If you want to retain this event beyond it being in the receiving scope, you'll have to copy it.
     void sendEvent(const EventType& eventType, Event* event);
 
     /// Update all the Triggers to send Events if necessary.
     void updateAllTriggers();
 
 protected:
     std::map<EventType, std::vector<EventObserver*> > mEventObservers; //!< Outer = Event Type; Inner = Array of Event Observers for this Event Type
     std::map<EventType, EventTrigger*> mEventTriggers;
 };
 
 }
 
 #if defined(LINUX)
 #include "X11/X11EventSystem.h"
 #endif
 
 #endif

'''Oopsie''' ''In my mad dash to get this finished and put up, I've only just noticed a small hiccup in my logic - updateAllTriggers should be protected or private, as we only really want to call it from update, and not let random outside forces prod an event through any time they feel like it!''

Fairly weighty class, methinks!<br />
The comments describe pretty much what each function does, but I'll describe a few of the more interesting ones.

Firstly, the class is CommonEventSystem rather than EventSystem.. and then it does something strange by pulling in a platform specific header at the bottom. Why? So you can platform-agnostically pull in "EventSystem.h" and it'll grab the correct platform Event System for you, without you having to deal with it on your side.<br />
It does kind of go against the grain of Class Name = File Name... but for sanity's sake in your application, it's much better, believe me!

Event Systems generally don't work unless they have a handle to something the OS has given them. This is usually the window, which is why we bind the Event System to a Window via the bindToWindow call. It's rather platform specific, so it's set up as a pure virtual to force it to be overloaded further up the chain.. this means it can be recast to a Window that's specific to that platform to get all the gooey bits out - X11Display, etc..

Now we get to the Event handling bit itself.<br />
Generally, we'd be going through the following process (see the sketch after this explanation):<br />
* registerEventType
* registerEventTrigger
* perhaps multiple registerObservers

So, we'd register an Event type.. this creates space in our Observers array. Optionally, we could (and I might do this later) extend the Trigger map to work in the same vein.<br />
We then register an Event Trigger, and bind as many Event Observers to that type as we want.<br />
This basically boils down to: Triggers send events, Observers wait for them. Nice and simple!

When building your classes you can inherit from EventTrigger and EventObserver, and overload their respective functions.<br />
The platform specific Input Systems pull in EventObserver and register themselves as Observers of many Events. This is very useful, but it does give you extra overhead in having to figure out what Event you've been given, and recasting it for access. This is why EventType has been typedefined, so we can change it later, as parsing strings isn't cheap! It does make it pretty straightforward to see what's going on though, so for the time being, it's particularly useful. We'll need a Resource Manager or some form of CRC-like encoding class further down the line, and these will all be changed to use that.
+ | |||
+ | === EventTrigger.h === | ||
+ | #ifndef _EVENT_TRIGGER_H_ | ||
+ | #define _EVENT_TRIGGER_H_ | ||
+ | |||
+ | namespace GLESGAE | ||
+ | { | ||
+ | class Event; | ||
+ | class EventTrigger | ||
+ | { | ||
+ | public: | ||
+ | EventTrigger() {} | ||
+ | virtual ~EventTrigger() {} | ||
+ | |||
+ | /// Check if this trigger has an Event ready. | ||
+ | /// This MUST return 0 if there is no Event, otherwise it's the event pointer. | ||
+ | /// This event WILL be cleaned up by the Event System once all Observers have observed the event. | ||
+ | virtual Event* hasEvent() = 0; | ||
+ | }; | ||
+ | } | ||
+ | |||
+ | #endif | ||
+ | |||
+ | Nice and simple, does what it says on the tin.<br /> | ||
+ | Make note of the comment though - as there's only the one function, it's checked to see if you return something.. if you do not have an Event when this function is called, you must return 0! | ||
+ | |||
+ | === EventObserver.h === | ||
+ | #ifndef _EVENT_OBSERVER_H_ | ||
+ | #define _EVENT_OBSERVER_H_ | ||
+ | |||
+ | namespace GLESGAE | ||
+ | { | ||
+ | class Event; | ||
+ | class EventObserver | ||
+ | { | ||
+ | public: | ||
+ | EventObserver() {} | ||
+ | virtual ~EventObserver() {} | ||
+ | |||
+ | /// Trigger this Observer to receive an event. | ||
+ | virtual void receiveEvent(Event* const event) = 0; | ||
+ | }; | ||
+ | } | ||
+ | |||
+ | #endif | ||
+ | |||
Perhaps even simpler... it's up to the derived class to deal with the event, check types, etc..

'''Gotcha''' ''I haven't quite mentioned this yet as we haven't got to it, but marking this here seems rather important. When receiveEvent is called from the EventSystem, it'll be from the sendEvent function, which takes an unprotected pointer because it WILL delete your event after sending it to all Observers. This is important to note, as when your receiveEvent gets triggered, if you want to keep that Event around for whatever reason, you will have to perform a deep copy on it - a shallow copy will get wiped out!''<br />
Why do we do this? So you can effectively "fire and forget" your events, and have the system clean up after you.
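In practice, that deep copy just means pulling the data you need out of the event before returning - sketched here against the MovedEvent we define a little further down, inside a made-up MouseTracker observer:
 // Hedged sketch: keeping event data around after sendEvent has cleaned the event up.
 void MouseTracker::receiveEvent(GLESGAE::Event* const event)
 {
     using namespace GLESGAE::X11Events::Input;
     if (event->getEventType() == Mouse::Moved) {
         Mouse::MovedEvent* moved(reinterpret_cast<Mouse::MovedEvent*>(event));
         mLastX = moved->getX(); // copy the values out..
         mLastY = moved->getY(); // ..never store the pointer - it's about to be deleted!
     }
 }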
+ | |||
+ | == The X11 Event System == | ||
+ | Now onto the actual Event System we'll be using on our Pandoras. | ||
+ | |||
+ | It comes with it's own set of Events, so we'll define them as follows: | ||
=== X11Events.h ===
 #ifndef _X11_EVENT_TYPES_H_
 #define _X11_EVENT_TYPES_H_
 
 #include "../EventTypes.h"
 #include "../Event.h"
 
 namespace X11
 {
 #include <X11/Xlib.h>
 }
 
 namespace GLESGAE
 {
 
 namespace X11Events
 {
     namespace Input
     {
         namespace Mouse
         {
             extern EventType Moved;
             extern EventType ButtonDown;
             extern EventType ButtonUp;
 
             class MovedEvent : public Event
             {
             public:
                 MovedEvent(int x, int y)
                 : Event(Moved)
                 , mX(x)
                 , mY(y)
                 {
                 }
 
                 /// retrieve the new X co-ord
                 const int getX() const { return mX; }
 
                 /// retrieve the new Y co-ord
                 const int getY() const { return mY; }
 
             private:
                 int mX;
                 int mY;
             };
 
             class ButtonDownEvent : public Event
             {
             public:
                 ButtonDownEvent(unsigned int button)
                 : Event(ButtonDown)
                 , mButton(button)
                 {
                 }
 
                 /// retrieve the button pressed
                 const unsigned int getButton() const { return mButton; }
 
             private:
                 unsigned int mButton;
             };
 
             class ButtonUpEvent : public Event
             {
             public:
                 ButtonUpEvent(unsigned int button)
                 : Event(ButtonUp)
                 , mButton(button)
                 {
                 }
 
                 /// retrieve the button released
                 const unsigned int getButton() const { return mButton; }
 
             private:
                 unsigned int mButton;
             };
         }
 
         namespace Keyboard
         {
             extern EventType KeyDown;
             extern EventType KeyUp;
 
             class KeyDownEvent : public Event
             {
             public:
                 KeyDownEvent(X11::KeySym key)
                 : Event(KeyDown)
                 , mKey(key)
                 {
                 }
 
                 /// retrieve the key pressed
                 const X11::KeySym getKey() const { return mKey; }
 
             private:
                 X11::KeySym mKey;
             };
 
             class KeyUpEvent : public Event
             {
             public:
                 KeyUpEvent(X11::KeySym key)
                 : Event(KeyUp)
                 , mKey(key)
                 {
                 }
 
                 /// retrieve the key released
                 const X11::KeySym getKey() const { return mKey; }
 
             private:
                 X11::KeySym mKey;
             };
         }
     }
 }
 
 }
 
 #endif

You should be used to my mad coding style by now, and this is pretty straightforward anyway.<br />
We define the Event Types as externs as usual (so we only ever have one copy of each in the .cpp file, rather than recreating instances of them all over the place,) and then define the Events themselves. As they're simple, and effectively need to be fast and inlined, I've defined them all in the header.<br />
The only oddness is that our X11 namespace hack makes a re-appearance, but other than that, it's fairly typical.

=== X11EventSystem.h ===
 #ifndef _X11_EVENT_SYSTEM_H_
 #define _X11_EVENT_SYSTEM_H_
 
 namespace GLESGAE
 {
 
 class Window;
 class X11Window;
 
 class EventSystem : public CommonEventSystem
 {
 public:
     EventSystem();
     ~EventSystem();
 
     /// Update the Event System
     void update();
 
     /// Bind to the Window
     void bindToWindow(Window* const window);
 
 private:
     bool mActive;
     X11Window* mWindow;
     int mCurrentPointerX;
     int mCurrentPointerY;
 };
 
 }
 
 #endif

+ | |||
+ | Not a great deal in here, is there?<br /> | ||
+ | We just overload the standard CommonEventSystem interface, and store an X11Window pointer directly rather than a generic Window pointer. | ||
+ | |||
+ | The fun stuff is in the cpp... | ||
+ | === X11EventSystem.cpp === | ||
+ | #if defined(LINUX) | ||
+ | |||
+ | #include <cstdio> | ||
+ | |||
+ | #include "../EventSystem.h" | ||
+ | #include "../EventTypes.h" | ||
+ | #include "../SystemEvents.h" | ||
+ | #include "X11Events.h" | ||
+ | #include "../../Graphics/Window/X11Window.h" | ||
+ | |||
+ | namespace X11 { | ||
+ | #include <X11/Xlib.h> | ||
+ | } | ||
+ | |||
+ | using namespace GLESGAE; | ||
+ | |||
+ | EventSystem::EventSystem() | ||
+ | : CommonEventSystem() | ||
+ | , mActive(true) | ||
+ | , mWindow(0) | ||
+ | , mCurrentPointerX(0) | ||
+ | , mCurrentPointerY(0) | ||
+ | { | ||
+ | // Register System Events | ||
+ | registerEventType(SystemEvents::App::Started); | ||
+ | registerEventType(SystemEvents::App::Paused); | ||
+ | registerEventType(SystemEvents::App::Resumed); | ||
+ | registerEventType(SystemEvents::App::Destroyed); | ||
+ | |||
+ | registerEventType(SystemEvents::Window::Opened); | ||
+ | registerEventType(SystemEvents::Window::Resized); | ||
+ | registerEventType(SystemEvents::Window::Closed); | ||
+ | |||
+ | // Register X11 Specific Events | ||
+ | registerEventType(X11Events::Input::Mouse::Moved); | ||
+ | registerEventType(X11Events::Input::Mouse::ButtonDown); | ||
+ | registerEventType(X11Events::Input::Mouse::ButtonUp); | ||
+ | |||
+ | registerEventType(X11Events::Input::Keyboard::KeyDown); | ||
+ | registerEventType(X11Events::Input::Keyboard::KeyUp); | ||
+ | } | ||
+ | |||
+ | EventSystem::~EventSystem() | ||
+ | { | ||
+ | |||
+ | } | ||
+ | |||
+ | void EventSystem::bindToWindow(Window* const window) | ||
+ | { | ||
+ | mWindow = reinterpret_cast<X11Window*>(window); | ||
+ | } | ||
+ | |||
+ | |||
+ | void EventSystem::update() | ||
+ | { | ||
+ | // Deal with the pointer first. | ||
+ | X11::Window rootReturn; | ||
+ | X11::Window childReturn; | ||
+ | int rootXReturn; | ||
+ | int rootYReturn; | ||
+ | int pointerX; | ||
+ | int pointerY; | ||
+ | unsigned int maskReturn; | ||
+ | |||
+ | if (true == X11::XQueryPointer(mWindow->getDisplay(), mWindow->getWindow() | ||
+ | , &rootReturn, &childReturn | ||
+ | , &rootXReturn, &rootYReturn | ||
+ | , &pointerX, &pointerY, &maskReturn)) { | ||
+ | if ((pointerX != mCurrentPointerX) || (pointerY != mCurrentPointerY)) | ||
+ | sendEvent(X11Events::Input::Mouse::Moved, new X11Events::Input::Mouse::MovedEvent(pointerX, pointerY)); | ||
+ | mCurrentPointerX = pointerX; | ||
+ | mCurrentPointerY = pointerY; | ||
+ | } | ||
+ | |||
+ | // Rest of the events... | ||
+ | X11::XEvent event; | ||
+ | while (X11::XPending(mWindow->getDisplay())) { | ||
+ | X11::XNextEvent(mWindow->getDisplay(), &event); | ||
+ | |||
+ | switch (event.type) { | ||
+ | case Expose: | ||
+ | if (event.xexpose.count != 0) | ||
+ | break; | ||
+ | break; | ||
+ | case ConfigureNotify: | ||
+ | sendEvent(SystemEvents::Window::Resized, new SystemEvents::Window::ResizedEvent(event.xconfigure.width, event.xconfigure.height)); | ||
+ | break; | ||
+ | |||
+ | case KeyPress: | ||
+ | sendEvent(X11Events::Input::Keyboard::KeyDown, new X11Events::Input::Keyboard::KeyDownEvent(X11::XLookupKeysym(&event.xkey, 0))); | ||
+ | break; | ||
+ | |||
+ | case KeyRelease: | ||
+ | sendEvent(X11Events::Input::Keyboard::KeyUp, new X11Events::Input::Keyboard::KeyUpEvent(X11::XLookupKeysym(&event.xkey, 0))); | ||
+ | break; | ||
+ | |||
+ | case ButtonRelease: | ||
+ | sendEvent(X11Events::Input::Mouse::ButtonUp, new X11Events::Input::Mouse::ButtonUpEvent(event.xbutton.button)); | ||
+ | break; | ||
+ | |||
+ | case ButtonPress: | ||
+ | sendEvent(X11Events::Input::Mouse::ButtonDown, new X11Events::Input::Mouse::ButtonDownEvent(event.xbutton.button)); | ||
+ | break; | ||
+ | |||
+ | case ClientMessage: | ||
+ | if (0 == std::strcmp(X11::XGetAtomName(mWindow->getDisplay(), event.xclient.message_type), "WM_PROTOCOLS")) { | ||
+ | sendEvent(SystemEvents::Window::Closed, new SystemEvents::Window::ClosedEvent); | ||
+ | sendEvent(SystemEvents::App::Destroyed, new SystemEvents::App::DestroyedEvent); | ||
+ | } | ||
+ | else if (event.xclient.data.l[0] == mWindow->getDeleteMessage()) { | ||
+ | sendEvent(SystemEvents::Window::Closed, new SystemEvents::Window::ClosedEvent); | ||
+ | sendEvent(SystemEvents::App::Destroyed, new SystemEvents::App::DestroyedEvent); | ||
+ | } | ||
+ | |||
+ | break; | ||
+ | default: | ||
+ | break; | ||
+ | } | ||
+ | } | ||
+ | } | ||
+ | |||
+ | #endif | ||
+ | |||
+ | Now we have some actual code.<br /> | ||
+ | First off, we register all the System and X11 specific events we'll be sending. This allows us to create the arrays and set things up so that other systems can observe these events, and we can actually send them. | ||
+ | |||
+ | When we bind the Window, we recast it to our specific type and store it.. nothing else really needed. | ||
+ | |||
+ | Going through the update function, however... is a bit more involved.<br /> | ||
+ | I've split it into handling the Pointer co-ordinates first, as they're dealt with slightly differently, and then running through the standard Event Loop to catch the things we're interested in. | ||
+ | |||
+ | The pointer stuff is pretty straightforward - store the current position and, if it's moved, send an Event. It's all API gubbins really, and you can read up on it if you like.<br /> | ||
+ | One hidden gotcha is that XQueryPointer will give you the position of the mouse pointer as long as it's on the same screen as your application - therefore, you can get into a situation where you're receiving co-ordinates that are outside your window! So be careful! | ||
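+ | If that matters to your game, you can always ignore anything outwith the window bounds before sending the Moved Event - something like the check below, assuming your Window class can tell you its width and height ( getWidth/getHeight are hypothetical here, as mine doesn't expose them yet ) : | ||
+ | if ((pointerX >= 0) && (pointerY >= 0) | ||
+ | && (pointerX < static_cast<int>(mWindow->getWidth())) | ||
+ | && (pointerY < static_cast<int>(mWindow->getHeight()))) { | ||
+ | // the pointer really is inside our window, so it's safe to send the Moved Event | ||
+ | } | ||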
+ | |||
+ | The Event Queue itself is also API gubbins for the most part, and sending Events for the parts we're interested in.<br /> | ||
+ | Of note is the Client Message where we detect the closure of the window... these are generally handled via the WM_PROTOCOLS set, or via a special delete message atom... so we check for both, just in case. | ||
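+ | For completeness, the delete message atom that getDeleteMessage() hands back is set up on the Window side when it's created - standard Xlib gubbins, roughly like this sketch ( not a straight copy of the X11Window code; display and window stand in for the X11Window's own handles ) : | ||
+ | X11::Atom deleteMessage(X11::XInternAtom(display, "WM_DELETE_WINDOW", false)); | ||
+ | X11::XSetWMProtocols(display, window, &deleteMessage, 1); | ||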
+ | |||
+ | = The Input System = | ||
+ | The general Input System is pretty basic, and just defines an interface for the platform specific variants to conform to.<br /> | ||
+ | Inputs generally link up with Events.. basic events being Keyboard and Pointers from the Windowing System.<br /> | ||
+ | Joysticks and other things can come from X11 as well.. though they're not currently supported due to lack of time, motivation and testing equipment. | ||
+ | |||
+ | == Controllers == | ||
+ | We're again abusing namespaces here to separate things out. This does give us huge amounts of control over generic Controllers like Pads and Joysticks as we can define our own button sets pretty quickly - see the PandoraInputSystem for more details on the Pandora Buttons Set. | ||
+ | |||
+ | Generally, these are all pretty similar, so check the code out if you want more info, for I'll just define the PadController here... | ||
+ | === Pad.h === | ||
+ | #ifndef _PAD_H_ | ||
+ | #define _PAD_H_ | ||
+ | |||
+ | #include "Controller.h" | ||
+ | #include "ControllerTypes.h" | ||
+ | |||
+ | #include <vector> | ||
+ | |||
+ | namespace GLESGAE | ||
+ | { | ||
+ | class InputSystem; | ||
+ | namespace Controller | ||
+ | { | ||
+ | class PadController : public CommonController | ||
+ | { | ||
+ | friend class GLESGAE::InputSystem; | ||
+ | public: | ||
+ | PadController(const Controller::Id controllerId, const int buttons) | ||
+ | : CommonController(Pad, controllerId) | ||
+ | , mButtons(buttons) | ||
+ | { | ||
+ | |||
+ | } | ||
+ | |||
+ | /// Get the value of the specified button | ||
+ | const float getButton(const Button button) const | ||
+ | { | ||
+ | // TODO: actual error checking | ||
+ | return mButtons[button]; | ||
+ | } | ||
+ | |||
+ | /// Get amount of buttons | ||
+ | const unsigned int getNumButtons() const { return mButtons.size(); } | ||
+ | |||
+ | protected: | ||
+ | /// Set Button data | ||
+ | void setButton(const Button button, const float data) | ||
+ | { | ||
+ | // TODO: actual error checking | ||
+ | mButtons[button] = data; | ||
+ | } | ||
+ | |||
+ | private: | ||
+ | std::vector<float> mButtons; | ||
+ | }; | ||
+ | } | ||
+ | } | ||
+ | |||
+ | #endif | ||
+ | |||
+ | Again, my lack of error checking is shocking, so I shall leave that up to you! | ||
+ | |||
+ | Literally all a Pad is, is an array of buttons. We've defined it as an array of floats in case we have pressure sensitive buttons, but we could get away with booleans for simple digital pads that are just on/off. | ||
+ | |||
+ | We've also protected the set functions, and made the Input System a friend so it can access them.<br /> | ||
+ | This is to stop accidental setting of the buttons from random systems, as that'd just confuse everything. | ||
+ | |||
+ | CommonController is just a base class which contains a Type and Id of the Controller.. Type being Pad, Keyboard, etc.. and the Id being a unique identifier for this Controller. Again, read the SVN if you want more details, as it's pretty straight forward. | ||
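+ | For the curious, it boils down to something like this, with Type and Id ( and values such as Pad and Keyboard ) coming from ControllerTypes.h - a trimmed sketch, so check the SVN for the genuine article: | ||
+ | // lives in GLESGAE::Controller, alongside the types in ControllerTypes.h | ||
+ | class CommonController | ||
+ | { | ||
+ | public: | ||
+ | CommonController(const Type type, const Id id) : mType(type), mId(id) {} | ||
+ | virtual ~CommonController() {} | ||
+ | |||
+ | /// Which kind of Controller this is - Pad, Keyboard, etc. | ||
+ | const Type getType() const { return mType; } | ||
+ | |||
+ | /// Unique identifier for this Controller. | ||
+ | const Id getId() const { return mId; } | ||
+ | |||
+ | private: | ||
+ | Type mType; | ||
+ | Id mId; | ||
+ | }; | ||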
+ | |||
+ | == Pandora Input System == | ||
+ | The Pandora Input System is mostly the Linux Input System with the addition of using libPND to access the nubs and dpad/face buttons.<br /> | ||
+ | Like the Event System, the Inputs get automatically updated for you during the Input System update, so you don't need to poll manually. | ||
+ | |||
+ | We could have abused the Event System further by tagging each action as an event to watch, but that's perhaps a bit overboard. | ||
+ | |||
+ | So let's have a look at the Common Input System first.. it's pretty big but defines everything we'd ever want, really. | ||
+ | === InputSystem.h === | ||
+ | #ifndef _INPUT_SYSTEM_H_ | ||
+ | #define _INPUT_SYSTEM_H_ | ||
+ | |||
+ | #include "ControllerTypes.h" | ||
+ | |||
+ | namespace GLESGAE | ||
+ | { | ||
+ | namespace Controller | ||
+ | { | ||
+ | class KeyboardController; | ||
+ | class PointerController; | ||
+ | class JoystickController; | ||
+ | class PadController; | ||
+ | } | ||
+ | class CommonInputSystem | ||
+ | { | ||
+ | public: | ||
+ | CommonInputSystem() {} | ||
+ | virtual ~CommonInputSystem() {} | ||
+ | |||
+ | /// Update the Input System | ||
+ | virtual void update() = 0; | ||
+ | |||
+ | /// Retrieve number of Active Keyboards. | ||
+ | virtual const Controller::Id getNumberOfKeyboards() const = 0; | ||
+ | |||
+ | /// Retrieve number of Active Joysticks. | ||
+ | virtual const Controller::Id getNumberOfJoysticks() const = 0; | ||
+ | |||
+ | /// Retrieve number of Active Pads. | ||
+ | virtual const Controller::Id getNumberOfPads() const = 0; | ||
+ | |||
+ | /// Retrieve number of Active Pointers. | ||
+ | virtual const Controller::Id getNumberOfPointers() const = 0; | ||
+ | |||
+ | /// Create new Keyboard - will return NULL if no more available. | ||
+ | virtual Controller::KeyboardController* const newKeyboard() = 0; | ||
+ | |||
+ | /// Create new Joystick - will return NULL if no more available. | ||
+ | virtual Controller::JoystickController* const newJoystick() = 0; | ||
+ | |||
+ | /// Create new Pad - will return NULL if no more available. | ||
+ | virtual Controller::PadController* const newPad() = 0; | ||
+ | |||
+ | /// Create new Pointer - will return NULL if no more available. | ||
+ | virtual Controller::PointerController* const newPointer() = 0; | ||
+ | |||
+ | /// Grab another instance of the specified Keyboard - returns NULL if not created. | ||
+ | virtual Controller::KeyboardController* const getKeyboard(const Controller::Id id) = 0; | ||
+ | |||
+ | /// Grab another instance of the specified Joystick - returns NULL if not created. | ||
+ | virtual Controller::JoystickController* const getJoystick(const Controller::Id id) = 0; | ||
+ | |||
+ | /// Grab another instance of the specified Pointer - returns NULL if not created. | ||
+ | virtual Controller::PointerController* const getPointer(const Controller::Id id) = 0; | ||
+ | |||
+ | /// Grab another instance of the specified Pad - returns NULL if not created. | ||
+ | virtual Controller::PadController* const getPad(const Controller::Id id) = 0; | ||
+ | |||
+ | /// Destroy a Keyboard. | ||
+ | virtual void destroyKeyboard(Controller::KeyboardController* const keyboard) = 0; | ||
+ | |||
+ | /// Destroy a Joystick. | ||
+ | virtual void destroyJoystick(Controller::JoystickController* const joystick) = 0; | ||
+ | |||
+ | /// Destroy a Pad. | ||
+ | virtual void destroyPad(Controller::PadController* const pad) = 0; | ||
+ | |||
+ | /// Destroy a Pointer. | ||
+ | virtual void destroyPointer(Controller::PointerController* const pointer) = 0; | ||
+ | }; | ||
+ | } | ||
+ | |||
+ | #if defined(PANDORA) | ||
+ | #include "Pandora/PandoraInputSystem.h" | ||
+ | #elif defined(LINUX) | ||
+ | #include "Linux/LinuxInputSystem.h" | ||
+ | #endif | ||
+ | |||
+ | #endif | ||
+ | |||
+ | Again, we abuse the fact that we just need to pull in InputSystem.h and we get the platform specific variant automatically. | ||
+ | |||
+ | There isn't really much to the Input System.. we have functionality for creating, retrieving and deleting Pads, Pointers, Keyboards and Joysticks. We can also query how many there are connected to the system, and call the standard update function. | ||
+ | |||
+ | === PandoraInputSystem.h === | ||
+ | #ifndef _PANDORA_INPUT_SYSTEM_H_ | ||
+ | #define _PANDORA_INPUT_SYSTEM_H_ | ||
+ | |||
+ | #include <vector> | ||
+ | |||
+ | #include "../ControllerTypes.h" | ||
+ | #include "../../Events/EventObserver.h" | ||
+ | |||
+ | namespace X11 | ||
+ | { | ||
+ | #include <X11/Xlib.h> | ||
+ | } | ||
+ | |||
+ | namespace GLESGAE | ||
+ | { | ||
+ | namespace Controller | ||
+ | { | ||
+ | namespace Pandora { | ||
+ | extern Controller::Button Up; | ||
+ | extern Controller::Button Down; | ||
+ | extern Controller::Button Left; | ||
+ | extern Controller::Button Right; | ||
+ | extern Controller::Button Start; | ||
+ | extern Controller::Button Select; | ||
+ | extern Controller::Button Pandora; | ||
+ | extern Controller::Button Y; | ||
+ | extern Controller::Button B; | ||
+ | extern Controller::Button X; | ||
+ | extern Controller::Button A; | ||
+ | extern Controller::Button L1; | ||
+ | extern Controller::Button L2; | ||
+ | extern Controller::Button R1; | ||
+ | extern Controller::Button R2; | ||
+ | |||
+ | extern Controller::Id LeftNub; | ||
+ | extern Controller::Id RightNub; | ||
+ | extern Controller::Id Buttons; | ||
+ | } | ||
+ | } | ||
+ | |||
+ | class Event; | ||
+ | class EventSystem; | ||
+ | class InputSystem : public CommonInputSystem, public EventObserver | ||
+ | { | ||
+ | public: | ||
+ | InputSystem(EventSystem* const eventSystem); | ||
+ | ~InputSystem(); | ||
+ | |||
+ | /// Update the Input System | ||
+ | void update(); | ||
+ | |||
+ | /// Receive an Event | ||
+ | void receiveEvent(Event* const event); | ||
+ | |||
+ | /// Retrieve number of Active Keyboards. | ||
+ | const Controller::Id getNumberOfKeyboards() const; | ||
+ | |||
+ | /// Retrieve number of Active Joysticks. | ||
+ | const Controller::Id getNumberOfJoysticks() const; | ||
+ | |||
+ | /// Retrieve number of Active Pads. | ||
+ | const Controller::Id getNumberOfPads() const; | ||
+ | |||
+ | /// Retrieve number of Active Pointers. | ||
+ | const Controller::Id getNumberOfPointers() const; | ||
+ | |||
+ | /// Create new Keyboard - will return NULL if no more available. | ||
+ | Controller::KeyboardController* const newKeyboard(); | ||
+ | |||
+ | /// Create new Joystick - will return NULL if no more available. | ||
+ | Controller::JoystickController* const newJoystick(); | ||
+ | |||
+ | /// Create new Pad - will return NULL if no more available. | ||
+ | Controller::PadController* const newPad(); | ||
+ | |||
+ | /// Create new Pointer - will return NULL if no more available. | ||
+ | Controller::PointerController* const newPointer(); | ||
+ | |||
+ | /// Grab another instance of the specified Keyboard - returns NULL if not created. | ||
+ | Controller::KeyboardController* const getKeyboard(const Controller::Id id); | ||
+ | |||
+ | /// Grab another instance of the specified Joystick - returns NULL if not created. | ||
+ | Controller::JoystickController* const getJoystick(const Controller::Id id); | ||
+ | |||
+ | /// Grab another instance of the specified Pointer - returns NULL if not created. | ||
+ | Controller::PointerController* const getPointer(const Controller::Id id); | ||
+ | |||
+ | /// Grab another instance of the specified Pad - returns NULL if not created. | ||
+ | Controller::PadController* const getPad(const Controller::Id id); | ||
+ | |||
+ | /// Destroy a Keyboard. | ||
+ | void destroyKeyboard(Controller::KeyboardController* const keyboard); | ||
+ | |||
+ | /// Destroy a Joystick. | ||
+ | void destroyJoystick(Controller::JoystickController* const joystick); | ||
+ | |||
+ | /// Destroy a Pad. | ||
+ | void destroyPad(Controller::PadController* const pad); | ||
+ | |||
+ | /// Destroy a Pointer. | ||
+ | void destroyPointer(Controller::PointerController* const pointer); | ||
+ | |||
+ | protected: | ||
+ | const Controller::KeyType convertKey(X11::KeySym x11Key); | ||
+ | |||
+ | private: | ||
+ | Controller::KeyboardController* mKeyboard; | ||
+ | Controller::PointerController* mPointer; | ||
+ | std::vector<Controller::JoystickController*> mJoysticks; | ||
+ | std::vector<Controller::PadController*> mPads; | ||
+ | |||
+ | EventSystem* mEventSystem; | ||
+ | }; | ||
+ | } | ||
+ | |||
+ | #endif | ||
+ | |||
+ | Effectively, we just implement what's already declared in the CommonInputSystem, but we also have a bunch of custom buttons, so we declare those as well.<br /> | ||
+ | We also define the Ids for the Left and Right Nubs, and the Game Button set. | ||
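+ | The matching .cpp just gives each of those externs a value. The Button numbers below are made up for illustration; the Ids, however, line up with the order we push things into the vectors in the constructor further down: | ||
+ | namespace GLESGAE | ||
+ | { | ||
+ | namespace Controller | ||
+ | { | ||
+ | namespace Pandora | ||
+ | { | ||
+ | Button Up(0U); | ||
+ | Button Down(1U); | ||
+ | Button Left(2U); | ||
+ | Button Right(3U); | ||
+ | // ...and so on for Start, Select, Pandora, Y, B, X, A, L1, L2, R1 and R2... | ||
+ | |||
+ | Id LeftNub(0U); // first Joystick we create | ||
+ | Id RightNub(1U); // second Joystick we create | ||
+ | Id Buttons(0U); // the only Pad we create | ||
+ | } | ||
+ | } | ||
+ | } | ||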
+ | |||
+ | === PandoraInputSystem.cpp === | ||
+ | I'm not going to print up all of this, as it's rather large.. if you want the full code, check the SVN. | ||
+ | |||
+ | Most of the class does exactly what you think, so we'll take a look at just a few of the functions here. | ||
+ | InputSystem::InputSystem(EventSystem* const eventSystem) | ||
+ | : CommonInputSystem() | ||
+ | , mKeyboard(0) | ||
+ | , mPointer(0) | ||
+ | , mJoysticks() | ||
+ | , mPads() | ||
+ | , mEventSystem(eventSystem) | ||
+ | { | ||
+ | mJoysticks.push_back(new Controller::JoystickController(Controller::Pandora::LeftNub, 2U, 0U)); // ID, 2 Axes, 0 Buttons | ||
+ | mJoysticks.push_back(new Controller::JoystickController(Controller::Pandora::RightNub, 2U, 0U)); // ID, 2 Axes, 0 Buttons | ||
+ | |||
+ | mPads.push_back(new Controller::PadController(Controller::Pandora::Buttons, 15U)); // 15 Buttons - Up/Down/Left/Right - Start/Select/Pandora - Y/B/X/A - L1/R1/L2/R2 ( L2/R2 being optional, of course ) | ||
+ | |||
+ | pnd_evdev_open(pnd_evdev_dpads); | ||
+ | pnd_evdev_open(pnd_evdev_nub1); | ||
+ | pnd_evdev_open(pnd_evdev_nub2); | ||
+ | } | ||
+ | |||
+ | The Constructor is fairly straight-forward.. we set up how many Nubs we have ( marking them as Joystick devices with 2 axes and 0 buttons, effectively ) along with the buttons, and then call the libpnd functions to open up the devices we want. | ||
+ | |||
+ | InputSystem::~InputSystem() | ||
+ | { | ||
+ | if (0 != mKeyboard) { | ||
+ | delete mKeyboard; | ||
+ | mKeyboard = 0; | ||
+ | } | ||
+ | |||
+ | if (0 != mPointer) { | ||
+ | delete mPointer; | ||
+ | mPointer = 0; | ||
+ | } | ||
+ | |||
+ | for (std::vector<Controller::JoystickController*>::iterator itr(mJoysticks.begin()); itr != mJoysticks.end(); ++itr) | ||
+ | delete (*itr); | ||
+ | |||
+ | mJoysticks.clear(); | ||
+ | |||
+ | for (std::vector<Controller::PadController*>::iterator itr(mPads.begin()); itr != mPads.end(); ++itr) | ||
+ | delete (*itr); | ||
+ | |||
+ | mPads.clear(); | ||
+ | |||
+ | pnd_evdev_close(pnd_evdev_dpads); | ||
+ | pnd_evdev_close(pnd_evdev_nub1); | ||
+ | pnd_evdev_close(pnd_evdev_nub2); | ||
+ | } | ||
+ | |||
+ | The Destructor tidies up our mess, with some actual checking to ensure we're being sane about it. | ||
+ | |||
+ | void InputSystem::update() | ||
+ | { | ||
+ | // Use libPND to update Nubs and Buttons... keyboard and pointer come through as events. | ||
+ | pnd_evdev_catchup(0); | ||
+ | |||
+ | pnd_nubstate_t nubState; | ||
+ | |||
+ | // Left Nub | ||
+ | if (pnd_evdev_nub_state(pnd_evdev_nub1, &nubState) > 0) { | ||
+ | mJoysticks[Controller::Pandora::LeftNub]->setAxis(Controller::Axis::X, static_cast<float>(nubState.x)); | ||
+ | mJoysticks[Controller::Pandora::LeftNub]->setAxis(Controller::Axis::Y, static_cast<float>(nubState.y)); | ||
+ | } | ||
+ | |||
+ | // Right Nub | ||
+ | if (pnd_evdev_nub_state(pnd_evdev_nub2, &nubState) > 0) { | ||
+ | mJoysticks[Controller::Pandora::RightNub]->setAxis(Controller::Axis::X, static_cast<float>(nubState.x)); | ||
+ | mJoysticks[Controller::Pandora::RightNub]->setAxis(Controller::Axis::Y, static_cast<float>(nubState.y)); | ||
+ | } | ||
+ | |||
+ | // Buttons | ||
+ | Controller::Button buttonState(pnd_evdev_dpad_state(pnd_evdev_dpads)); | ||
+ | mPads[Controller::Pandora::Buttons]->setButton(Controller::Pandora::Left, (buttonState & pnd_evdev_left)); | ||
+ | mPads[Controller::Pandora::Buttons]->setButton(Controller::Pandora::Right, (buttonState & pnd_evdev_right)); | ||
+ | mPads[Controller::Pandora::Buttons]->setButton(Controller::Pandora::Up, (buttonState & pnd_evdev_up)); | ||
+ | mPads[Controller::Pandora::Buttons]->setButton(Controller::Pandora::Down, (buttonState & pnd_evdev_down)); | ||
+ | mPads[Controller::Pandora::Buttons]->setButton(Controller::Pandora::X, (buttonState & pnd_evdev_x)); | ||
+ | mPads[Controller::Pandora::Buttons]->setButton(Controller::Pandora::Y, (buttonState & pnd_evdev_y)); | ||
+ | mPads[Controller::Pandora::Buttons]->setButton(Controller::Pandora::A, (buttonState & pnd_evdev_a)); | ||
+ | mPads[Controller::Pandora::Buttons]->setButton(Controller::Pandora::B, (buttonState & pnd_evdev_b)); | ||
+ | mPads[Controller::Pandora::Buttons]->setButton(Controller::Pandora::L1, (buttonState & pnd_evdev_ltrigger)); | ||
+ | mPads[Controller::Pandora::Buttons]->setButton(Controller::Pandora::R1, (buttonState & pnd_evdev_rtrigger)); | ||
+ | mPads[Controller::Pandora::Buttons]->setButton(Controller::Pandora::Start, (buttonState & pnd_evdev_start)); | ||
+ | mPads[Controller::Pandora::Buttons]->setButton(Controller::Pandora::Select, (buttonState & pnd_evdev_select)); | ||
+ | mPads[Controller::Pandora::Buttons]->setButton(Controller::Pandora::Pandora, (buttonState & pnd_evdev_pandora)); | ||
+ | } | ||
+ | |||
+ | Now then, the update function actually performs a fair bit of logic here.<br /> | ||
+ | The cleanliness of abusing namespaces also makes this fairly readable - see, method in my madness! | ||
+ | |||
+ | The first thing we do is effectively poll the pnd device state to get everything.<br /> | ||
+ | We then update the nubs one at a time. | ||
+ | |||
+ | Then we move on to the buttons themselves, which are stored in a bitmask - hence the funky use of the ampersand.<br /> | ||
+ | This is all hidden away from the user though, so all they need to call is '''''const float value(pandoraButtons->getButton(Controller::Pandora::B));''''' and be done with it. | ||
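+ | One thing to watch: as written, setButton stores the raw bitmask value rather than a tidy 0 or 1, which is fine for ''if'' checks but looks a bit odd if you ever print or scale it. If that bothers you, squash it down when setting, like so: | ||
+ | mPads[Controller::Pandora::Buttons]->setButton(Controller::Pandora::B, (buttonState & pnd_evdev_b) ? 1.0F : 0.0F); | ||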
+ | |||
+ | void InputSystem::receiveEvent(Event* const event) | ||
+ | { | ||
+ | if (event->getEventType() == X11Events::Input::Keyboard::KeyDown) | ||
+ | mKeyboard->setKey(convertKey(reinterpret_cast<X11Events::Input::Keyboard::KeyDownEvent*>(event)->getKey()), true); | ||
+ | else if (event->getEventType() == X11Events::Input::Keyboard::KeyUp) | ||
+ | mKeyboard->setKey(convertKey(reinterpret_cast<X11Events::Input::Keyboard::KeyUpEvent*>(event)->getKey()), false); | ||
+ | else if (event->getEventType() == X11Events::Input::Mouse::Moved) { | ||
+ | mPointer->setAxis(Controller::Axis::X, reinterpret_cast<X11Events::Input::Mouse::MovedEvent*>(event)->getX()); | ||
+ | mPointer->setAxis(Controller::Axis::Y, reinterpret_cast<X11Events::Input::Mouse::MovedEvent*>(event)->getY()); | ||
+ | } | ||
+ | else if (event->getEventType() == X11Events::Input::Mouse::ButtonDown) | ||
+ | mPointer->setButton(reinterpret_cast<X11Events::Input::Mouse::ButtonDownEvent*>(event)->getButton(), 1.0F); | ||
+ | else if (event->getEventType() == X11Events::Input::Mouse::ButtonUp) | ||
+ | mPointer->setButton(reinterpret_cast<X11Events::Input::Mouse::ButtonUpEvent*>(event)->getButton(), 0.0F); | ||
+ | } | ||
+ | |||
+ | Finally, we actually get to implement a receiveEvent function! | ||
+ | |||
+ | These get sent from our Event System as they're dealt with via X11.. so we sit and wait for an event, and figure out what type it is before acting on it. | ||
+ | |||
+ | As we currently have EventType set to a string, we can't do a switch statement here.. however, I'm planning on updating this to use unsigned ints so we can switch on the type instead - which is much faster than string comparing! | ||
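+ | Just to show where I'm heading: once the Event Types are compile-time constants ( enum values, say ) rather than strings, the if-chain above collapses into a switch, along these lines - purely a sketch of the plan, not what's in the SVN yet: | ||
+ | switch (event->getEventType()) { // only legal once the types are integral constants | ||
+ | case X11Events::Input::Keyboard::KeyDown: | ||
+ | mKeyboard->setKey(convertKey(reinterpret_cast<X11Events::Input::Keyboard::KeyDownEvent*>(event)->getKey()), true); | ||
+ | break; | ||
+ | case X11Events::Input::Mouse::ButtonDown: | ||
+ | mPointer->setButton(reinterpret_cast<X11Events::Input::Mouse::ButtonDownEvent*>(event)->getButton(), 1.0F); | ||
+ | break; | ||
+ | // ...and the same again for KeyUp, Moved and ButtonUp... | ||
+ | default: | ||
+ | break; | ||
+ | } | ||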
+ | |||
+ | The rest of the class does what you'd think.. for example: create a new Keyboard interface, register ourselves as an observer of Keyboard events for it so we can pick them up, return the Keyboard interface, remove the Keyboard interface and deregister ourselves with the event system, etc.. | ||
+ | |||
+ | The only fun function is converting the X11 Keyboard symcodes to our own Key codes... but that's just API gubbins. | ||
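+ | If you're curious, convertKey is essentially one big table from X11 KeySyms to our own KeyType values - heavily trimmed here; KEY_Q is the only one the test below actually relies on, the other KEY_ names ( including the KEY_UNKNOWN fallback ) are illustrative stand-ins: | ||
+ | #include <X11/keysym.h> // for the XK_ defines | ||
+ | |||
+ | const Controller::KeyType InputSystem::convertKey(X11::KeySym x11Key) | ||
+ | { | ||
+ | switch (x11Key) { | ||
+ | case XK_a: return Controller::KEY_A; | ||
+ | case XK_b: return Controller::KEY_B; | ||
+ | case XK_q: return Controller::KEY_Q; | ||
+ | // ...every other key we care about... | ||
+ | default: return Controller::KEY_UNKNOWN; | ||
+ | } | ||
+ | } | ||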
+ | |||
+ | = A Simple Test = | ||
+ | Of course, you probably want to see this in action!<br /> | ||
+ | It's still a bit empty and not hugely exciting I'm afraid.. but we shall start fixing that soon!<br /> | ||
+ | We are at least cleaning up properly now :) | ||
+ | #include <cstdio> | ||
+ | #include <cstdlib> | ||
+ | |||
+ | #include "../../Events/EventSystem.h" | ||
+ | #include "../../Input/InputSystem.h" | ||
+ | |||
+ | #if defined(LINUX) | ||
+ | #include "../../Graphics/Window/X11Window.h" | ||
+ | #endif | ||
+ | |||
+ | #if defined(GLX) | ||
+ | #include "../../Graphics/Context/GLXContext.h" | ||
+ | #elif defined(GLES1) | ||
+ | #include "../../Graphics/Context/GLES1Context.h" | ||
+ | #elif defined(GLES2) | ||
+ | #include "../../Graphics/Context/GLES2Context.h" | ||
+ | #endif | ||
+ | |||
+ | #include "../../Input/Keyboard.h" | ||
+ | #include "../../Input/Pad.h" | ||
+ | |||
+ | using namespace GLESGAE; | ||
+ | |||
+ | int main(void) | ||
+ | { | ||
+ | EventSystem* eventSystem(new EventSystem); | ||
+ | InputSystem* inputSystem(new InputSystem(eventSystem)); | ||
+ | |||
+ | #if defined(LINUX) | ||
+ | X11Window* window(new X11Window); | ||
+ | #endif | ||
+ | |||
+ | #if defined(GLX) | ||
+ | GLXContext* context(new GLXContext); | ||
+ | #elif defined(GLES1) | ||
+ | GLES1Context* context(new GLES1Context); | ||
+ | #elif defined(GLES2) | ||
+ | GLES2Context* context(new GLES2Context); | ||
+ | #endif | ||
+ | |||
+ | eventSystem->bindToWindow(window); | ||
+ | context->bindToWindow(window); | ||
+ | window->open(800, 480); | ||
+ | context->initialise(); | ||
+ | window->refresh(); | ||
+ | |||
+ | Controller::KeyboardController* myKeyboard(inputSystem->newKeyboard()); | ||
+ | |||
+ | #ifdef PANDORA | ||
+ | Controller::PadController* pandoraButtons(inputSystem->getPad(Controller::Pandora::Buttons)); | ||
+ | #endif | ||
+ | |||
+ | while(false == myKeyboard->getKey(Controller::KEY_Q)) { | ||
+ | window->refresh(); | ||
+ | eventSystem->update(); | ||
+ | inputSystem->update(); | ||
+ | #ifdef PANDORA | ||
+ | if (pandoraButtons->getButton(Controller::Pandora::B)) | ||
+ | printf("B! I got a B! Is mummy proud? :D\n"); | ||
+ | #endif | ||
+ | } | ||
+ | |||
+ | delete context; | ||
+ | delete window; | ||
+ | delete inputSystem; | ||
+ | delete eventSystem; | ||
+ | |||
+ | return 0; | ||
+ | } | ||
+ | |||
+ | |||
+ | == Building the Example == | ||
+ | In the SVN there are Makefiles already set up for you.. just trigger '''''make -f MakefileES1.pandora''''' or whatever your chosen configuration is, and it'll happily build and spit out a '''GLESGAE.pandora''' binary for you to run. | ||
+ | |||
+ | Alternatively, if you use CodeLite, there's a Workspace/Project already set up for you. | ||
+ | |||
+ | '''Gotcha''' ''This time, I had an issue with libpnd... I was a bit heavy handed and just did '''ln -s /usr/lib/libpnd.so.1 /usr/lib/libpnd.so''' which is naughty, as we should install the dev package of libpnd instead. This is why I sneakily included a version of one of the headers in the repository, something which I shall fix in the next commit.'' | ||
+ | |||
+ | = Next Time = | ||
+ | We'll be looking at the Renderers.. we'll have two of them - FixedFunctionRenderer and ShaderBasedRenderer - so depending on how it works out, it may be two articles or a really really heavy one!<br /> | ||
+ | Following that, we shall start setting up some engine defines, declare the data pipeline, and maybe even load up a model! | ||
+ | |||
+ | After that, some State Systems to organise the mess we're starting to develop. | ||
+ | |||
+ | = GLESGAE - Making a Mesh = | ||
+ | |||
+ | == Introduction == | ||
+ | Originally, I was just going to dive right in to Shader Based Rendering.<br /> | ||
+ | This was a stupid idea... what was I going to render?<br /> | ||
+ | So, I've split up the original behemoth of a Chapter and pulled out the Vertex Array part, as it's fairly self-contained and could prove handy as an alternative to the existing tutorial on the Wiki. | ||
+ | |||
+ | == Vertex Array Overview == | ||
+ | In the olden days, OpenGL had Immediate Mode.<br /> | ||
+ | This allowed you to do glBegin() .. .. .. glEnd() and effectively whatever points and things you specified would be drawn in that order, immediately, on the screen. Nice and simple.<br /> | ||
+ | Obviously this was a bit iffy, so Display Lists were developed to take a copy of your glBegin/glEnd logic's final outcome, and instance it all over the place instead of stopping to plot your points again. Magic!<br /> | ||
+ | However, this was still slow in that you still had to do the initial glBegin/glEnd logic which, although rather easy, is also rather wordy and gets in the way of what the Graphics Card is actually doing. This is what Vertex Arrays are designed to fix. | ||
+ | |||
+ | Vertex Arrays describe Meshes.<br /> | ||
+ | How they describe them is effectively up to you, but in general they're going to have Vertices, Normals, Texture Co-ordinates, Colours and potentially Indices as well.<br /> | ||
+ | These can all be managed in one of two ways - separate Vertex Arrays for each Attribute, or one giant Interleaved Array with relevant offsets and stride values. | ||
+ | |||
+ | In GLESGAE, we use interleaved arrays. | ||
+ | |||
+ | == Interleave my What-now? == | ||
+ | Interleaving is just a fancy way of squishing all the data into one array. This actually works out faster, as we know for a fact that all our Vertex information is going to be in one long memory chunk, so we just set a pointer somewhere and run through it all, rather than having to jump about all the time to find bits and pieces. | ||
+ | |||
+ | Here's some example vertex data - in particular, the vertex data we will be drawing later on: | ||
+ | float vertexData[24] = {// Position - 12 floats | ||
+ | -1.0F, 1.0F, 0.0F, | ||
+ | 1.0F, 1.0F, 0.0F, | ||
+ | 1.0F, -1.0F, 0.0F, | ||
+ | -1.0F, -1.0F, 0.0F, | ||
+ | // Colour - 12 floats | ||
+ | 0.0F, 1.0F, 0.0F, | ||
+ | 1.0F, 0.0F, 0.0F, | ||
+ | 0.0F, 0.0F, 1.0F, | ||
+ | 1.0F, 1.0F, 1.0F}; | ||
+ | |||
+ | As you can see, I'm using three floats to define a positional point, and three floats to define a colour.<br /> | ||
+ | With four points, we can draw a quad - no, really! - as GLESGAE also deals with index buffers. Here's the one for the quad: | ||
+ | unsigned char indexData[6] = { 0, 1, 2, 2, 3, 0 }; | ||
+ | |||
+ | This states that we're using points 0, 1 and 2 as Triangle 1, and points 2, 3 and 0 as Triangle 2 - making up our quad on the screen.<br /> | ||
+ | * Point 0 corresponds to -1.0F, 1.0F, 0.0F | ||
+ | * Point 1 corresponds to 1.0F, 1.0F, 0.0F | ||
+ | * Point 2 corresponds to 1.0F, -1.0F, 0.0F | ||
+ | and so on... | ||
+ | |||
+ | == Defining a Format == | ||
+ | So let's go create a Vertex Buffer class to tie everything up, and allow us to tell the engine how to draw our vertex data. | ||
+ | === VertexBuffer.h === | ||
+ | #ifndef _VERTEX_BUFFER_H_ | ||
+ | #define _VERTEX_BUFFER_H_ | ||
+ | |||
+ | #include <vector> | ||
+ | |||
+ | namespace GLESGAE | ||
+ | { | ||
+ | class VertexBuffer | ||
+ | { | ||
+ | public: | ||
+ | enum FormatType | ||
+ | { | ||
+ | // Float | ||
+ | FORMAT_CUSTOM_4F | ||
+ | , FORMAT_CUSTOM_3F | ||
+ | , FORMAT_CUSTOM_2F | ||
+ | , FORMAT_POSITION_2F | ||
+ | , FORMAT_POSITION_3F | ||
+ | , FORMAT_POSITION_4F | ||
+ | , FORMAT_NORMAL_3F | ||
+ | , FORMAT_COLOUR_3F // Not available in GLES1 | ||
+ | , FORMAT_COLOUR_4F | ||
+ | , FORMAT_TEXTURE_2F | ||
+ | , FORMAT_TEXTURE_3F | ||
+ | , FORMAT_TEXTURE_4F | ||
+ | // Unsigned/Byte | ||
+ | , FORMAT_CUSTOM_2B | ||
+ | , FORMAT_CUSTOM_3B | ||
+ | , FORMAT_CUSTOM_4B | ||
+ | , FORMAT_POSITION_2B | ||
+ | , FORMAT_POSITION_3B | ||
+ | , FORMAT_POSITION_4B | ||
+ | , FORMAT_NORMAL_3B | ||
+ | , FORMAT_COLOUR_3UB // Not available in GLES1 | ||
+ | , FORMAT_COLOUR_4UB | ||
+ | , FORMAT_TEXTURE_2B | ||
+ | , FORMAT_TEXTURE_3B | ||
+ | , FORMAT_TEXTURE_4B | ||
+ | // Short | ||
+ | , FORMAT_CUSTOM_2S | ||
+ | , FORMAT_CUSTOM_3S | ||
+ | , FORMAT_CUSTOM_4S | ||
+ | , FORMAT_POSITION_2S | ||
+ | , FORMAT_POSITION_3S | ||
+ | , FORMAT_POSITION_4S | ||
+ | , FORMAT_NORMAL_3S | ||
+ | , FORMAT_COLOUR_3S // Not available in GLES1 | ||
+ | , FORMAT_COLOUR_4S // Not available in GLES1 | ||
+ | , FORMAT_TEXTURE_2S | ||
+ | , FORMAT_TEXTURE_3S | ||
+ | , FORMAT_TEXTURE_4S | ||
+ | }; | ||
+ | |||
+ | class Format | ||
+ | { | ||
+ | public: | ||
+ | Format(const FormatType type, unsigned int offset) | ||
+ | : mType(type) | ||
+ | , mOffset(offset) | ||
+ | { | ||
+ | switch (mType) | ||
+ | { | ||
+ | case FORMAT_CUSTOM_2F: | ||
+ | case FORMAT_POSITION_2F: | ||
+ | case FORMAT_TEXTURE_2F: | ||
+ | mSize = sizeof(float) * 2; | ||
+ | break; | ||
+ | case FORMAT_CUSTOM_3F: | ||
+ | case FORMAT_POSITION_3F: | ||
+ | case FORMAT_NORMAL_3F: | ||
+ | case FORMAT_COLOUR_3F: | ||
+ | case FORMAT_TEXTURE_3F: | ||
+ | mSize = sizeof(float) * 3; | ||
+ | break; | ||
+ | case FORMAT_CUSTOM_4F: | ||
+ | case FORMAT_POSITION_4F: | ||
+ | case FORMAT_COLOUR_4F: | ||
+ | case FORMAT_TEXTURE_4F: | ||
+ | mSize = sizeof(float) * 4; | ||
+ | break; | ||
+ | case FORMAT_CUSTOM_2B: | ||
+ | case FORMAT_POSITION_2B: | ||
+ | case FORMAT_TEXTURE_2B: | ||
+ | mSize = sizeof(char) * 2; | ||
+ | break; | ||
+ | case FORMAT_CUSTOM_3B: | ||
+ | case FORMAT_POSITION_3B: | ||
+ | case FORMAT_NORMAL_3B: | ||
+ | case FORMAT_COLOUR_3UB: | ||
+ | case FORMAT_TEXTURE_3B: | ||
+ | mSize = sizeof(char) * 3; | ||
+ | break; | ||
+ | case FORMAT_CUSTOM_4B: | ||
+ | case FORMAT_POSITION_4B: | ||
+ | case FORMAT_COLOUR_4UB: | ||
+ | case FORMAT_TEXTURE_4B: | ||
+ | mSize = sizeof(char) * 4; | ||
+ | break; | ||
+ | case FORMAT_CUSTOM_2S: | ||
+ | case FORMAT_POSITION_2S: | ||
+ | case FORMAT_TEXTURE_2S: | ||
+ | mSize = sizeof(short) * 2; | ||
+ | break; | ||
+ | case FORMAT_CUSTOM_3S: | ||
+ | case FORMAT_POSITION_3S: | ||
+ | case FORMAT_NORMAL_3S: | ||
+ | case FORMAT_COLOUR_3S: | ||
+ | case FORMAT_TEXTURE_3S: | ||
+ | mSize = sizeof(short) * 3; | ||
+ | break; | ||
+ | case FORMAT_CUSTOM_4S: | ||
+ | case FORMAT_POSITION_4S: | ||
+ | case FORMAT_COLOUR_4S: | ||
+ | case FORMAT_TEXTURE_4S: | ||
+ | mSize = sizeof(short) * 4; | ||
+ | break; | ||
+ | default: | ||
+ | break; | ||
+ | }; | ||
+ | } | ||
+ | |||
+ | /// Retrieve the type of this Format Identifier | ||
+ | const FormatType getType() const { return mType; } | ||
+ | |||
+ | /// Retrieve the offset of this Format Identifier | ||
+ | const unsigned int getOffset() const { return mOffset; } | ||
+ | |||
+ | /// Retrieve the size of this Format Identifier | ||
+ | const unsigned int getSize() const { return mSize; } | ||
+ | |||
+ | private: | ||
+ | FormatType mType; | ||
+ | unsigned int mSize; | ||
+ | unsigned int mOffset; | ||
+ | }; | ||
+ | |||
+ | VertexBuffer(unsigned char* const data, const unsigned int size, const std::vector<Format>& format); | ||
+ | VertexBuffer(unsigned char* const data, const unsigned int size); | ||
+ | VertexBuffer(const VertexBuffer& vertexBuffer); | ||
+ | |||
+ | /// Retrieve format details | ||
+ | const std::vector<Format>& getFormat() const { return mFormat; } | ||
+ | |||
+ | /// Retrieve data | ||
+ | const unsigned char* getData() const { return mData; } | ||
+ | |||
+ | /// Retrieve size | ||
+ | const unsigned int getSize() const { return mSize; } | ||
+ | |||
+ | /// Retrieve stride | ||
+ | const unsigned int getStride() const { return mStride; } | ||
+ | |||
+ | /// Add a Format Identifier | ||
+ | void addFormatIdentifier(const FormatType formatType, const unsigned int amount); | ||
+ | |||
+ | protected: | ||
+ | /// Protected equals operator - use the copy constructor instead. | ||
+ | VertexBuffer operator=(const VertexBuffer&) { return *this; } | ||
+ | |||
+ | private: | ||
+ | unsigned char* mData; | ||
+ | unsigned int mSize; | ||
+ | unsigned int mStride; | ||
+ | std::vector<Format> mFormat; | ||
+ | }; | ||
+ | } | ||
+ | |||
+ | #endif | ||
+ | |||
+ | Firstly, I'm being my usual pedantic self and ensuring I can support everything I can get away with. There are a few things which ES1 doesn't support, however, which I have highlighted.<br /> | ||
+ | To render anything effectively, and if you can at all get away with it, your meshes should really use the smallest data unit you can - for example, unsigned bytes for Colour values and Shorts for Vertices ( Bytes are probably a bit too small for large meshes. ) Also be careful in that ES 1 expects 4 colour components rather than 3 - it requires that extra alpha value! | ||
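+ | As a quick example of what that means in practice, an ES1-friendly quad might use shorts for positions and four unsigned bytes per colour, declared like so ( compactData and compactSize standing in for whatever your packed data actually is ) : | ||
+ | VertexBuffer* compactBuffer = new VertexBuffer(compactData, compactSize); | ||
+ | compactBuffer->addFormatIdentifier(VertexBuffer::FORMAT_POSITION_3S, 4U); // 4 points, 3 shorts each | ||
+ | compactBuffer->addFormatIdentifier(VertexBuffer::FORMAT_COLOUR_4UB, 4U); // 4 colours, 4 unsigned bytes each | ||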
+ | |||
+ | Ignore the Custom declarations for now, as they're for Shader based rendering, where your Vertex Attributes can be whatever you like and you're not limited to the standard four types. | ||
+ | |||
+ | Of course, the irony is that as a fair chunk of the code is defined in the header, the meat of the object file is actually pretty small: | ||
+ | === VertexBuffer.cpp === | ||
+ | #include "VertexBuffer.h" | ||
+ | |||
+ | #include <cstring> // for memcpy | ||
+ | |||
+ | using namespace GLESGAE; | ||
+ | |||
+ | VertexBuffer::VertexBuffer(unsigned char* const data, const unsigned int size, const std::vector<Format>& format) | ||
+ | : mData(new unsigned char[size]) | ||
+ | , mSize(size) | ||
+ | , mFormat(format) | ||
+ | , mStride(0U) | ||
+ | { | ||
+ | std::memcpy(mData, data, size); | ||
+ | |||
+ | for (std::vector<Format>::const_iterator itr(mFormat.begin()); itr < mFormat.end(); ++itr) | ||
+ | mStride += itr->getSize(); | ||
+ | } | ||
+ | |||
+ | VertexBuffer::VertexBuffer(unsigned char* const data, const unsigned int size) | ||
+ | : mData(new unsigned char[size]) | ||
+ | , mSize(size) | ||
+ | , mFormat() | ||
+ | , mStride(0U) | ||
+ | { | ||
+ | std::memcpy(mData, data, size); | ||
+ | } | ||
+ | |||
+ | VertexBuffer::VertexBuffer(const VertexBuffer& vertexBuffer) | ||
+ | : mData(vertexBuffer.mData) | ||
+ | , mSize(vertexBuffer.mSize) | ||
+ | , mFormat(vertexBuffer.mFormat) | ||
+ | , mStride(vertexBuffer.mStride) | ||
+ | { | ||
+ | } | ||
+ | |||
+ | void VertexBuffer::addFormatIdentifier(const FormatType formatType, const unsigned int amount) | ||
+ | { | ||
+ | Format newFormat(formatType, mStride); | ||
+ | mStride += newFormat.getSize() * amount; | ||
+ | mFormat.push_back(newFormat); | ||
+ | } | ||
+ | |||
+ | All we're really doing here is making sure we copy the mesh data properly when we have to, deal with the Format parameters correctly, and update our current stride amount when adding a new format identifier. Simples! | ||
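+ | To make the stride handling concrete with the quad from earlier: adding FORMAT_POSITION_3F with an amount of 4 gives that identifier an offset of 0 and bumps the stride by 3 floats x 4 bytes x 4 points = 48 bytes, so the FORMAT_COLOUR_3F identifier added next picks up an offset of 48 - exactly where the colour block starts in the array. | ||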
+ | |||
+ | === IndexBuffer.h === | ||
+ | Considering all our Index Buffer has to do is store the order of the vertices to render, there's not much to it. | ||
+ | #ifndef _INDEX_BUFFER_H_ | ||
+ | #define _INDEX_BUFFER_H_ | ||
+ | |||
+ | namespace GLESGAE | ||
+ | { | ||
+ | class IndexBuffer | ||
+ | { | ||
+ | public: | ||
+ | enum FormatType { | ||
+ | FORMAT_FLOAT // unsupported by ES 1 | ||
+ | , FORMAT_UNSIGNED_BYTE | ||
+ | , FORMAT_UNSIGNED_SHORT | ||
+ | }; | ||
+ | |||
+ | IndexBuffer(unsigned char* const data, const unsigned int size, const FormatType format); | ||
+ | IndexBuffer(const IndexBuffer& indexBuffer); | ||
+ | |||
+ | /// Retrieve format details | ||
+ | const FormatType getFormat() const { return mFormat; } | ||
+ | |||
+ | /// Retrieve data | ||
+ | const unsigned char* getData() const { return mData; } | ||
+ | |||
+ | /// Retrieve size | ||
+ | const unsigned int getSize() const { return mSize; } | ||
+ | |||
+ | protected: | ||
+ | /// Protected equals operator - use the copy constructor instead. | ||
+ | IndexBuffer operator=(const IndexBuffer&) { return *this; } | ||
+ | |||
+ | private: | ||
+ | unsigned char* mData; | ||
+ | unsigned int mSize; | ||
+ | FormatType mFormat; | ||
+ | }; | ||
+ | } | ||
+ | |||
+ | #endif | ||
+ | |||
+ | Again, as with the Vertex Buffer, if you can at all get away with it, use the smallest data unit you can for your indices.<br /> | ||
+ | Be advised that ES 1 does not support Float indices so you're best using unsigned short. Actually, I'd recommend unsigned short overall... if you're dealing with meshes large enough to require Floats, you're probably doing it wrong or have pretty specific needs which this set of tutorials probably isn't going to help you with. | ||
+ | === IndexBuffer.cpp === | ||
+ | #include "VertexBuffer.h" | ||
+ | |||
+ | #include <cstring> // for memcpy | ||
+ | |||
+ | using namespace GLESGAE; | ||
+ | |||
+ | VertexBuffer::VertexBuffer(unsigned char* const data, const unsigned int size, const std::vector<Format>& format) | ||
+ | : mData(new unsigned char[size]) | ||
+ | , mSize(size) | ||
+ | , mFormat(format) | ||
+ | , mStride(0U) | ||
+ | { | ||
+ | std::memcpy(mData, data, size); | ||
+ | |||
+ | for (std::vector<Format>::const_iterator itr(mFormat.begin()); itr < mFormat.end(); ++itr) | ||
+ | mStride += itr->getSize(); | ||
+ | } | ||
+ | |||
+ | VertexBuffer::VertexBuffer(unsigned char* const data, const unsigned int size) | ||
+ | : mData(new unsigned char[size]) | ||
+ | , mSize(size) | ||
+ | , mFormat() | ||
+ | , mStride(0U) | ||
+ | { | ||
+ | std::memcpy(mData, data, size); | ||
+ | } | ||
+ | |||
+ | VertexBuffer::VertexBuffer(const VertexBuffer& vertexBuffer) | ||
+ | : mData(vertexBuffer.mData) | ||
+ | , mSize(vertexBuffer.mSize) | ||
+ | , mFormat(vertexBuffer.mFormat) | ||
+ | , mStride(vertexBuffer.mStride) | ||
+ | { | ||
+ | } | ||
+ | |||
+ | void VertexBuffer::addFormatIdentifier(const FormatType formatType, const unsigned int amount) | ||
+ | { | ||
+ | Format newFormat(formatType, mStride); | ||
+ | mStride += newFormat.getSize() * amount; | ||
+ | mFormat.push_back(newFormat); | ||
+ | } | ||
+ | |||
+ | The object file is just as weedy as the last one. | ||
+ | |||
+ | == The Mesh Object == | ||
+ | Now, to actually draw this, we're still in need of a few more bits and pieces.. but as we haven't really gotten to them yet, we'll just do some place holders. | ||
+ | |||
+ | === Place holder Objects === | ||
+ | Generally, Meshes will have a Material definition. This will tell the Graphics System how shiny the object is, for example, and how the lighting affects it. However, how Materials are actually used is a bit dependent upon the Rendering System's context, so they're a bit out of scope for now. | ||
+ | |||
+ | We can therefore get away with the following: | ||
+ | ==== Material.h ==== | ||
+ | #ifndef _MATERIAL_H_ | ||
+ | #define _MATERIAL_H_ | ||
+ | |||
+ | #include <vector> | ||
+ | |||
+ | namespace GLESGAE | ||
+ | { | ||
+ | class Shader; | ||
+ | class Texture; | ||
+ | class Material | ||
+ | { | ||
+ | public: | ||
+ | Material(); | ||
+ | |||
+ | /// Grab the Shader that's linked to this Material | ||
+ | Shader* const getShader() const { return mShader; } | ||
+ | |||
+ | /// Set a new Shader on this Material | ||
+ | void setShader(Shader* const shader) { mShader = shader; } | ||
+ | |||
+ | /// Grab a Texture | ||
+ | Texture* const getTexture(unsigned int index) const { return mTextures[index]; } | ||
+ | |||
+ | private: | ||
+ | Shader* mShader; | ||
+ | std::vector<Texture*> mTextures; | ||
+ | }; | ||
+ | } | ||
+ | |||
+ | #endif | ||
+ | |||
+ | You might have noticed the Shader and Texture pointers there.<br /> | ||
+ | Luckily, as we've forward declared them up top, we don't really need to define them much more than this, so we can ignore them for now! | ||
+ | |||
+ | ==== Transforms ==== | ||
+ | This one's a bit more complex as generally a Transform Matrix is a 4x4 Matrix, and requires a bit of Math knowledge to fiddle with.<br /> | ||
+ | Again, place holder time: | ||
+ | ===== Matrix4.h ===== | ||
+ | #ifndef _MATRIX4_H_ | ||
+ | #define _MATRIX4_H_ | ||
+ | |||
+ | namespace GLESGAE | ||
+ | { | ||
+ | class Matrix4 | ||
+ | { | ||
+ | public: | ||
+ | Matrix4(); | ||
+ | |||
+ | /// Retrieve a pointer to the matrix | ||
+ | float* const getData() { return mMatrix; } | ||
+ | |||
+ | /// Set a new matrix | ||
+ | void setMatrix(const float matrix[16]); | ||
+ | |||
+ | /// Set to Identity | ||
+ | void setToIdentity(); | ||
+ | |||
+ | /// Set to Zero | ||
+ | void setToZero(); | ||
+ | |||
+ | private: | ||
+ | float mMatrix[16]; | ||
+ | }; | ||
+ | |||
+ | } | ||
+ | |||
+ | #endif | ||
+ | ===== Matrix4.cpp ===== | ||
+ | #include "Matrix4.h" | ||
+ | |||
+ | #include <cstring> // for memcpy | ||
+ | |||
+ | using namespace GLESGAE; | ||
+ | |||
+ | Matrix4::Matrix4() | ||
+ | : mMatrix() | ||
+ | { | ||
+ | setToIdentity(); | ||
+ | } | ||
+ | |||
+ | void Matrix4::setMatrix(const float matrix[16]) | ||
+ | { | ||
+ | std::memcpy(mMatrix, matrix, sizeof(float) * 16); | ||
+ | } | ||
+ | |||
+ | void Matrix4::setToIdentity() | ||
+ | { | ||
+ | // 0 1 2 3 | ||
+ | // 4 5 6 7 | ||
+ | // 8 9 10 11 | ||
+ | // 12 13 14 15 | ||
+ | mMatrix[0] = mMatrix[5] = mMatrix[10] = mMatrix[15] = 1.0F; | ||
+ | mMatrix[1] = mMatrix[2] = mMatrix[3] = 0.0F; | ||
+ | mMatrix[4] = mMatrix[6] = mMatrix[7] = 0.0F; | ||
+ | mMatrix[8] = mMatrix[9] = mMatrix[11] = 0.0F; | ||
+ | mMatrix[12] = mMatrix[13] = mMatrix[14] = 0.0F; | ||
+ | } | ||
+ | |||
+ | void Matrix4::setToZero() | ||
+ | { | ||
+ | mMatrix[0] = mMatrix[1] = mMatrix[2] = mMatrix[3] = 0.0F; | ||
+ | mMatrix[4] = mMatrix[5] = mMatrix[6] = mMatrix[7] = 0.0F; | ||
+ | mMatrix[8] = mMatrix[9] = mMatrix[10] = mMatrix[11] = 0.0F; | ||
+ | mMatrix[12] = mMatrix[13] = mMatrix[14] = mMatrix[15] = 0.0F; | ||
+ | } | ||
+ | |||
+ | Fairly standard stuff.. an Identity matrix just has 1's down the diagonal and 0's elsewhere as shown, and means "no transformation at all" - which is pretty handy for drawing stuff dead centre. We'll be using this to draw our quad on the screen starting from the centre so it fills the screen with coloured joy. A zero matrix pretty much explains itself. | ||
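+ | As a wee taste of what the Transform Stack chapter will do with this: when the flat array is handed to OpenGL, elements 12, 13 and 14 hold the translation, so a helper along these lines would slide a Mesh about - just a sketch for illustration, it's not part of the placeholder class: | ||
+ | void Matrix4::setTranslation(const float x, const float y, const float z) | ||
+ | { | ||
+ | mMatrix[12] = x; | ||
+ | mMatrix[13] = y; | ||
+ | mMatrix[14] = z; | ||
+ | } | ||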
+ | |||
+ | === Mesh.h === | ||
+ | Binding all this up into one Mesh object is then fairly straight-forward: | ||
+ | #ifndef _MESH_H_ | ||
+ | #define _MESH_H_ | ||
+ | |||
+ | namespace GLESGAE | ||
+ | { | ||
+ | class VertexBuffer; | ||
+ | class IndexBuffer; | ||
+ | class Material; | ||
+ | class Matrix4; | ||
+ | class Mesh | ||
+ | { | ||
+ | public: | ||
+ | Mesh(VertexBuffer* const vertexBuffer | ||
+ | , IndexBuffer* const indexBuffer | ||
+ | , Material* const material | ||
+ | , Matrix4* const matrix); | ||
+ | ~Mesh(); | ||
+ | |||
+ | /// Grab the current Vertex Buffer - read-only | ||
+ | const VertexBuffer* const getVertexBuffer() const { return mVertexBuffer; } | ||
+ | |||
+ | /// Grab the current Index Buffer - read-only | ||
+ | const IndexBuffer* const getIndexBuffer() const { return mIndexBuffer; } | ||
+ | |||
+ | /// Grab the current Material - read-only | ||
+ | const Material* const getMaterial() const { return mMaterial; } | ||
+ | |||
+ | /// Grab the current Transform Matrix - read-only | ||
+ | const Matrix4* const getMatrix() const { return mMatrix; } | ||
+ | |||
+ | /// Grab an Editable Vertex Buffer | ||
+ | VertexBuffer* const editVertexBuffer() { return mVertexBuffer; } | ||
+ | |||
+ | /// Grab an Editable Index Buffer | ||
+ | IndexBuffer* const editIndexBuffer() { return mIndexBuffer; } | ||
+ | |||
+ | /// Grab an Editable Material | ||
+ | Material* const editMaterial() { return mMaterial; } | ||
+ | |||
+ | /// Grab an Editable Transform Matrix | ||
+ | Matrix4* const editMatrix() { return mMatrix; } | ||
+ | |||
+ | private: | ||
+ | VertexBuffer* mVertexBuffer; | ||
+ | IndexBuffer* mIndexBuffer; | ||
+ | Material* mMaterial; | ||
+ | Matrix4* mMatrix; | ||
+ | }; | ||
+ | } | ||
+ | |||
+ | #endif | ||
+ | |||
+ | === Mesh.cpp === | ||
+ | #include "Mesh.h" | ||
+ | |||
+ | using namespace GLESGAE; | ||
+ | |||
+ | Mesh::Mesh(VertexBuffer* const vertexBuffer, IndexBuffer* const indexBuffer, Material* const material, Matrix4* const matrix) | ||
+ | : mVertexBuffer(vertexBuffer) | ||
+ | , mIndexBuffer(indexBuffer) | ||
+ | , mMaterial(material) | ||
+ | , mMatrix(matrix) | ||
+ | { | ||
+ | |||
+ | } | ||
+ | |||
+ | Mesh::~Mesh() | ||
+ | { | ||
+ | //TODO: Resource Management | ||
+ | delete mVertexBuffer; | ||
+ | delete mIndexBuffer; | ||
+ | delete mMaterial; | ||
+ | delete mMatrix; | ||
+ | } | ||
+ | |||
+ | What could be simpler than that? A bunch of accessors, and a constructor which takes in a pointer to each part that makes up the Mesh. | ||
+ | |||
+ | Do note that we delete everything in the destructor - we're assuming that as soon as you give the Mesh the stuff it needs, it takes full ownership of it. | ||
+ | |||
+ | == Making a Mesh == | ||
+ | Going back to our earlier example of showing you some vertex data to define a quad, here it is again in a handy function using what we've just done: | ||
+ | Mesh* makeSprite() | ||
+ | { | ||
+ | float vertexData[24] = {// Position - 12 floats | ||
+ | -1.0F, 1.0F, 0.0F, | ||
+ | 1.0F, 1.0F, 0.0F, | ||
+ | 1.0F, -1.0F, 0.0F, | ||
+ | -1.0F, -1.0F, 0.0F, | ||
+ | // Colour - 12 floats | ||
+ | 0.0F, 1.0F, 0.0F, | ||
+ | 1.0F, 0.0F, 0.0F, | ||
+ | 0.0F, 0.0F, 1.0F, | ||
+ | 1.0F, 1.0F, 1.0F}; | ||
+ | |||
+ | unsigned int vertexSize = 24 * sizeof(float); | ||
+ | |||
+ | unsigned char indexData[6] = { 0, 1, 2, 2, 3, 0 }; | ||
+ | unsigned int indexSize = 6 * sizeof(unsigned char); | ||
+ | |||
+ | VertexBuffer* newVertexBuffer = new VertexBuffer(reinterpret_cast<unsigned char*>(&vertexData), vertexSize); | ||
+ | newVertexBuffer->addFormatIdentifier(VertexBuffer::FORMAT_POSITION_3F, 4U); | ||
+ | newVertexBuffer->addFormatIdentifier(VertexBuffer::FORMAT_COLOUR_3F, 4U); | ||
+ | IndexBuffer* newIndexBuffer = new IndexBuffer(reinterpret_cast<unsigned char*>(&indexData), indexSize, IndexBuffer::FORMAT_UNSIGNED_BYTE); | ||
+ | Material* newMaterial = new Material; | ||
+ | Matrix4* newTransform = new Matrix4; | ||
+ | |||
+ | return new Mesh(newVertexBuffer, newIndexBuffer, newMaterial, newTransform); | ||
+ | } | ||
+ | |||
+ | Nice and simple, huh? While I'm not using the smallest possible format for everything - the positions and colours are floats - the indices are at least unsigned bytes, so it's a reasonable mix. | ||
+ | |||
+ | == Checking out the SVN == | ||
+ | There isn't actually any useful code for this Chapter that would make it standalone.<br /> | ||
+ | You'll just need to bear with me and wait till you get to the Rendering Contexts. | ||
+ | |||
+ | == Building the Example == | ||
+ | There is no example this time, as there'd be nothing to show!<br /> | ||
+ | Instead, jump to either [[GLESGAE:Fixed Function Rendering Contexts]] or [[GLESGAE:Shader Based Contexts]] for displaying your Mesh in the chosen manner. | ||
+ | |||
+ | = Next Time = | ||
+ | We'll be looking at rendering this mess... err, mesh. | ||
+ | |||
+ | = GLESGAE - Fixed Function Rendering Contexts = | ||
+ | |||
+ | == Introduction == | ||
+ | Fixed Function Rendering is actually pretty easy.<br /> | ||
+ | You've a specific set of things you can and cannot do, and also your Meshes must conform to a reasonable standard for the most part.<br /> | ||
+ | This makes getting something up and running really quick and easy. | ||
+ | |||
+ | Following on from [[GLESGAE:Making a Mesh]], we now have a Mesh object available to draw... so let's go ahead and do so. | ||
+ | |||
+ | == Fast Track == | ||
+ | We're on SVN revision 4 now, which includes all the mesh code from the previous guide. | ||
+ | '''''svn co -r 4 http://svn3.xp-dev.com/svn/glesgae/trunk/ glesgae''''' | ||
+ | |||
+ | == The Fixed Function Context == | ||
+ | Our Fixed Function Context is probably still quite empty, so let's start filling it up with some functions: | ||
+ | === FixedFunctionContext.h === | ||
+ | #ifndef _FIXED_FUNCTION_CONTEXT_H_ | ||
+ | #define _FIXED_FUNCTION_CONTEXT_H_ | ||
+ | |||
+ | #if defined(GLX) | ||
+ | #include "../GLee.h" | ||
+ | #elif defined(PANDORA) | ||
+ | #include <GLES/gl.h> | ||
+ | #endif | ||
+ | |||
+ | namespace GLESGAE | ||
+ | { | ||
+ | class Material; | ||
+ | class Mesh; | ||
+ | class FixedFunctionContext | ||
+ | { | ||
+ | /** | ||
+ | The quickest way for a Fixed Function pipeline to work, is to have all data match up to a | ||
+ | common format. As such, we use global state to deal with Vertex Attributes. | ||
+ | |||
+ | This means that you really need to order your data correctly so you're doing as few state | ||
+ | switches as possible when rendering - the context just renders, it doesn't organise for you. | ||
+ | |||
+ | Additionally, dealing with multiple texture co-ordinates is a pain as we can only really deal | ||
+ | with them as and when we get them, and store how many we've enabled/disabled since last time. | ||
+ | Also, with a Fixed Function pipeline, each Texture must have Texture Co-ordinates, so be | ||
+ | mindful of that with your data! | ||
+ | **/ | ||
+ | |||
+ | public: | ||
+ | FixedFunctionContext(); | ||
+ | virtual ~FixedFunctionContext(); | ||
+ | |||
+ | /// Enable Vertex Positions | ||
+ | void enableFixedFunctionVertexPositions(); | ||
+ | |||
+ | /// Disable Vertex Positions | ||
+ | void disableFixedFunctionVertexPositions(); | ||
+ | |||
+ | /// Enable Vertex Colours | ||
+ | void enableFixedFunctionVertexColours(); | ||
+ | |||
+ | /// Disable Vertex Colours | ||
+ | void disableFixedFunctionVertexColours(); | ||
+ | |||
+ | /// Enable Vertex Normals | ||
+ | void enableFixedFunctionVertexNormals(); | ||
+ | |||
+ | /// Disable Vertex Normals | ||
+ | void disableFixedFunctionVertexNormals(); | ||
+ | |||
+ | protected: | ||
+ | /// Draw a Mesh using the Fixed Function Pipeline | ||
+ | void drawMeshFixedFunction(Mesh* const mesh); | ||
+ | |||
+ | /// Setup texturing - check if the requested texture unit is on and bind a texture from the material. | ||
+ | void setupFixedFunctionTexturing(unsigned int* textureUnit, const Material* const material); | ||
+ | |||
+ | /// Disable Texture Units | ||
+ | void disableFixedFunctionTexturing(const unsigned int currentTextureUnit); | ||
+ | |||
+ | private: | ||
+ | bool mFixedFunctionTexUnits[8]; // 8 Texture Units sounds like they'd be enough to me! | ||
+ | unsigned int mFixedFunctionLastTexUnit; // Last Texture Unit we were working on, in case it's the same. | ||
+ | }; | ||
+ | |||
+ | } | ||
+ | |||
+ | #endif | ||
+ | === FixedFunctionContext.cpp === | ||
+ | #include "FixedFunctionContext.h" | ||
+ | |||
+ | #include "../IndexBuffer.h" | ||
+ | #include "../Material.h" | ||
+ | #include "../Mesh.h" | ||
+ | #include "../Texture.h" | ||
+ | #include "../VertexBuffer.h" | ||
+ | |||
+ | |||
+ | using namespace GLESGAE; | ||
+ | |||
+ | FixedFunctionContext::FixedFunctionContext() | ||
+ | : mFixedFunctionTexUnits() | ||
+ | , mFixedFunctionLastTexUnit(0U) | ||
+ | { | ||
+ | // Mark all texture co-ordinate arrays as offline. | ||
+ | for (unsigned int index(0U); index < 8U; ++index) | ||
+ | mFixedFunctionTexUnits[index] = false; | ||
+ | } | ||
+ | |||
+ | FixedFunctionContext::~FixedFunctionContext() | ||
+ | { | ||
+ | } | ||
+ | |||
+ | void FixedFunctionContext::drawMeshFixedFunction(Mesh* const mesh) | ||
+ | { | ||
+ | const IndexBuffer* const indexBuffer(mesh->getIndexBuffer()); | ||
+ | const VertexBuffer* const vertexBuffer(mesh->getVertexBuffer()); | ||
+ | const Material* const material(mesh->getMaterial()); | ||
+ | unsigned int currentTextureUnit(0U); | ||
+ | |||
+ | const std::vector<VertexBuffer::Format>& meshFormat(vertexBuffer->getFormat()); | ||
+ | for (std::vector<VertexBuffer::Format>::const_iterator itr(meshFormat.begin()); itr < meshFormat.end(); ++itr) { | ||
+ | switch (itr->getType()) { | ||
+ | // Position | ||
+ | case VertexBuffer::FORMAT_POSITION_2F: | ||
+ | glVertexPointer(2, GL_FLOAT, 0, vertexBuffer->getData() + itr->getOffset()); | ||
+ | break; | ||
+ | case VertexBuffer::FORMAT_POSITION_3F: | ||
+ | glVertexPointer(3, GL_FLOAT, 0, vertexBuffer->getData() + itr->getOffset()); | ||
+ | break; | ||
+ | case VertexBuffer::FORMAT_POSITION_4F: | ||
+ | glVertexPointer(4, GL_FLOAT, 0, vertexBuffer->getData() + itr->getOffset()); | ||
+ | break; | ||
+ | case VertexBuffer::FORMAT_POSITION_2S: | ||
+ | glVertexPointer(2, GL_SHORT, 0, vertexBuffer->getData() + itr->getOffset()); | ||
+ | break; | ||
+ | case VertexBuffer::FORMAT_POSITION_3S: | ||
+ | glVertexPointer(3, GL_SHORT, 0, vertexBuffer->getData() + itr->getOffset()); | ||
+ | break; | ||
+ | case VertexBuffer::FORMAT_POSITION_4S: | ||
+ | glVertexPointer(4, GL_SHORT, 0, vertexBuffer->getData() + itr->getOffset()); | ||
+ | break; | ||
+ | case VertexBuffer::FORMAT_POSITION_2B: | ||
+ | glVertexPointer(2, GL_BYTE, 0, vertexBuffer->getData() + itr->getOffset()); | ||
+ | break; | ||
+ | case VertexBuffer::FORMAT_POSITION_3B: | ||
+ | glVertexPointer(3, GL_BYTE, 0, vertexBuffer->getData() + itr->getOffset()); | ||
+ | break; | ||
+ | case VertexBuffer::FORMAT_POSITION_4B: | ||
+ | glVertexPointer(4, GL_BYTE, 0, vertexBuffer->getData() + itr->getOffset()); | ||
+ | break; | ||
+ | // Normal | ||
+ | case VertexBuffer::FORMAT_NORMAL_3F: | ||
+ | glNormalPointer(GL_FLOAT, 0, vertexBuffer->getData() + itr->getOffset()); | ||
+ | break; | ||
+ | case VertexBuffer::FORMAT_NORMAL_3S: | ||
+ | glNormalPointer(GL_SHORT, 0, vertexBuffer->getData() + itr->getOffset()); | ||
+ | break; | ||
+ | case VertexBuffer::FORMAT_NORMAL_3B: | ||
+ | glNormalPointer(GL_BYTE, 0, vertexBuffer->getData() + itr->getOffset()); | ||
+ | break; | ||
+ | // Colour | ||
+ | case VertexBuffer::FORMAT_COLOUR_4F: | ||
+ | glColorPointer(4, GL_FLOAT, 0, vertexBuffer->getData() + itr->getOffset()); | ||
+ | break; | ||
+ | case VertexBuffer::FORMAT_COLOUR_3F: | ||
+ | glColorPointer(3, GL_FLOAT, 0, vertexBuffer->getData() + itr->getOffset()); | ||
+ | break; | ||
+ | case VertexBuffer::FORMAT_COLOUR_4S: | ||
+ | glColorPointer(4, GL_SHORT, 0, vertexBuffer->getData() + itr->getOffset()); | ||
+ | break; | ||
+ | case VertexBuffer::FORMAT_COLOUR_3S: | ||
+ | glColorPointer(3, GL_SHORT, 0, vertexBuffer->getData() + itr->getOffset()); | ||
+ | break; | ||
+ | case VertexBuffer::FORMAT_COLOUR_4UB: | ||
+ | glColorPointer(4, GL_UNSIGNED_BYTE, 0, vertexBuffer->getData() + itr->getOffset()); | ||
+ | break; | ||
+ | case VertexBuffer::FORMAT_COLOUR_3UB: | ||
+ | glColorPointer(3, GL_UNSIGNED_BYTE, 0, vertexBuffer->getData() + itr->getOffset()); | ||
+ | break; | ||
+ | // Textures and Co-ordinates | ||
+ | case VertexBuffer::FORMAT_TEXTURE_2F: | ||
+ | 				setupFixedFunctionTexturing(&currentTextureUnit, material); | ||
+ | glTexCoordPointer(2, GL_FLOAT, 0, vertexBuffer->getData() + itr->getOffset()); | ||
+ | break; | ||
+ | case VertexBuffer::FORMAT_TEXTURE_3F: | ||
+ | 				setupFixedFunctionTexturing(&currentTextureUnit, material); | ||
+ | glTexCoordPointer(3, GL_FLOAT, 0, vertexBuffer->getData() + itr->getOffset()); | ||
+ | break; | ||
+ | case VertexBuffer::FORMAT_TEXTURE_4F: | ||
+ | 				setupFixedFunctionTexturing(&currentTextureUnit, material); | ||
+ | glTexCoordPointer(4, GL_FLOAT, 0, vertexBuffer->getData() + itr->getOffset()); | ||
+ | break; | ||
+ | case VertexBuffer::FORMAT_TEXTURE_2S: | ||
+ | 				setupFixedFunctionTexturing(&currentTextureUnit, material); | ||
+ | glTexCoordPointer(2, GL_SHORT, 0, vertexBuffer->getData() + itr->getOffset()); | ||
+ | break; | ||
+ | case VertexBuffer::FORMAT_TEXTURE_3S: | ||
+ | 				setupFixedFunctionTexturing(&currentTextureUnit, material); | ||
+ | glTexCoordPointer(3, GL_SHORT, 0, vertexBuffer->getData() + itr->getOffset()); | ||
+ | break; | ||
+ | case VertexBuffer::FORMAT_TEXTURE_4S: | ||
+ | 				setupFixedFunctionTexturing(&currentTextureUnit, material); | ||
+ | glTexCoordPointer(4, GL_SHORT, 0, vertexBuffer->getData() + itr->getOffset()); | ||
+ | break; | ||
+ | case VertexBuffer::FORMAT_TEXTURE_2B: | ||
+ | 				setupFixedFunctionTexturing(&currentTextureUnit, material); | ||
+ | glTexCoordPointer(2, GL_BYTE, 0, vertexBuffer->getData() + itr->getOffset()); | ||
+ | break; | ||
+ | case VertexBuffer::FORMAT_TEXTURE_3B: | ||
+ | 				setupFixedFunctionTexturing(&currentTextureUnit, material); | ||
+ | glTexCoordPointer(3, GL_BYTE, 0, vertexBuffer->getData() + itr->getOffset()); | ||
+ | break; | ||
+ | case VertexBuffer::FORMAT_TEXTURE_4B: | ||
+ | 				setupFixedFunctionTexturing(&currentTextureUnit, material); | ||
+ | glTexCoordPointer(4, GL_BYTE, 0, vertexBuffer->getData() + itr->getOffset()); | ||
+ | break; | ||
+ | default: | ||
+ | break; | ||
+ | }; | ||
+ | } | ||
+ | |||
+ | disableFixedFunctionTexturing(currentTextureUnit); // Disable any excess texture units | ||
+ | switch (indexBuffer->getFormat()) { | ||
+ | case IndexBuffer::FORMAT_FLOAT: | ||
+ | glDrawElements(GL_TRIANGLES, indexBuffer->getSize(), GL_FLOAT, indexBuffer->getData()); | ||
+ | break; | ||
+ | case IndexBuffer::FORMAT_UNSIGNED_BYTE: | ||
+ | glDrawElements(GL_TRIANGLES, indexBuffer->getSize(), GL_UNSIGNED_BYTE, indexBuffer->getData()); | ||
+ | break; | ||
+ | case IndexBuffer::FORMAT_UNSIGNED_SHORT: | ||
+ | glDrawElements(GL_TRIANGLES, indexBuffer->getSize(), GL_UNSIGNED_SHORT, indexBuffer->getData()); | ||
+ | break; | ||
+ | default: | ||
+ | break; | ||
+ | }; | ||
+ | mFixedFunctionLastTexUnit = currentTextureUnit; | ||
+ | } | ||
+ | |||
+ | void FixedFunctionContext::enableFixedFunctionVertexPositions() | ||
+ | { | ||
+ | glEnableClientState(GL_VERTEX_ARRAY); | ||
+ | } | ||
+ | |||
+ | void FixedFunctionContext::disableFixedFunctionVertexPositions() | ||
+ | { | ||
+ | glDisableClientState(GL_VERTEX_ARRAY); | ||
+ | } | ||
+ | |||
+ | void FixedFunctionContext::enableFixedFunctionVertexColours() | ||
+ | { | ||
+ | glEnableClientState(GL_COLOR_ARRAY); | ||
+ | } | ||
+ | |||
+ | void FixedFunctionContext::disableFixedFunctionVertexColours() | ||
+ | { | ||
+ | glDisableClientState(GL_COLOR_ARRAY); | ||
+ | } | ||
+ | |||
+ | void FixedFunctionContext::enableFixedFunctionVertexNormals() | ||
+ | { | ||
+ | glEnableClientState(GL_NORMAL_ARRAY); | ||
+ | } | ||
+ | |||
+ | void FixedFunctionContext::disableFixedFunctionVertexNormals() | ||
+ | { | ||
+ | glDisableClientState(GL_NORMAL_ARRAY); | ||
+ | } | ||
+ | |||
+ | void FixedFunctionContext::setupFixedFunctionTexturing(unsigned int* textureUnit, const Material* const material) | ||
+ | { | ||
+ | if (false == mFixedFunctionTexUnits[*textureUnit]) { // This texture unit isn't currently enabled | ||
+ | mFixedFunctionTexUnits[*textureUnit] = true; | ||
+ | glClientActiveTexture(GL_TEXTURE0 + *textureUnit); | ||
+ | glEnable(GL_TEXTURE_2D); | ||
+ | glEnableClientState(GL_TEXTURE_COORD_ARRAY); | ||
+ | } else { | ||
+ | if (*textureUnit != mFixedFunctionLastTexUnit) { // This texture unit is enabled but it's not the current one | ||
+ | glActiveTexture(GL_TEXTURE0 + *textureUnit); | ||
+ | } | ||
+ | } | ||
+ | |||
+ | Texture* const texture(material->getTexture(*textureUnit)); | ||
+ | glBindTexture(GL_TEXTURE_2D, texture->getId()); | ||
+ | |||
+ | 	(*textureUnit)++;	// note the brackets - we want to increment the value we point at, not the pointer itself | ||
+ | } | ||
+ | |||
+ | void FixedFunctionContext::disableFixedFunctionTexturing(const unsigned int currentTextureUnit) | ||
+ | { | ||
+ | unsigned int delta(mFixedFunctionLastTexUnit); | ||
+ | 	while (delta > currentTextureUnit) { // We enabled more texture units last frame than we need now, so disable the excess | ||
+ | glClientActiveTexture(GL_TEXTURE0 + delta); | ||
+ | glDisable(GL_TEXTURE_2D); | ||
+ | glDisableClientState(GL_TEXTURE_COORD_ARRAY); | ||
+ | --delta; | ||
+ | } | ||
+ | } | ||
+ | |||
+ | Most of this should be fairly straightforward... but let's go into some of it anyway. | ||
+ | |||
+ | == Enabling/Disabling Vertex Attributes == | ||
+ | A Vertex Attribute is generally one of the following: | ||
+ | * Vertex Position | ||
+ | * Vertex Normal | ||
+ | * Vertex Colour | ||
+ | * Vertex Texture Co-ordinate | ||
+ | |||
+ | We need to enable/disable these based on what our Vertex Format is. | ||
+ | |||
+ | It's also rather costly to enable/disable these at will, so generally we'll want all our meshes to correspond to a set standard. Therefore, the enable/disableFixedFunctionVertex* functions affect your global rendering state, and are public to be accessed when needed. | ||
+ | |||
+ | Additionally, enabling and disabling Texture Units is just as costly, so we have to store our own state here too. | ||
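+ | To make that concrete, here's a rough usage sketch - not engine code, and 'running', 'meshes' and 'meshCount' are just placeholders - showing the pattern the example program at the end of this chapter follows: flip the client states once, then draw away. | ||
+ |  // Every mesh in this hypothetical scene shares the same vertex format: positions + colours. | ||
+ |  fixedContext->enableFixedFunctionVertexPositions(); | ||
+ |  fixedContext->enableFixedFunctionVertexColours(); | ||
+ | |||
+ |  while (running) { | ||
+ |  	graphicsSystem->beginFrame(); | ||
+ |  	for (unsigned int index(0U); index < meshCount; ++index) | ||
+ |  		graphicsSystem->drawMesh(meshes[index]); | ||
+ |  	graphicsSystem->endFrame(); | ||
+ |  } | ||
+ | |||
+ |  // Only flip them back off if the vertex format genuinely changes. | ||
+ |  fixedContext->disableFixedFunctionVertexColours(); | ||
+ |  fixedContext->disableFixedFunctionVertexPositions(); | ||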
+ | |||
+ | == Rendering via Vertex Arrays == | ||
+ | The real meat of the class is the draw function. | ||
+ | |||
+ | It's not really that scary when you sit down and pick it apart, though! Really!<br /> | ||
+ | Most of it is down to my pedantry in ensuring I can support as wide a range of vertex formats as possible. | ||
+ | |||
+ | So let's take one strip out - in particular, the stuff we need to draw our Mesh described in the previous Chapter: | ||
+ | void FixedFunctionContext::drawMeshFixedFunction(Mesh* const mesh) | ||
+ | { | ||
+ | const IndexBuffer* const indexBuffer(mesh->getIndexBuffer()); | ||
+ | const VertexBuffer* const vertexBuffer(mesh->getVertexBuffer()); | ||
+ | const Material* const material(mesh->getMaterial()); | ||
+ | unsigned int currentTextureUnit(0U); | ||
+ | |||
+ | const std::vector<VertexBuffer::Format>& meshFormat(vertexBuffer->getFormat()); | ||
+ | for (std::vector<VertexBuffer::Format>::const_iterator itr(meshFormat.begin()); itr < meshFormat.end(); ++itr) { | ||
+ | switch (itr->getType()) { | ||
+ | // Position | ||
+ | case VertexBuffer::FORMAT_POSITION_3F: | ||
+ | glVertexPointer(3, GL_FLOAT, 0, vertexBuffer->getData() + itr->getOffset()); | ||
+ | break; | ||
+ | case VertexBuffer::FORMAT_COLOUR_3F: | ||
+ | glColorPointer(3, GL_FLOAT, 0, vertexBuffer->getData() + itr->getOffset()); | ||
+ | break; | ||
+ | default: | ||
+ | break; | ||
+ | }; | ||
+ | } | ||
+ | |||
+ | So, a bit more concise and easier to read now!<br /> | ||
+ | We don't have any normal data, so we don't set a normal pointer. We do have positional data, represented by 3 floats, so we point glVertexPointer at the current format entry's offset into the Vertex Buffer's data pointer - which is our big fat interleaved array of goodies. | ||
+ | |||
+ | The iterator itself is running through the Vertex Buffer's format array which we described and set up in the last guide, in the newSprite function. It works through each Format descriptor, finds a match in the switch, and acts appropriately - setting glVertexPointer, glColorPointer, and so on. | ||
+ | |||
+ | disableFixedFunctionTexturing(currentTextureUnit); // Disable any excess texture units | ||
+ | switch (indexBuffer->getFormat()) { | ||
+ | case IndexBuffer::FORMAT_UNSIGNED_BYTE: | ||
+ | glDrawElements(GL_TRIANGLES, indexBuffer->getSize(), GL_UNSIGNED_BYTE, indexBuffer->getData()); | ||
+ | break; | ||
+ | default: | ||
+ | break; | ||
+ | }; | ||
+ | mFixedFunctionLastTexUnit = currentTextureUnit; | ||
+ | } | ||
+ | |||
+ | Although we don't deal with texturing yet, I've put the code in to handle most of it already. In this case, we're seeing if we need to enable/disable more texture units than we had the previous frame. Pretty straightforward stuff... leaving texture units enabled when you don't need them can cause oddness, for example, and obviously not enabling enough isn't going to help much either. | ||
+ | |||
+ | Anyway, we get on to running through our Index Buffer to see what type it is, and then call glDrawElements specifying the Mesh as a set of triangles, how many indices to draw, the type, and where the data resides. This then draws all the Attributes we've defined based upon the indices specified in our Index buffer. Pretty nifty, huh? We don't need to know or care what comes in any more as long as it's been wrapped properly in a Mesh object. | ||
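+ | Before moving on, it might help to see how those six indices actually pair up into triangles. Here's a tiny stand-alone snippet - purely illustrative, it just prints what glDrawElements would walk through - using the same index data our quad uses: | ||
+ |  #include <cstdio> | ||
+ |  | ||
+ |  int main() | ||
+ |  { | ||
+ |  	// The same indices our quad will use: two triangles sharing the 0-2 edge. | ||
+ |  	unsigned char indexData[6] = { 0, 1, 2, 2, 3, 0 }; | ||
+ |  | ||
+ |  	// GL_TRIANGLES consumes indices three at a time; each index looks a vertex up in the Vertex Buffer. | ||
+ |  	for (unsigned int index(0U); index < 6U; index += 3U) | ||
+ |  		std::printf("Triangle from vertices %d, %d and %d\n", indexData[index], indexData[index + 1U], indexData[index + 2U]); | ||
+ |  | ||
+ |  	return 0; | ||
+ |  } | ||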
+ | |||
+ | == Multi-Coloured Quad Goodness == | ||
+ | Of course, we're still missing stuff - the transform matrices, for example - but we'll get there in due course. We can now display our quad on the screen with the following example program. | ||
+ | #include <cstdio> | ||
+ | #include <cstdlib> | ||
+ | |||
+ | #include "../../Graphics/GraphicsSystem.h" | ||
+ | #include "../../Graphics/Context/FixedFunctionContext.h" | ||
+ | #include "../../Events/EventSystem.h" | ||
+ | #include "../../Input/InputSystem.h" | ||
+ | #include "../../Input/Keyboard.h" | ||
+ | #include "../../Input/Pad.h" | ||
+ | |||
+ | #include "../../Graphics/Mesh.h" | ||
+ | #include "../../Graphics/VertexBuffer.h" | ||
+ | #include "../../Graphics/IndexBuffer.h" | ||
+ | #include "../../Graphics/Material.h" | ||
+ | #include "../../Maths/Matrix4.h" | ||
+ | |||
+ | using namespace GLESGAE; | ||
+ | |||
+ | void updateRender(Mesh* const mesh); | ||
+ | Mesh* makeSprite(); | ||
+ | |||
+ | int main(void) | ||
+ | { | ||
+ | EventSystem* eventSystem(new EventSystem); | ||
+ | InputSystem* inputSystem(new InputSystem(eventSystem)); | ||
+ | GraphicsSystem* graphicsSystem(new GraphicsSystem(GraphicsSystem::FIXED_FUNCTION_RENDERING)); | ||
+ | |||
+ | if (false == graphicsSystem->initialise("GLESGAE Fixed Function Test", 800, 480, 16, false)) { | ||
+ | //TODO: OH NOES! WE'VE DIEDED! | ||
+ | return -1; | ||
+ | } | ||
+ | |||
+ | Mesh* mesh(makeSprite()); | ||
+ | |||
+ | FixedFunctionContext* const fixedContext(graphicsSystem->getFixedContext()); | ||
+ | if (0 != fixedContext) { | ||
+ | fixedContext->enableFixedFunctionVertexPositions(); | ||
+ | fixedContext->enableFixedFunctionVertexColours(); | ||
+ | } | ||
+ | |||
+ | eventSystem->bindToWindow(graphicsSystem->getWindow()); | ||
+ | |||
+ | Controller::KeyboardController* myKeyboard(inputSystem->newKeyboard()); | ||
+ | |||
+ | while(false == myKeyboard->getKey(Controller::KEY_ESCAPE)) { | ||
+ | eventSystem->update(); | ||
+ | inputSystem->update(); | ||
+ | graphicsSystem->beginFrame(); | ||
+ | graphicsSystem->drawMesh(mesh); | ||
+ | graphicsSystem->endFrame(); | ||
+ | } | ||
+ | |||
+ | delete graphicsSystem; | ||
+ | delete inputSystem; | ||
+ | delete eventSystem; | ||
+ | delete mesh; | ||
+ | |||
+ | return 0; | ||
+ | } | ||
+ | |||
+ | Mesh* makeSprite() | ||
+ | { | ||
+ | float vertexData[24] = {// Position - 12 floats | ||
+ | -1.0F, 1.0F, 0.0F, | ||
+ | 1.0F, 1.0F, 0.0F, | ||
+ | 1.0F, -1.0F, 0.0F, | ||
+ | -1.0F, -1.0F, 0.0F, | ||
+ | // Colour - 12 floats | ||
+ | 0.0F, 1.0F, 0.0F, | ||
+ | 1.0F, 0.0F, 0.0F, | ||
+ | 0.0F, 0.0F, 1.0F, | ||
+ | 1.0F, 1.0F, 1.0F}; | ||
+ | |||
+ | unsigned int vertexSize = 24 * sizeof(float); | ||
+ | |||
+ | unsigned char indexData[6] = { 0, 1, 2, 2, 3, 0 }; | ||
+ | unsigned int indexSize = 6 * sizeof(unsigned char); | ||
+ | |||
+ | VertexBuffer* newVertexBuffer = new VertexBuffer(reinterpret_cast<unsigned char*>(&vertexData), vertexSize); | ||
+ | newVertexBuffer->addFormatIdentifier(VertexBuffer::FORMAT_POSITION_3F, 4U); | ||
+ | newVertexBuffer->addFormatIdentifier(VertexBuffer::FORMAT_COLOUR_3F, 4U); | ||
+ | IndexBuffer* newIndexBuffer = new IndexBuffer(reinterpret_cast<unsigned char*>(&indexData), indexSize, IndexBuffer::FORMAT_UNSIGNED_BYTE); | ||
+ | Material* newMaterial = new Material; | ||
+ | Matrix4* newTransform = new Matrix4; | ||
+ | |||
+ | return new Mesh(newVertexBuffer, newIndexBuffer, newMaterial, newTransform); | ||
+ | } | ||
+ | |||
+ | == Building the Example == | ||
+ | In the SVN there are Makefiles already set up for you... just trigger '''''make -f MakefileES1.pandora''''' or whatever your chosen configuration is, and it'll happily build and spit out a '''GLESGAE.pandora''' binary for you to run. | ||
+ | |||
+ | Alternatively, if you use CodeLite, there's a preconfigured Workspace/Project already set up for you. | ||
+ | |||
+ | = Next Time = | ||
+ | Let's render the same thing again, but through a Shader based system. | ||
+ | |||
+ | |||
+ | = GLESGAE - The Shader Based Context = | ||
+ | |||
+ | == Fast Track == | ||
+ | We're on SVN revision 4 now, which includes all the rendering stuff, and the mesh code from previous guides. | ||
+ | '''''svn co -r 4 http://svn3.xp-dev.com/svn/glesgae/trunk/ glesgae''''' | ||
+ | |||
+ | == Introduction == | ||
+ | This was much bigger than it is now... as I originally jumped from Events to Shader Based Rendering due to there already being some Fixed Function stuff on the Wiki. However, I also needed to create a Mesh object and some foundation, so I split things up and did everything else first. | ||
+ | |||
+ | Therefore, it is highly advisable to have at least read [[GLESGAE:Making a Mesh]] before continuing. Reading [[GLESGAE:Fixed Function Rendering Contexts]] wouldn't kill you either. | ||
+ | |||
+ | == Platform Specific Madness == | ||
+ | As an aside, this is where platform specifics can bite your bum.<br /> | ||
+ | On my netbook with Intel GMA 950, my default gl.h does not include shader support, so I had to pull in GLee instead, which rips it out from the driver itself, and gives you handy checking abilities to ensure you're not trying to use something which isn't there.<br /> | ||
+ | Technically, GLee and GLEW do the same thing, so it's a matter of taste as to which one you use really, but they're immensely handy for when you can't be bothered doing all the actual extension-o-rama with OpenGL yourself. | ||
+ | |||
+ | You can get GLee from: http://elf-stone.com/glee.php | ||
+ | |||
+ | = The Shader = | ||
+ | In a shader-based rendering system, the shader is perhaps one of the most important parts of the renderer - if not the most important.<br /> | ||
+ | What do I mean by this? Well, your shader will be what defines what gets sent to the GPU - vertex positions, colours, texture co-ordinates, etc...<br /> | ||
+ | If you send something the shader isn't expecting, it can cause OpenGL to crash viciously.<br /> | ||
+ | Therefore, it's of high importance to get the whole concept of a shader completely defined from the start - what it does, and how you use them. | ||
+ | |||
+ | == Attributes and Uniforms == | ||
+ | A shader comprises two main types of modifiable data that we can send to it from the engine - attributes and uniforms.<br /> | ||
+ | Attributes are used only in the Vertex Shader; anything you want to pass on to the Fragment Shader gets written out as a Varying.<br /> | ||
+ | |||
+ | Attributes are your vertex descriptors as previously mentioned - position, colour, co-ordinates, etc... - whereas uniforms are any other piece of data that you want your shader to use; time, for example. | ||
+ | |||
+ | Now, obviously we should know what our shader is expecting as we wrote it, but it's much more convenient to pull this information back out of the shader as we load it up.<br /> | ||
+ | This allows us to just push any old shader in, and our rendering context will configure itself accordingly.<br /> | ||
+ | Of course, we still have to send the data over, and if we don't feed our shader what it needs, it can cause rather bizarre effects - or worse, crashes. | ||
+ | |||
+ | Before we get that far though, we need to load the shader up. | ||
+ | |||
+ | == Writing our First Shaders == | ||
+ | In the GL ES variant of GLSL you're allowed vertex and fragment shaders. When combined, this creates a shader program.<br /> | ||
+ | Vertex Shaders deal with manipulating geometry; pushing vertices around on the screen.<br /> | ||
+ | Fragment Shaders deal with manipulating fragments - or pixels; such as texturing or colouring.<br /> | ||
+ | Vertex Shaders deal with attributes, writing results out as varyings to pass to the Fragment Shader; both shader types can also use uniforms.<br /> | ||
+ | A Shader Program can comprise many of these Vertex and Fragment shaders, but must have only one main() function for each of the Vertex and Fragment parts.<br /> | ||
+ | To make matters trickier than they need be, GLSL is not file-based like HLSL or CG, so you need to manage common functionality yourself rather than define a set of common functions in one shader file and include it where needed.<br /> | ||
+ | Believe me, this is an arse to deal with properly and you really need to come up with some form of effects descriptor to handle it correctly... something beyond our scope for the moment. | ||
+ | |||
+ | Anyway, once we load up a Vertex and Fragment shader, we link them together to form a Shader Program. We can then pull out all the attributes and uniforms which have not been optimised out.<br /> | ||
+ | Read that last line again, as it's rather important. The GLSL shader compiler will automatically optimise out attributes and uniforms which are not used, so although you may define them, and expect them to be there; if they've not been used, they won't be!<br /> | ||
+ | This is why I much prefer to check the Shader Program to see what it has, rather than assuming what I'm going to get. | ||
+ | |||
+ | It's easier to understand what's going on with an example: | ||
+ | === A Simple Vertex Shader === | ||
+ | attribute vec4 a_position; | ||
+ | attribute vec4 a_colour; | ||
+ | |||
+ | varying vec4 v_colour; | ||
+ | |||
+ | void main() { | ||
+ | gl_Position = a_position; | ||
+ | v_colour = a_colour; | ||
+ | } | ||
+ | |||
+ | This shader expects two Attributes - a_position and a_colour. This means it expects a Vertex Position and a Vertex Colour for every vertex it receives.<br /> | ||
+ | It also expects them as vec4s - vectors of four floats. Simple enough?<br /> | ||
+ | We don't do anything to the position, and just pass it directly to OpenGL via the special gl_Position variable.<br /> | ||
+ | We also don't do anything to the colour, but we do want to pass it along to the Fragment Shader, so we create a varying and assign our attribute to it. | ||
+ | |||
+ | === A Simple Fragment Shader === | ||
+ | varying vec4 v_colour; | ||
+ | uniform sampler2D s_texture; | ||
+ | uniform float u_random; | ||
+ | |||
+ | void main() | ||
+ | { | ||
+ | gl_FragColor = v_colour; | ||
+ | } | ||
+ | |||
+ | Our Fragment Shader is even simpler: we just set our special gl_FragColor variable to the varying that has come from the Vertex Shader, and we're done!<br /> | ||
+ | I've also included two uniforms in here, a standard float - u_random, and a texture sampler - s_texture. These don't get used, so will be optimised out by the compiler when it links these two shaders together. | ||
+ | |||
+ | A quick gotcha is that of precision... some GL ES implementations (the Pandora's, for instance) require a default precision to be set, while others have one pre-defined. Desktop GLSL doesn't support default precision either, so you're left with the fun task of marking every sodding thing with lowp, mediump and highp if you run into it... or adding it into your shader compile chain (which we will do later; I've done a trick in the example instead.) | ||
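+ | As a hint of what that compile-chain fix might look like - purely a sketch, the engine doesn't do this yet and patchFragmentSource is a made-up name - you could prepend a default precision line to GL ES fragment sources before they reach the compiler: | ||
+ |  #include <string> | ||
+ |  | ||
+ |  // Hypothetical helper: give GL ES fragment shaders a default precision, | ||
+ |  // and leave desktop GLSL sources untouched. | ||
+ |  std::string patchFragmentSource(const std::string& source) | ||
+ |  { | ||
+ |  #if defined(GLES2) | ||
+ |  	return std::string("precision mediump float;\n") + source; | ||
+ |  #else | ||
+ |  	return source; | ||
+ |  #endif | ||
+ |  } | ||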
+ | |||
+ | That wasn't so hard was it? | ||
+ | |||
+ | == The Shader Loader == | ||
+ | We have our two shaders, now we need to compile them and make them do something. | ||
+ | |||
+ | For this, we shall create a new class so we can store all our fun Shader uniforms and attributes. We shall call this Shader, due to lack of imagination ( you can call it Bob, if you prefer. ) | ||
+ | === Shader.h === | ||
+ | #ifndef _SHADER_H_ | ||
+ | #define _SHADER_H_ | ||
+ | |||
+ | #include <string> | ||
+ | #include <vector> | ||
+ | |||
+ | #if defined(GLX) | ||
+ | #include "GLee.h" | ||
+ | #elif defined(PANDORA) | ||
+ | #include <GLES2/gl2.h> | ||
+ | #endif | ||
+ | |||
+ | namespace GLESGAE | ||
+ | { | ||
+ | class Shader | ||
+ | { | ||
+ | public: | ||
+ | Shader(); | ||
+ | ~Shader(); | ||
+ | |||
+ | /// Create a shader from source | ||
+ | void createFromSource(const std::string& vertex, const std::string& fragment); | ||
+ | |||
+ | /// Create a shader from file | ||
+ | void createFromFile(const std::string& vertex, const std::string& fragment); | ||
+ | |||
+ | /// Get Attribute Location | ||
+ | const GLint getAttribute(const std::string& attribute) const; | ||
+ | |||
+ | /// Get Uniform Location | ||
+ | const GLint getUniform(const std::string& uniform) const; | ||
+ | |||
+ | /// Get Program Id | ||
+ | const GLuint getProgramId() const { return mShaderProgram; } | ||
+ | |||
+ | protected: | ||
+ | /// Actually load and compile the shader source | ||
+ | const GLuint loadShader(const std::string& shader, const GLenum type); | ||
+ | |||
+ | /// Clear out any shader stuff we may currently have - useful for forcing a recompile of the shader | ||
+ | void resetShader(); | ||
+ | |||
+ | private: | ||
+ | std::vector<std::pair<std::string, GLint> > mUniforms; | ||
+ | std::vector<std::pair<std::string, GLint> > mAttributes; | ||
+ | GLuint mVertexShader; | ||
+ | GLuint mFragmentShader; | ||
+ | GLuint mShaderProgram; | ||
+ | }; | ||
+ | } | ||
+ | |||
+ | #endif | ||
+ | |||
+ | Pretty straightforward, really.<br /> | ||
+ | While Desktops can have Geometry shaders as well, we're aiming for compatibility with GL ES for the most part, which does not have Geometry shaders... so we'll only deal with Vertex and Fragment shaders.<br /> | ||
+ | We also store two arrays - one of uniforms and another of attributes - so we know which location each is stored at when we come to bind and use this shader. | ||
+ | |||
+ | === Shader.cpp === | ||
+ | #include "Shader.h" | ||
+ | |||
+ | using namespace GLESGAE; | ||
+ | |||
+ | Shader::Shader() | ||
+ | : mUniforms() | ||
+ | , mAttributes() | ||
+ | , mVertexShader(GL_INVALID_VALUE) | ||
+ | , mFragmentShader(GL_INVALID_VALUE) | ||
+ | , mShaderProgram(GL_INVALID_VALUE) | ||
+ | { | ||
+ | |||
+ | } | ||
+ | |||
+ | Shader::~Shader() | ||
+ | { | ||
+ | resetShader(); | ||
+ | } | ||
+ | |||
+ | void Shader::createFromFile(const std::string& vertex, const std::string& fragment) | ||
+ | { | ||
+ | // TODO: implement | ||
+ | } | ||
+ | |||
+ | void Shader::createFromSource(const std::string& vertex, const std::string& fragment) | ||
+ | { | ||
+ | mVertexShader = loadShader(vertex, GL_VERTEX_SHADER); | ||
+ | mFragmentShader = loadShader(fragment, GL_FRAGMENT_SHADER); | ||
+ | |||
+ | if ((GL_INVALID_VALUE == mVertexShader) || (GL_INVALID_VALUE == mFragmentShader)) { | ||
+ | // TODO: check that either shader has come back as GL_INVALID_VALUE and scream! | ||
+ | return; | ||
+ | } | ||
+ | |||
+ | mShaderProgram = glCreateProgram(); | ||
+ | glAttachShader(mShaderProgram, mVertexShader); | ||
+ | glAttachShader(mShaderProgram, mFragmentShader); | ||
+ | glLinkProgram(mShaderProgram); | ||
+ | |||
+ | GLint isLinked; | ||
+ | glGetProgramiv(mShaderProgram, GL_LINK_STATUS, &isLinked); | ||
+ | if (false == isLinked) { | ||
+ | GLint infoLen(0U); | ||
+ | |||
+ | glGetProgramiv(mShaderProgram, GL_INFO_LOG_LENGTH, &infoLen); | ||
+ | if (infoLen > 1) { | ||
+ | char* infoLog(new char[infoLen]); | ||
+ | 			glGetProgramInfoLog(mShaderProgram, infoLen, NULL, infoLog);	// a program object needs glGetProgramInfoLog, not glGetShaderInfoLog | ||
+ | // TODO: something bad happened.. print the infolog and die. | ||
+ | delete [] infoLog; | ||
+ | } | ||
+ | |||
+ | // TODO: Die.. something bad has happened | ||
+ | resetShader(); | ||
+ | return; | ||
+ | } | ||
+ | |||
+ | { // Find all Uniforms. | ||
+ | GLint numUniforms; | ||
+ | GLint maxUniformLen; | ||
+ | glGetProgramiv(mShaderProgram, GL_ACTIVE_UNIFORMS, &numUniforms); | ||
+ | glGetProgramiv(mShaderProgram, GL_ACTIVE_UNIFORM_MAX_LENGTH, &maxUniformLen); | ||
+ | char* uniformName(new char[maxUniformLen]); | ||
+ | |||
+ | for (GLint index(0); index < numUniforms; ++index) { | ||
+ | GLint size; | ||
+ | GLenum type; | ||
+ | GLint location; | ||
+ | |||
+ | glGetActiveUniform(mShaderProgram, index, maxUniformLen, NULL, &size, &type, uniformName); | ||
+ | location = glGetUniformLocation(mShaderProgram, uniformName); | ||
+ | |||
+ | std::pair<std::string, GLint> parameter; | ||
+ | parameter.first = std::string(uniformName); | ||
+ | parameter.second = location; | ||
+ | mUniforms.push_back(parameter); | ||
+ | } | ||
+ | |||
+ | delete [] uniformName; | ||
+ | } | ||
+ | |||
+ | { // Find all Attributes | ||
+ | GLint numAttributes; | ||
+ | GLint maxAttributeLen; | ||
+ | glGetProgramiv(mShaderProgram, GL_ACTIVE_ATTRIBUTES, &numAttributes); | ||
+ | glGetProgramiv(mShaderProgram, GL_ACTIVE_ATTRIBUTE_MAX_LENGTH, &maxAttributeLen); | ||
+ | char* attributeName(new char[maxAttributeLen]); | ||
+ | |||
+ | for (GLint index(0); index < numAttributes; ++index) { | ||
+ | GLint size; | ||
+ | GLenum type; | ||
+ | GLint location; | ||
+ | |||
+ | glGetActiveAttrib(mShaderProgram, index, maxAttributeLen, NULL, &size, &type, attributeName); | ||
+ | location = glGetAttribLocation(mShaderProgram, attributeName); | ||
+ | |||
+ | std::pair<std::string, GLint> parameter; | ||
+ | parameter.first = std::string(attributeName); | ||
+ | parameter.second = location; | ||
+ | mAttributes.push_back(parameter); | ||
+ | } | ||
+ | |||
+ | delete [] attributeName; | ||
+ | } | ||
+ | } | ||
+ | |||
+ | const GLuint Shader::loadShader(const std::string& shader, const GLenum type) | ||
+ | { | ||
+ | GLuint newShader(glCreateShader(type)); | ||
+ | |||
+ | const char* shaderSource(shader.c_str()); | ||
+ | glShaderSource(newShader, 1, &shaderSource, NULL); | ||
+ | glCompileShader(newShader); | ||
+ | |||
+ | GLint isCompiled; | ||
+ | glGetShaderiv(newShader, GL_COMPILE_STATUS, &isCompiled); | ||
+ | if (!isCompiled) { | ||
+ | GLint infoLen(0); | ||
+ | glGetShaderiv(newShader, GL_INFO_LOG_LENGTH, &infoLen); | ||
+ | if (infoLen > 1) { | ||
+ | char* infoLog(new char[infoLen]); | ||
+ | glGetShaderInfoLog(newShader, infoLen, NULL, infoLog); | ||
+ | delete [] infoLog; | ||
+ | } | ||
+ | |||
+ | glDeleteShader(newShader); | ||
+ | // TODO: catch this... this is bad | ||
+ | return GL_INVALID_VALUE; | ||
+ | } | ||
+ | |||
+ | return newShader; | ||
+ | } | ||
+ | |||
+ | const GLint Shader::getAttribute(const std::string& attribute) const | ||
+ | { | ||
+ | for (std::vector<std::pair<std::string, GLint> >::const_iterator itr(mAttributes.begin()); itr < mAttributes.end(); ++itr) { | ||
+ | if (attribute == itr->first) | ||
+ | return itr->second; | ||
+ | } | ||
+ | |||
+ | // TODO: catch this! obviously this is bad! | ||
+ | return GL_INVALID_VALUE; | ||
+ | } | ||
+ | |||
+ | const GLint Shader::getUniform(const std::string& uniform) const | ||
+ | { | ||
+ | for (std::vector<std::pair<std::string, GLint> >::const_iterator itr(mUniforms.begin()); itr < mUniforms.end(); ++itr) { | ||
+ | if (uniform == itr->first) | ||
+ | return itr->second; | ||
+ | } | ||
+ | |||
+ | // TODO: catch this! obviously this is bad! | ||
+ | return GL_INVALID_VALUE; | ||
+ | } | ||
+ | |||
+ | void Shader::resetShader() | ||
+ | { | ||
+ | if (GL_INVALID_VALUE != mVertexShader) { | ||
+ | glDetachShader(mShaderProgram, mVertexShader); | ||
+ | glDeleteShader(mVertexShader); | ||
+ | } | ||
+ | if (GL_INVALID_VALUE != mFragmentShader) { | ||
+ | glDetachShader(mShaderProgram, mFragmentShader); | ||
+ | glDeleteShader(mFragmentShader); | ||
+ | } | ||
+ | if (GL_INVALID_VALUE != mShaderProgram) | ||
+ | glDeleteProgram(mShaderProgram); | ||
+ | |||
+ | mUniforms.clear(); | ||
+ | mAttributes.clear(); | ||
+ | |||
+ | mVertexShader = GL_INVALID_VALUE; | ||
+ | mFragmentShader = GL_INVALID_VALUE; | ||
+ | mShaderProgram = GL_INVALID_VALUE; | ||
+ | } | ||
+ | |||
+ | Lots of code, but again, it's pretty straightforward... I'd highly recommend the OpenGL ES 2.0 Programming Guide if you want more detail. It should be by your side while doing any ES work anyway, seriously! | ||
+ | |||
+ | = Shader Context Additions = | ||
+ | Now we have a Shader object, we need to put the infrastructure in to actually use it. This means ShaderBasedContext is getting updated a bit. | ||
+ | == ShaderBasedContext.h == | ||
+ | #ifndef _SHADER_BASED_CONTEXT_H_ | ||
+ | #define _SHADER_BASED_CONTEXT_H_ | ||
+ | |||
+ | #if defined(GLX) | ||
+ | #include "../GLee.h" | ||
+ | #elif defined(PANDORA) | ||
+ | #include <GLES2/gl2.h> | ||
+ | #endif | ||
+ | |||
+ | namespace GLESGAE | ||
+ | { | ||
+ | class Mesh; | ||
+ | class Shader; | ||
+ | class ShaderBasedContext | ||
+ | { | ||
+ | public: | ||
+ | ShaderBasedContext(); | ||
+ | virtual ~ShaderBasedContext(); | ||
+ | |||
+ | protected: | ||
+ | /// Reset all attribute location links | ||
+ | void resetAttributes(); | ||
+ | |||
+ | /// Draw a Mesh using the Shader Based Pipeline | ||
+ | void drawMeshShaderBased(Mesh* const mesh); | ||
+ | |||
+ | /// Bind a shader | ||
+ | void bindShader(const Shader* const shader); | ||
+ | |||
+ | private: | ||
+ | const Shader* mCurrentShader; | ||
+ | GLuint a_position; | ||
+ | GLuint a_colour; | ||
+ | GLuint a_normal; | ||
+ | GLuint a_texCoord0; | ||
+ | GLuint a_texCoord1; | ||
+ | GLuint a_custom0; | ||
+ | GLuint a_custom1; | ||
+ | GLuint a_custom2; | ||
+ | }; | ||
+ | |||
+ | } | ||
+ | |||
+ | #endif | ||
+ | |||
+ | Not much to this, really...<br /> | ||
+ | One oddity is that I'm caching the default set of attributes - a_position, a_colour, a_normal, a_texCoord0, a_texCoord1, a_custom0, a_custom1 and a_custom2. This is so we don't need to re-query them for every object if they share the same shader. | ||
+ | |||
+ | To make it more generic, you could just store an array of GLuints instead, and run through them in order. I shall leave that as a reader exercise for the moment. | ||
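+ | For the curious, here's a rough sketch of the shape that exercise could take - this is not what the engine currently does, and mAttributeLocations would be a new GLuint[8] member replacing a_position and friends: | ||
+ |  // Sketch only: the attribute names we care about, in a fixed order. | ||
+ |  static const char* const s_attributeNames[8] = { | ||
+ |  	"a_position", "a_colour", "a_normal", "a_texCoord0", | ||
+ |  	"a_texCoord1", "a_custom0", "a_custom1", "a_custom2" | ||
+ |  }; | ||
+ |  | ||
+ |  void ShaderBasedContext::resetAttributes() | ||
+ |  { | ||
+ |  	for (unsigned int index(0U); index < 8U; ++index) { | ||
+ |  		mAttributeLocations[index] = GL_INVALID_VALUE; | ||
+ |  		if (0 != mCurrentShader) | ||
+ |  			mAttributeLocations[index] = mCurrentShader->getAttribute(s_attributeNames[index]); | ||
+ |  		if (GL_INVALID_VALUE != mAttributeLocations[index]) | ||
+ |  			glEnableVertexAttribArray(mAttributeLocations[index]); | ||
+ |  	} | ||
+ |  } | ||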
+ | |||
+ | == ShaderBasedContext.cpp == | ||
+ | #include "ShaderBasedContext.h" | ||
+ | |||
+ | #include "../IndexBuffer.h" | ||
+ | #include "../Material.h" | ||
+ | #include "../Mesh.h" | ||
+ | #include "../Texture.h" | ||
+ | #include "../VertexBuffer.h" | ||
+ | #include "../Shader.h" | ||
+ | |||
+ | #include <cstdio> | ||
+ | |||
+ | using namespace GLESGAE; | ||
+ | |||
+ | ShaderBasedContext::ShaderBasedContext() | ||
+ | : mCurrentShader(0) | ||
+ | , a_position(GL_INVALID_VALUE) | ||
+ | , a_colour(GL_INVALID_VALUE) | ||
+ | , a_normal(GL_INVALID_VALUE) | ||
+ | , a_texCoord0(GL_INVALID_VALUE) | ||
+ | , a_texCoord1(GL_INVALID_VALUE) | ||
+ | , a_custom0(GL_INVALID_VALUE) | ||
+ | , a_custom1(GL_INVALID_VALUE) | ||
+ | , a_custom2(GL_INVALID_VALUE) | ||
+ | { | ||
+ | |||
+ | } | ||
+ | |||
+ | ShaderBasedContext::~ShaderBasedContext() | ||
+ | { | ||
+ | mCurrentShader = 0; | ||
+ | } | ||
+ | |||
+ | void ShaderBasedContext::bindShader(const Shader* const shader) | ||
+ | { | ||
+ | if (mCurrentShader != shader) { | ||
+ | mCurrentShader = shader; | ||
+ | glUseProgram(shader->getProgramId()); | ||
+ | resetAttributes(); | ||
+ | } | ||
+ | } | ||
+ | |||
+ | void ShaderBasedContext::resetAttributes() | ||
+ | { | ||
+ | a_position = GL_INVALID_VALUE; | ||
+ | a_colour = GL_INVALID_VALUE; | ||
+ | a_normal = GL_INVALID_VALUE; | ||
+ | a_texCoord0 = GL_INVALID_VALUE; | ||
+ | a_texCoord1 = GL_INVALID_VALUE; | ||
+ | a_custom0 = GL_INVALID_VALUE; | ||
+ | a_custom1 = GL_INVALID_VALUE; | ||
+ | a_custom2 = GL_INVALID_VALUE; | ||
+ | |||
+ | if (0 != mCurrentShader) { | ||
+ | a_position = mCurrentShader->getAttribute("a_position"); | ||
+ | a_colour = mCurrentShader->getAttribute("a_colour"); | ||
+ | a_normal = mCurrentShader->getAttribute("a_normal"); | ||
+ | a_texCoord0 = mCurrentShader->getAttribute("a_texCoord0"); | ||
+ | a_texCoord1 = mCurrentShader->getAttribute("a_texCoord1"); | ||
+ | a_custom0 = mCurrentShader->getAttribute("a_custom0"); | ||
+ | a_custom1 = mCurrentShader->getAttribute("a_custom1"); | ||
+ | a_custom2 = mCurrentShader->getAttribute("a_custom2"); | ||
+ | } | ||
+ | |||
+ | if (GL_INVALID_VALUE != a_position) | ||
+ | glEnableVertexAttribArray(a_position); | ||
+ | if (GL_INVALID_VALUE != a_colour) | ||
+ | glEnableVertexAttribArray(a_colour); | ||
+ | if (GL_INVALID_VALUE != a_normal) | ||
+ | glEnableVertexAttribArray(a_normal); | ||
+ | if (GL_INVALID_VALUE != a_texCoord0) | ||
+ | glEnableVertexAttribArray(a_texCoord0); | ||
+ | if (GL_INVALID_VALUE != a_texCoord1) | ||
+ | glEnableVertexAttribArray(a_texCoord1); | ||
+ | if (GL_INVALID_VALUE != a_custom0) | ||
+ | glEnableVertexAttribArray(a_custom0); | ||
+ | if (GL_INVALID_VALUE != a_custom1) | ||
+ | glEnableVertexAttribArray(a_custom1); | ||
+ | if (GL_INVALID_VALUE != a_custom2) | ||
+ | glEnableVertexAttribArray(a_custom2); | ||
+ | } | ||
+ | |||
+ | void ShaderBasedContext::drawMeshShaderBased(Mesh* const mesh) | ||
+ | { | ||
+ | const IndexBuffer* const indexBuffer(mesh->getIndexBuffer()); | ||
+ | const VertexBuffer* const vertexBuffer(mesh->getVertexBuffer()); | ||
+ | const Material* const material(mesh->getMaterial()); | ||
+ | unsigned int currentTextureUnit(0U); | ||
+ | |||
+ | bindShader(material->getShader()); | ||
+ | |||
+ | const std::vector<VertexBuffer::Format>& meshFormat(vertexBuffer->getFormat()); | ||
+ | for (std::vector<VertexBuffer::Format>::const_iterator itr(meshFormat.begin()); itr < meshFormat.end(); ++itr) { | ||
+ | switch (itr->getType()) { | ||
+ | // Position | ||
+ | case VertexBuffer::FORMAT_POSITION_2F: | ||
+ | glVertexAttribPointer(a_position, 2, GL_FLOAT, GL_FALSE, 0, vertexBuffer->getData() + itr->getOffset()); | ||
+ | break; | ||
+ | case VertexBuffer::FORMAT_POSITION_3F: | ||
+ | glVertexAttribPointer(a_position, 3, GL_FLOAT, GL_FALSE, 0, vertexBuffer->getData() + itr->getOffset()); | ||
+ | break; | ||
+ | case VertexBuffer::FORMAT_POSITION_4F: | ||
+ | glVertexAttribPointer(a_position, 4, GL_FLOAT, GL_FALSE, 0, vertexBuffer->getData() + itr->getOffset()); | ||
+ | break; | ||
+ | case VertexBuffer::FORMAT_POSITION_2S: | ||
+ | glVertexAttribPointer(a_position, 2, GL_SHORT, GL_FALSE, 0, vertexBuffer->getData() + itr->getOffset()); | ||
+ | break; | ||
+ | case VertexBuffer::FORMAT_POSITION_3S: | ||
+ | glVertexAttribPointer(a_position, 3, GL_SHORT, GL_FALSE, 0, vertexBuffer->getData() + itr->getOffset()); | ||
+ | break; | ||
+ | case VertexBuffer::FORMAT_POSITION_4S: | ||
+ | glVertexAttribPointer(a_position, 4, GL_SHORT, GL_FALSE, 0, vertexBuffer->getData() + itr->getOffset()); | ||
+ | break; | ||
+ | case VertexBuffer::FORMAT_POSITION_2B: | ||
+ | glVertexAttribPointer(a_position, 2, GL_BYTE, GL_FALSE, 0, vertexBuffer->getData() + itr->getOffset()); | ||
+ | break; | ||
+ | case VertexBuffer::FORMAT_POSITION_3B: | ||
+ | glVertexAttribPointer(a_position, 3, GL_BYTE, GL_FALSE, 0, vertexBuffer->getData() + itr->getOffset()); | ||
+ | break; | ||
+ | case VertexBuffer::FORMAT_POSITION_4B: | ||
+ | glVertexAttribPointer(a_position, 4, GL_BYTE, GL_FALSE, 0, vertexBuffer->getData() + itr->getOffset()); | ||
+ | break; | ||
+ | // Normal | ||
+ | case VertexBuffer::FORMAT_NORMAL_3F: | ||
+ | glVertexAttribPointer(a_normal, 3, GL_FLOAT, GL_FALSE, 0, vertexBuffer->getData() + itr->getOffset()); | ||
+ | break; | ||
+ | case VertexBuffer::FORMAT_NORMAL_3S: | ||
+ | glVertexAttribPointer(a_normal, 3, GL_SHORT, GL_FALSE, 0, vertexBuffer->getData() + itr->getOffset()); | ||
+ | break; | ||
+ | case VertexBuffer::FORMAT_NORMAL_3B: | ||
+ | glVertexAttribPointer(a_normal, 3, GL_BYTE, GL_FALSE, 0, vertexBuffer->getData() + itr->getOffset()); | ||
+ | break; | ||
+ | // Colour | ||
+ | case VertexBuffer::FORMAT_COLOUR_4F: | ||
+ | glVertexAttribPointer(a_colour, 4, GL_FLOAT, GL_FALSE, 0, vertexBuffer->getData() + itr->getOffset()); | ||
+ | break; | ||
+ | case VertexBuffer::FORMAT_COLOUR_3F: | ||
+ | glVertexAttribPointer(a_colour, 3, GL_FLOAT, GL_FALSE, 0, vertexBuffer->getData() + itr->getOffset()); | ||
+ | break; | ||
+ | case VertexBuffer::FORMAT_COLOUR_4S: | ||
+ | glVertexAttribPointer(a_colour, 4, GL_SHORT, GL_FALSE, 0, vertexBuffer->getData() + itr->getOffset()); | ||
+ | break; | ||
+ | case VertexBuffer::FORMAT_COLOUR_3S: | ||
+ | glVertexAttribPointer(a_colour, 3, GL_SHORT, GL_FALSE, 0, vertexBuffer->getData() + itr->getOffset()); | ||
+ | break; | ||
+ | case VertexBuffer::FORMAT_COLOUR_4UB: | ||
+ | glVertexAttribPointer(a_colour, 4, GL_UNSIGNED_BYTE, GL_FALSE, 0, vertexBuffer->getData() + itr->getOffset()); | ||
+ | break; | ||
+ | case VertexBuffer::FORMAT_COLOUR_3UB: | ||
+ | glVertexAttribPointer(a_colour, 3, GL_UNSIGNED_BYTE, GL_FALSE, 0, vertexBuffer->getData() + itr->getOffset()); | ||
+ | break; | ||
+ | default: | ||
+ | break; | ||
+ | }; | ||
+ | } | ||
+ | |||
+ | switch (indexBuffer->getFormat()) { | ||
+ | case IndexBuffer::FORMAT_FLOAT: | ||
+ | glDrawElements(GL_TRIANGLES, indexBuffer->getSize(), GL_FLOAT, indexBuffer->getData()); | ||
+ | break; | ||
+ | case IndexBuffer::FORMAT_UNSIGNED_BYTE: | ||
+ | glDrawElements(GL_TRIANGLES, indexBuffer->getSize(), GL_UNSIGNED_BYTE, indexBuffer->getData()); | ||
+ | break; | ||
+ | case IndexBuffer::FORMAT_UNSIGNED_SHORT: | ||
+ | glDrawElements(GL_TRIANGLES, indexBuffer->getSize(), GL_UNSIGNED_SHORT, indexBuffer->getData()); | ||
+ | break; | ||
+ | default: | ||
+ | break; | ||
+ | }; | ||
+ | } | ||
+ | |||
+ | If you've come from the Fixed Function guide, the above will look rather familiar. The only real difference is that we bind each Vertex Pointer to an attribute location pulled from the Shader, but other than that, what was said there still applies here. If you haven't read it, or have forgotten, you might want to - [[GLESGAE:Fixed Function Rendering Contexts]] | ||
+ | |||
+ | One thing we don't deal with yet is texturing.<br /> | ||
+ | Texturing is a bit weird on Shader systems, so we're holding off on that just now.. but we'll sort it out soon. | ||
+ | |||
+ | = Feeding The Shader = | ||
+ | We're not going to be setting up a uniform system just yet... so our shaders are going to be rather basic for now.<br /> | ||
+ | We do have the ability to pull uniforms out of the shader and bind data to them as it stands; however, it's far better to have some sort of system that can take care of the grunt work for you.<br /> | ||
+ | We'll be looking at a uniform system soon. | ||
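+ | Just so the idea isn't completely abstract, feeding one by hand would look something like this sketch - u_random being the illustrative uniform from the fragment shader earlier, and 'shader' the Shader object bound to the current material; nothing in the engine does this yet: | ||
+ |  // After bindShader() has made the program current... | ||
+ |  const GLint randomLocation(shader->getUniform("u_random")); | ||
+ |  if (GL_INVALID_VALUE != randomLocation) | ||
+ |  	glUniform1f(randomLocation, 0.5F);	// push a single float into the shader | ||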
+ | |||
+ | = A Simple Test = | ||
+ | Our test app is getting a bit cluttered now... | ||
+ | #include <cstdio> | ||
+ | #include <cstdlib> | ||
+ | |||
+ | #include "../../Graphics/GraphicsSystem.h" | ||
+ | #include "../../Graphics/Context/FixedFunctionContext.h" | ||
+ | #include "../../Events/EventSystem.h" | ||
+ | #include "../../Input/InputSystem.h" | ||
+ | #include "../../Input/Keyboard.h" | ||
+ | #include "../../Input/Pad.h" | ||
+ | |||
+ | #include "../../Graphics/Mesh.h" | ||
+ | #include "../../Graphics/VertexBuffer.h" | ||
+ | #include "../../Graphics/IndexBuffer.h" | ||
+ | #include "../../Graphics/Material.h" | ||
+ | #include "../../Graphics/Shader.h" | ||
+ | #include "../../Maths/Matrix4.h" | ||
+ | |||
+ | using namespace GLESGAE; | ||
+ | |||
+ | void updateRender(Mesh* const mesh); | ||
+ | Mesh* makeSprite(Shader* const shader); | ||
+ | Shader* makeSpriteShader(); | ||
+ | |||
+ | int main(void) | ||
+ | { | ||
+ | EventSystem* eventSystem(new EventSystem); | ||
+ | InputSystem* inputSystem(new InputSystem(eventSystem)); | ||
+ | GraphicsSystem* graphicsSystem(new GraphicsSystem(GraphicsSystem::SHADER_BASED_RENDERING)); | ||
+ | |||
+ | if (false == graphicsSystem->initialise("GLESGAE Shader Test", 800, 480, 16, false)) { | ||
+ | //TODO: OH NOES! WE'VE DIEDED! | ||
+ | return -1; | ||
+ | } | ||
+ | |||
+ | Mesh* mesh(makeSprite(makeSpriteShader())); | ||
+ | |||
+ | eventSystem->bindToWindow(graphicsSystem->getWindow()); | ||
+ | |||
+ | Controller::KeyboardController* myKeyboard(inputSystem->newKeyboard()); | ||
+ | |||
+ | while(false == myKeyboard->getKey(Controller::KEY_ESCAPE)) { | ||
+ | eventSystem->update(); | ||
+ | inputSystem->update(); | ||
+ | graphicsSystem->beginFrame(); | ||
+ | graphicsSystem->drawMesh(mesh); | ||
+ | graphicsSystem->endFrame(); | ||
+ | } | ||
+ | |||
+ | delete graphicsSystem; | ||
+ | delete inputSystem; | ||
+ | delete eventSystem; | ||
+ | delete mesh; | ||
+ | |||
+ | return 0; | ||
+ | } | ||
+ | |||
+ | Mesh* makeSprite(Shader* const shader) | ||
+ | { | ||
+ | float vertexData[32] = {// Position - 16 floats | ||
+ | -1.0F, 1.0F, 0.0F, 1.0F, | ||
+ | 1.0F, 1.0F, 0.0F, 1.0F, | ||
+ | 1.0F, -1.0F, 0.0F, 1.0F, | ||
+ | -1.0F, -1.0F, 0.0F, 1.0F, | ||
+ | // Colour - 16 floats | ||
+ | 0.0F, 1.0F, 0.0F, 1.0F, | ||
+ | 1.0F, 0.0F, 0.0F, 1.0F, | ||
+ | 0.0F, 0.0F, 1.0F, 1.0F, | ||
+ | 1.0F, 1.0F, 1.0F, 1.0F}; | ||
+ | |||
+ | unsigned int vertexSize = 32 * sizeof(float); | ||
+ | |||
+ | unsigned char indexData[6] = { 0, 1, 2, 2, 3, 0 }; | ||
+ | unsigned int indexSize = 6 * sizeof(unsigned char); | ||
+ | |||
+ | VertexBuffer* newVertexBuffer = new VertexBuffer(reinterpret_cast<unsigned char*>(&vertexData), vertexSize); | ||
+ | newVertexBuffer->addFormatIdentifier(VertexBuffer::FORMAT_POSITION_4F, 4U); | ||
+ | newVertexBuffer->addFormatIdentifier(VertexBuffer::FORMAT_COLOUR_4F, 4U); | ||
+ | IndexBuffer* newIndexBuffer = new IndexBuffer(reinterpret_cast<unsigned char*>(&indexData), indexSize, IndexBuffer::FORMAT_UNSIGNED_BYTE); | ||
+ | Material* newMaterial = new Material; | ||
+ | newMaterial->setShader(shader); | ||
+ | Matrix4* newTransform = new Matrix4; | ||
+ | |||
+ | return new Mesh(newVertexBuffer, newIndexBuffer, newMaterial, newTransform); | ||
+ | } | ||
+ | |||
+ | #if defined(GLX) | ||
+ | #include "../../Graphics/GLee.h" | ||
+ | #endif | ||
+ | |||
+ | Shader* makeSpriteShader() | ||
+ | { | ||
+ | std::string vShader = | ||
+ | "attribute vec4 a_position; \n" | ||
+ | "attribute vec4 a_colour; \n" | ||
+ | "varying vec4 v_colour; \n" | ||
+ | "void main() \n" | ||
+ | "{ \n" | ||
+ | " gl_Position = a_position; \n" | ||
+ | " v_colour = a_colour; \n" | ||
+ | "} \n"; | ||
+ | |||
+ | std::string fShader = | ||
+ | #ifdef GLES2 | ||
+ | "precision mediump float; \n" // setting the default precision for Pandora | ||
+ | #endif | ||
+ | "varying vec4 v_colour; \n" | ||
+ | "void main() \n" | ||
+ | "{ \n" | ||
+ | " gl_FragColor.grb = v_colour.rgb;\n" | ||
+ | "} \n"; | ||
+ | |||
+ | #ifndef GLES1 | ||
+ | Shader* newShader(new Shader()); | ||
+ | newShader->createFromSource(vShader, fShader); | ||
+ | |||
+ | return newShader; | ||
+ | #else | ||
+ | return 0; | ||
+ | #endif | ||
+ | } | ||
+ | |||
+ | Notice that the ES1 stuff has been ripped out?<br /> | ||
+ | ES1 does not support shaders, so there's no point in having that cluttering up our file. I've also removed the Pandora buttons from the Input example - we now just check for Escape being pressed.<br /> | ||
+ | Additionally, this is near enough exactly the same as the Fixed Function example, just with a shader being created and bound to the Material as well. | ||
+ | |||
+ | == Building the Example == | ||
+ | In the SVN there are Makefiles already set up for you... just trigger '''''make -f MakefileES2.pandora''''' or whatever your chosen configuration is, and it'll happily build and spit out a '''GLESGAE.pandora''' binary for you to run. | ||
+ | |||
+ | Alternatively, if you use CodeLite, there's a preconfigured Workspace/Project already set up for you. | ||
+ | |||
+ | = Next Time = | ||
+ | Textures, followed by drawing via Vertex Buffer Object (VBO), then we Meet the Matrices. | ||
+ | |||
+ | = GLESGAE - The Transform Stack = | ||
+ | |||
+ | == Introduction == | ||
+ | I was going to just blindly march on to getting transform matrices running so you could position and move things about on screen.<br /> | ||
+ | That presents a few problems though, as it requires a bit more knowledge of how OpenGL deals with things, and I'm trying to explain stuff as plainly as possible. It also wouldn't have touched on the fact that there are a few different Transform Stacks to deal with.<br /> | ||
+ | Well, I say stack, but in general only the Fixed Function pipeline retains an actual stack; the Shader Based pipeline can do whatever it likes... and generally, to keep things simple, we only ever use one or two levels of the "stacks." | ||
+ | |||
+ | The added bonus of going through all this is that we can also implement cameras at the same time as we get things moving about on screen, so it's a kind of win-win situation here! | ||
+ | |||
+ | == Meet the Stacks == | ||
+ | In a Fixed Function pipeline, you have four main Transform Stacks, each corresponding to View, Projection, Model and Texture manipulation. | ||
+ | |||
+ | These literally are stacks in that you can push additional matrices on top of them, and they will be concatenated in sequence for the final draw of whatever object you're currently rendering. This is particularly useful in the case of the Model Stack in that it makes sense in a hierarchical modelling view.<br /> | ||
+ | The stereotypical example being an arm... you'd draw the shoulder, then the upper arm, elbow, lower arm, wrist and hand. When you rotate the shoulder object, the rest of the objects would move and rotate correctly with it. Well, "correctly" is a bit of a misnomer as there'd be no limits on the joint, so you could have it spin round and round like crazy if you wanted at this point... but that's a matter of Physics which we'll get to at some point! | ||
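+ | In raw Fixed Function terms, that arm hierarchy is just nested pushes and pops - a sketch, where shoulderAngle, elbowAngle and the draw* helpers are imaginary: | ||
+ |  glMatrixMode(GL_MODELVIEW); | ||
+ |  glPushMatrix();					// the shoulder's frame | ||
+ |  	glTranslatef(0.0F, 1.5F, 0.0F);			// move to the shoulder joint | ||
+ |  	glRotatef(shoulderAngle, 0.0F, 0.0F, 1.0F); | ||
+ |  	drawShoulder(); | ||
+ |  	glPushMatrix();					// the upper arm inherits the shoulder's transform | ||
+ |  		glTranslatef(0.0F, -0.5F, 0.0F); | ||
+ |  		glRotatef(elbowAngle, 0.0F, 0.0F, 1.0F); | ||
+ |  		drawUpperArm(); | ||
+ |  	glPopMatrix(); | ||
+ |  glPopMatrix(); | ||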
+ | |||
+ | The Texture Stack is also particularly useful for doing quick and dirty effects, such as scrolling textures across models. A good example being indicator arrows following the mesh to direct the player where to go. Just whack a transform matrix that indicates this onto the stack when you want to draw the texture, and OpenGL will do the rest for you. | ||
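+ | The scrolling-texture trick is equally tiny in Fixed Function terms - again just a sketch, with scrollOffset being some value you nudge along each frame: | ||
+ |  glMatrixMode(GL_TEXTURE); | ||
+ |  glLoadIdentity(); | ||
+ |  glTranslatef(scrollOffset, 0.0F, 0.0F);	// slide the texture co-ordinates along U | ||
+ |  // ... draw the mesh as normal ... | ||
+ |  glMatrixMode(GL_MODELVIEW);			// don't forget to switch back! | ||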
+ | |||
+ | The View and Projection Stacks deal with the camera. Generally you have two camera types; 2D - or Orthographic - and 3D - generally termed Perspective. These are defined with specific View and Projection matrices, and we'll be implementing these two camera types for both Fixed Function and Shader Based systems very soon. | ||
+ | |||
+ | '''Gotcha: '''''I've mentioned a separate View Transform Stack. This generally gets multiplied with the Model Stack to become the ModelView Stack, which it's more commonly referenced as. Confused? The View Transform deals with the clipping of the final projection to the screen. Think of it in terms of a Camera; the view finder cuts things out - this is the View transform; the lens can warp, zoom and otherwise manipulate how the scene looks - this is the Projection matrix; and the scene itself, being composed of objects, each have their own Model transform. I reference it as a stack because you could, for example, be taking a picture through a car or train window. That window is another View transform... so by dealing with Views in a stack as everything else, it just works a bit better. It is simpler, honest!'' | ||
+ | |||
+ | == Model Transforms == | ||
+ | We'll look at Model transforms first, as these are the easiest and make the most sense. | ||
+ | |||
+ | Every object in the world has a position and rotation. They also have a scale, but usually we leave this alone; the art assets should generally already be the correct size anyway.<br /> | ||
+ | These three things get concatenated into a Transform Matrix... which is usually a 4x4 matrix.<br /> | ||
+ | We can actually split these up and deal with them separately if we like - a process called decomposing - whereby we split this 4x4 matrix into its components: a 3x3 rotation matrix, a position vector, and a scale vector. In general, they sort of correspond to this layout: | ||
+ |  Rotation00 * ScaleX, Rotation01,          Rotation02,          PositionX | ||
+ |  Rotation10,          Rotation11 * ScaleY, Rotation12,          PositionY | ||
+ |  Rotation20,          Rotation21,          Rotation22 * ScaleZ, PositionZ | ||
+ |  0,                   0,                   0,                   1 | ||
+ | |||
+ | '''Gotcha: '''''It's never easy, is it? There are two main conventions to matrices - Row-Major and Column-Major. The above matrix is Column-Major... IE: our Position Vector is in the last column. If you transpose this, you swap it around and it becomes Row-Major and our Position Vector therefore goes in the bottom row, where we've currently got [0, 0, 0, 1]. OpenGL tends to use Column-Major, whereas DirectX uses Row-Major for instance.'' | ||
+ | |||
+ | '''Gotcha: '''''Picking Scale out of a Rotation Matrix can be very painful... it's something I like to avoid as much as possible, and you should too! Ensure your art assets are either the correct size, or additionally store a separate scale vector so you can divide it back out to recover the original rotation matrix should you need it.'' | ||
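+ | If you really do have to pull scale back out, the usual trick is to treat each axis of the upper 3x3 as a vector and take its length - a sketch only, assuming the standard OpenGL column-vector convention, that 'transform' is the 4x4 we're decomposing, and that our Vector3 has a three-float constructor and a length() method (which it may not yet): | ||
+ |  // Sketch only: recover the scale from a composed transform matrix. | ||
+ |  Vector3 scale; | ||
+ |  scale.x = Vector3(transform(0, 0), transform(1, 0), transform(2, 0)).length(); | ||
+ |  scale.y = Vector3(transform(0, 1), transform(1, 1), transform(2, 1)).length(); | ||
+ |  scale.z = Vector3(transform(0, 2), transform(1, 2), transform(2, 2)).length(); | ||
+ |  // Dividing each axis through by its length leaves the pure rotation behind. | ||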
+ | |||
+ | Anyway, every object in the world will generally have one of these. It's much more efficient to store a Transform matrix rather than separate Position and Rotation matrices, as OpenGL and DirectX are going to be dealing with 4x4 matrices anyway... so you'd have to compose a 4x4 every time you want to draw, and that all mounts up to become something that's not trivially light processing! | ||
+ | |||
+ | I'm not going to get into creating rotation matrices and rotating objects in space just yet... that'll come later on. We just need to know what a Transform matrix is, and what it's composed of, for now. | ||
+ | |||
+ | == View Matrices == | ||
+ | View Matrices are a bit more fun. | ||
+ | |||
+ | Rather than just being independent things in space, a View Matrix requires an eye to see through. This does make sense, as you view through an eye - or a window, camera lens, etc.. - and what you can't see gets clipped out of view. They still exist, though. Like a certain cat, just because you can't see inside the box, doesn't mean it's not necessarily there, or when you're swinging a Wiimote about; just because you can't see that expensive lamp behind you, doesn't mean you're not about to smash it to pieces by swinging back.<br /> | ||
+ | We also need two other vectors - an Up and Centre vector. This gives us three vectors; an Eye Vector, a Centre Vector, and an Up Vector. | ||
+ | |||
+ | Computing a View Matrix isn't all that hard, it's just a bit lengthy, and again, we store this as a 4x4 Matrix. | ||
+ | Matrix4 createViewMatrix(const Vector3& eye, const Vector3& centre, const Vector3& up) | ||
+ | { | ||
+ | Vector3 forwardVector(centre - eye); | ||
+ | forwardVector.normalise(); | ||
+ | |||
+ | Vector3 rightVector(forwardVector.cross(up)); | ||
+ | rightVector.normalise(); | ||
+ | |||
+ | // We recompute the Up vector to make sure that everything is exactly perpendicular to one another. | ||
+ | // Otherwise, there can be rounding errors and things go a bit mad! | ||
+ | // We can also do this to assert and check that our up vector actually matches up properly. | ||
+ | Vector3 upVector(rightVector.cross(forwardVector)); | ||
+ | |||
+ | 	Matrix4 viewMatrix; | ||
+ | viewMatrix(0, 0) = rightVector.x; | ||
+ | viewMatrix(0, 1) = upVector.x; | ||
+ | viewMatrix(0, 2) = -forwardVector.x; | ||
+ | viewMatrix(0, 3) = 0.0F; | ||
+ | |||
+ | viewMatrix(1, 0) = rightVector.y; | ||
+ | viewMatrix(1, 1) = upVector.y; | ||
+ | viewMatrix(1, 2) = -forwardVector.y; | ||
+ | viewMatrix(1, 3) = 0.0F; | ||
+ | |||
+ | viewMatrix(2, 0) = rightVector.z; | ||
+ | viewMatrix(2, 1) = upVector.z; | ||
+ | viewMatrix(2, 2) = -forwardVector.z; | ||
+ | viewMatrix(2, 3) = 0.0F; | ||
+ | |||
+ | viewMatrix(3, 0) = 0.0F; | ||
+ | viewMatrix(3, 1) = 0.0F; | ||
+ | viewMatrix(3, 2) = 0.0F; | ||
+ | viewMatrix(3, 3) = 1.0F; | ||
+ | |||
+ | // We then need to translate our eye back to the origin, and use this as effectively the View's position. | ||
+ | // An eye does have a position in space, after all! As does a Window. | ||
+ | viewMatrix(3, 0) = (viewMatrix(0, 0) * -eye.x + viewMatrix(1, 0) * -eye.y + viewMatrix(2, 0) * -eye.z); | ||
+ | viewMatrix(3, 1) = (viewMatrix(0, 1) * -eye.x + viewMatrix(1, 1) * -eye.y + viewMatrix(2, 1) * -eye.z); | ||
+ | viewMatrix(3, 2) = (viewMatrix(0, 2) * -eye.x + viewMatrix(1, 2) * -eye.y + viewMatrix(2, 2) * -eye.z); | ||
+ | |||
+ | return viewMatrix; | ||
+ | } | ||
+ | |||
+ | So, that's a nice bit of code, but what does it actually '''*do*'''?<br /> | ||
+ | Well, our Eye vector is the position of the View... our Camera Lens, our Window, our Player's Eyes. As we know, we can decompose our Object's Model Transform and grab its position, so it should be easy enough to understand what this is. | ||
+ | |||
+ | Our Up Vector is a unit vector (meaning its length is exactly 1) which tells us which way is up. In general, Y is up. Sometimes, Z is up. It depends on which side of Mathematics you live on, really. As I tend to think in 2D with a bit of depth, Y is up for me and is easier for me to think in, so my Up vector is [ 0, 1, 0 ]. Of course, this can change if you're floating about in space, or if you tilt your head back to look up - your eye's up vector has changed as it's effectively pointing straight out the top of your head, which you've now tilted, so it's probably pointing towards the wall behind you now. | ||
+ | |||
+ | The Centre Vector is another odd one. It's calculated by taking the Eye - or Position - Vector and adding the Front Vector to it.<br /> | ||
+ | In other words: look forward. Your eye position is in your head (I'd assume, anyway!) and your Front Vector is dead ahead. Without turning your head, look to the right. Your eye position hasn't changed, but your forward vector has - it's now to the right. Now, hold up a piece of paper and imagine it's a window (or hold up a piece of glass or whatever instead). Imagine it's your eye and make it "look" forward. It should effectively just sit flat on to you. Now make it look right as you would do with your eyes, by rotating it. This is exactly what's going on here!<br /> | ||
+ | This is also why we translate our eye back to the origin... think about it: you don't know you're looking through a Window unless you step back and can see the Window. We've stepped back to observe our viewing area, and now we want to view through it, so we move it towards us. The perfect example being a camera view finder; we look at it from the outside to see how much it's going to clip out, then we place our eye to it to view through it. | ||
+ | |||
+ | Simples! | ||
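+ | To tie that together, a quick usage sketch of the function above - a camera sitting five units back along Z, looking at the origin, with Y as up (assuming our Vector3 can be built from three floats): | ||
+ |  const Vector3 eye(0.0F, 0.0F, 5.0F);		// where the camera sits | ||
+ |  const Vector3 centre(0.0F, 0.0F, 0.0F);	// what it's looking at | ||
+ |  const Vector3 up(0.0F, 1.0F, 0.0F);		// Y is up for me, remember! | ||
+ |  const Matrix4 viewMatrix(createViewMatrix(eye, centre, up)); | ||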
+ | |||
+ | ===Extra Fun=== | ||
+ | The centre vector can actually be used in another way.<br /> | ||
+ | Generally, you want a camera to face "forward"... however, you may also want a "tracking" camera, in which case you replace the centre vector with the position of the object you want to track.<br /> | ||
+ | We'll write some examples soon so you can mess about and see all this in action, don't worry! | ||
+ | |||
+ | == Projection Fiddling == | ||
+ | Projection Matrices are what give the world a semblance of depth, and can be used to distort the world. I'll show you the two classical projection matrices - one for 2d, and one for 3d - and you can have fun mucking about with them to create your own if you feel like it. | ||
+ | |||
+ | An amusing irony I've found is that the 3d projection matrix is often a bit easier to understand and deal with, so we'll go through that one first. | ||
+ | Matrix4 create3dProjectionMatrix(const float nearClip, const float farClip, const float fov, const float aspectRatio) | ||
+ | { | ||
+ | const float radians = fov / 2.0F * PI / 180.0F; | ||
+ | const float zDelta = farClip - nearClip; | ||
+ | const float sine = sin(radians); | ||
+ | const float cotangent = cos(radians) / sine; | ||
+ | |||
+ | Matrix4 projectionMatrix; | ||
+ | projectionMatrix(0, 0) = cotangent / aspectRatio; | ||
+ | projectionMatrix(0, 1) = 0.0F; | ||
+ | projectionMatrix(0, 2) = 0.0F; | ||
+ | projectionMatrix(0, 3) = 0.0F; | ||
+ | |||
+ | projectionMatrix(1, 0) = 0.0F; | ||
+ | projectionMatrix(1, 1) = cotangent; | ||
+ | projectionMatrix(1, 2) = 0.0F; | ||
+ | projectionMatrix(1, 3) = 0.0F; | ||
+ | |||
+ | projectionMatrix(2, 0) = 0.0F; | ||
+ | projectionMatrix(2, 1) = 0.0F; | ||
+ | projectionMatrix(2, 2) = -(farClip + nearClip) / zDelta; | ||
+ | projectionMatrix(2, 3) = -1.0F; | ||
+ | |||
+ | projectionMatrix(3, 0) = 0.0F; | ||
+ | projectionMatrix(3, 1) = 0.0F; | ||
+ | projectionMatrix(3, 2) = -2.0F * nearClip * farClip / zDelta; | ||
+ | projectionMatrix(3, 3) = 0.0F; | ||
+ | |||
+ | return projectionMatrix; | ||
+ | } | ||
+ | |||
+ | Simple, really.<br /> | ||
+ | We need a near clip, a far clip, a field of view, and an aspect ratio. | ||
+ | |||
+ | Our near clip is how far forward our view begins. Ever noticed how in some games, if you get close to a wall, it disappears? This is usually caused by the near clip. Conversely, things that "pop" into the screen as you view far ahead are caused by the far clip being too close. While it would be tempting to set the near clip as close as we can, and the far clip as large as possible, this causes a lot of issues depending on the resolution of our depth buffer.<br /> | ||
+ | If you look at two objects far away in the distance, it can be tricky to tell what's in front of the other. Your graphics card has the same issue. However, whereas our perception can just make things look a bit strange, the graphics card gets confused. It has a finite amount of precision to deal with this, and you can sometimes get "Z-Fighting" whereby two objects are effectively "fighting" to get in front of each other, and the screen flickers between them. It's very annoying. Therefore, give your far and near clip ranges sensible values! | ||
+ | |||
+ | The field of view controls how wide an angle ( in degrees ) you can see through your viewing matrix. For example, looking through a keyhole has quite a narrow field of view as it's more or less looking dead ahead. Your eyes, on the other hand, have quite a wide field of view as even when you look dead ahead, you can generally make out shapes to the left and right of you without having to look in that direction. Now, while the maths will happily accept anything up to ( but not including ) 180, you probably want it somewhere between 30 and 120. | ||
+ | |||
+ | Finally, our aspect ratio is the screen's aspect ratio. This can be either the actual physical device's aspect ratio, or perhaps some in-game monitor. Either way, it's just width / height. Easy. | ||
+ | |||
+ | The calculations themselves are fairly straight forward. If you don't quite understand them, then feel free to pick up a maths book. It's a bit beyond what I want to discuss here - I just want to tell you what's going on, not prove it! But with the above code, you'll get yourself a nice 3d projection matrix. | ||
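+ | For reference, a typical call ( with entirely arbitrary values - tweak to taste ) would look something like this: | ||
+ | // A sensible-ish starting point: a near clip that isn't microscopic, and a far clip that isn't astronomical. | ||
+ | Matrix4 projectionMatrix(create3dProjectionMatrix(0.1F, 100.0F, 60.0F, 640.0F / 480.0F)); | ||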
+ | |||
+ | Now onto the 2d one... | ||
+ | Matrix4 create2dProjectionMatrix(const float left, const float bottom, const float right, const float top, const float nearClip, const float farClip) | ||
+ | { | ||
+ | const float xScale = 2.0F / (right - left); | ||
+ | const float yScale = 2.0F / (top - bottom); | ||
+ | const float zScale = -2.0F / (farClip - nearClip); | ||
+ | |||
+ | const float x = -(right + left) / (right - left); | ||
+ | const float y = -(top + bottom) / (top - bottom); | ||
+ | const float z = -(farClip + nearClip) / (farClip - nearClip); | ||
+ | |||
+ | Matrix4 projectionMatrix; | ||
+ | projectionMatrix(0, 0) = xScale; | ||
+ | projectionMatrix(0, 1) = 0.0F; | ||
+ | projectionMatrix(0, 2) = 0.0F; | ||
+ | projectionMatrix(0, 3) = 0.0F; | ||
+ | |||
+ | projectionMatrix(1, 0) = 0.0F; | ||
+ | projectionMatrix(1, 1) = yScale; | ||
+ | projectionMatrix(1, 2) = 0.0F; | ||
+ | projectionMatrix(1, 3) = 0.0F; | ||
+ | |||
+ | projectionMatrix(2, 0) = 0.0F; | ||
+ | projectionMatrix(2, 1) = 0.0F; | ||
+ | projectionMatrix(2, 2) = zScale; | ||
+ | projectionMatrix(2, 3) = 0.0F; | ||
+ | |||
+ | projectionMatrix(3, 0) = x; | ||
+ | projectionMatrix(3, 1) = y; | ||
+ | projectionMatrix(3, 2) = z; | ||
+ | projectionMatrix(3, 3) = 1.0F; | ||
+ | |||
+ | return projectionMatrix; | ||
+ | } | ||
+ | |||
+ | Looks easier, doesn't it? Why I sometimes have difficulty getting my head around this one, I'm not sure.. probably because it doesn't quite fit the camera analogy that works so well with the 3d view-projection stuff. | ||
+ | |||
+ | Anyway, we know about the near clip and far clip, and the same rules apply to the 2d projection matrix.<br /> | ||
+ | What we're interested in here are the left, top, bottom, and right values.<br /> | ||
+ | As you can see from the calculations, these control the scale, so generally these are unit values: Left and Bottom being 0, and Top and Right being 1.<br /> | ||
+ | This makes sense as these are normalised values... and OpenGL's 2d origin is on the bottom left corner of your screen. Therefore, Right is 1.0, as that's "full right".. you can't get further than that. Of course, you might want to muck about with the scale: doubling the projection range - so everything appears at half its size - would mean you set Top and Right to 2.0 instead of 1.0. Or, to zoom in so you only see half of what's there, set them to 0.5 instead.<br /> | ||
+ | That's really all there is to it. | ||
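+ | As a quick sketch, the standard call and the scale tweaks mentioned above would look something like this ( the near and far clip values here are arbitrary ): | ||
+ | Matrix4 normal(create2dProjectionMatrix(0.0F, 0.0F, 1.0F, 1.0F, 0.1F, 100.0F));     // the usual unit projection | ||
+ | Matrix4 halfSize(create2dProjectionMatrix(0.0F, 0.0F, 2.0F, 2.0F, 0.1F, 100.0F));   // everything appears at half size | ||
+ | Matrix4 doubleSize(create2dProjectionMatrix(0.0F, 0.0F, 0.5F, 0.5F, 0.1F, 100.0F)); // everything appears at double size | ||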
+ | |||
+ | == Putting It All Together == | ||
+ | So, how does all this work? How do we get from drawing an object at x, y, z to it appearing on the screen somewhere? | ||
+ | |||
+ | Well, if we're just drawing one object - that has no parent or hierarchical transforms - directly to the screen, we need one Transform matrix, one View matrix and one Projection matrix. All this combined becomes our ModelViewProjection matrix, and it's literally all multiplied in that order - model * view * projection. | ||
+ | |||
+ | If our Object does have parent transformations, we concatenate them first, so it's ''finalTransform = parentTransform * objectTransform;'' and we effectively just go through the chain from bottom up ( so that parentTransform would've been its own finalTransform, and so on... ) till we get to our object. Then we multiply in the view and projection matrices. | ||
+ | |||
+ | Again, as I stated, the view and projection matrices can be stacks.. this works in exactly the same manner as the object transforms - we multiply bottom up to get the final ViewMatrix and final ProjectionMatrix to multiply with; finalTransform * finalView * finalProjection. | ||
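+ | In code terms - and this is just a sketch borrowing the names from the prose above, there's no such snippet in the engine yet, and we'll discover some ordering wrinkles when we actually implement it - the whole thing boils down to: | ||
+ | // Walk the parent chain first, bottom up, to get the object's final transform... | ||
+ | Matrix4 finalTransform(parentTransform * objectTransform); | ||
+ | // ...then bolt on the final view and projection matrices to get the full ModelViewProjection. | ||
+ | Matrix4 modelViewProjection(finalTransform * finalView * finalProjection); | ||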
+ | |||
+ | Seriously, that's it. | ||
+ | |||
+ | It isn't complicated, it's just a bit strange and you have to think a bit to fully understand what's going on. But once you sit down and break it apart, it's quite straight forward, I think. | ||
+ | |||
+ | Now to actually write some code into the Render Systems to draw things properly! | ||
+ | |||
+ | == Checking out the SVN == | ||
+ | There isn't actually any useful code for this Chapter that would make it standalone. | ||
+ | |||
+ | == Building the Example == | ||
+ | There is no example this time, as there'd be nothing to show!<br /> | ||
+ | Instead, jump to either [[GLESGAE:Fixed Function Transformations]] or [[GLESGAE:Shader Based Transformations]] for dealing with the Transform stacks in the correct manner. | ||
+ | |||
+ | = Next Time = | ||
+ | We're going to actually implement some Transform operations. | ||
+ | |||
+ | = GLESGAE - Fixed Function Transformations = | ||
+ | |||
+ | == Introduction == | ||
+ | The previous part is essential reading before you get here: [[GLESGAE:The Transform Stack]]<br /> | ||
+ | I won't be going over what's already been covered, I'll instead jump into the technical implementation. | ||
+ | |||
+ | While we'll be doing a lot of the same things as the Shader-Based transformation setup, I've split it up as ES 1 does have specific Matrix Stacks internally that you need to push onto. ES 2 does not. So the implementations do end up a bit different. | ||
+ | |||
+ | == Fast Track == | ||
+ | We're on SVN revision 5 now. This includes everything in this article, plus the extra bits of maths we need. | ||
+ | '''''svn co -r 5 http://svn3.xp-dev.com/svn/glesgae/trunk/ glesgae''''' | ||
+ | |||
+ | == Meet the Stacks == | ||
+ | Deja Vu, perhaps? | ||
+ | |||
+ | OpenGL ES 1 ( and indeed, OpenGL 1.4 ) has a set of Matrix Modes, which are set via glMatrixMode, so that any matrix manipulations you do affect that set of matrices alone.<br /> | ||
+ | For our purposes, we want to mess with the following: '''GL_PROJECTION''', '''GL_MODELVIEW''', and '''GL_TEXTURE'''. | ||
+ | |||
+ | '''Gotcha: '''''Hold the phone, we didn't talk about the Texture Stack in the last article! That's because it _only_ has any relevance in a Fixed Function pipeline as shaders are free to do as they please. Don't worry, we shall go through it in this article. Additionally, we've got a combined Model and View Matrix to deal with here. Life is hard.'' | ||
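+ | Just to show the shape of what we're dealing with, flipping between the stacks is nothing more than this ( a throwaway sketch - resetting everything to identity isn't something we'll actually need to do ): | ||
+ | glMatrixMode(GL_PROJECTION); // everything below now affects the projection stack... | ||
+ | glLoadIdentity(); | ||
+ | glMatrixMode(GL_TEXTURE);    // ...now the texture stack... | ||
+ | glLoadIdentity(); | ||
+ | glMatrixMode(GL_MODELVIEW);  // ...and back to the modelview stack, which is where we spend most of our time. | ||
+ | glLoadIdentity(); | ||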
+ | |||
+ | We'll deal with the actual screen view and projection bits first, as they go directly into the Graphics System itself. | ||
+ | |||
+ | == A View To The World == | ||
+ | We shall be creating the Camera Object today. | ||
+ | |||
+ | This shall be a separate object, which shall deal with all its own matrices so we can have multiple cameras all independent of one another. Particularly handy for doing effects such as Render To Texture and other Portal effects ( reflections in mirrors, for example. ) | ||
+ | |||
+ | So, we take the three functions we devised in the previous article, and slap them into our Camera object.<br /> | ||
+ | We also need it to store its own Transform matrix, as well as the View and Projection matrices unique to it, and various other little bits and pieces required to calculate these matrices. We therefore end up with an interface looking somewhat like this: | ||
+ | #ifndef _CAMERA_H_ | ||
+ | #define _CAMERA_H_ | ||
+ | |||
+ | #include "../Maths/Vector3.h" | ||
+ | #include "../Maths/Matrix4.h" | ||
+ | |||
+ | namespace GLESGAE | ||
+ | { | ||
+ | class Camera | ||
+ | { | ||
+ | public: | ||
+ | enum CameraType | ||
+ | { | ||
+ | CAMERA_2D | ||
+ | , CAMERA_3D | ||
+ | }; | ||
+ | |||
+ | Camera(const CameraType& type); | ||
+ | |||
+ | /// Get the Type of Camera this is | ||
+ | const CameraType getType() { return mType; } | ||
+ | |||
+ | /// Update the camera's View and Projection matrices while looking ahead. | ||
+ | void update(); | ||
+ | |||
+ | /// Update the camera's View and Projection matrices while looking at something. | ||
+ | void update(const Vector3& target); | ||
+ | |||
+ | /// Set Near Clip | ||
+ | void setNearClip(const float nearClip) { mNearClip = nearClip; } | ||
+ | |||
+ | /// Set Far Clip | ||
+ | void setFarClip(const float farClip) { mFarClip = farClip; } | ||
+ | |||
+ | /// Set 2d Parameters | ||
+ | void set2dParams(const float left, const float bottom, const float right, const float top) { m2dLeft = left; m2dRight = right; m2dTop = top; m2dBottom = bottom; } | ||
+ | |||
+ | /// Set 3d Parameters | ||
+ | void set3dParams(const float fov, const float aspectRatio) { mFov = fov; mAspectRatio = aspectRatio; } | ||
+ | |||
+ | /// Set the Transform Matrix | ||
+ | void setTransformMatrix(const Matrix4& transformMatrix) { mTransformMatrix = transformMatrix; } | ||
+ | |||
+ | /// Set the View Matrix | ||
+ | void setViewMatrix(const Matrix4& viewMatrix) { mViewMatrix = viewMatrix; } | ||
+ | |||
+ | /// Set the Projection Matrix | ||
+ | void setProjectionMatrix(const Matrix4& projectionMatrix) { mProjectionMatrix = projectionMatrix; } | ||
+ | |||
+ | /// Get the Transform Matrix | ||
+ | Matrix4& getTransformMatrix() { return mTransformMatrix; } | ||
+ | |||
+ | /// Get the View Matrix | ||
+ | Matrix4& getViewMatrix() { return mViewMatrix; } | ||
+ | |||
+ | /// Get the Projection Matrix | ||
+ | Matrix4& getProjectionMatrix() { return mProjectionMatrix; } | ||
+ | |||
+ | /// Get Near Clip | ||
+ | float getNearClip() const { return mNearClip; } | ||
+ | |||
+ | /// Get Far Clip | ||
+ | float getFarClip() const { return mFarClip; } | ||
+ | |||
+ | /// Get Field of View | ||
+ | float getFov() const { return mFov; } | ||
+ | |||
+ | /// Create a viewMatrix | ||
+ | static Matrix4 createViewMatrix(const Vector3& eye, const Vector3& centre, const Vector3& up); | ||
+ | |||
+ | /// Create a 2d projection matrix | ||
+ | static Matrix4 create2dProjectionMatrix(const float left, const float bottom, const float right, const float top, const float nearClip, const float farClip); | ||
+ | |||
+ | /// Create a 3d projection matrix | ||
+ | static Matrix4 create3dProjectionMatrix(const float nearClip, const float farClip, const float fov, const float aspectRatio); | ||
+ | |||
+ | |||
+ | private: | ||
+ | CameraType mType; | ||
+ | |||
+ | float mNearClip; | ||
+ | float mFarClip; | ||
+ | float m2dTop; | ||
+ | float m2dBottom; | ||
+ | float m2dLeft; | ||
+ | float m2dRight; | ||
+ | float mFov; | ||
+ | float mAspectRatio; | ||
+ | |||
+ | Matrix4 mTransformMatrix; | ||
+ | Matrix4 mViewMatrix; | ||
+ | Matrix4 mProjectionMatrix; | ||
+ | }; | ||
+ | } | ||
+ | |||
+ | #endif | ||
+ | |||
+ | Nice and straight forward so far. We're just storing some matrices and values, providing access to them, and throwing our view and projection matrix creation functions on it.<br /> | ||
+ | We do have two update functions though - one which takes a target, and another which just updates itself. This is so we can have this camera follow a spline but look at something else, for example. Though this of course only makes much sense in a 3d world.. trying to use it on a 2d camera makes for some interesting results! | ||
+ | |||
+ | We specify what type of Camera this is in the constructor. This is so that when we trigger the update function, it regenerates the correct projection matrix for us. I've also put default values in the constructor so that in effect, you can just do ''Camera* camera(new Camera(Camera::CAMERA_2D));'' and be done with it; call ''camera->update();'' if you're moving it about, then just pass it in to the Graphics System. | ||
+ | |||
+ | Of course, full code is in the repository.. and as there's a lot of it, that's where it'll stay rather than being plastered here.<br /> | ||
+ | However, the update function is interesting, so let's go through it as you've already seen the view and projection matrix functions and they're no different ( bar some minor changes as I updated my Vector and Matrix classes. ) | ||
+ | |||
+ | void Camera::update() | ||
+ | { | ||
+ | Matrix3 rotation; | ||
+ | Vector3 eye; | ||
+ | |||
+ | mTransformMatrix.decompose(&rotation, &eye); | ||
+ | const Vector3 target(eye + rotation.getFrontVector()); | ||
+ | const Vector3 up(rotation.getUpVector()); | ||
+ | |||
+ | mViewMatrix = createViewMatrix(eye, target, up); | ||
+ | |||
+ | switch (mType) { | ||
+ | case CAMERA_2D: | ||
+ | mProjectionMatrix = create2dProjectionMatrix(m2dLeft, m2dBottom, m2dRight, m2dTop, mNearClip, mFarClip); | ||
+ | break; | ||
+ | case CAMERA_3D: | ||
+ | mProjectionMatrix = create3dProjectionMatrix(mNearClip, mFarClip, mFov, mAspectRatio); | ||
+ | break; | ||
+ | default: | ||
+ | break; | ||
+ | }; | ||
+ | } | ||
+ | |||
+ | Really nothing to it, is there?<br /> | ||
+ | The other update function, which takes a target vector, is exactly the same; minus the fact that target is already calculated. | ||
+ | |||
+ | All we do is decompose our transform into position and rotation, then grab a few vectors out of the rotation matrix.<br /> | ||
+ | As our matrices are column-major, our right, up and front vectors correspond to the first, second and third columns of the rotation matrix. We then just pass in all our variables to generate our view and projection matrices for the correct mode and we're sorted. | ||
+ | |||
+ | == Models and Projections and Views, oh my! == | ||
+ | With our Camera object created, we need to pass it through to our Fixed Function Rendering Context. As we've abstracted everything out, this involves passing it to the Graphics System, then through to the Platform Context ( GLES1RenderContext or GLXRenderContext for instance ) and finally to the FixedFunctionRenderContext itself. This is just trivial interface work, and it's in the repository if you're curious. | ||
+ | |||
+ | What we're really interested in, is what goes into the Fixed Function context itself... | ||
+ | |||
+ | A Camera is a funny thing, in that generally it'll always be moving. Therefore, there is absolutely no point in optimising the calls to check if the camera has moved. With this in mind, we can write our setCamera function as follows: | ||
+ | void FixedFunctionContext::setFixedFunctionCamera(Camera* const camera) | ||
+ | { | ||
+ | glMatrixMode(GL_PROJECTION); | ||
+ | glLoadMatrixf(camera->getProjectionMatrix().getData()); | ||
+ | |||
+ | glMatrixMode(GL_MODELVIEW); | ||
+ | glLoadMatrixf(camera->getViewMatrix().getData()); | ||
+ | |||
+ | mCamera = camera; | ||
+ | } | ||
+ | |||
+ | Nice and simple.<br /> | ||
+ | Now, as you notice, I've not stored any of this in a stack. I'm just manipulating the top matrices in the Projection and ModelView stack. While I did go on about having a stack in the last article, the truth of the matter is that relying on OpenGL to do your matrices properly can get you into bother. The GL specs state that you are only guaranteed a minimum of two matrices in the Projection and Texture stacks, and sixteen in the ModelView stack. This might be enough for you, or it might not. Either way, dealing with them ourselves is much more preferable.<br /> | ||
+ | Our Camera object is also rather simple, and there's nothing stopping us from creating multiple Cameras dotted about the place, and switching the views around as and when we like. We also have the ability to go in and fiddle with the matrices once the Cameras have updated themselves, anyway... so stack manipulation isn't really a big deal for us here. We can safely ignore it. | ||
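+ | If you're curious what your particular driver actually gives you, you can query the limits yourself - these are standard Fixed Function ( ES 1.x / desktop GL ) gets: | ||
+ | GLint modelViewDepth(0); | ||
+ | GLint projectionDepth(0); | ||
+ | glGetIntegerv(GL_MAX_MODELVIEW_STACK_DEPTH, &modelViewDepth);   // guaranteed to be at least 16 on ES 1.x | ||
+ | glGetIntegerv(GL_MAX_PROJECTION_STACK_DEPTH, &projectionDepth); // guaranteed to be at least 2 | ||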
+ | |||
+ | Anyway, now we have set our Projection and View matrices, we need to concatenate the Model's matrix over the top of the View matrix so that our meshes render where we think they will.<br /> | ||
+ | This is much easier than it sounds, don't worry! | ||
+ | |||
+ | In our drawMeshFixedFunction function, we just add the following little bits.<br /> | ||
+ | We pull out the transform matrix from the Mesh when we pull out the Index and Vertex Buffers, and Material.<br /> | ||
+ | Then, when we're about to draw the object itself, we change the whole bottom segment to this: | ||
+ | glPushMatrix(); | ||
+ | glMultMatrixf(transform->getTranspose().getData()); | ||
+ | |||
+ | disableFixedFunctionTexturing(currentTextureUnit); // Disable any excess texture units | ||
+ | switch (indexBuffer->getFormat()) { | ||
+ | case IndexBuffer::FORMAT_FLOAT: | ||
+ | glDrawElements(GL_TRIANGLES, indexBuffer->getSize(), GL_FLOAT, indexBuffer->getData()); | ||
+ | break; | ||
+ | case IndexBuffer::FORMAT_UNSIGNED_BYTE: | ||
+ | glDrawElements(GL_TRIANGLES, indexBuffer->getSize(), GL_UNSIGNED_BYTE, indexBuffer->getData()); | ||
+ | break; | ||
+ | case IndexBuffer::FORMAT_UNSIGNED_SHORT: | ||
+ | glDrawElements(GL_TRIANGLES, indexBuffer->getSize(), GL_UNSIGNED_SHORT, indexBuffer->getData()); | ||
+ | break; | ||
+ | default: | ||
+ | break; | ||
+ | }; | ||
+ | mFixedFunctionLastTexUnit = currentTextureUnit; | ||
+ | |||
+ | glPopMatrix(); | ||
+ | |||
+ | All we've done is add three lines.. very important lines, however!<br /> | ||
+ | We tell OpenGL to give us a copy of the current matrix, pushing the original down a level. This copy is our View matrix. We then multiply this with our current Transform matrix - or more specifically, the Transpose of our current Transform matrix. Now we draw as normal. Finally, we pop our modified matrix off the stack, so we're back to the untouched View matrix sitting on top again. | ||
+ | |||
+ | So, why the transpose of the Transform matrix? It's to do with matrix multiplication. If we multiply two matrices, A and B, to get C, the value C(3, 4) is going to be row 3 of A multiplied element-by-element with column 4 of B and summed up. So this means that potentially we're fiddling about in the matrices in the wrong areas. And you can test this by removing the getTranspose() call and then trying to translate the mesh. It sort of does this weird warping thing instead! Not ideal. See this wiki page for more info, along with a nice illustration of the matrix multiplication I've described - http://en.wikipedia.org/wiki/Matrix_multiplication#Illustration<br /> | ||
+ | Anyway, yes.. we transpose it, which effectively flips the values about so that A(2, 3) becomes A(3, 2) and then when we multiply it, the rows and columns match up to the correct values - so we multiply the position with the position, rather than part of the rotation. | ||
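+ | If it helps, a transpose really is just every element swapping its row and column index - this is a conceptual sketch rather than the engine's actual Matrix4 code, with transposed and original just being two Matrix4s: | ||
+ | for (unsigned int row(0U); row < 4U; ++row) | ||
+ | for (unsigned int col(0U); col < 4U; ++col) | ||
+ | transposed(row, col) = original(col, row); // so A(2, 3) ends up holding what was in A(3, 2), and so on | ||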
+ | |||
+ | Right, what about the Texture matrices?<br /> | ||
+ | That's even simpler than dealing with the camera matrices. | ||
+ | void FixedFunctionContext::setFixedFunctionTextureMatrix(Matrix4* const matrix) | ||
+ | { | ||
+ | glMatrixMode(GL_TEXTURE); | ||
+ | glLoadMatrixf(matrix->getData()); | ||
+ | |||
+ | glMatrixMode(GL_MODELVIEW); | ||
+ | } | ||
+ | |||
+ | Now, this only has any effect on a Fixed Function pipeline, really. We also can't really use it for now as we have no Texture support.. but literally, all we're doing is setting the Matrix to the Texture Matrix, replacing it with our specified matrix, and returning to the ModelView.<br /> | ||
+ | We'll have a play with the texture matrix when we get to loading up textures and displaying them. | ||
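+ | As a little teaser for then - and this is purely hypothetical until we actually have textures to play with - scrolling a texture's UVs could be as simple as loading a translation into that matrix ( fixedContext here being whatever pointer to the Fixed Function context you're holding ): | ||
+ | Matrix4 uvScroll; | ||
+ | uvScroll.setPosition(Vector3(0.25F, 0.0F, 0.0F)); // nudge the U coordinate along by a quarter | ||
+ | fixedContext->setFixedFunctionTextureMatrix(&uvScroll); | ||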
+ | |||
+ | Surprisingly, that's us done.<br /> | ||
+ | We just need to fiddle with the example code a bit so it does something a bit more useful, now. | ||
+ | |||
+ | == Cameras are Fun == | ||
+ | I'm going to paste the full example code here, then we'll go through it.. and I'll highlight the interesting little gotchas involved. | ||
+ | #include <cstdio> | ||
+ | #include <cstdlib> | ||
+ | |||
+ | #include "../../Graphics/GraphicsSystem.h" | ||
+ | #include "../../Graphics/Context/FixedFunctionContext.h" | ||
+ | #include "../../Events/EventSystem.h" | ||
+ | #include "../../Input/InputSystem.h" | ||
+ | #include "../../Input/Keyboard.h" | ||
+ | #include "../../Input/Pad.h" | ||
+ | |||
+ | #include "../../Graphics/Camera.h" | ||
+ | #include "../../Graphics/Mesh.h" | ||
+ | #include "../../Graphics/VertexBuffer.h" | ||
+ | #include "../../Graphics/IndexBuffer.h" | ||
+ | #include "../../Graphics/Material.h" | ||
+ | #include "../../Maths/Matrix4.h" | ||
+ | |||
+ | using namespace GLESGAE; | ||
+ | |||
+ | void controlCamera(Camera* const camera, Controller::KeyboardController* const keyboard); | ||
+ | Mesh* makeSprite(); | ||
+ | |||
+ | int main(void) | ||
+ | { | ||
+ | EventSystem* eventSystem(new EventSystem); | ||
+ | InputSystem* inputSystem(new InputSystem(eventSystem)); | ||
+ | GraphicsSystem* graphicsSystem(new GraphicsSystem(GraphicsSystem::FIXED_FUNCTION_RENDERING)); | ||
+ | |||
+ | if (false == graphicsSystem->initialise("GLESGAE Fixed Function Test", 640, 480, 16, false)) { | ||
+ | //TODO: OH NOES! WE'VE DIEDED! | ||
+ | return -1; | ||
+ | } | ||
+ | |||
+ | Mesh* mesh(makeSprite()); | ||
+ | Camera* camera(new Camera(Camera::CAMERA_3D)); | ||
+ | camera->getTransformMatrix().setPosition(Vector3(0.0F, 0.0F, -5.0F)); | ||
+ | |||
+ | #ifndef GLES2 | ||
+ | FixedFunctionContext* const fixedContext(graphicsSystem->getFixedContext()); | ||
+ | if (0 != fixedContext) { | ||
+ | fixedContext->enableFixedFunctionVertexPositions(); | ||
+ | fixedContext->enableFixedFunctionVertexColours(); | ||
+ | } | ||
+ | #endif | ||
+ | |||
+ | eventSystem->bindToWindow(graphicsSystem->getWindow()); | ||
+ | |||
+ | Controller::KeyboardController* myKeyboard(inputSystem->newKeyboard()); | ||
+ | |||
+ | while(false == myKeyboard->getKey(Controller::KEY_ESCAPE)) { | ||
+ | controlCamera(camera, myKeyboard); | ||
+ | |||
+ | eventSystem->update(); | ||
+ | inputSystem->update(); | ||
+ | graphicsSystem->beginFrame(); | ||
+ | graphicsSystem->setCamera(camera); | ||
+ | graphicsSystem->drawMesh(mesh); | ||
+ | graphicsSystem->endFrame(); | ||
+ | } | ||
+ | |||
+ | delete graphicsSystem; | ||
+ | delete inputSystem; | ||
+ | delete eventSystem; | ||
+ | delete mesh; | ||
+ | |||
+ | return 0; | ||
+ | } | ||
+ | |||
+ | Mesh* makeSprite() | ||
+ | { | ||
+ | float vertexData[32] = {// Position - 16 floats | ||
+ | -1.0F, 1.0F, 0.0F, 1.0F, | ||
+ | 1.0F, 1.0F, 0.0F, 1.0F, | ||
+ | 1.0F, -1.0F, 0.0F, 1.0F, | ||
+ | -1.0F, -1.0F, 0.0F, 1.0F, | ||
+ | // Colour - 16 floats | ||
+ | 0.0F, 1.0F, 0.0F, 1.0F, | ||
+ | 1.0F, 0.0F, 0.0F, 1.0F, | ||
+ | 0.0F, 0.0F, 1.0F, 1.0F, | ||
+ | 1.0F, 1.0F, 1.0F, 1.0F}; | ||
+ | |||
+ | unsigned int vertexSize = 32 * sizeof(float); | ||
+ | |||
+ | unsigned char indexData[6] = { 0, 1, 2, 2, 3, 0 }; | ||
+ | unsigned int indexSize = 6 * sizeof(unsigned char); | ||
+ | |||
+ | VertexBuffer* newVertexBuffer = new VertexBuffer(reinterpret_cast<unsigned char*>(&vertexData), vertexSize); | ||
+ | newVertexBuffer->addFormatIdentifier(VertexBuffer::FORMAT_POSITION_4F, 4U); | ||
+ | newVertexBuffer->addFormatIdentifier(VertexBuffer::FORMAT_COLOUR_4F, 4U); | ||
+ | IndexBuffer* newIndexBuffer = new IndexBuffer(reinterpret_cast<unsigned char*>(&indexData), indexSize, IndexBuffer::FORMAT_UNSIGNED_BYTE); | ||
+ | Material* newMaterial = new Material; | ||
+ | Matrix4* newTransform = new Matrix4; | ||
+ | |||
+ | return new Mesh(newVertexBuffer, newIndexBuffer, newMaterial, newTransform); | ||
+ | } | ||
+ | |||
+ | void controlCamera(Camera* const camera, Controller::KeyboardController* const keyboard) | ||
+ | { | ||
+ | Vector3 newPosition; | ||
+ | camera->getTransformMatrix().getPosition(&newPosition); | ||
+ | if (true == keyboard->getKey(Controller::KEY_ARROW_DOWN)) | ||
+ | newPosition.z() -= 0.01F; | ||
+ | |||
+ | if (true == keyboard->getKey(Controller::KEY_ARROW_UP)) | ||
+ | newPosition.z() += 0.01F; | ||
+ | |||
+ | if (true == keyboard->getKey(Controller::KEY_ARROW_LEFT)) | ||
+ | newPosition.x() -= 0.01F; | ||
+ | |||
+ | if (true == keyboard->getKey(Controller::KEY_ARROW_RIGHT)) | ||
+ | newPosition.x() += 0.01F; | ||
+ | camera->getTransformMatrix().setPosition(newPosition); | ||
+ | |||
+ | camera->update(); | ||
+ | } | ||
+ | |||
+ | We haven't really changed much from the stock example still.<br /> | ||
+ | We've added some camera controls, so that when we press the arrow keys, our camera translates in the x and z axes.<br /> | ||
+ | We also start our camera -5 units from the centre of the world... and this is one of the gotchas. Why is this a gotcha? Make your right hand into a fist, point your index finger straight in front of you, your thumb directly up and your middle finger to the left. This is what your camera is like in OpenGL. Now, move your whole hand to the left. If you imagine you were looking down it, you would see your objects moving right. Fair enough. But our camera seems to go the opposite direction! This is because OpenGL's right vector points in a different direction from where ours does.. this is the Right Handed and Left Handed systems biting us in the bum. You can either get used to it, and handle it within your code, or you can go back to the Camera's view matrix creation, and negate the right vector when we set its values into the matrix.<br /> | ||
+ | Oh and a quick note... DirectX and OpenGL disagree on which way the right vector is ( and which way to point down the Z axis! ).. so conversion between the two handed systems is always a good skill to have when dealing with multiple APIs. | ||
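+ | If you do fancy flipping it, it's only the right vector column that needs negating back in createViewMatrix - something along these lines ( an untested sketch, so experiment at your own peril ): | ||
+ | viewMatrix(0, 0) = -rightVector.x; | ||
+ | viewMatrix(1, 0) = -rightVector.y; | ||
+ | viewMatrix(2, 0) = -rightVector.z; | ||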
+ | |||
+ | Now for the second gotcha.<br /> | ||
+ | Change the camera to a 2D camera and run the example again.<br /> | ||
+ | Why is the quad larger than the full screen? | ||
+ | |||
+ | Look back at the mesh description we have: | ||
+ | float vertexData[32] = {// Position - 16 floats | ||
+ | -1.0F, 1.0F, 0.0F, 1.0F, | ||
+ | 1.0F, 1.0F, 0.0F, 1.0F, | ||
+ | 1.0F, -1.0F, 0.0F, 1.0F, | ||
+ | -1.0F, -1.0F, 0.0F, 1.0F, | ||
+ | // Colour - 16 floats | ||
+ | 0.0F, 1.0F, 0.0F, 1.0F, | ||
+ | 1.0F, 0.0F, 0.0F, 1.0F, | ||
+ | 0.0F, 0.0F, 1.0F, 1.0F, | ||
+ | 1.0F, 1.0F, 1.0F, 1.0F}; | ||
+ | Our mesh points are between -1.0F and 1.0F.. which is a full screen quad when we're rendering in 3d mode, and our origin is the centre of the screen. ( Ok, well technically even that is wrong, it would be -0.5 and 0.5... but in our pre view and projection matrix phase, it did match up. )<br /> | ||
+ | In 2d mode, the origin is at the bottom left corner of the screen.. so -1.0 ( or -0.5 for the nit picky ) is way beyond the left of the screen where you can't see it. So, if we change this to between 0.0 and 1.0, we should have it working ( or 0.0 and -1.0 in our case due to the right vector nonsense. ) | ||
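+ | So a 2d-friendly set of positions would look something like this ( only the position half of the array is shown; the colours stay exactly as they were ): | ||
+ | // 0.0 to 1.0 rather than -1.0 to 1.0 - and with the current right vector quirk, | ||
+ | // you'd flip the X values to 0.0 and -1.0 instead to actually see it. | ||
+ | float positionData[16] = { | ||
+ | 0.0F, 1.0F, 0.0F, 1.0F, | ||
+ | 1.0F, 1.0F, 0.0F, 1.0F, | ||
+ | 1.0F, 0.0F, 0.0F, 1.0F, | ||
+ | 0.0F, 0.0F, 0.0F, 1.0F}; | ||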
+ | |||
+ | You might be wondering why I haven't "fixed" the right hand vector to actually point in the correct direction.. this is because I haven't decided on a loadable mesh format yet... some mesh formats have Z as up, instead of Y for example. Some use positive Z and others use negative Z. Granted, we should define what the engine uses, and then convert whatever formats we load up to our format, but as I haven't really decided what our format actually is yet, I'm leaving things open! At any rate, you know exactly what's going on and why, and you know where to fix it if it annoys you. | ||
+ | |||
+ | I also suggest reading through the Shader Based Transforms article.. as we explain a bit more of what the camera is up to there: [[GLESGAE:Shader Based Transformations]] | ||
+ | |||
+ | == Building the Example == | ||
+ | In the SVN there are Makefiles already set up for you.. just trigger '''''make -f MakefileES1.pandora''''' or whatever your chosen configuration is, and it'll happily build for you and spit out a '''GLESGAE.pandora''' binary for you to run. | ||
+ | |||
+ | = Next Time = | ||
+ | We're going to do much the same for the Shader Based pipeline.. this requires a bit more work to our shader system, but we can reuse our Camera object as-is. | ||
+ | |||
+ | = GLESGAE - Shader Based Transformations = | ||
+ | |||
+ | == Introduction == | ||
+ | Whereas most Fixed Function vs Shader Based stuff does go off in different directions, I'm going to have to ask you to read the Fixed Function Transformations article first: [[GLESGAE:Fixed Function Transformations]] | ||
+ | |||
+ | Reason being, we went through some of the Camera code, and the whole Right Vector issue... both of which are still relevant; the right vector issue remaining until we sort out a proper mesh format - which'll happen when we switch over to VBOs. | ||
+ | |||
+ | == Fast Track == | ||
+ | We'll be moving to SVN revision 6... '''''svn co -r 6 http://svn3.xp-dev.com/svn/glesgae/trunk/ glesgae''''' | ||
+ | |||
+ | == Meet the... lack of Stacks? == | ||
+ | ES 2.0 has no Matrix Stacks.<br /> | ||
+ | You don't have to deal with switching matrix mode, and pushing and popping anything on a stack - they don't exist.<br /> | ||
+ | You're expected to calculate the final matrix and send it over to a shader for processing, and this is where the ubiquitous mvp transform comes in - the ModelViewProjection big daddy matrix. | ||
+ | |||
+ | Shaders are good in that they offload work from the CPU to the GPU, but there are still times the CPU is king at performing certain tasks; or indeed, in this case, is the only thing that can access the memory required. If we were to send over the three matrices separately, and have the GPU combine them into whatever variants it needed, it'd actually be slower than combining them once on the CPU and sending over the pre-built result - it comes down to sending one matrix rather than three. Such is the balancing act you have to perform when dealing with shader based rendering. | ||
+ | |||
+ | But first, we need to write a uniform system.... | ||
+ | |||
+ | == Shader Uniform Management == | ||
+ | As we've gone over before when writing the initial Shader Render Context, the Shader is king of the renderer.<br /> | ||
+ | It describes what attributes we're going to use, as well as what it does to them before they get to the screen.<br /> | ||
+ | Uniforms are used to give extra information from the engine to the shader, so that we can do more things with it. | ||
+ | |||
+ | There are several ways in which we can deal with this, but my preferred method is to have a bunch of updaters for each uniform we're interested in. This way, we keep code duplication down, and we decouple what's actually in the shaders from the engine. We can go all out and have funky caching mechanisms to ensure we only update when we really need to... but for us, we'll just go with a simple system where each shader runs through a map of updaters to update its own uniforms. Keeps things nice and straight forward. You can also learn more about a method such as this from Game Engine Gems 2, if you're interested, which also describes deferred OpenGL calls amongst other handy gems of information. | ||
+ | |||
+ | So, let's create our Shader Uniform Updater interface: | ||
+ | #ifndef _SHADER_UNIFORM_UPDATER_H_ | ||
+ | #define _SHADER_UNIFORM_UPDATER_H_ | ||
+ | |||
+ | #if defined(GLX) | ||
+ | #include "GLee.h" | ||
+ | #elif defined(GLES1) | ||
+ | #if defined(PANDORA) | ||
+ | #include <GLES/gl.h> | ||
+ | #endif | ||
+ | #elif defined(GLES2) | ||
+ | #if defined(PANDORA) | ||
+ | #include <GLES2/gl2.h> | ||
+ | #endif | ||
+ | #endif | ||
+ | |||
+ | namespace GLESGAE | ||
+ | { | ||
+ | class Camera; | ||
+ | class Material; | ||
+ | class Matrix4; | ||
+ | class ShaderUniformUpdater | ||
+ | { | ||
+ | public: | ||
+ | virtual ~ShaderUniformUpdater() {} | ||
+ | |||
+ | /// Pure virtual to ensure you overload the update function! | ||
+ | virtual void update(const GLint uniformId, const Camera* const camera, const Material* const material, const Matrix4* const transform) = 0; | ||
+ | |||
+ | protected: | ||
+ | /// Protected constructor to ensure you derive from this, and don't create empty updaters. | ||
+ | ShaderUniformUpdater() {} | ||
+ | }; | ||
+ | } | ||
+ | |||
+ | #endif | ||
+ | |||
+ | |||
+ | Another nice and simple class. See a pattern here? Engine code should be simple and get out the way, so that you can extend it and do what you need when writing your application or game!<br /> | ||
+ | Anyway... the only slight oddity here is that we do define an include for GLES1 as a "just in case" measure. If the header gets pulled in by accident, it'll still compile. We're not doing anything particularly fancy here anyway, as we just define an interface method that takes in a GLint, Camera, Material and Transform. | ||
+ | |||
+ | We then add a map to the Shader Based Render Context, and a couple of functions: | ||
+ | public: | ||
+ | /// Add a uniform updater | ||
+ | void addUniformUpdater(const std::string& uniformName, ShaderUniformUpdater* const updater); | ||
+ | |||
+ | /// Clear uniform updaters | ||
+ | void clearUniformUpdaters() { mUniformUpdaters.clear(); } | ||
+ | |||
+ | protected: | ||
+ | /// Update all uniforms | ||
+ | void updateUniforms(Material* const material, Matrix4* const transform); | ||
+ | |||
+ | private: | ||
+ | std::map<std::string, ShaderUniformUpdater*> mUniformUpdaters; | ||
+ | |||
+ | Obviously stick them in the right parts of the file.<br /> | ||
+ | We make the add and clear functions public so that we can access them directly, and protect the updateUniforms call as we'll be handling this in-class. Of course, our actual map itself is private, out the way of meddling hands. | ||
+ | |||
+ | The addUniformUpdater call does exactly what you think it does.. it just adds the updater to the map with the uniformName as the key. Again, I'm being lazy and not checking anything, but you really should... and we shall have a clean-up day soon which'll involve writing a logging system and error checking macros. Makes things easier when certain platforms log things in peculiar ways, or when asserting could actually destroy the call stack before you get to see it! | ||
+ | |||
+ | The updateUniforms function is relatively straight forward too.. I'll print it up here though and we can go through it: | ||
+ | void ShaderBasedContext::updateUniforms(Material* const material, Matrix4* const transform) | ||
+ | { | ||
+ | std::vector<std::pair<std::string, GLint> > uniforms(mCurrentShader->getUniformArray()); | ||
+ | for (std::vector<std::pair<std::string, GLint> >::iterator itr(uniforms.begin()); itr < uniforms.end(); ++itr) | ||
+ | mUniformUpdaters[itr->first]->update(itr->second, mCamera, material, transform); | ||
+ | } | ||
+ | |||
+ | We take the material and transform in because we don't actually store these as member data. We do store the Camera as member data, however, so we can access this from wherever we like. Again, naughty points for not checking pointers.. you should! Same for ensuring that the uniform updater we're looking for actually exists.<br /> | ||
+ | But in essence, all we do here is run through the uniform array on our current shader, match up with an updater, and call its update function.<br /> | ||
+ | And we call this function just after our bindShader call in the drawMesh function. | ||
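+ | For what it's worth, a slightly more defensive version - a sketch only, not what's in the repository - might look like this: | ||
+ | void ShaderBasedContext::updateUniforms(Material* const material, Matrix4* const transform) | ||
+ | { | ||
+ | std::vector<std::pair<std::string, GLint> > uniforms(mCurrentShader->getUniformArray()); | ||
+ | for (std::vector<std::pair<std::string, GLint> >::iterator itr(uniforms.begin()); itr < uniforms.end(); ++itr) { | ||
+ | std::map<std::string, ShaderUniformUpdater*>::iterator updater(mUniformUpdaters.find(itr->first)); | ||
+ | if (updater != mUniformUpdaters.end()) // only poke uniforms we actually have an updater for | ||
+ | updater->second->update(itr->second, mCamera, material, transform); | ||
+ | // else: a log or assert would be handy here once we have a logging system. | ||
+ | } | ||
+ | } | ||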
+ | |||
+ | == Wait, what Camera?! == | ||
+ | I've mentioned storing the Camera as member data, except we haven't written any code to pass it through yet!<br /> | ||
+ | We shall do that now. | ||
+ | |||
+ | public: | ||
+ | /// Set the Camera | ||
+ | void setShaderBasedCamera(Camera* const camera) { mCamera = camera; } | ||
+ | |||
+ | private: | ||
+ | Camera* mCamera; | ||
+ | |||
+ | Done! | ||
+ | |||
+ | What, you were expecting something a bit more substantial like in the Fixed Function Context?<br /> | ||
+ | Remember, in a Shader Based Context, you have to do everything through shaders.. your entire rendering pipeline is flexible and up to you to decide what to do with it. There are no matrix stacks to deal with, no fixed light count, and no easy way out. If you want to render something, you push it through a shader, and feed that shader with enough information to make it do what you want... which in our case will be the Camera matrices which we had pushed back on to the Projection and ModelView stacks last time. | ||
+ | |||
+ | Of course, you'll need to set the Graphics System to call the correct setCamera function, but this is trivial, and in the repository for the curious. | ||
+ | |||
+ | == The MVP Uniform == | ||
+ | Now we need to send our Camera matrices to our shader.<br /> | ||
+ | As we've mentioned, we'll precompute this so we don't have to send three matrices over all the time. | ||
+ | #ifndef _MVP_UNIFORM_UPDATER_H_ | ||
+ | #define _MVP_UNIFORM_UPDATER_H_ | ||
+ | |||
+ | #include "../../Graphics/ShaderUniformUpdater.h" | ||
+ | |||
+ | class MVPUniformUpdater : public GLESGAE::ShaderUniformUpdater | ||
+ | { | ||
+ | public: | ||
+ | MVPUniformUpdater() : GLESGAE::ShaderUniformUpdater() {} | ||
+ | |||
+ | void update(const GLint uniformId, const GLESGAE::Camera* const camera, const GLESGAE::Material* const material, const GLESGAE::Matrix4* const transform); | ||
+ | }; | ||
+ | |||
+ | #endif | ||
+ | |||
+ | You might wonder where on earth this file is, considering the include definition is a bit mad.<br /> | ||
+ | This updater lives in the example folder - away from the core engine. Uniform updaters are generally app specific, so we store them outside the engine. Granted, a ModelViewProjection updater is probably going to be the same for every application, but note the "probably" .. if we didn't have the ability to change it, we'd likely need to hack around it if we needed to! | ||
+ | |||
+ | Anyway, our object file is nice and simple, as it just contains the one method: | ||
+ | #include "MVPUniformUpdater.h" | ||
+ | |||
+ | #if defined(GLX) | ||
+ | #include "../../Graphics/GLee.h" | ||
+ | #elif defined(GLES2) | ||
+ | #if defined(PANDORA) | ||
+ | #include <GLES2/gl2.h> | ||
+ | #endif | ||
+ | #endif | ||
+ | |||
+ | #include "../../Maths/Matrix4.h" | ||
+ | #include "../../Graphics/Camera.h" | ||
+ | #include "../../Graphics/Material.h" | ||
+ | |||
+ | using namespace GLESGAE; | ||
+ | |||
+ | void MVPUniformUpdater::update(const GLint uniformId, const Camera* const camera, const Material* const material, const Matrix4* const transform) | ||
+ | { | ||
+ | const Matrix4& view(camera->getViewMatrix()); | ||
+ | const Matrix4& projection(camera->getProjectionMatrix()); | ||
+ | |||
+ | const Matrix4 modelViewProjection(transform->getTranspose() * view * projection); | ||
+ | |||
+ | glUniformMatrix4fv(uniformId, 1U, false, modelViewProjection.getData()); | ||
+ | } | ||
+ | |||
+ | While we technically don't need the GL includes again as our base ShaderUniformUpdater class pulls them in, it's good practice to show what files we are actually using.. so we'll pull them in again.<br /> | ||
+ | Again, this file lives in our Example folder, hence the odd include paths. | ||
+ | |||
+ | There isn't much here that should be unfamiliar to you; we grab the view and projection matrix, then multiply them all together for the concatenated ModelViewProjection matrix.<br /> | ||
+ | The only odd bit - and the most important in this case - is the glUniformMatrix4fv call. | ||
+ | |||
+ | OpenGL has a bunch of uniform functions to send over matrices, vectors, and single variables of float, byte, int, bool, etc... it can also set "samplers" ( used for texturing, which we'll get to soon enough ) and attributes, as we've seen before when setting up the Render Context. I suggest you read the OpenGL/ES spec from Khronos, or either the Red Book or the OpenGL ES 2.0 Programming Guide, for more information on what they do and what they are. In this case, we're sending over a single 4x4 matrix of float values - hence the "Matrix4fv" notation. | ||
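+ | For a flavour of the family ( these are standard GL ES 2.0 calls; the ids and data pointers here are made-up placeholders ): | ||
+ | glUniform1f(someFloatId, 0.5F);                             // a single float | ||
+ | glUniform3fv(someVectorId, 1, someVector3Data);             // a vec3, from a pointer to three floats | ||
+ | glUniform1i(someSamplerId, 0);                              // a sampler - you hand it the texture unit as an int | ||
+ | glUniformMatrix4fv(someMatrixId, 1, false, someMatrixData); // a 4x4 float matrix, as we're doing above | ||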
+ | |||
+ | Now we just update the example code, and we're done! | ||
+ | #include <cstdio> | ||
+ | #include <cstdlib> | ||
+ | |||
+ | #include "../../Graphics/GraphicsSystem.h" | ||
+ | #include "../../Graphics/Context/ShaderBasedContext.h" | ||
+ | #include "../../Events/EventSystem.h" | ||
+ | #include "../../Input/InputSystem.h" | ||
+ | #include "../../Input/Keyboard.h" | ||
+ | #include "../../Input/Pad.h" | ||
+ | |||
+ | #include "../../Graphics/Camera.h" | ||
+ | #include "../../Graphics/Mesh.h" | ||
+ | #include "../../Graphics/VertexBuffer.h" | ||
+ | #include "../../Graphics/IndexBuffer.h" | ||
+ | #include "../../Graphics/Material.h" | ||
+ | #include "../../Graphics/Shader.h" | ||
+ | #include "../../Maths/Matrix4.h" | ||
+ | |||
+ | #include "MVPUniformUpdater.h" | ||
+ | |||
+ | using namespace GLESGAE; | ||
+ | |||
+ | void controlCamera(Camera* const camera, Controller::KeyboardController* const keyboard); | ||
+ | Mesh* makeSprite(Shader* const shader); | ||
+ | Shader* makeSpriteShader(); | ||
+ | |||
+ | int main(void) | ||
+ | { | ||
+ | EventSystem* eventSystem(new EventSystem); | ||
+ | InputSystem* inputSystem(new InputSystem(eventSystem)); | ||
+ | GraphicsSystem* graphicsSystem(new GraphicsSystem(GraphicsSystem::SHADER_BASED_RENDERING)); | ||
+ | |||
+ | if (false == graphicsSystem->initialise("GLESGAE Shader Based Test", 800, 480, 16, false)) { | ||
+ | //TODO: OH NOES! WE'VE DIEDED! | ||
+ | return -1; | ||
+ | } | ||
+ | |||
+ | graphicsSystem->getShaderContext()->addUniformUpdater("u_mvp", new MVPUniformUpdater); | ||
+ | |||
+ | Mesh* mesh(makeSprite(makeSpriteShader())); | ||
+ | Camera* camera(new Camera(Camera::CAMERA_2D)); | ||
+ | camera->getTransformMatrix().setPosition(Vector3(0.0F, 0.0F, -5.0F)); | ||
+ | |||
+ | eventSystem->bindToWindow(graphicsSystem->getWindow()); | ||
+ | |||
+ | Controller::KeyboardController* myKeyboard(inputSystem->newKeyboard()); | ||
+ | |||
+ | while(false == myKeyboard->getKey(Controller::KEY_ESCAPE)) { | ||
+ | controlCamera(camera, myKeyboard); | ||
+ | |||
+ | eventSystem->update(); | ||
+ | inputSystem->update(); | ||
+ | graphicsSystem->beginFrame(); | ||
+ | graphicsSystem->setCamera(camera); | ||
+ | graphicsSystem->drawMesh(mesh); | ||
+ | graphicsSystem->endFrame(); | ||
+ | } | ||
+ | |||
+ | delete graphicsSystem; | ||
+ | delete inputSystem; | ||
+ | delete eventSystem; | ||
+ | delete mesh; | ||
+ | |||
+ | return 0; | ||
+ | } | ||
+ | |||
+ | Mesh* makeSprite(Shader* const shader) | ||
+ | { | ||
+ | float vertexData[32] = {// Position - 16 floats | ||
+ | -1.0F, 1.0F, 0.0F, 1.0F, | ||
+ | 1.0F, 1.0F, 0.0F, 1.0F, | ||
+ | 1.0F, -1.0F, 0.0F, 1.0F, | ||
+ | -1.0F, -1.0F, 0.0F, 1.0F, | ||
+ | // Colour - 16 floats | ||
+ | 0.0F, 1.0F, 0.0F, 1.0F, | ||
+ | 1.0F, 0.0F, 0.0F, 1.0F, | ||
+ | 0.0F, 0.0F, 1.0F, 1.0F, | ||
+ | 1.0F, 1.0F, 1.0F, 1.0F}; | ||
+ | |||
+ | unsigned int vertexSize = 32 * sizeof(float); | ||
+ | |||
+ | unsigned char indexData[6] = { 0, 1, 2, 2, 3, 0 }; | ||
+ | unsigned int indexSize = 6 * sizeof(unsigned char); | ||
+ | |||
+ | VertexBuffer* newVertexBuffer = new VertexBuffer(reinterpret_cast<unsigned char*>(&vertexData), vertexSize); | ||
+ | newVertexBuffer->addFormatIdentifier(VertexBuffer::FORMAT_POSITION_4F, 4U); | ||
+ | newVertexBuffer->addFormatIdentifier(VertexBuffer::FORMAT_COLOUR_4F, 4U); | ||
+ | IndexBuffer* newIndexBuffer = new IndexBuffer(reinterpret_cast<unsigned char*>(&indexData), indexSize, IndexBuffer::FORMAT_UNSIGNED_BYTE); | ||
+ | Material* newMaterial = new Material; | ||
+ | newMaterial->setShader(shader); | ||
+ | Matrix4* newTransform = new Matrix4; | ||
+ | |||
+ | return new Mesh(newVertexBuffer, newIndexBuffer, newMaterial, newTransform); | ||
+ | } | ||
+ | |||
+ | void controlCamera(Camera* const camera, Controller::KeyboardController* const keyboard) | ||
+ | { | ||
+ | Vector3 newPosition; | ||
+ | camera->getTransformMatrix().getPosition(&newPosition); | ||
+ | if (true == keyboard->getKey(Controller::KEY_ARROW_DOWN)) | ||
+ | newPosition.z() -= 0.01F; | ||
+ | |||
+ | if (true == keyboard->getKey(Controller::KEY_ARROW_UP)) | ||
+ | newPosition.z() += 0.01F; | ||
+ | |||
+ | if (true == keyboard->getKey(Controller::KEY_ARROW_LEFT)) | ||
+ | newPosition.x() -= 0.01F; | ||
+ | |||
+ | if (true == keyboard->getKey(Controller::KEY_ARROW_RIGHT)) | ||
+ | newPosition.x() += 0.01F; | ||
+ | camera->getTransformMatrix().setPosition(newPosition); | ||
+ | |||
+ | camera->update(); | ||
+ | } | ||
+ | |||
+ | #if defined(GLX) | ||
+ | #include "../../Graphics/GLee.h" | ||
+ | #endif | ||
+ | |||
+ | Shader* makeSpriteShader() | ||
+ | { | ||
+ | std::string vShader = | ||
+ | "attribute vec4 a_position; \n" | ||
+ | "attribute vec4 a_colour; \n" | ||
+ | "varying vec4 v_colour; \n" | ||
+ | "uniform mat4 u_mvp; \n" | ||
+ | "void main() \n" | ||
+ | "{ \n" | ||
+ | " gl_Position = u_mvp * a_position; \n" | ||
+ | " v_colour = a_colour; \n" | ||
+ | "} \n"; | ||
+ | |||
+ | std::string fShader = | ||
+ | #ifdef GLES2 | ||
+ | "precision mediump float; \n" | ||
+ | #endif | ||
+ | "varying vec4 v_colour; \n" | ||
+ | "void main() \n" | ||
+ | "{ \n" | ||
+ | " gl_FragColor.grb = v_colour.rgb; \n" | ||
+ | "} \n"; | ||
+ | |||
+ | #ifndef GLES1 | ||
+ | Shader* newShader(new Shader()); | ||
+ | newShader->createFromSource(vShader, fShader); | ||
+ | |||
+ | return newShader; | ||
+ | #else | ||
+ | return 0; | ||
+ | #endif | ||
+ | } | ||
+ | |||
+ | Most of this you should be familiar with now.. the bits we're interested in are in the vertex shader: | ||
+ | std::string vShader = | ||
+ | "attribute vec4 a_position; \n" | ||
+ | "attribute vec4 a_colour; \n" | ||
+ | "varying vec4 v_colour; \n" | ||
+ | "uniform mat4 u_mvp; \n" | ||
+ | "void main() \n" | ||
+ | "{ \n" | ||
+ | " gl_Position = u_mvp * a_position; \n" | ||
+ | " v_colour = a_colour; \n" | ||
+ | "} \n"; | ||
+ | |||
+ | We've added a new uniform - a Mat4, or 4x4 matrix - called u_mvp, and then we multiply this by the position attribute we get passed in. This effectively multiplies each vertex of the mesh by the ModelViewProjection matrix, which is pretty much what the fixed function pipeline does for you.<br /> | ||
+ | Of course, the other function of relevance is ''graphicsSystem->getShaderContext()->addUniformUpdater("u_mvp", new MVPUniformUpdater);'' which actually adds the uniform updater in the first place, else nothing works! | ||
+ | |||
+ | == Cameras are More Fun == | ||
+ | If you run it with the 2d camera, you get the giant quad from last time ( albeit with different colours as we swizzle the RGB values in the fragment shader; just like we did in the Shader Render Context example. ) We know why this is, so we can safely ignore it. | ||
+ | |||
+ | Our camera's "right" is still in the wrong direction. Again, we know this, and again we can ignore it. | ||
+ | |||
+ | However... set the camera to be 3D and run again.<br /> | ||
+ | Where's the quad?<br /> | ||
+ | It's behind you! ... no, seriously, it is.. press the up arrow for a bit and it'll come into view. | ||
+ | |||
+ | You've got the right to go "what the smeggity smeg is going on?!" about now.<br /> | ||
+ | What's happened is that our camera is now pointing down the Z axis in the opposite direction from what it was in the fixed function pipeline. How? We haven't touched the camera code! And we're doing everything the Fixed Function pipeline does... right? | ||
+ | |||
+ | Actually, no... | ||
+ | == Matrix Bashing == | ||
+ | OpenGL multiplies in reverse order from what you expect.<br /> | ||
+ | So our '''model * view * projection''' matrix actually ends up as '''projection * view * model'''.<br /> | ||
+ | Well actually, again, it doesn't... remember that in the Fixed Function pipeline there's a bunch of stacks, and importantly, your model and view matrices all sit on the same stack. If you multiply these in reverse order from top to bottom, you DO get '''model * view''' ( or '''model * model * model * view''', etc... ) which means our final transform is really '''projection * modelview'''. Remember too that in matrix land, A * B is not necessarily B * A, so we get some extended fun: if we just go ahead and shove that order into our MVP updater, we still don't get anything on screen, due to transpose issues. | ||
+ | |||
+ | Therefore, our "fixed" MVP updater is actually this: | ||
+ | void MVPUniformUpdater::update(const GLint uniformId, const Camera* const camera, const Material* const material, const Matrix4* const transform) | ||
+ | { | ||
+ | const Matrix4& view(camera->getViewMatrix()); | ||
+ | const Matrix4& projection(camera->getProjectionMatrix()); | ||
+ | |||
+ | const Matrix4 modelView((*transform).getTranspose() * view.getTranspose()); | ||
+ | const Matrix4 modelViewProjection(projection.getTranspose() * modelView); | ||
+ | |||
+ | glUniformMatrix4fv(uniformId, 1U, false, modelViewProjection.getTranspose().getData()); | ||
+ | } | ||
+ | |||
+ | Lots of transposing, so you're right to automatically assume that this is not ideal, and probably rather heavy to calculate.<br /> | ||
+ | However, we can get rid of the final transpose by changing the vertex shader to '''gl_Position = a_position * u_mvp;''' because, again, A*B doesn't equal B*A and this also holds true when multiplying against vectors. | ||
+ | |||
+ | So, what can we do, instead?<br /> | ||
+ | If we don't transpose all the matrices, and have them in the order '''*we*''' think is right ( '''model * view * projection''' ), the camera system is in left handed mode, which is the opposite of what OpenGL expects, but our Z behaves the way we expect ( positive down the Z axis as things get further away. )<br /> | ||
+ | However, if we do transpose them, the camera system is in right handed mode, which is what OpenGL expects - and you can test this, as if you use gluPerspective and glFrustum in place of my camera code, you still end up with the same result as having to transpose all these matrices. | ||
+ | |||
+ | So.. to ensure that our Z points the way we want in both Fixed Function and Shader Based contexts, we need to go fix the Fixed Function Context: | ||
+ | void FixedFunctionContext::setFixedFunctionCamera(Camera* const camera) | ||
+ | { | ||
+ | glMatrixMode(GL_PROJECTION); | ||
+ | glLoadMatrixf(camera->getProjectionMatrix().getData()); | ||
+ | |||
+ | glMatrixMode(GL_MODELVIEW); | ||
+ | Matrix4& viewMatrix(camera->getViewMatrix()); | ||
+ | viewMatrix(0U, 2U) = -viewMatrix(0U, 2U); | ||
+ | viewMatrix(1U, 2U) = -viewMatrix(1U, 2U); | ||
+ | viewMatrix(2U, 2U) = -viewMatrix(2U, 2U); | ||
+ | viewMatrix(3U, 2U) = -viewMatrix(3U, 2U); | ||
+ | glLoadMatrixf(viewMatrix.getData()); | ||
+ | |||
+ | mCamera = camera; | ||
+ | } | ||
+ | |||
+ | And now our Fixed Function and Shader Pipelines match up!<br /> | ||
+ | Whew! Fun times, eh? | ||
+ | |||
+ | If you have issues understanding what's going on here, I suggest you check the code out and play about.<br /> | ||
+ | The engine is still nice and simple, so you should be easily able to mess about, break things, and fix things again to see how they work. | ||
+ | |||
+ | == Building the Example == | ||
+ | In the SVN there are Makefiles already set up for you.. just trigger '''''make -f MakefileES2.pandora''''' or whatever your chosen configuration is, and it'll happily build for you and spit out a '''GLESGAE.pandora''' binary for you to run. | ||
+ | |||
+ | = Next Time = | ||
+ | We're going to make a start on loading up textures and displaying them on our lovely test quad, then make a start at turning our current vertex array format into a vertex buffer object format. | ||
+ | |||
+ | = GLESGAE - Dealing with Textures = | ||
+ | |||
+ | == Introduction == | ||
+ | Textures are nice and simple to deal with.<br /> | ||
+ | Well... when they're in the format you want, they're nice and simple! | ||
+ | |||
+ | There are many many texture compression formats, from ETC1 and PVRTC to the usual S3TC set of DXT1, DXT3 and DXT5.<br /> | ||
+ | Not every platform supports the same set of compression formats either.<br /> | ||
+ | Our Pandoras will support ETC1 due to it being an OpenGL ES standard, and PVRTC due to the PowerVR chipset.<br /> | ||
+ | It'll also support uncompressed textures such as RGB and RGBA - and these are what we'll load up today in the good ol' BMP format. | ||
+ | |||
+ | == Fast Track == | ||
+ | We're now on to SVN revision 7... '''''svn co -r 7 http://svn3.xp-dev.com/svn/glesgae/trunk/ glesgae''''' | ||
+ | |||
+ | == Loading a BMP == | ||
+ | There are oodles of pages and documentation on the BMP format... so we'll just get this done quickly.<br /> | ||
+ | Technically, this breaks our original design choice of having a pipeline feed us data as we're manually loading up data and converting it ourselves.. however, to get something up and running quickly, this'll do.<br /> | ||
+ | We still have a lot to get through before dealing with our data pipeline, and it's much more interesting to get something working now in a state we can upgrade later, than write screeds of code we can't test! | ||
+ | |||
+ | So yes, our quick BMP loader. | ||
+ | === Texture.h === | ||
+ | #ifndef _TEXTURE_H_ | ||
+ | #define _TEXTURE_H_ | ||
+ | |||
+ | #if defined(GLX) | ||
+ | #include "GLee.h" | ||
+ | #elif defined(PANDORA) | ||
+ | #if defined(GLES1) | ||
+ | #include <GLES/gl.h> | ||
+ | #elif defined(GLES2) | ||
+ | #include <GLES2/gl2.h> | ||
+ | #endif | ||
+ | #endif | ||
+ | |||
+ | #include <string> | ||
+ | |||
+ | namespace GLESGAE | ||
+ | { | ||
+ | class Texture | ||
+ | { | ||
+ | public: | ||
+ | enum TextureFormat { | ||
+ | INVALID_FORMAT | ||
+ | , RGBA | ||
+ | , RGB | ||
+ | }; | ||
+ | |||
+ | Texture() : mId(), mData(0), mWidth(), mHeight(), mType(INVALID_FORMAT) {} | ||
+ | |||
+ | /// Load as BMP | ||
+ | void loadBMP(const std::string& fileName); | ||
+ | |||
+ | /// Retrieve this Texture's GL id | ||
+ | GLuint getId() const { return mId; } | ||
+ | |||
+ | /// Get Width | ||
+ | unsigned int getWidth() const { return mWidth; } | ||
+ | |||
+ | /// Get Height | ||
+ | unsigned int getHeight() const { return mHeight; } | ||
+ | |||
+ | protected: | ||
+ | /// Create GL Id | ||
+ | void createGLid(); | ||
+ | |||
+ | private: | ||
+ | GLuint mId; | ||
+ | unsigned char* mData; | ||
+ | unsigned int mWidth; | ||
+ | unsigned int mHeight; | ||
+ | TextureFormat mType; | ||
+ | }; | ||
+ | } | ||
+ | |||
+ | #endif | ||
+ | |||
+ | Not much going on here.. we're storing a data pointer, the GLuint reference, the width and height, and the type of format we've loaded - be it RGB or RGBA. | ||
+ | |||
+ | To the meat! | ||
+ | === Texture.cpp === | ||
+ | #include "Texture.h" | ||
+ | |||
+ | #include <cstdio> | ||
+ | |||
+ | using namespace GLESGAE; | ||
+ | |||
+ | void Texture::loadBMP(const std::string& fileName) | ||
+ | { | ||
+ | FILE *file; | ||
+ | unsigned long size; // size of the image in bytes. | ||
+ | unsigned short int planes; // number of planes in image (must be 1) | ||
+ | unsigned short int bpp; // number of bits per pixel (must be 24) | ||
+ | |||
+ | // make sure the file is there. | ||
+ | if ((file = fopen(fileName.c_str(), "rb"))==NULL) | ||
+ | return; | ||
+ | |||
+ | // seek through the bmp header, up to the width/height: | ||
+ | fseek(file, 18, SEEK_CUR); | ||
+ | |||
+ | if (1 != fread(&mWidth, 4, 1, file)) | ||
+ | return; | ||
+ | |||
+ | // read the height | ||
+ | if (1 != fread(&mHeight, 4, 1, file)) | ||
+ | return; | ||
+ | |||
+ | // read the planes | ||
+ | if (1 != fread(&planes, 2, 1, file)) | ||
+ | return; | ||
+ | |||
+ | if (1 != planes) // Only supporting single layer BMP just now | ||
+ | return; | ||
+ | |||
+ | // read the bpp | ||
+ | if (1 != fread(&bpp, 2, 1, file)) | ||
+ | return; | ||
+ | |||
+ | if (24 == bpp) { | ||
+ | size = mWidth * mHeight * 3U; // RGB | ||
+ | mType = RGB; | ||
+ | } | ||
+ | else if (32 == bpp) { | ||
+ | size = mWidth * mHeight * 4U; // RGBA | ||
+ | mType = RGBA; | ||
+ | } | ||
+ | else { // anything else is unsupported, so bail out rather than use an uninitialised size. | ||
+ | fclose(file); | ||
+ | return; | ||
+ | } | ||
+ | |||
+ | // seek past the rest of the bitmap header. | ||
+ | fseek(file, 24, SEEK_CUR); | ||
+ | |||
+ | // read the data. | ||
+ | mData = new unsigned char[size]; | ||
+ | if (mData == 0) | ||
+ | return; | ||
+ | |||
+ | if (1 != fread(mData, size, 1, file)) | ||
+ | return; | ||
+ | |||
+ | if (24 == bpp) { | ||
+ | for (unsigned int index(0U); index < size; index += 3U) { // reverse all of the colors. (bgr -> rgb) | ||
+ | unsigned char temp(mData[index]); | ||
+ | mData[index] = mData[index + 2U]; | ||
+ | mData[index + 2U] = temp; | ||
+ | } | ||
+ | } | ||
+ | else if (32 == bpp) { | ||
+ | for (unsigned int index(0U); index < size; index += 4U) { // reverse all of the colors. (bgra -> rgba) | ||
+ | unsigned char temp(mData[index]); | ||
+ | mData[index] = mData[index + 2U]; | ||
+ | mData[index + 2U] = temp; | ||
+ | } | ||
+ | } | ||
+ | |||
+ | createGLid(); | ||
+ | |||
+ | fclose(file); | ||
+ | delete [] mData; | ||
+ | } | ||
+ | |||
+ | void Texture::createGLid() | ||
+ | { | ||
+ | glGenTextures(1, &mId); | ||
+ | glBindTexture(GL_TEXTURE_2D, mId); | ||
+ | |||
+ | // Enable some filtering | ||
+ | glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR); | ||
+ | glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR); | ||
+ | |||
+ | // Load up our data into the texture reference | ||
+ | switch (mType) { | ||
+ | case RGB: | ||
+ | glTexImage2D(GL_TEXTURE_2D, 0U, GL_RGB, mWidth, mHeight, 0U, GL_RGB, GL_UNSIGNED_BYTE, mData); | ||
+ | break; | ||
+ | case RGBA: | ||
+ | glTexImage2D(GL_TEXTURE_2D, 0U, GL_RGBA, mWidth, mHeight, 0U, GL_RGBA, GL_UNSIGNED_BYTE, mData); | ||
+ | break; | ||
+ | case INVALID_FORMAT: | ||
+ | default: | ||
+ | break; | ||
+ | }; | ||
+ | } | ||
+ | |||
+ | Honestly, there's not much going on here.<br /> | ||
+ | We load up the file, we skip the header to the width and height info and read them out.<br /> | ||
+ | Then, we check how many layers there are and ensure there's only the one.<br /> | ||
+ | After that, we take the Bits Per Pixel value out.. we can load up RGB - or 24bit - and RGBA - or 32bit - and we check for these and adjust our size accordingly as 24bit has 3 components - R, G and B - and 32bit has 4 components - R, G, B and A.<br /> | ||
+ | We skip the rest of the header as we're not interested in it, then load up the actual data itself. | ||
+ | |||
+ | Now the fun bit.<br /> | ||
+ | BMP actually stores information in BGR/BGRA format. We need it as RGB/RGBA so we need to swizzle the texture.<br /> | ||
+ | We do this by running through the texture, and literally swapping the values about manually. | ||
+ | |||
+ | Once that's done, we trigger GL to actually create the Texture ID and upload the texture to it.<br /> | ||
+ | Check the GL guide for what's going on here and what the parameters mean, as there are quite a few.<br /> | ||
+ | Once GL has our texture though, we can delete it from our memory. | ||
+ | |||
+ | == Updating Material == | ||
+ | Materials are generally the objects which define how things get drawn. How shiny they are, their colours, etc... and of course, their textures. | ||
+ | |||
+ | We therefore need to add some code to Material so we can add and fiddle with our textures! | ||
+ | /// Add a Texture | ||
+ | void addTexture(Texture* const texture) { mTextures.push_back(texture); } | ||
+ | |||
+ | /// Grab a Texture | ||
+ | Texture* const getTexture(unsigned int index) const { return mTextures[index]; } | ||
+ | |||
+ | /// Grab amount of Textures we have | ||
+ | unsigned int getTextureCount() const { return mTextures.size(); } | ||
+ | |||
+ | And that's all we need, really.<br /> | ||
+ | I had already done bits of Material's texture support, so it contains a vector of Texture* objects, and all it really needed was addTexture and getTextureCount.<br /> | ||
+ | The whole file is in SVN anyway. | ||
+ | |||
+ | == Mesh Fiddling == | ||
+ | Of course, sending a texture over is nice and all, but we need to tell GL how on Earth to draw the thing - and that's where Texture Co-ordinates come in. These map the 2D image to the vertex points on your mesh. | ||
+ | |||
+ | Let's look at our little quad as it stands: | ||
+ | float vertexData[32] = {// Position - 16 floats | ||
+ | -1.0F, 1.0F, 0.0F, 1.0F, | ||
+ | 1.0F, 1.0F, 0.0F, 1.0F, | ||
+ | 1.0F, -1.0F, 0.0F, 1.0F, | ||
+ | -1.0F, -1.0F, 0.0F, 1.0F, | ||
+ | // Colour - 16 floats | ||
+ | 0.0F, 1.0F, 0.0F, 1.0F, | ||
+ | 1.0F, 0.0F, 0.0F, 1.0F, | ||
+ | 0.0F, 0.0F, 1.0F, 1.0F, | ||
+ | 1.0F, 1.0F, 1.0F, 1.0F}; | ||
+ | |||
+ | So we're going from top left, to top right, to bottom right, then bottom left in our Position. Let's remove the Colour values and replace them with some texture co-ordinates to match up. | ||
+ | |||
+ | float vertexData[24] = {// Position - 16 floats | ||
+ | -1.0F, 1.0F, 0.0F, 1.0F, | ||
+ | 1.0F, 1.0F, 0.0F, 1.0F, | ||
+ | 1.0F, -1.0F, 0.0F, 1.0F, | ||
+ | -1.0F, -1.0F, 0.0F, 1.0F, | ||
+ | // Tex Coords - 8 floats | ||
+ | 1.0F, 1.0F, // top right | ||
+ | 0.0F, 1.0F, // top left | ||
+ | 0.0F, 0.0F, // bottom left | ||
+ | 1.0F, 0.0F}; // bottom right | ||
+ | |||
+ | Hold on, we're going the wrong way with the Texture Co-ordinates! Or are we?<br /> | ||
+ | The other fun thing about BMP is that it stores the image upside down, so I'm compensating for that here.<br /> | ||
+ | We also only store two floats per vertex, as they're unit X/Y co-ordinates into the texture... this means we need to change the data description as well: | ||
+ | VertexBuffer* newVertexBuffer = new VertexBuffer(reinterpret_cast<unsigned char*>(&vertexData), vertexSize); | ||
+ | newVertexBuffer->addFormatIdentifier(VertexBuffer::FORMAT_POSITION_4F, 4U); | ||
+ | newVertexBuffer->addFormatIdentifier(VertexBuffer::FORMAT_TEXTURE_2F, 4U); // replacing the Colour one we used to have. | ||
+ | |||
+ | We also need to add a Texture to the Material here, so we'll modify the function to take this in, and then add it to the Material: | ||
+ | Mesh* makeSprite(Shader* const shader, Texture* const texture) | ||
+ | { | ||
+ | ... | ||
+ | ... | ||
+ | Material* newMaterial = new Material; | ||
+ | newMaterial->setShader(shader); | ||
+ | newMaterial->addTexture(texture); | ||
+ | Matrix4* newTransform = new Matrix4; | ||
+ | |||
+ | We dummy out the shader stuff on Fixed Function pipelines, so we can safely use it without issue. | ||
+ | |||
+ | == Shaders and Textures == | ||
+ | We'll get the harder system done and out the way first... | ||
+ | |||
+ | Our shaders are going to need updating to deal with the fact we'll be sending in Texture Co-ordinates and a Texture itself.<br /> | ||
+ | So let's go and do just that: | ||
+ | std::string vShader = | ||
+ | "attribute vec4 a_position; \n" | ||
+ | "attribute vec2 a_texCoord0; \n" | ||
+ | "varying vec2 v_texCoord0; \n" | ||
+ | "uniform mat4 u_mvp; \n" | ||
+ | "void main() \n" | ||
+ | "{ \n" | ||
+ | " gl_Position = u_mvp * a_position; \n" | ||
+ | " v_texCoord0 = a_texCoord0; \n" | ||
+ | "} \n"; | ||
+ | |||
+ | std::string fShader = | ||
+ | #ifdef GLES2 | ||
+ | "precision mediump float; \n" | ||
+ | #endif | ||
+ | "varying vec2 v_texCoord0; \n" | ||
+ | "uniform sampler2D s_texture0; \n" | ||
+ | "void main() \n" | ||
+ | "{ \n" | ||
+ | " gl_FragColor = texture2D(s_texture0, v_texCoord0); \n" | ||
+ | "} \n"; | ||
+ | |||
+ | In the vertex shader, we're expecting a vec2 - a float vector of two components - with the name a_texCoord0. We then just copy it into the varying without touching it to pass it through to the fragment shader.<br /> | ||
+ | The fragment shader picks this up, and looks for a uniform of the special 2D Sampler type called s_texture0.<br /> | ||
+ | Then the interesting bit; we call a built-in function - texture2D - with our texture uniform and texture co-ordinate, and pass it to the fragment colour. That's it.. we'll get textures assuming we push in the bits we need.. so let's do that! | ||
+ | |||
+ | Remember our Uniform System?<br /> | ||
+ | Well, we've added a new uniform here, so we need another updater: | ||
+ | #ifndef _TEXTURE0_UNIFORM_UPDATER_H_ | ||
+ | #define _TEXTURE0_UNIFORM_UPDATER_H_ | ||
+ | |||
+ | #include "../../Graphics/ShaderUniformUpdater.h" | ||
+ | |||
+ | class Texture0UniformUpdater : public GLESGAE::ShaderUniformUpdater | ||
+ | { | ||
+ | public: | ||
+ | Texture0UniformUpdater() : GLESGAE::ShaderUniformUpdater() {} | ||
+ | |||
+ | void update(const GLint uniformId, const GLESGAE::Camera* const camera, const GLESGAE::Material* const material, const GLESGAE::Matrix4* const transform); | ||
+ | }; | ||
+ | |||
+ | #endif | ||
+ | |||
+ | Pretty empty header file.. most of these will be like this really. | ||
+ | |||
+ | #include "Texture0UniformUpdater.h" | ||
+ | |||
+ | #if defined(GLX) | ||
+ | #include "../../Graphics/GLee.h" | ||
+ | #elif defined(GLES2) | ||
+ | #if defined(PANDORA) | ||
+ | #include <GLES2/gl2.h> | ||
+ | #endif | ||
+ | #endif | ||
+ | |||
+ | #include "../../Maths/Matrix4.h" | ||
+ | #include "../../Graphics/Camera.h" | ||
+ | #include "../../Graphics/Material.h" | ||
+ | #include "../../Graphics/Texture.h" | ||
+ | |||
+ | using namespace GLESGAE; | ||
+ | |||
+ | void Texture0UniformUpdater::update(const GLint uniformId, const Camera* const camera, const Material* const material, const Matrix4* const transform) | ||
+ | { | ||
+ | // glUniform1i takes the uniform location first, then the value - for a sampler, that value is the texture unit index. | ||
+ | // Point s_texture0 at texture unit 0; the texture itself gets bound to that unit by the context. | ||
+ | glUniform1i(uniformId, 0); | ||
+ | } | ||
+ | |||
+ | Actually, this isn't much different!<br /> | ||
+ | We're only doing one thing here - pointing the s_texture0 sampler uniform at texture unit 0. Remember that glUniform1i takes the uniform location first and the value second; for a sampler, that value is the index of the texture unit to read from. The actual texture gets bound to that unit by the context, which we'll add in a moment. | ||
+ | |||
+ | Pretty straight forward so far, isn't it? | ||
+ | |||
+ | However, we have no texture support in the Shader Based Context.. so we're best adding that now. | ||
+ | |||
+ | == Shader Based Context Fiddling == | ||
+ | We're going to do just a little bit of fiddling to the Shader Based Context to get it rendering our texture just now.<br /> | ||
+ | We will need to revisit this soon, but we need to get VBOs in first as that changes the renderer again. | ||
+ | |||
+ | So, in the header file, we'll add a new function and a new member variable: | ||
+ | public: | ||
+ | /// Update Textures | ||
+ | void updateTextures(const Material* const material); | ||
+ | |||
+ | private: | ||
+ | GLenum mLastTextureUnit; | ||
+ | |||
+ | We store the last texture unit we fiddled with as an optimization. It's bad form to constantly turn texture units on and off, especially if you only just messed with the same one in the previous frame! | ||
+ | |||
+ | Now let's look at that updateTextures function: | ||
+ | void ShaderBasedContext::updateTextures(const Material* const material) | ||
+ | { | ||
+ | const unsigned int textureCount(material->getTextureCount()); | ||
+ | |||
+ | for (unsigned int currentTexture(0U); currentTexture < textureCount; ++currentTexture) { | ||
+ | GLenum currentTextureUnit(GL_TEXTURE0 + currentTexture); | ||
+ | if (currentTextureUnit != mLastTextureUnit) { | ||
+ | glActiveTexture(currentTextureUnit); | ||
+ | mLastTextureUnit = currentTextureUnit; | ||
+ | } | ||
+ | |||
+ | Texture* const texture(material->getTexture(currentTexture)); | ||
+ | glBindTexture(GL_TEXTURE_2D, texture->getId()); | ||
+ | } | ||
+ | } | ||
+ | |||
+ | All we're doing here is running through all the textures our material may have, and activating the matching texture unit for each one.<br /> | ||
+ | If that texture unit is already active, we leave it alone, but we always bind the texture as it may have changed. Nice and simple. | ||
+ | |||
+ | We call this function in the draw loop.. around here will do: | ||
+ | bindShader(material->getShader()); | ||
+ | updateUniforms(material, transform); | ||
+ | updateTextures(material); | ||
+ | |||
+ | And we can remove the variable that was storing currentTextureUnit in this function, as we don't need it any more. | ||
+ | |||
+ | Unfortunately, we didn't add in the Texture Co-ordinate bits to our big switch statement, and due to the whole attribute binding part, it's a bit trickier than we'd like.<br /> | ||
+ | We're only going to quickly push it in to get something showing, but we'll need to reinvestigate this area next time for VBOs anyway.<br /> | ||
+ | Either way, we just need to add the following to the switch statement: | ||
+ | // Texture | ||
+ | case VertexBuffer::FORMAT_TEXTURE_2F: | ||
+ | glVertexAttribPointer(a_texCoord0, 2, GL_FLOAT, GL_FALSE, 0, vertexBuffer->getData() + itr->getOffset()); | ||
+ | break; | ||
+ | |||
+ | And the last little bit is adding this to the constructor: | ||
+ | glEnable(GL_TEXTURE_2D); | ||
+ | |||
+ | This enables texturing on the older desktop GL path. Strictly speaking, the shader pipeline doesn't need it - texturing is driven entirely by the sampler uniforms - and GLES2 doesn't even accept GL_TEXTURE_2D as something to glEnable, so don't be surprised if that line reports an error there. | ||
+ | |||
+ | == Fixed Function Requirements == | ||
+ | Amusingly, we've already done everything we need to do for the Fixed Function pipeline to render Textures. | ||
+ | |||
+ | That was easy, wasn't it? | ||
+ | |||
+ | However, we do need to update our mesh description in main.cpp, so we'll do that: | ||
+ | #ifndef GLES2 | ||
+ | FixedFunctionContext* const fixedContext(graphicsSystem->getFixedContext()); | ||
+ | if (0 != fixedContext) { | ||
+ | fixedContext->enableFixedFunctionVertexPositions(); | ||
+ | } | ||
+ | #endif | ||
+ | |||
+ | That's us... we only had to remove the Vertex Colours description as we're not sending them over any more. Texture Co-ordinates are done slightly differently from everything else, so we catch them dynamically when the mesh hits the context. | ||
+ | |||
+ | All source is in the SVN if you need a reminder as to what we've done. | ||
+ | |||
+ | == A Concatenated Example == | ||
+ | |||
+ | And now let's edit our example code to pull in our texture and display it.<br /> | ||
+ | Most of this should be familiar - we've already covered the changes - but for completeness' sake, here it is in full:<br /> | ||
+ | #include <cstdio> | ||
+ | #include <cstdlib> | ||
+ | |||
+ | #include "../../Graphics/GraphicsSystem.h" | ||
+ | #include "../../Graphics/Context/FixedFunctionContext.h" | ||
+ | #include "../../Graphics/Context/ShaderBasedContext.h" | ||
+ | #include "../../Events/EventSystem.h" | ||
+ | #include "../../Input/InputSystem.h" | ||
+ | #include "../../Input/Keyboard.h" | ||
+ | #include "../../Input/Pad.h" | ||
+ | |||
+ | #include "../../Graphics/Camera.h" | ||
+ | #include "../../Graphics/Mesh.h" | ||
+ | #include "../../Graphics/VertexBuffer.h" | ||
+ | #include "../../Graphics/IndexBuffer.h" | ||
+ | #include "../../Graphics/Material.h" | ||
+ | #include "../../Graphics/Shader.h" | ||
+ | #include "../../Graphics/Texture.h" | ||
+ | #include "../../Maths/Matrix4.h" | ||
+ | |||
+ | #include "MVPUniformUpdater.h" | ||
+ | #include "Texture0UniformUpdater.h" | ||
+ | |||
+ | using namespace GLESGAE; | ||
+ | |||
+ | void controlCamera(Camera* const camera, Controller::KeyboardController* const keyboard); | ||
+ | Mesh* makeSprite(Shader* const shader, Texture* const texture); | ||
+ | Shader* makeSpriteShader(); | ||
+ | |||
+ | int main(void) | ||
+ | { | ||
+ | EventSystem* eventSystem(new EventSystem); | ||
+ | InputSystem* inputSystem(new InputSystem(eventSystem)); | ||
+ | GraphicsSystem* graphicsSystem(new GraphicsSystem(GraphicsSystem::FIXED_FUNCTION_RENDERING)); | ||
+ | |||
+ | if (false == graphicsSystem->initialise("GLESGAE Texturing Test", 800, 480, 16, false)) { | ||
+ | //TODO: OH NOES! WE'VE DIEDED! | ||
+ | return -1; | ||
+ | } | ||
+ | |||
+ | #ifndef GLES2 | ||
+ | FixedFunctionContext* const fixedContext(graphicsSystem->getFixedContext()); | ||
+ | if (0 != fixedContext) { | ||
+ | fixedContext->enableFixedFunctionVertexPositions(); | ||
+ | } | ||
+ | #endif | ||
+ | |||
+ | #ifndef GLES1 | ||
+ | ShaderBasedContext* const shaderContext(graphicsSystem->getShaderContext()); | ||
+ | if (0 != shaderContext) { | ||
+ | shaderContext->addUniformUpdater("u_mvp", new MVPUniformUpdater); | ||
+ | shaderContext->addUniformUpdater("s_texture0", new Texture0UniformUpdater); | ||
+ | } | ||
+ | #endif | ||
+ | |||
+ | Texture* texture(new Texture()); | ||
+ | texture->loadBMP("Texture.bmp"); | ||
+ | Mesh* mesh(makeSprite(makeSpriteShader(), texture)); | ||
+ | Camera* camera(new Camera(Camera::CAMERA_3D)); | ||
+ | camera->getTransformMatrix().setPosition(Vector3(0.0F, 0.0F, 5.0F)); | ||
+ | |||
+ | eventSystem->bindToWindow(graphicsSystem->getWindow()); | ||
+ | |||
+ | Controller::KeyboardController* myKeyboard(inputSystem->newKeyboard()); | ||
+ | |||
+ | while(false == myKeyboard->getKey(Controller::KEY_ESCAPE)) { | ||
+ | controlCamera(camera, myKeyboard); | ||
+ | |||
+ | eventSystem->update(); | ||
+ | inputSystem->update(); | ||
+ | graphicsSystem->beginFrame(); | ||
+ | graphicsSystem->setCamera(camera); | ||
+ | graphicsSystem->drawMesh(mesh); | ||
+ | graphicsSystem->endFrame(); | ||
+ | } | ||
+ | |||
+ | delete graphicsSystem; | ||
+ | delete inputSystem; | ||
+ | delete eventSystem; | ||
+ | delete mesh; | ||
+ | |||
+ | return 0; | ||
+ | } | ||
+ | |||
+ | Mesh* makeSprite(Shader* const shader, Texture* const texture) | ||
+ | { | ||
+ | float vertexData[24] = {// Position - 16 floats | ||
+ | -1.0F, 1.0F, 0.0F, 1.0F, | ||
+ | 1.0F, 1.0F, 0.0F, 1.0F, | ||
+ | 1.0F, -1.0F, 0.0F, 1.0F, | ||
+ | -1.0F, -1.0F, 0.0F, 1.0F, | ||
+ | // Tex Coords - 8 floats | ||
+ | 1.0F, 1.0F, // top right | ||
+ | 0.0F, 1.0F, // top left | ||
+ | 0.0F, 0.0F, // bottom left | ||
+ | 1.0F, 0.0F}; // bottom right | ||
+ | |||
+ | unsigned int vertexSize = 24 * sizeof(float); // 16 position floats + 8 texture co-ordinate floats | ||
+ | |||
+ | unsigned char indexData[6] = { 0, 1, 2, 2, 3, 0 }; | ||
+ | unsigned int indexSize = 6 * sizeof(unsigned char); | ||
+ | |||
+ | VertexBuffer* newVertexBuffer = new VertexBuffer(reinterpret_cast<unsigned char*>(&vertexData), vertexSize); | ||
+ | newVertexBuffer->addFormatIdentifier(VertexBuffer::FORMAT_POSITION_4F, 4U); | ||
+ | newVertexBuffer->addFormatIdentifier(VertexBuffer::FORMAT_TEXTURE_2F, 4U); | ||
+ | IndexBuffer* newIndexBuffer = new IndexBuffer(reinterpret_cast<unsigned char*>(&indexData), indexSize, IndexBuffer::FORMAT_UNSIGNED_BYTE); | ||
+ | Material* newMaterial = new Material; | ||
+ | newMaterial->setShader(shader); | ||
+ | newMaterial->addTexture(texture); | ||
+ | Matrix4* newTransform = new Matrix4; | ||
+ | |||
+ | return new Mesh(newVertexBuffer, newIndexBuffer, newMaterial, newTransform); | ||
+ | } | ||
+ | |||
+ | void controlCamera(Camera* const camera, Controller::KeyboardController* const keyboard) | ||
+ | { | ||
+ | Vector3 newPosition; | ||
+ | camera->getTransformMatrix().getPosition(&newPosition); | ||
+ | if (true == keyboard->getKey(Controller::KEY_ARROW_DOWN)) | ||
+ | newPosition.z() -= 0.01F; | ||
+ | |||
+ | if (true == keyboard->getKey(Controller::KEY_ARROW_UP)) | ||
+ | newPosition.z() += 0.01F; | ||
+ | |||
+ | if (true == keyboard->getKey(Controller::KEY_ARROW_LEFT)) | ||
+ | newPosition.x() -= 0.01F; | ||
+ | |||
+ | if (true == keyboard->getKey(Controller::KEY_ARROW_RIGHT)) | ||
+ | newPosition.x() += 0.01F; | ||
+ | camera->getTransformMatrix().setPosition(newPosition); | ||
+ | |||
+ | camera->update(); | ||
+ | } | ||
+ | |||
+ | #if defined(GLX) | ||
+ | #include "../../Graphics/GLee.h" | ||
+ | #endif | ||
+ | |||
+ | Shader* makeSpriteShader() | ||
+ | { | ||
+ | std::string vShader = | ||
+ | "attribute vec4 a_position; \n" | ||
+ | "attribute vec2 a_texCoord0; \n" | ||
+ | "varying vec2 v_texCoord0; \n" | ||
+ | "uniform mat4 u_mvp; \n" | ||
+ | "void main() \n" | ||
+ | "{ \n" | ||
+ | " gl_Position = u_mvp * a_position; \n" | ||
+ | " v_texCoord0 = a_texCoord0; \n" | ||
+ | "} \n"; | ||
+ | |||
+ | std::string fShader = | ||
+ | #ifdef GLES2 | ||
+ | "precision mediump float; \n" | ||
+ | #endif | ||
+ | "varying vec2 v_texCoord0; \n" | ||
+ | "uniform sampler2D s_texture0; \n" | ||
+ | "void main() \n" | ||
+ | "{ \n" | ||
+ | " gl_FragColor = texture2D(s_texture0, v_texCoord0); \n" | ||
+ | "} \n"; | ||
+ | |||
+ | #ifndef GLES1 | ||
+ | Shader* newShader(new Shader()); | ||
+ | newShader->createFromSource(vShader, fShader); | ||
+ | |||
+ | return newShader; | ||
+ | #else | ||
+ | return 0; | ||
+ | #endif | ||
+ | } | ||
+ | |||
+ | |||
+ | If we're compiling for ES1 we need to ensure we're using FIXED_FUNCTION_RENDERING, else with ES2 it'll be SHADER_BASED_RENDERING.<br /> | ||
+ | Other than that, they should be exactly identical! | ||
+ | |||
+ | You might wonder why we don't get alpha working properly.. we haven't set any blending modes up.. we'll get to that, but for now, we have textures and the reign of terror of the coloured quad is at an end! All hail Mr Smiley Face! | ||
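+ | For the curious, "setting blending modes up" isn't much code - standard alpha blending in GL is just a couple of state calls made before drawing, something along these lines ( a taster only; we'll wire it into the contexts properly later ): | ||
+ | glEnable(GL_BLEND); // mix incoming fragments with what's already in the framebuffer... | ||
+ | glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA); // ...weighted by the fragment's alpha value | ||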
+ | |||
+ | == Building the Example == | ||
+ | In the SVN there are Makefiles already set up for you.. just trigger '''''make -f MakefileES2.pandora''''' or whatever your chosen configuration is, and it'll happily build and spit out a '''GLESGAE.pandora''' binary for you to run. | ||
+ | |||
+ | = Next Time = | ||
+ | We've one more bit of core functionality to add - VBOs.. and that'll be next. | ||
+ | |||
+ | = GLESGAE - Making another Mesh - Vertex Buffer Objects = | ||
+ | |||
+ | == Introduction == | ||
+ | We've been using Vertex Arrays up till now as they're nice and quick to get up and going with.<br /> | ||
+ | However, they have a penalty in that their data is still stored client side, and must be sent to the OpenGL server to be processed before drawing. This is where Vertex Buffer Objects come into play as they've already been processed and sit server-side, with only an identifier being kept in the client. | ||
+ | |||
+ | We will therefore convert our current Vertex Array implementation into a Vertex Buffer Object - VBO - implementation to gain some speed. | ||
+ | |||
+ | == Fast Track == | ||
+ | We're now on to SVN revision 8... '''''svn co -r 8 http://svn3.xp-dev.com/svn/glesgae/trunk/ glesgae''''' | ||
+ | |||
+ | == Creating the Buffers == | ||
+ | The creation of VBOs is really just a logical step onwards from what we already have. We need data to put into a VBO in the first place, and we already have this within our two Buffer classes, so wrapping it within a VBO becomes immensely easy. | ||
+ | |||
+ | We need to add a new variable to both classes to store the VBO Id: | ||
+ | unsigned int mVboId; | ||
+ | |||
+ | Along with an accessor: | ||
+ | unsigned int getVboId() const { return mVboId; } | ||
+ | |||
+ | And that's pretty much it apart from the constructors. | ||
+ | |||
+ | === Vertex Buffer VBO constructors === | ||
+ | Our Vertex Buffer class contains two main constructors - one for when we know the format, and another for generating it on the fly.<br /> | ||
+ | Both of these must be edited to initialise our ''mVboId'' to 0, and then have the following code in the function body: | ||
+ | glGenBuffers(1U, &mVboId); | ||
+ | glBindBuffer(GL_ARRAY_BUFFER, mVboId); | ||
+ | glBufferData(GL_ARRAY_BUFFER, mSize, mData, GL_STATIC_DRAW); | ||
+ | |||
+ | And that's it. Our Vertex Buffer automatically generates a VBO for us. | ||
+ | |||
+ | === Index Buffer VBO constructor === | ||
+ | This is much the same, apart from being an ELEMENT buffer, so the code changes to: | ||
+ | glGenBuffers(1U, &mVboId); | ||
+ | glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, mVboId); | ||
+ | glBufferData(GL_ELEMENT_ARRAY_BUFFER, mSize, mData, GL_STATIC_DRAW); | ||
+ | |||
+ | Nice and easy! | ||
+ | |||
+ | == Rendering VBOs == | ||
+ | We're lucky in that our current Vertex Array method is pretty close to how a VBO is actually used.. we therefore only need to do very small changes to our Fixed Function and Shader Based contexts. | ||
+ | |||
+ | We need to bind the correct VBO before we start sending over any vertex or index information, and this is done by the ''glBindBuffer( ... );'' call again.. so for the Vertex Buffer it would be: | ||
+ | glBindBuffer(GL_ARRAY_BUFFER, vertexBuffer->getVboId()); | ||
+ | This goes just before the giant Format switch in both Contexts. Similarly, just before the Index Format switch, the Index Buffer is bound like so: | ||
+ | glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indexBuffer->getVboId()); | ||
+ | |||
+ | This just leaves tidying up how we send the information over. | ||
+ | |||
+ | VBOs are already sent over and live server-side in the GL context, therefore any reference to the data client side is null and void.<br /> | ||
+ | Therefore, where we may have something like: | ||
+ | glColorPointer(4, GL_UNSIGNED_BYTE, 0, vertexBuffer->getData() + itr->getOffset()); | ||
+ | In the Fixed Function Context, we need to change this to: | ||
+ | glColorPointer(4, GL_UNSIGNED_BYTE, 0, reinterpret_cast<char*>(itr->getOffset())); | ||
+ | As the data has already been sent over, the final parameter is now treated as a byte offset into the bound buffer rather than a real pointer - which is why we cast the offset into a char pointer. | ||
+ | |||
+ | The final draw command which deals with the indices is changed in much the same manner.<br /> | ||
+ | Where we currently may have: | ||
+ | glDrawElements(GL_TRIANGLES, indexBuffer->getSize(), GL_UNSIGNED_SHORT, indexBuffer->getData()); | ||
+ | It needs to be: | ||
+ | glDrawElements(GL_TRIANGLES, indexBuffer->getSize(), GL_UNSIGNED_SHORT, 0); | ||
+ | As again, the buffer is already sitting in the GL Server ready to be used - we don't need to send anything else over. | ||
+ | |||
+ | And that's it.<br /> | ||
+ | We now render using VBOs. | ||
+ | |||
+ | == Whoa, hold up! What was this GL_STATIC_DRAW thing? == | ||
+ | When we were creating our buffers, we specified GL_STATIC_DRAW in the last parameter of glBufferData. This is to indicate to GL that this is a static buffer - we won't be changing this - but we'll be drawing it many many times.<br /> | ||
+ | You also have GL_DYNAMIC_DRAW for when you are going to be changing the contents of the buffer... but take note that as soon as you call glBufferData with a data pointer, that data is copied to the server as it is at that instant.. so if you mess with the data in your program and don't re-upload it to OpenGL, it'll draw whatever you originally set it to! So be careful! | ||
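+ | To make that concrete, here's a rough sketch - illustrative GL calls only, not engine code, and ''vboId'', ''size'' and ''data'' are just stand-in names - of working with a dynamic buffer. Every time the client-side copy of the data changes, it has to be pushed back up with glBufferSubData, otherwise GL keeps drawing the old contents: | ||
+ | glBindBuffer(GL_ARRAY_BUFFER, vboId); | ||
+ | glBufferData(GL_ARRAY_BUFFER, size, data, GL_DYNAMIC_DRAW); // initial upload - GL takes a copy right now | ||
+ | |||
+ | // ... later, after fiddling with our client-side copy of data ... | ||
+ | glBindBuffer(GL_ARRAY_BUFFER, vboId); | ||
+ | glBufferSubData(GL_ARRAY_BUFFER, 0, size, data); // re-upload, or GL will happily keep drawing the old copy | ||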
+ | |||
+ | There are a few other usage hints you can specify instead - GL_STREAM_DRAW, for example - but I suggest you read the Red Book for more information on them. | ||
+ | |||
+ | == Building the Example == | ||
+ | There is no example this time, as it's just changing how we store and render our Vertex and Index Buffers, so it's all internal code. | ||
+ | |||
+ | = Next Time = | ||
+ | We take a look at how to manage the mass of objects we now have, and the importance of getting it right. | ||
+ | |||
+ | = GLESGAE - Managing Resources Overview = | ||
+ | |||
+ | = Introduction = | ||
+ | |||
+ | Now we have some basic graphics rendering going on, we need to look at managing the resources we're starting to assemble before we start on anything more complicated - such as rendering 3D models, processing some logic so they do something, loading in more textures, and so on... | ||
+ | |||
+ | Sadly, this is where things get complicated, as we're now out of the realms of simple demos and into what can literally make or break an engine.<br /> | ||
+ | Too many really good engines end up unusable because their resource management is poor or non-existent, relying too much on the programmer to handle everything when they may not be aware of what is going on under the surface with their resources. Freeing something that you may be finished with, but something internal isn't, can cause massive problems.<br /> | ||
+ | We're also programming in C++ so we have no automatic garbage collection and therefore need to deal with everything ourselves. | ||
+ | |||
+ | Of course, the biggest resource to manage is that of Memory, which also happens to be the most complicated resource to deal with as there are many ways in which to make a complete mess. A lot of engines overload the global new and delete operators so they can ensure that everything goes through their own memory manager. Other engines provide a large Singleton Memory Manager which you ask for memory, using the space it gives back in whatever manner you need. | ||
+ | |||
+ | Another resource which tends to be forgotten about is that of logic itself. How you manage your states is just as important as how you manage your memory, as making a mess of how your logic gets called can create rather drastic performance bottlenecks. Threading isn't a silver bullet either: if your logic has not been separated properly, you end up with all manner of issues - deadlocks, threads overwriting memory all over the place, synchronisation problems, and other hellish nightmares you never want to have. Proper management of logic and states makes things much easier, as anything that's self contained can be thrown on a thread and processed in parallel. | ||
+ | |||
+ | Finally, there's the art of debugging. It's much better to code defensively and assert as often as you can, than be left dealing with bugs near the end of the project that you need to track down. As such, having some helper functions to test for asserts, break if something goes wrong, and log warnings at various levels can be your lifeline in ensuring your logic is sane and any issues can be caught during development, rather than by the end user! | ||
+ | |||
+ | = The Resource Manager = | ||
+ | The first thing we'll be looking at in this section will be The Resource Manager. This is our all important gateway into maintaining the potentially vast amount of data that we'll be pushing through the engine. | ||
+ | |||
+ | Our Resource Manager is going to perform two primary tasks - File IO and the actual runtime management. This requires some careful fiddling, as we need to ensure our Tools use the same data headers as the game. As all of our Tools will be using the same engine as the game, we'll be fine - but it is something to be aware of should you want to write Tools outside of the engine. If you want to output files for your game that the Resource Manager will load up, you'll need to ensure the file formats are the same. | ||
+ | |||
+ | The Resource Manager itself will manage multiple Resource Banks. These banks will only manage one type of data; a perfect example of using templates to reuse code! While this might seem restrictive, it's incredibly handy and very optimised. For example, when we start dealing with Entities we'll be able to just set a pointer on the start of the Resource Bank for our Entity set, and just let it run through them all. They'll all be in contiguous memory, so the processor will be able to cache it much more efficiently and speed through them all. That is, of course, assuming that each Entity contains all the data it needs to process directly, and it doesn't require the processor to go fetching pointers to other data elsewhere. | ||
+ | |||
+ | Each Resource will be of a template class, which itself acts like a smart pointer. What is a smart pointer? Well, whereas you could just new or malloc a random pointer anywhere you like, you need to also delete or free it when you're finished. By using a smart pointer, the deletion is taken care of for you. This is especially important for Resources as you may not be aware of something else that is using the Resource at the same time - especially in a threaded environment - and if you go and delete it while something else is depending on it, you've got yourself a fun bug hunt!<br /> | ||
+ | So let's not have bug hunts, and use a smart pointer. | ||
+ | |||
+ | = The State Manager = | ||
+ | Then, we will be looking at The State Manager. Each major area of an application is usually defined as a specific state. For example, you may have an Intro State, a Menu State and a Game State. The Intro leads into the Menu, which leads into the Game. The Game then might jump back to the Menu and loop back and forth until something quits. By separating the logic into these groupings, it makes it much more obvious what is going on while coding. | ||
+ | |||
+ | We'll also be employing the use of a State Stack. This will allow us to push and pop States onto one another; such as pushing a Pause State onto our Game State when we hit pause, and popping it back off when we un-pause. This allows us to keep all the information for the Game State active - but not processed - while the Pause State is in effect. | ||
+ | |||
+ | As you may have worked out from the previous paragraph, each State will be managing its own objects as well. This means when we pop a State from the stack, it should clean itself up and release all resources it may have taken to initialise itself and run. | ||
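+ | To give a flavour of the idea before we build the real thing - this is just a sketch, and none of these names are the engine's actual classes - a State Stack really is as simple as it sounds: only the top of the stack gets processed, so pushing a Pause State on top of a Game State leaves the Game State intact underneath. | ||
+ | #include <vector> | ||
+ | |||
+ | class State | ||
+ | { | ||
+ | public: | ||
+ | virtual ~State() {} | ||
+ | virtual void update() = 0; // each State does its own processing | ||
+ | }; | ||
+ | |||
+ | class StateStack | ||
+ | { | ||
+ | public: | ||
+ | void push(State* const state) { mStates.push_back(state); } | ||
+ | void pop() { if (false == mStates.empty()) { delete mStates.back(); mStates.pop_back(); } } // popped States clean themselves up | ||
+ | void update() | ||
+ | { | ||
+ | if (false == mStates.empty()) | ||
+ | mStates.back()->update(); // only the top State gets processed | ||
+ | } | ||
+ | |||
+ | private: | ||
+ | std::vector<State*> mStates; | ||
+ | }; | ||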
+ | |||
+ | = The Memory Manager = | ||
+ | We'll have a look at implementing a Memory Manager, and wire up the State and Resource Managers to use it. | ||
+ | |||
+ | Our Memory Manager is going to be of the basic heap variety, in that you will ask it for a heap of memory and you will be free to use that as you please. This can create some odd syntax in that you need to allocate some memory from your heap, and then allocate your object into the heap, but it allows us to give anything that needs some memory its own little portion to do whatever it pleases. As most libraries have their own memory management system that you can overload, this idea works perfectly: we just generate a large heap for the library, wire up its allocation utils to use it, and keep it out of the way of everything else. | ||
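+ | As a tiny sketch of that "odd syntax" - purely illustrative, with made-up names, and nothing to do with the engine's eventual Memory Manager - allocating a raw chunk and then constructing an object into it with placement new looks something like this: | ||
+ | #include <cstdlib> // malloc and free | ||
+ | #include <new> // placement new | ||
+ | |||
+ | struct Particle { float x, y, z; }; | ||
+ | |||
+ | int main() | ||
+ | { | ||
+ | // ask our (hypothetical) memory manager for a raw chunk of memory... | ||
+ | void* heap(std::malloc(sizeof(Particle))); | ||
+ | |||
+ | // ...then construct the object into that memory with placement new - the "odd syntax" in question. | ||
+ | Particle* particle(new (heap) Particle); | ||
+ | particle->x = 0.0F; | ||
+ | |||
+ | // cleaning up is equally odd: destruct manually, then hand the memory back. | ||
+ | particle->~Particle(); | ||
+ | std::free(heap); | ||
+ | |||
+ | return 0; | ||
+ | } | ||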
+ | |||
+ | = The Debug and Logger System = | ||
+ | As mentioned, we should always code as defensively as possible - checking everything we do to ensure we are working with valid data before acting upon it. | ||
+ | |||
+ | When we need more information as to what's going on, we need to log out to a file or console to see the internals as we don't always have the luxury of running in a debugger - and indeed when threads collide, it's not always apparent which thread caused the issue! So a means of logging information is necessary. | ||
+ | |||
+ | = File Formats and Data Structures = | ||
+ | Before we start loading things up, we need to actually start declaring structures that we can load. This invariably boils down to fixed structures, which load faster, or variable structures, which are more flexible but need parsing and therefore slow down load times. There are additional pros and cons for each method, but that's effectively the trade-off. | ||
+ | |||
+ | We won't be creating File Formats and Data Structures for everything under the Sun as that's rather dependent upon what the application is up to, but we will be generating some rules to ensure that we can load and manage them properly, as well as define them properly as and when we need them. | ||
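+ | As a quick illustration of the "fixed structure" approach - a made-up format, not something the engine actually defines - a plain struct with a known layout can be pulled in with a single read and no parsing at all: | ||
+ | #include <cstdio> | ||
+ | |||
+ | // A made-up, fixed-size header: the layout never changes, so one fread fills the lot. | ||
+ | struct MeshFileHeader | ||
+ | { | ||
+ | char magic[4]; // file identifier, e.g. "GAEM" | ||
+ | unsigned int version; | ||
+ | unsigned int vertexCount; | ||
+ | unsigned int indexCount; | ||
+ | }; | ||
+ | |||
+ | bool readHeader(const char* const fileName, MeshFileHeader& header) | ||
+ | { | ||
+ | FILE* file(fopen(fileName, "rb")); | ||
+ | if (0 == file) | ||
+ | return false; | ||
+ | |||
+ | const bool ok(1 == fread(&header, sizeof(MeshFileHeader), 1, file)); | ||
+ | fclose(file); | ||
+ | return ok; | ||
+ | } | ||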
+ | |||
+ | = The Thread Manager = | ||
+ | Oh yes, there will be threads. | ||
+ | |||
+ | Threading is a black art in that there are many places that it can go wrong - even with the simplest of code. However, threading also gives us the benefit of running code in parallel which generally gives us an instant speed boost. | ||
+ | |||
+ | We'll primarily be using mutexes to lock and unlock shared data as required, and we shall also discuss what kind of code we should thread, what code we should leave alone, and how to ensure we write code that's as thread safe as possible. This will all be combined into a Thread Manager to deal with the operation and management of threads, ensuring they're all cleaned up on exit and providing utility functions for our mutex handling. | ||
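+ | As a rough taste of what that locking looks like - a sketch assuming plain pthreads, which is the obvious candidate on Linux and the Pandora, rather than the engine's eventual Thread Manager API - protecting a shared variable boils down to this: | ||
+ | #include <pthread.h> | ||
+ | |||
+ | // Shared between threads - anything touching it must hold the mutex first. | ||
+ | pthread_mutex_t scoreMutex = PTHREAD_MUTEX_INITIALIZER; | ||
+ | int sharedScore = 0; | ||
+ | |||
+ | void addScore(const int amount) | ||
+ | { | ||
+ | pthread_mutex_lock(&scoreMutex); // block until we own the lock | ||
+ | sharedScore += amount; // safe - nobody else can touch sharedScore while we hold it | ||
+ | pthread_mutex_unlock(&scoreMutex); // always release it, or everyone else deadlocks | ||
+ | } | ||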
+ | |||
+ | = The Model Loader = | ||
+ | Once the boring stuff is out of the way, we'll start having some fun again. We shall be creating a model loader that will manage our own model format. | ||
+ | |||
+ | Of course, to get to our model format, we'll need to write a tool to do so, and this will be where everything discussed previously should fall together.<br /> | ||
+ | We'll be writing our tool to deal with both the COLLADA format and the Alias|Wavefront Obj format, so we should have everything covered. | ||
+ | |||
+ | = The Logic Loop = | ||
+ | Finally, we'll combine all the knowledge we've learned into one big demo for the end of this section and discuss the main loop at the same time. | ||
+ | |||
+ | We'll be creating ourselves a quick and simple little Model Viewer, something rather important considering we'll have created our own Model Format, so will need our own tool to view our own stuff! | ||
+ | |||
+ | = Conclusion = | ||
+ | This will be a big section, and it covers some very scary bits and pieces, all to do with managing resources - be it data or code.<br /> | ||
+ | However, it puts us in good stead to discover the ways in which our logic can be processed, and how to manage our logic through the use of Entities and Scripting. | ||
+ | |||
+ | We'll also be able to generate file formats and data structures for anything we feel like, know how to load them and save them, and all sorts of fun stuff that forms the crux of our engine. | ||
+ | |||
+ | = GLESGAE - The Resource Manager = | ||
+ | |||
+ | = Overview = | ||
+ | |||
+ | Resources are everything that gets loaded, or generated, in your application. | ||
+ | These can be anything from media objects - such as sound and textures - to more abstract things like level data, entities, or even the graphics system itself! | ||
+ | So we need some way to describe what a Resource is, and a set of common functionality for dealing with them. | ||
+ | Additionally, we probably want to manage these things, and loading/saving them would be a nicety too, if possible. | ||
+ | |||
+ | This section is all about that. | ||
+ | We'll be building up a Resource Manager ( which has taken a fair amount of beating from myself to get right ) and a set of classes to describe Resources and Resource Banks - groups, if you will - that can be used to store and categorise our Resources. | ||
+ | |||
+ | == Quick Start == | ||
+ | We're up to revision 9 on the SVN now: ''' svn co https://svn.xp-dev.com/svn/glesgae/trunk -r 9 ''' ( though it's not currently live... - Stuckie 7th Feb 2012 )<br /> | ||
+ | For the brave, there's also the bleedin' edge git repository available: ''' https://github.com/stuckie/glesgae ''' though this may not always compile on anything but Linux. | ||
+ | |||
+ | == What Is A Resource Manager? == | ||
+ | |||
+ | Traditionally, there's usually just the one lone Resource Manager. | ||
+ | This is generally tied in to the system's core file i/o and manages the loading and saving of pretty much everything. | ||
+ | For systems such as Android where the entire GL Context can be lost at any time, having a Resource Manager which knows what has been loaded up - and more importantly in this case, how to reload it - is almost a requirement to stave off insanity. | ||
+ | |||
+ | Depending upon your view on Resources, the Manager can also tend to categorise Resources for easier access; for example keeping all Resources for rendering a Mesh close to each other so that the Renderer isn't hopping all over the memory to access things. | ||
+ | |||
+ | == The GLESGAE Resource Manager == | ||
+ | |||
+ | Our Resource Manager is going to be optional. | ||
+ | It can be used, or it can be ignored. | ||
+ | This is mostly because we haven't really written a set of file i/o utilities yet, so we're still either generating everything in code, or we're having Texture load up its own bitmaps.. but also because there are times when a full-blown Resource Manager is just too much for what we want to do. | ||
+ | Having options is always nice. | ||
+ | |||
+ | Our Manager is going to manage Resource Banks - or groups of Resources, if you will. | ||
+ | It'll be primarily responsible for the creation and management of banks of resources. | ||
+ | This makes things much easier to deal with as when we're finished with file i/o and have all our formats defined, we can just tell the resource manager to load up huge blocks of data, and categorise them as needed. | ||
+ | |||
+ | == Resource Banks == | ||
+ | |||
+ | As stated, our Resource Banks will effectively act as fancy arrays of data. | ||
+ | Each Bank is templated to a specific type of Resource, and can contain various groups, categorised into whatever we like. | ||
+ | This means that we can have one Bank of Mesh objects, and group them specific to each Model, Level, whatever... so when we pull out a group to iterate over, they're all specific for that item. | ||
+ | This will become important when we get on to Entities and Components, as we'll be wanting to turn Components on and off, and the fastest way of checking whether anything is on or off is to just not include the off objects in the same array as the on objects... effectively, a Resource Bank where the groups are Active and Inactive. | ||
+ | |||
+ | == The Resource == | ||
+ | |||
+ | The Resource itself is a relatively simple template class. | ||
+ | It does follow a smart pointer-style setup, in that it tracks how many instances of itself there are, and only deletes the actual data it's holding when all instances are gone. | ||
+ | It will also allow you to recast the Resource into other things - useful because a Resource of a base class is a wholly different type from a Resource of a derived class of that base class.. so being able to recast back and forth is particularly handy - especially if the derived class has functionality the base does not. | ||
+ | |||
+ | === Smart Pointers === | ||
+ | |||
+ | Smart Pointers are so named because they track how many instances of themselves exist, and only delete the data they hold when the last instance is removed. | ||
+ | |||
+ | The Boost Library has shared_ptr, which is what our Resource is partly modelled on. | ||
+ | You may be wondering why I didn't just keep Resource simple, and use smart pointers throughout anyway... this is because I do believe that while smart pointers are great for debugging, they're a bit large for normal use - and especially on more constrained hardware. | ||
+ | So, I created Resource to imitate the bits I need. | ||
+ | |||
+ | = Building our Resource = | ||
+ | |||
+ | Our Resource Class needs to be a template. | ||
+ | We have to be able to new our object directly into it, and then we can effectively forget about it. | ||
+ | We also need to be able to track and find it again if we stash it somewhere. | ||
+ | Of course, we should also be able to ignore most of this functionality if need be. | ||
+ | |||
+ | == The Hiccup == | ||
+ | |||
+ | The current Resource code that's in the SVN is broken.. or at least, led to a more broken system. | ||
+ | As such, we're going to be a bit confusing here and talk about the fixed system instead. | ||
+ | The fixed system requires a few utility classes first, however... so we'll have a quick overview of them first. | ||
+ | |||
+ | == The HashString == | ||
+ | |||
+ | A HashString is effectively a number that's been generated from a string. | ||
+ | As string comparisons can generally be rather expensive, we convert the string to a number, as comparing numbers is much faster than comparing strings. There's a caveat to this however, in that the conversion is a one-way process. For the most part that's not a problem, it just makes debugging a bit more interesting when printing out a HashString gives "60263687" instead of "Bob" ( for example ) | ||
+ | |||
+ | There are many hashing algorithms that can be used for HashStrings - CRC, MD5, and so on... though some have a bigger runtime hit than others.<br /> | ||
+ | The one I settled upon is a slightly modified version of djb2. You can find it, some others, and more information here: http://www.cse.yorku.ca/~oz/hash.html | ||
+ | |||
+ | The HashString class can be found in the repository. | ||
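+ | The repository version has a proper class wrapped around it, but the hashing itself is tiny - roughly this, going by the djb2 description at the link above ( a sketch, not the engine's exact code ): | ||
+ | // djb2: start at 5381 and, for each character, hash = hash * 33 + character. | ||
+ | unsigned long hashString(const char* string) | ||
+ | { | ||
+ | unsigned long hash(5381UL); | ||
+ | while (int character = *string++) | ||
+ | hash = ((hash << 5) + hash) + character; // hash * 33 + character | ||
+ | |||
+ | return hash; | ||
+ | } | ||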
+ | |||
+ | == The Logger == | ||
+ | |||
+ | I've also resurrected an old Logger class I did for a previous engine. | ||
+ | It can spit out HTML formatted text, standard text, or log straight to the console, and has INFO, DEBUG and ERROR log levels - of which DEBUG only shows if the DEBUG define is set. It could do with some sprucing up, but it's fine for our purposes for now. | ||
+ | |||
+ | The Logger class can also be found in the repository. | ||
+ | |||
+ | == The Base Resource == | ||
+ | |||
+ | As Resource itself is a templated class, we can't easily store a pointer to it. | ||
+ | So, we do our usual trick of having a Base class that we can grab a pointer to, and redirect to the Template class anything that doesn't require the actual type. | ||
+ | |||
+ | We also store our Location information here, so we can find Resources again should we organise them into Groups and Banks. | ||
+ | #ifndef _BASE_RESOURCE_H_ | ||
+ | #define _BASE_RESOURCE_H_ | ||
+ | |||
+ | #include "../GLESGAETypes.h" | ||
+ | #include "../Utils/HashString.h" | ||
+ | |||
+ | namespace GLESGAE | ||
+ | { | ||
+ | namespace Resources | ||
+ | { | ||
+ | typedef HashString Type; | ||
+ | typedef unsigned int Id; | ||
+ | typedef unsigned int Count; | ||
+ | typedef unsigned int Group; | ||
+ | |||
+ | struct Locator | ||
+ | { | ||
+ | Id bank; | ||
+ | Type type; | ||
+ | Group group; | ||
+ | Id resource; | ||
+ | |||
+ | Locator() : bank(INVALID), type(INVALID_HASHSTRING), group(INVALID), resource(INVALID) {} | ||
+ | }; | ||
+ | |||
+ | // System Resources | ||
+ | extern Type Camera; | ||
+ | extern Type Controller; | ||
+ | extern Type IndexBuffer; | ||
+ | extern Type Material; | ||
+ | extern Type Matrix2; | ||
+ | extern Type Matrix3; | ||
+ | extern Type Matrix4; | ||
+ | extern Type Mesh; | ||
+ | extern Type Shader; | ||
+ | extern Type ShaderUniformUpdater; | ||
+ | extern Type Texture; | ||
+ | extern Type Timer; | ||
+ | extern Type Vector2; | ||
+ | extern Type Vector3; | ||
+ | extern Type Vector4; | ||
+ | extern Type VertexBuffer; | ||
+ | } | ||
+ | |||
+ | class BaseResourceBank; | ||
+ | class BaseResource | ||
+ | { | ||
+ | public: | ||
+ | virtual ~BaseResource(); | ||
+ | |||
+ | /// Get the Type of this Resource | ||
+ | Resources::Type getType() const { return mType; } | ||
+ | |||
+ | /// Get which Group this Resource belongs to | ||
+ | Resources::Group getGroup() const { return mGroup; } | ||
+ | |||
+ | /// Get the Id of this Resource | ||
+ | Resources::Id getId() const { return mId; } | ||
+ | |||
+ | /// Get the Instance Count of this Resource | ||
+ | Resources::Count getCount() const { return *mCount; } | ||
+ | |||
+ | protected: | ||
+ | /// Set the count | ||
+ | void setCount(const Resources::Count& count) { *mCount = count; } | ||
+ | |||
+ | /// Private constructor as this is a derived class only | ||
+ | BaseResource(const Resources::Group group, const Resources::Type type, const Resources::Id id); | ||
+ | |||
+ | /// Overloaded Copy Constructor, so we keep track of how many instances we have. | ||
+ | BaseResource(const BaseResource& resource); | ||
+ | |||
+ | /// Overloaded Assignment Operator, so we can keep track of everything. | ||
+ | BaseResource& operator=(const BaseResource& resource) | ||
+ | { | ||
+ | mGroup = resource.mGroup; | ||
+ | mType = resource.mType; | ||
+ | mId = resource.mId; | ||
+ | mCount = resource.mCount; | ||
+ | |||
+ | return *this; | ||
+ | } | ||
+ | |||
+ | Resources::Group mGroup; | ||
+ | Resources::Type mType; | ||
+ | Resources::Id mId; | ||
+ | mutable Resources::Count* mCount; | ||
+ | }; | ||
+ | |||
+ | } | ||
+ | |||
+ | #endif | ||
+ | |||
+ | So, what have we got in here....<br/> | ||
+ | We have a Locator struct. This'll be for sending into a Group or Bank so we can find a Resource again if need be. The more information we can fill in, the quicker the search will be.<br/> | ||
+ | We also define some handy types - Type, Id, Group and Count - in the Resources namespace, and use them in the class itself; mCount is marked as mutable so we can change it in const functions - the reason for which will become apparent in a minute!<br/> | ||
+ | Finally, we extern a bunch of Types for the various system types we have in the engine.. this is so we don't need to recalculate their HashStrings at run time - something which can turn out to be a costly procedure! | ||
+ | |||
+ | == The Resource == | ||
+ | With our BaseResource defined, we need our actual Resource class itself, which is where the magic happens. | ||
+ | #ifndef _RESOURCE_H_ | ||
+ | #define _RESOURCE_H_ | ||
+ | |||
+ | #include "BaseResource.h" | ||
+ | #include <cassert> | ||
+ | |||
+ | namespace GLESGAE | ||
+ | { | ||
+ | template <typename T_Resource> class ResourceBank; | ||
+ | template <typename T_Resource> class Resource : public BaseResource | ||
+ | { | ||
+ | // Resource Bank is a friend to access purge. | ||
+ | friend class ResourceBank<T_Resource>; | ||
+ | // It's a friend to itself to deal with the recast functionality. | ||
+ | template <typename T_ResourceCast> friend class Resource; | ||
+ | public: | ||
+ | /// Dummy Constructor for creation of empty Resources. | ||
+ | Resource() : BaseResource(INVALID, INVALID_HASHSTRING, INVALID), mResource(0) {} | ||
+ | ~Resource() { remove(); } | ||
+ | |||
+ | /// Constructor for taking ownership over raw pointers. | ||
+ | explicit Resource(T_Resource* const resource) : BaseResource(INVALID, INVALID_HASHSTRING, INVALID), mResource(resource) { instance(); } | ||
+ | |||
+ | /// Pointer Operator overload to return the actual resource. | ||
+ | T_Resource* operator-> () { return mResource; } | ||
+ | |||
+ | /// Const Pointer Operator overload. | ||
+ | T_Resource* operator-> () const { return mResource; } | ||
+ | |||
+ | /// Dereference Operator overload to return the actual resource. | ||
+ | T_Resource& operator* () { return *mResource; } | ||
+ | |||
+ | /// Const Dereference operator overload. | ||
+ | const T_Resource& operator* () const { return *mResource; } | ||
+ | |||
+ | /// Recast Copy into another Resource. | ||
+ | template <typename T_ResourceCast> Resource<T_ResourceCast> recast() | ||
+ | { | ||
+ | Resource<T_ResourceCast> newResource(mGroup, mType, mId, reinterpret_cast<T_ResourceCast*>(mResource)); | ||
+ | delete newResource.mCount; | ||
+ | newResource.mCount = mCount; | ||
+ | instance(); | ||
+ | return newResource; | ||
+ | } | ||
+ | |||
+ | /// Recast Copy into another Resource. | ||
+ | template <typename T_ResourceCast> const Resource<T_ResourceCast> recast() const | ||
+ | { | ||
+ | Resource<T_ResourceCast> newResource(mGroup, mType, mId, reinterpret_cast<T_ResourceCast*>(mResource)); | ||
+ | delete newResource.mCount; | ||
+ | newResource.mCount = mCount; | ||
+ | instance(); | ||
+ | return newResource; | ||
+ | } | ||
+ | |||
+ | /// Overloaded Copy Constructor, so we keep track of how many instances we have. | ||
+ | Resource(const Resource& resource) | ||
+ | : BaseResource(resource) | ||
+ | , mResource(resource.mResource) | ||
+ | { | ||
+ | instance(); | ||
+ | } | ||
+ | |||
+ | /// Overloaded Assignment Operator to ensure we keep track of everything properly. | ||
+ | Resource& operator=(const Resource& resource) | ||
+ | { | ||
+ | if (this != &resource) { // if someone's being daft and assigning us to ourselves, do nothing - else we're likely to delete our own data. | ||
+ | remove(); | ||
+ | |||
+ | BaseResource::operator=(resource); | ||
+ | mResource = resource.mResource; | ||
+ | |||
+ | instance(); | ||
+ | } | ||
+ | |||
+ | return *this; | ||
+ | } | ||
+ | |||
+ | /// Overloaded Equals Operator for pointer checking. | ||
+ | bool operator==(const Resource& resource) const | ||
+ | { | ||
+ | return (mResource == resource.mResource); | ||
+ | } | ||
+ | |||
+ | /// Overloaded Equals Operator for 0 pointer checking. | ||
+ | bool operator==(const void* rhs) const | ||
+ | { | ||
+ | return (reinterpret_cast<void*>(mResource) == rhs); | ||
+ | } | ||
+ | |||
+ | /// Overloaded Not Equals Operator for pointer checking. | ||
+ | bool operator!=(const Resource& resource) const | ||
+ | { | ||
+ | return !(*this == resource); | ||
+ | } | ||
+ | |||
+ | /// Overloaded Not Equals Operator for 0 pointer checking. | ||
+ | bool operator!=(const void* rhs) const | ||
+ | { | ||
+ | return !(*this == rhs); | ||
+ | } | ||
+ | |||
+ | /// Increase the instance count of this Resource. | ||
+ | /// Be exceptionally careful with using this manually, you will need to call remove manually as well! | ||
+ | /// This is however, useful for anything that has to be sent a raw pointer which may leave C-scope. | ||
+ | /// For example, Physics Engines and Scripting Languages. | ||
+ | void instance() const | ||
+ | { | ||
+ | assert(mCount); | ||
+ | ++(*mCount); | ||
+ | } | ||
+ | |||
+ | /// Remove an instance count of this Resource, and if there are no more instances, purge it. | ||
+ | /// Calling this manually should be used with caution, and only on a Resource which has been manually instanced. | ||
+ | /// Otherwise, you will get into a situation whereby you've deleted something which still has a reference. | ||
+ | /// Again, this is primarily useful for Physics Engines and Scripting Languages only. | ||
+ | void remove() | ||
+ | { | ||
+ | assert(mCount); | ||
+ | if ((*mCount) > 0U) | ||
+ | --(*mCount); | ||
+ | |||
+ | if ((*mCount) == 0U) | ||
+ | purge(); | ||
+ | } | ||
+ | |||
+ | protected: | ||
+ | /// Protected Constructor so we can't create Managed Resources all over the place. | ||
+ | explicit Resource(const Resources::Group group, const Resources::Type type, const Resources::Id id, T_Resource* const resource) | ||
+ | : BaseResource(group, type, id) | ||
+ | , mResource(resource) | ||
+ | { | ||
+ | instance(); | ||
+ | } | ||
+ | |||
+ | /// Delete the actual resource. | ||
+ | void purge() | ||
+ | { | ||
+ | if (0 != mResource) { | ||
+ | delete mResource; | ||
+ | mResource = 0; | ||
+ | } | ||
+ | |||
+ | if (0 != mCount) { | ||
+ | delete mCount; | ||
+ | mCount = 0; | ||
+ | } | ||
+ | } | ||
+ | |||
+ | private: | ||
+ | T_Resource* mResource; | ||
+ | }; | ||
+ | } | ||
+ | |||
+ | #endif | ||
+ | |||
+ | Now, this is a beast of a class, so let's slowly walk through what we're doing here. | ||
+ | |||
+ | We have a bunch of constructors that do various things.<br/> | ||
+ | If we want to create an empty resource, for example, we can do something like '''Resource<Material> myMaterial;''' and '''myMaterial''' will effectively be a null pointer. This has many benefits as we can pre-allocate arrays of them, and not fill them in as yet, and use them as class variables that get filled later on and not necessarily in the constructor.<br/> | ||
+ | We can also feed a new Resource an already existing pointer: '''Material* myRawMaterial(new Material); Resource<Material> myMaterial(myRawMaterial);''' and Resource now manages '''myRawMaterial'''. The caveat here is that we should not delete '''myRawMaterial''', as Resource will do it automatically for us when '''myMaterial''' goes out of scope.<br /> | ||
+ | We have overloaded the copy constructor and assignment operator so we can keep track of how many instances we have.<br /> | ||
+ | We also overload the pointer operator to give direct access to our data within, and do something slightly special with our equals operator, in that one of them takes a const void pointer. This is so that we can check whether our data is null or not.<br /> | ||
+ | Additionally, we can instance and remove ourselves if need be - sort of like the Obj-C retain and release ideology - but this can cause issues so should only be used if you know what you're doing!<br /> | ||
+ | Finally, we have a couple of special functions known as recast... which we use to recast a pointer to another type. This is primarily for recasting up or down a class hierarchy - such as a Base Class to a Derived Class - as a Resource<BaseClass> is a completely different type from a Resource<DerivedClass>, even if DerivedClass is derived from BaseClass. This could be handy when DerivedClass offers additional functionality not found on BaseClass, but is platform specific.<br /> | ||
+ | And that's it really.. not that much of a scary class, it just does a lot of things.<br /> | ||
+ | You'll also see that the reason we made mCount mutable back in BaseResource is that we need to modify it in our const recast function. | ||
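+ | |||
+ | To make that all a bit more concrete, here's a quick sketch of Resource<T> in use. Material is the same stand-in type used above, and bind() is a made-up member function - this is purely illustrative rather than code from the engine. | ||
+ | Resource<Material> emptyMaterial;                // effectively a null pointer for now | ||
+ | Material* rawMaterial(new Material); | ||
+ | Resource<Material> myMaterial(rawMaterial);      // myMaterial now owns, and will delete, rawMaterial | ||
+ | |||
+ | Resource<Material> copy(myMaterial);             // the copy constructor bumps the instance count to 2 | ||
+ | if (myMaterial != 0)                             // the const void* operator lets us check for null | ||
+ | 	myMaterial->bind();                          // the pointer operator gives direct access to the Material | ||
+ | // once the last Resource<Material> holding it goes out of scope, the Material is deleted for us | ||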
+ | |||
+ | = Building our Resource Banks = | ||
+ | |||
+ | Resource Banks act as Managers for groups of Resources. Effectively, glorified arrays of specific types. | ||
+ | They're again split into a Base class and a derived Template class.. so let's look at the Base class first. | ||
+ | |||
+ | == The Base Resource Bank == | ||
+ | #ifndef _BASE_RESOURCE_BANK_H_ | ||
+ | #define _BASE_RESOURCE_BANK_H_ | ||
+ | |||
+ | #include "BaseResource.h" | ||
+ | |||
+ | namespace GLESGAE | ||
+ | { | ||
+ | class BaseResourceBank | ||
+ | { | ||
+ | friend class BaseResource; | ||
+ | friend class ResourceManager; | ||
+ | |||
+ | public: | ||
+ | virtual ~BaseResourceBank() {} | ||
+ | |||
+ | /// Get the Type of this Resource Bank | ||
+ | Resources::Type getType() const { return mType; } | ||
+ | |||
+ | /// Get the Id of this Resource Bank | ||
+ | Resources::Id getId() const { return mId; } | ||
+ | |||
+ | /// Create a new resource group. | ||
+ | virtual Resources::Group newGroup() = 0; | ||
+ | |||
+ | /// Remove Group | ||
+ | virtual void removeGroup(const Resources::Group groupId) = 0; | ||
+ | |||
+ | |||
+ | protected: | ||
+ | /// Protected constructor so only derived classes can construct one of these | ||
+ | BaseResourceBank(const Resources::Id id, const Resources::Type type) | ||
+ | : mId(id) | ||
+ | , mType(type) | ||
+ | { | ||
+ | |||
+ | } | ||
+ | |||
+ | private: | ||
+ | // No Copying Allowed | ||
+ | BaseResourceBank(const BaseResourceBank&); | ||
+ | BaseResourceBank& operator=(const BaseResourceBank&); | ||
+ | |||
+ | Resources::Id mId; | ||
+ | Resources::Type mType; | ||
+ | }; | ||
+ | |||
+ | } | ||
+ | |||
+ | #endif | ||
+ | |||
+ | Quite a simple class, and bears much resemblance to BaseResource, in that it stores an Id and Type and not much else. | ||
+ | One thing it does do is provide interface ( pure virtual ) functions to create and remove groups, which we will override next. | ||
+ | |||
+ | == The Resource Bank == | ||
+ | #ifndef _RESOURCE_BANK_H_ | ||
+ | #define _RESOURCE_BANK_H_ | ||
+ | |||
+ | #include "Resource.h" | ||
+ | #include "BaseResourceBank.h" | ||
+ | |||
+ | #include <vector> | ||
+ | #include <cassert> | ||
+ | |||
+ | namespace GLESGAE | ||
+ | { | ||
+ | template <typename T_Resource> class ResourceBank : public BaseResourceBank | ||
+ | { | ||
+ | // Resource is a friend to access instance. | ||
+ | friend class Resource<T_Resource>; | ||
+ | public: | ||
+ | ResourceBank(const Resources::Id id, const Resources::Type type) : BaseResourceBank(id, type), mResources() {} | ||
+ | ~ResourceBank(); | ||
+ | |||
+ | /// Create a new resource group. | ||
+ | Resources::Group newGroup(); | ||
+ | |||
+ | /// Get an entire Resource group. | ||
+ | const std::vector<Resource<T_Resource> >& getGroup(const Resources::Group groupId) const; | ||
+ | |||
+ | /// Remove Group | ||
+ | void removeGroup(const Resources::Group groupId); | ||
+ | |||
+ | /// Add a single Resource manually, and return the Resource version. | ||
+ | Resource<T_Resource>& add(const Resources::Group groupId, const Resources::Type typeId, T_Resource* const resource); | ||
+ | |||
+ | /// Add a group of Resources, and return the Group Id | ||
+ | Resources::Group addGroup(const std::vector<Resource<T_Resource> >& resourceGroup); | ||
+ | |||
+ | /// Get a Resource immediately | ||
+ | Resource<T_Resource>& get(const Resources::Group groupId, const Resources::Id resourceId); | ||
+ | |||
+ | private: | ||
+ | // Scary stuff... an array ( which we can access via Group Id ) holding another array. | ||
+ | // The second array holds the actual Resources, where the Resource Id is its array index. | ||
+ | std::vector<std::vector<Resource<T_Resource> > > mResources; | ||
+ | }; | ||
+ | |||
+ | template <typename T_Resource> ResourceBank<T_Resource>::~ResourceBank() | ||
+ | { | ||
+ | for (unsigned int index(0U); index < mResources.size(); ++index) | ||
+ | removeGroup(index); | ||
+ | } | ||
+ | |||
+ | template <typename T_Resource> Resources::Group ResourceBank<T_Resource>::newGroup() | ||
+ | { | ||
+ | std::vector<Resource<T_Resource> > resourceArray; | ||
+ | mResources.push_back(resourceArray); | ||
+ | |||
+ | return mResources.size() - 1U; | ||
+ | } | ||
+ | |||
+ | template <typename T_Resource> const std::vector<Resource<T_Resource> >& ResourceBank<T_Resource>::getGroup(const Resources::Group groupId) const | ||
+ | { | ||
+ | // TODO: Scream if that groupId isn't valid, or doesn't exist... | ||
+ | assert(groupId != GLESGAE::INVALID); // we return a reference, so we can't just bail out early here | ||
+ | |||
+ | return mResources[groupId]; | ||
+ | } | ||
+ | |||
+ | template <typename T_Resource> void ResourceBank<T_Resource>::removeGroup(const Resources::Group groupId) | ||
+ | { | ||
+ | // TODO: Scream if that groupId isn't valid or doesn't exist... | ||
+ | if (groupId == GLESGAE::INVALID) | ||
+ | return; | ||
+ | |||
+ | std::vector<Resource<T_Resource> >& resourceArray(mResources[groupId]); | ||
+ | for (typename std::vector<Resource<T_Resource> >::iterator itr(resourceArray.begin()); itr < resourceArray.end(); ++itr) { | ||
+ | if (itr->getCount() > 1U) { | ||
+ | // TODO: scream bloody mary that there's still something using this resource. | ||
+ | } | ||
+ | } | ||
+ | |||
+ | resourceArray.clear(); | ||
+ | } | ||
+ | |||
+ | template <typename T_Resource> Resource<T_Resource>& ResourceBank<T_Resource>::add(const Resources::Group groupId, const Resources::Type typeId, T_Resource* const resource) | ||
+ | { | ||
+ | std::vector<Resource<T_Resource> >& resourceArray(mResources[groupId]); | ||
+ | const Resources::Id resourceId(resourceArray.size()); | ||
+ | |||
+ | resourceArray.push_back(Resource<T_Resource>(groupId, typeId, resourceId, resource)); | ||
+ | |||
+ | return resourceArray[resourceId]; | ||
+ | } | ||
+ | |||
+ | template <typename T_Resource> Resources::Group ResourceBank<T_Resource>::addGroup(const std::vector<Resource<T_Resource> >& resourceGroup) | ||
+ | { | ||
+ | const Resources::Group groupId(mResources.size()); | ||
+ | |||
+ | mResources.push_back(resourceGroup); | ||
+ | return groupId; | ||
+ | } | ||
+ | |||
+ | template <typename T_Resource> Resource<T_Resource>& ResourceBank<T_Resource>::get(const Resources::Group groupId, const Resources::Id resourceId) | ||
+ | { | ||
+ | assert(groupId != GLESGAE::INVALID); | ||
+ | assert(resourceId != GLESGAE::INVALID); | ||
+ | return mResources[groupId][resourceId]; | ||
+ | } | ||
+ | |||
+ | } | ||
+ | |||
+ | |||
+ | #endif | ||
+ | |||
+ | The actual Resource Bank itself is a bit more complicated, as it does all the work.<br /> | ||
+ | You'll also notice the use of assert everywhere; this is good defensive programming and should be used often! A lack of asserts is why the Resource system ended up in such a mess in the first place - I wasn't catching bad values early enough, and memory was being overwritten all over the place. | ||
+ | |||
+ | Anyway, the basic crux of the Resource Bank is that it stores an array of arrays.<br /> | ||
+ | These are indexed firstly by the groupId to find the correct group, followed by the resourceId to find the correct Resource. If you remember our Locator struct we defined in BaseResource, that's how we can find things when needed. | ||
+ | |||
+ | One interesting thing is that we also take in a Type and an Id for the Resource Bank. This is because we can have many Resource Banks attached to the Resource Manager, so each needs its own Id. The Type is used for double checking: even though we template each bank to a specific class/struct type, we need to know its HashString type so that we can search for banks without needing to know their template type. | ||
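+ | |||
+ | As a rough sketch of how the Group and Resource Ids hang together in practice - Material is the same stand-in type as before, and MATERIAL_TYPE is a made-up Resources::Type value: | ||
+ | ResourceBank<Material> materialBank(0U, MATERIAL_TYPE);      // bank Id 0, with our made-up type tag | ||
+ | const Resources::Group levelGroup(materialBank.newGroup());  // grab a fresh group to fill | ||
+ | Resource<Material>& stone(materialBank.add(levelGroup, MATERIAL_TYPE, new Material)); | ||
+ | |||
+ | // anything holding the Group and Resource Id pair can pull the same Resource back out later; | ||
+ | // the first Resource added to a group gets Id 0, as the Id is just the array index. | ||
+ | Resource<Material>& sameStone(materialBank.get(levelGroup, 0U)); | ||
+ | |||
+ | materialBank.removeGroup(levelGroup);                        // drops the whole group in one go | ||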
+ | |||
+ | = Building our Resource Manager = | ||
+ | |||
+ | Our last piece of the system, is the Resource Manager itself.<br/> | ||
+ | This isn't much different from the Resource Bank above, in that it's more a wrapper around managing arrays. | ||
+ | |||
+ | == The Resource Manager == | ||
+ | #ifndef _RESOURCE_MANAGER_H_ | ||
+ | #define _RESOURCE_MANAGER_H_ | ||
+ | |||
+ | #include "ResourceBank.h" | ||
+ | |||
+ | #include <string> | ||
+ | #include <map> | ||
+ | #include <cassert> | ||
+ | |||
+ | namespace GLESGAE | ||
+ | { | ||
+ | class ResourceManager | ||
+ | { | ||
+ | friend class BaseResource; | ||
+ | public: | ||
+ | ResourceManager() : mResourceBanks() {} | ||
+ | ~ResourceManager() { assert(mResourceBanks.empty()); } | ||
+ | |||
+ | /// Create a new Resource Bank of the specified Type | ||
+ | template <typename T_Resource> ResourceBank<T_Resource>& createBank(const Resources::Type bankType); | ||
+ | |||
+ | /// Retrieve a Resource Bank | ||
+ | template <typename T_Resource> ResourceBank<T_Resource>& getBank(const Resources::Id bankId, const Resources::Type bankType); | ||
+ | |||
+ | /// Delete a Resource Bank | ||
+ | template <typename T_Resource> void removeBank(const Resources::Id bankId, const Resources::Type bankType); | ||
+ | |||
+ | /// Load Resource Bank from Disk | ||
+ | template <typename T_Resource> void loadBankResources(const std::string& bankSet, const Resources::Id bankId, const Resources::Type bankType); | ||
+ | |||
+ | /// Save Resource Bank to Disk | ||
+ | template <typename T_Resource> void saveBankResources(const std::string& bankSet, const Resources::Id bankId, const Resources::Type bankType); | ||
+ | |||
+ | private: | ||
+ | // A map of Resource Bank Ids to the resource bank pointers themselves. | ||
+ | std::map<Resources::Id, BaseResourceBank*> mResourceBanks; | ||
+ | }; | ||
+ | |||
+ | template <typename T_Resource> ResourceBank<T_Resource>& ResourceManager::createBank(const Resources::Type bankType) | ||
+ | { | ||
+ | // TODO: check bank doesn't already exist. | ||
+ | Resources::Id bankId = mResourceBanks.size(); | ||
+ | mResourceBanks[bankId] = new ResourceBank<T_Resource>(bankId, bankType); | ||
+ | return *(reinterpret_cast<ResourceBank<T_Resource>*>(mResourceBanks[bankId])); | ||
+ | } | ||
+ | |||
+ | template <typename T_Resource> ResourceBank<T_Resource>& ResourceManager::getBank(const Resources::Id bankId, const Resources::Type) | ||
+ | { | ||
+ | // TODO: check bank actually exists. | ||
+ | assert(bankId != INVALID); | ||
+ | return *(reinterpret_cast<ResourceBank<T_Resource>*>(mResourceBanks[bankId])); | ||
+ | } | ||
+ | |||
+ | template <typename T_Resource> void ResourceManager::removeBank(const Resources::Id bankId, const Resources::Type /*typeId*/) | ||
+ | { | ||
+ | std::map<Resources::Id, BaseResourceBank*>::iterator bank(mResourceBanks.find(bankId)); | ||
+ | if (bank != mResourceBanks.end()) { | ||
+ | // TODO: Check that the bankType matches up! | ||
+ | delete reinterpret_cast<ResourceBank<T_Resource>*>(bank->second); | ||
+ | bank->second = 0; | ||
+ | } | ||
+ | |||
+ | // TODO: Error that we can't find this bank! | ||
+ | } | ||
+ | /* | ||
+ | template <typename T_Resource> void ResourceManager::loadBankResources(const std::string& bankSet, const Resources::Id bankId, const Resources::Type bankType) | ||
+ | { | ||
+ | } | ||
+ | |||
+ | template <typename T_Resource> void ResourceManager::saveBankResources(const std::string& bankSet, const Resources::Id bankId, const Resources::Type bankType) | ||
+ | { | ||
+ | } | ||
+ | */ | ||
+ | } | ||
+ | |||
+ | #endif | ||
+ | |||
+ | There are a few oddities with this class.<br /> | ||
+ | Firstly, it's a bit incomplete ( the load/save functionality ) as we still have to write proper File utilities.<br /> | ||
+ | Secondly, the destructor doesn't actually wipe out any Resource Banks that may still be left around. The reason for this is that you really should be doing this manually. We do assert if it's not empty, however... and further functionality will have us going over this class to add in the load/save as well as outputting which banks have been left behind for the user to fix and clean up. | ||
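+ | |||
+ | To see how the pieces fit together, usage might look something like this - Material and MATERIAL_TYPE are placeholder names again, and in a real game the manager would live somewhere rather more central than the stack: | ||
+ | ResourceManager resourceManager; | ||
+ | |||
+ | // create a bank for Materials, then a group and a Resource inside it | ||
+ | ResourceBank<Material>& materialBank(resourceManager.createBank<Material>(MATERIAL_TYPE)); | ||
+ | const Resources::Group group(materialBank.newGroup()); | ||
+ | Resource<Material>& stone(materialBank.add(group, MATERIAL_TYPE, new Material)); | ||
+ | |||
+ | // ... use stone ... | ||
+ | |||
+ | // clean up manually - the manager only asserts at us if we forget | ||
+ | resourceManager.removeBank<Material>(materialBank.getId(), MATERIAL_TYPE); | ||
+ | One thing the sketch shows up: as the class stands, removeBank deletes the bank and zeroes the pointer but leaves the entry sitting in the map, so the destructor's empty() assert will still fire - another little thing to tidy up when we come back to this class for the load/save work. | ||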
+ | |||
+ | = Conclusion = | ||
+ | |||
+ | A bit of a long one this time.. but we needed to get the entire Resource System out in one go.<br /> | ||
+ | As we can see.. we can use Resource<T> on its own, use it in conjunction with just the Resource Banks, or use the entire Resource System.<br /> | ||
+ | We also have a helper struct for passing around where Resources can be found, should we need to locate them from elsewhere. This may seem a bit useless for now, but as it's a POD-style struct, we can put it to work once we start to load and save the Resource Banks. | ||
+ | |||
+ | Next up, we'll be looking into the State System, and then onto some Scripting. | ||
+ | |||
+ | [[Category:GLESGAE]] | ||
+ | [[Category:Tutorials]] |
Latest revision as of 14:12, 17 August 2013
Contents
- 1 GLESGAE - GL ES Game Application Engine
- 2 GLESGAE Overview
- 3 Engine Design Overview
- 4 Environment Setup
- 5 GLESGAE - Setting Up A Window and Context
- 6 Opening A Window
- 7 Render Contexts
- 8 A Simple Test
- 9 Next Time
- 10 GLESGAE - Event and Input Systems
- 11 The Event System
- 12 The Input System
- 13 A Simple Test
- 14 Next Time
- 15 GLESGAE - Making a Mesh
- 16 Next Time
- 17 GLESGAE - Fixed Function Rendering Contexts
- 18 Next Time
- 19 GLESGAE - The Shader Based Context
- 20 The Shader
- 21 Shader Context Additions
- 22 Feeding The Shader
- 23 A Simple Test
- 24 Next Time
- 25 GLESGAE - The Transform Stack
- 26 Next Time
- 27 GLESGAE - Fixed Function Transformations
- 28 Next Time
- 29 GLESGAE - Shader Based Transformations
- 30 Next Time
- 31 GLESGAE - Dealing with Textures
- 32 Next Time
- 33 GLESGAE - Making another Mesh - Vertex Buffer Objects
- 34 Next Time
- 35 GLESGAE - Managing Resources Overview
- 36 Introduction
- 37 The Resource Manager
- 38 The State Manager
- 39 The Memory Manager
- 40 The Debug and Logger System
- 41 File Formats and Data Structures
- 42 The Thread Manager
- 43 The Model Loader
- 44 The Logic Loop
- 45 Conclusion
- 46 GLESGAE - The Resource Manager
- 47 Overview
- 48 Building our Resource
- 49 Building our Resource Banks
- 50 Building our Resource Manager
- 51 Conclusion
GLESGAE - GL ES Game Application Engine
GLESGAE is a vehicle for a series of tutorials on building a game engine for the Pandora console from scratch.
These will be written when I have time.. the uncreated ones below being an indicator of what's to come.
Unwritten parts may be split up, moved, or otherwise manipulated before actually going live, so don't take the following as set in stone till the link turns blue!
Table of Contents
Part One - Setup Stuff!
- GLESGAE Overview
- Engine Design Overview
- Environment Setup
- GLESGAE:Setting Up A Window and Context
- GLESGAE:The Event and Input Systems
Part Two - Show me Stuff!
- GLESGAE:Making a Mesh
- GLESGAE:Fixed Function Rendering Contexts
- GLESGAE:Shader Based Contexts
- GLESGAE:The Transform Stack
- GLESGAE:Fixed Function Transformations
- GLESGAE:Shader Based Transformations
- GLESGAE:Dealing with Textures
- GLESGAE:Making another Mesh with Vertex Buffers
Part Three - Manage my Stuff!
- GLESGAE:Managing Resources Overview
- GLESGAE:The Resource Manager
- GLESGAE:State Management Overview
- GLESGAE:Implementing The Game States
Part Four - The First Evaluation
- GLESGAE:First Evaluation Overview
- GLESGAE:First Evaluation Graphics
- GLESGAE:First Evaluation Resources
Part Five - Make it do Stuff!
Part Six - Push Stuff around!
- GLESGAE:Physics Processing Overview
- GLESGAE:Implementing Box2D Physics
- GLESGAE:Implementing Bullet Physics
Part Seven - Make Stuff squeak!
Part Eight - The Second Evaluation
- GLESGAE:Second Evaluation Overview
- GLESGAE:Second Evaluation Logic
- GLESGAE:Second Evaluation Physics
- GLESGAE:Second Evaluation Sound
Part Nine - Poke Stuff from afar!
Part Ten - Advanced Stuff!
Part Eleven - Tool Stuff!
GLESGAE Overview
To try not to spam the wiki to death, I'll include the first two parts here, but subsequent parts will be in their own pages as they're more about dealing with actual development problems rather than general overview stuff.
Where possible, I'll also combine things if they're related to the last part that I did - IE: generating a window, and then adding ES contexts.
Originally, I was going to attempt to take part in the Platformer Homebrew Competition over on the OpenPandora boards. However, with free time being very limited, no engine to speak of, assets still to make, and the competition already a month in so I'm a bit behind, I've decided to tackle a somewhat different challenge - build and document an engine, originally for Pandora and spawning out elsewhere later.
Yes, another engine.
I've been dragging my old SGZEngine around for quite a while now.. though it never really got very far, and it's full of weird quirks and bugs that, with each project, I spend more time working around than writing actual logic.
I did start another engine - SGEngine ( dropped the Z ) - however this was highly experimental in fobbing off every subsystem to a dynamically runtime loadable DLL to facilitate mix and matching bits and pieces. Especially useful for testing OpenGL and D3D renderers and a neat hack, but not much use as it was becoming highly complicated to do anything.
This brings us to GLESGAE. The name was chosen due to me being Scottish, and it being an amusing mnemonic to begin with.
Recently, I've been doing a lot of Android programming.
This involved writing a custom renderer for GLES1 and then onto GLES2.
With my previous experience of writing such low level GL code being glBegin(); ... glEnd(); I effectively got thrown in at the deep end and had to fight a bit to stay afloat. However, I pulled through, and while furthering the work engine is always going to be appreciated by them, there's already a defined system of how things work, and I wanted to change a bit too much of that.. so a new personal engine it is, using the new-found knowledge I've just gained.
So, with the introduction out of the way, this ( weekly, with any luck ) set of tutorials, guides and random gibberings on building an engine while I continue GLESGAE shall begin.
Engine Design Overview
Game Engines are somewhat of a necessary evil these days if you have any inclination of producing more than one game on a system, or one game on many systems. However, there is also a very real danger of writing a solution for a non-existing problem - an engine without a game. As such, I will be writing a game alongside this as well, to make sure that the engine does in fact have useful features.
The danger of writing an engine without a game in mind is that you keep adding bits and pieces and end up not really getting anywhere. Which is exactly what happened with SGEngine. Lots of neat hacks, but it never really got anywhere, and it's a right pig to try and get working for anything serious now. It was still a useful educational experience, as I learned how to deal with dynamic runtime libraries across Windows and *nix systems, as well as a saner route for platform independent modules - SGZEngine essentially had a platform folder where all the code went, and interfaces and objects everywhere else. It wasn't particularly clean, even though it sounds like it should've been.
So what is an Engine?
In the purest sense, it's a collection of generic functions that when wired up can help you create code much quicker. Generally, a Game Engine can pull in other Engines such as Rendering Engines, Audio Engines and Physics Engines - all tailor made for their own specific domains.
As programmers we do tend to have the habit of being a bit ego-centric with an "I can rewrite the wheel better!" attitude. This can get us in to trouble at times! While I specifically want to deal with GL ES rendering on my own, I'll be pulling in OpenAL for audio and bullet or perhaps box2d for Physics; while trying to leave things open to be able to switch these out for something else at a later date.
This is therefore going to be a set of tutorials on building a Game Engine with a custom Graphics Engine in particular.
However, a Game Engine isn't just Audio, Graphics and Physics - though these are by far the most interesting parts of a Game Engine.
You'll also generally need a set of utility functions that range from file I/O, input handling, event handling, memory management, resource management, and much more.
You may also want to abstract logic out to a scripting engine; something I've a lot of experience with and am quite fond of, so I shall be pulling in Lua as well for this purpose.
While on PC development ( and by extension, Pandora ) you can generally get away with just new/malloc random things at any point and free/delete when necessary all over the place, certain consoles don't particularly like that and you're generally better off managing your own heap and memory pages so you know exactly where your memory is at any time, and can cache things yourself, rather than relying on anything that may or may not do what you expect.
File Management can also catch you unaware on other platforms. Android, for example, has a rather strict permission system whereby you only really have access to your own package, and the contents of the sdcard. Granted, Android 2.3 gives you more control, but if you're writing NDK apps for <2.3 you'll have to jump back and forth between Java and C where file management becomes a whole new game of fun.
And what about input? The Pandora has those nubs! those lovely lovely nubs, dpad, face buttons, shoulder buttons, touchscreen and a full keyboard! You've also got the possibility of godknows what connected via USB - game pads, mice, full-sized keyboards...
I won't even start about threading and the chaos that can bring... then there's networking, which is even worse!
Finally, there's the thought of how you organize data and feed it to your engine.
These days, most engines follow a very data-driven design - and for good reason! You don't want to have to recompile half your codebase just for changing some NPC text, repositioning a graphic, loading a new model, etc.. GLESGAE is going to be data driven - and that also means tools that will be able to create the data to feed it, in the format that works best for the target platform.
We shall be building these tools along the way too, and where possible, also having them run on the Pandora itself.
I'll cover more bits as we get to them.. for now, we'll get the Environment Setup out of the way.
Environment Setup
I'm Lazy, Give Me A Pre-Configured Thing!
Here you go: http://www.stuckiegamez.co.uk/apps/pandora/SimpleDev/zaxxon-premade-dev.tar.bz2 ~250mb
Extract to an ext2/3 formatted SD card, and boot. Simple!
NOTE: This is a bit old, now.. and may have somewhat dodgy WiFi, but I'll get round to fixing this soon ( hopefully by 13th May. )
Tell Me What You Did
This weekend is essentially the overview and setup phase, so it's a bit boring I'm afraid.
To keep everyone on the same page, I'm going to assume you're using Angstrom from an SD card, that you've installed GCC et al on it, and you'll be booting from it for development purposes.
This gives us a few benefits;
- We are developing on the target hardware and can test things immediately.
- We can keep the NAND in a near enough vanilla state to ensure we don't accidentally pull in and use random libraries that not everyone will have.
- If we do something really bad, we've only messed up an SD card and can just re-extract the tarball and start again, rather than reflash the NAND!
If you've already got an SD card setup with dev tools, then you can leave class early and I'll see you next week.
Same for if you have a preferred development environment already.. if it works for you, there's no point changing it.
The rest of you, pay attention!
We'll do everything on the Pandora, to save having to deal with Linux, Windows, Mac, BSD, BeOS, whatever... madness.
I advise at least grabbing yourself a 2Gig SD card.. go for a bigger card if you like, but 2Gig is probably a good minimum and are reasonably cheap these days.
Grab your SD card and ensure everything you want from it has been removed - we're about to sacrifice it to the Pandora Dev Gods.
Stick it in your left slot.
Open up a terminal.
You'll need to manually unmount it before going near it with cfdisk to repartition.
sudo umount /dev/mmcblk0p1 -- and possibly p2, p3, p# depending on how many partitions it has. Generally, it'll only have the one.
sudo cfdisk /dev/mmcblk0 -- this'll launch cfdisk on your card.. if you see more than one partition and you've only unmounted one partition, then quit and unmount them!
We want to delete all partitions on this card, so press right and then return to delete the current partition.
Press up and down to move the selector if need be to remove the rest of them if you've more than one.
Now we want to create a new partition, so with the Free Space selected, press right to highlight [ New ] and hit return, select [ Primary ], and let it use the full card ( just hit return. )
Press Left to highlight [ Write ] and press return. Type "yes" and hit return to confirm the changes, then [ Quit ]
You could have added swap if you wanted.. it's up to you really.
Now we have to format it.
sudo mkfs.ext2 /dev/mmcblk0p1
Remove the card and reinsert so that the system re-reads the partition table correctly and gives you access to your newly formatted partition.
Now, we download the latest rootfs from OpenPandora.org and extract it to the card. We shall be lazy and stay in the terminal for this so...
cd /media/mmcblk0p1
sudo su -- we'll need to be root for this, as we'll have no permission by default to touch this card.
wget -c http://openpandora.org/firmware/pandora-rootfs.tar.bz2 - this grabs us the latest rootfs - though lately, these appear to be very out of sync between Pandora OE and Angstrom OE so be careful!
tar -xjpf pandora-rootfs.tar.bz2 -- you could add v to the arguments if you like.. it'll let you see what it's extracting and is slightly more exciting than just waiting for it to finish! The p is for preserving permissions, x to extract, j for bz2 support and f for file.
rm pandora-rootfs.tar.bz2
We want the system to autoboot this when the card is inserted, so let's create autoboot.txt
nano autoboot.txt
Fill it with the following:
setenv bootargs root=/dev/mmcblk0p1 rw rootwait vram=6272K omapfb.vram=0:3000K mmc_core.removable=0
ext2load mmc 0 0x80300000 /boot/uImage-2.6.27.46-omap1
bootm 0x80300000
That's us.. reboot and run through the First Time Configuration stuff, being sure to choose XFCE over MiniMenu, and then feel free to configure the look as you see fit.
Now the fun bit.
Warning - This is potentially dangerous as Angstrom and Pandora libraries may have gone off at tangents at this point... this is why we're doing this on an SD card rather than the NAND so if we stuff it up, we only need to reformat an SD card and not reflash the NAND!
Make sure your Pandora is connected to the net by whatever means you have.
Open up a terminal
sudo opkg update
sudo opkg install gcc g++ make binutils-dev cpp cpp-symlinks g++-symlinks gcc-symlinks libstdc++-dev libgles-omap3-dev subversion - You could install sdl etc.. too if you want, but that's all we'll be using for now; and you'll be needing subversion later to keep up with the project.
Now the ever popular Hello World.
mkdir Projects
cd Projects
nano main.cpp
#include <cstdio>

int main(void)
{
	printf("Hello World!\n");
	return 0;
}
g++ -o main main.cpp
./main
Interesting Gotcha - In the rootfs I downloaded (HF5 RC1), ncurses hadn't been installed... sudo opkg install libncurses5 if you get "cannot open shared object file libncurses.so.5" when invoking nano.
That's all for this week.. of course, you could go and install Geany, or whatever code editor you prefer.
Next time, we shall be opening up a window via badgering X11 directly, and getting a GL ES context up and running.
GLESGAE - Setting Up A Window and Context
Introduction
For the most part, opening up a Window and generating a rendering Context is a pretty simple task, and gets you on your way to pushing stuff to the screen.
Of course, doing so in a manner that's open enough to add differing platforms to at a later date can be a bit tricky - especially so when some platforms have widely different ideas on what a Window actually is, and how that Window gets created.
For the GLESGAE engine, we will be using C++ predominantly to abstract things out for us.
That's not to say you couldn't do the same thing in C, but I find that even just the addition of classes makes things cleaner to work with - specially with multiple platforms.
With that out of the way, let's have a look at getting a window opened using X directly.
Why X? Why not SDL?
SDL is good.. I do like SDL.. but SDL can also be a bit heavy - especially if all you're wanting to do is use it to open up a Window!
Granted, generally you want to steal use of its event system as well for access to controllers.. but our Pandora has some rather custom controllers so even then you're jumping out of SDL to use them.
We're also going to be using GL ES rather than pushing pixels directly, so.. perhaps in this instance, why SDL?
I also like coding down as close to the hardware as possible - as then if something goes wrong, it's my own fault, and not some black box library that I'm not sure what's going on in.
There's also the real possibility of perhaps porting your work to a platform that SDL hasn't been ported to yet, or has a poor implementation on. You'd need to start adding a ton of custom classes just for that platform, whereas if you ignored it from the start, your custom class framework already exists so you're not trying to jerry-rig it in!
Therefore, GLESGAE will be battering X directly, as that's what the base library is on Linux - and by extension - Pandora.
I'll also add WinAPI support into the SVN later on, but for now, only dealing with Linux/Pandora is best.
Checking out the SVN
svn co -r 2 http://svn3.xp-dev.com/svn/glesgae/trunk glesgae
This tutorial uses SVN revision 2, so be sure to check that out for the full code.
Opening A Window
Actually opening a Window is fairly trivial.
It sometimes looks like a crazy long piece of madness, but you get a lot of control over how your window appears.
WinAPI is especially mad - huge amount of code just to open a window, but you get quite fine-grained control over it.
As we're aiming to be cross-platform from the outset, we'll split some stuff up out the road, so let's go create some directory hierarchy.
- Graphics
- Window
- Context
We'll deal with the Window bit first.
The Window Class
Some systems allow us to create multiple windows, others use the entire screen as one window. In a game application, we're usually only interested in the latter, so we'll design primarily for that use case.
Window.h
#ifndef _WINDOW_H_
#define _WINDOW_H_

namespace GLESGAE
{
	class RenderContext;
	class Window
	{
		public:
			Window()
			: mContext(0)
			, mWidth(0U)
			, mHeight(0U)
			{
			}
			
			virtual ~Window() {}
			
			/// Open this window with the given width and height
			virtual void open(const unsigned int width, const unsigned int height) = 0;
			
			/// Set this Window's Render Context
			virtual void setContext(RenderContext* const context) = 0;
			
			/// Get this Window's Render Context - returning the specified type
			template<typename T_Context> const T_Context* getContext() const { return reinterpret_cast<T_Context*>(mContext); }
			
			/// Get this Window's Render Context
			const RenderContext* getContext() const { return mContext; }
			
			/// Get the Width of this Window
			const unsigned int getWidth() const { return mWidth; }
			
			/// Get the Height of this Window
			const unsigned int getHeight() const { return mHeight; }
			
			/// Refresh the Window
			virtual void refresh() = 0;
			
			/// Close this window
			virtual void close() = 0;
			
		protected:
			RenderContext* mContext;
			unsigned int mWidth;
			unsigned int mHeight;
	};
}

#endif
This is our Window class.
We'll be using the Interface paradigm a lot, which is why these are all pure virtual functions ( meaning they must be overridden in derived classes. )
There's also one funny thing we're doing here with getContext - and that's making it a templated function so we can directly cast to the type we want via GLES1Context* myContext(myWindow->getContext<GLES1Context>()); rather than having to pull the pointer out, then waste another line to reinterpret_cast the pointer ourselves.
Of course, we also specify a standard getContext should we just need to use the standard interface - but grabbing the specific type in one line is particularly handy at times.
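As a quick sketch of the difference - GLES1Context is just used as an example type here, and myWindow is assumed to be a Window pointer we already hold:

// without the template, we'd pull the base pointer out and then cast it ourselves
const RenderContext* baseContext(myWindow->getContext());
const GLES1Context* es1Context(reinterpret_cast<const GLES1Context*>(baseContext));

// with the templated version, the cast is wrapped up for us in one line
const GLES1Context* es1ContextDirect(myWindow->getContext<GLES1Context>());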
So, let's implement the X11 window open-up-er!
X11 Window Class
X11Window.h
#ifndef _X11_WINDOW_H_ #define _X11_WINDOW_H_ #include "Window.h" namespace X11 { #include <X11/Xlib.h> } namespace GLESGAE { class X11Window : public Window { public: X11Window(); ~X11Window(); /// Open this window with the given width and height void open(const unsigned int width, const unsigned int height); /// Set this Window's Render Context void setContext(RenderContext* const context); /// Refresh the Window void refresh(); /// Close this window void close(); /// Returns the Display for platform specific bits X11::Display* const getDisplay() const { return mDisplay; } /// Returns the Window for platform specific bits X11::Window const getWindow() const { return mWindow; } private: X11::Display* mDisplay; X11::Window mWindow; }; } #endif
This one does some mad stuff.
We wrap the Xlib include around an X11 namespace. This might seem mad but.. we've just created our own Window class, and X11 has its own Window class so there's a conflict there.
Luckily with C++, you can set namespaces to separate chunks of code - and that's exactly what we're doing here!
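In isolation, the trick is nothing more than this ( a stripped-down sketch of what X11Window.h above is doing ):

// everything Xlib declares - including its own Window type - now lives under X11::
namespace X11
{
	#include <X11/Xlib.h>
}

// so X11::Window and GLESGAE::Window no longer fight over the same name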
NOTE: I've actually removed the namespace hacks and just renamed everything to stop conflicting... the namespaces for GL and X11 were starting to conflict themselves and it just proved to be a vicious nightmare when implementing shader support... so I redid it all! That said, as we're using SVN here, the SVN repository version and this guide still match up, so feel free to follow through, but remember that it changes later on!
X11Window.cpp
#include "X11Window.h" #include "../Context/RenderContext.h" namespace X11 { #include <X11/Xlib.h> } using namespace GLESGAE; using namespace X11; X11Window::X11Window() : mDisplay(XOpenDisplay(0)) , mWindow(0) { } X11Window::~X11Window() { if (0 != mWindow) close(); XDestroyWindow(mDisplay, mWindow); XCloseDisplay(mDisplay); } void X11Window::open(const unsigned int width, const unsigned int height) { // Store the width and height. mWidth = width; mHeight = height; // Create the actual window and store the pointer. mWindow = XCreateWindow(mDisplay // Pointer to the Display , DefaultRootWindow(mDisplay) // Parent Window , 0 // X of top-left corner , 0 // Y of top-left corner , width // requested width , height // requested height , 0 // border width , CopyFromParent // window depth , CopyFromParent // window class - InputOutput / InputOnly / CopyFromParent , CopyFromParent // visual type , 0 // value mask , 0); // attributes // Map the window to the display. XMapWindow(mDisplay, mWindow); } void X11Window::setContext(RenderContext* const context) { mContext = context; } void X11Window::refresh() { mContext->refresh(); XFlush(mDisplay); // this is a crutch till we handle events.. ignore! } void X11Window::close() { XDestroyWindow(mDisplay, mWindow); }
There's our namespace hack again, and some actual code that does something!
As stated in the comment, ignore that XFlush.. it's because we haven't written anything to deal with events yet, so that flushes all events out.
The XCreateWindow call has all its parameters commented to let you know what each part is.. on a Linux machine, typing man XCreateWindow will give you the man page for what they all mean. Linux manpages are very useful in coding!
We now need a RenderContext to play with... and this is where the fun begins!
Render Contexts
As we're striving for multi-platform goodness, we need to create a base Render Context implementation that they all conform to.
We also want to ensure that our Window and Render Context stuff is separate from one another - which is what we're doing here - as then we can do things like the X11Window Class, which'll run on standard Linux machines as well as the Pandora. It's not much, but the less duplicated code in an Engine, the easier it is to maintain!
For our purposes, we have two types of Render Contexts we can create on the Pandora - GLES1 and GLES2.
Discussion - ES1 vs ES2
Choosing one over the other is not quite as trivial a process as you'd think, as they each have their own little pros and cons.
ES1 is fixed function and is pretty much designed for low end hardware.
ES2 is shader based - you have full control over the vertex and fragment processing parts of the graphics pipeline.
If you're not wholly sure what this Graphics Pipeline malarkey is, I suggest you read up on it. Here's some handy links:
- http://en.wikipedia.org/wiki/Graphics_pipeline
- http://developer.nvidia.com/page/documentation.html - particularly the free books on CG and GPU Gems 1-3 - for theory if nothing else.
- http://www.khronos.org/opengles/2_X/ - specially the images at the bottom which shows what gets replaced!
However, as the old adage goes, with great power comes great responsibility. As the images on Khronos' site shows - ES2 replaces a rather large chunk of the fixed function pipeline. You therefore need to deal with the following yourself:
- Matrix Stacks for Model, View, Projection and Textures ( though Texture Stack isn't usually used much. )
- Alpha Test ( this can be an absolute killer.. )
- Transform and Lighting ( that's effectively the Matrix Stacks, and lighting is done in either Vertex or Fragment shaders now )
OpenGL 1.5 vs ES1
Generally, OpenGL 1.5 is essentially what ES1 is based on. Both are fixed function predominantly, but ES1 does rip out a number of things:
- No Immediate Mode (glBegin/glEnd).
- Tends to favour fixed point arithmetic.
- No quad rendering.
- Some Stack handling you need to do yourself.
- Display Lists, Accumulators, and a bunch of other stuff...
Generally however, if your app has been using VBOs or Vertex Arrays, there shouldn't be a great deal of a change.
One sneaky gotcha is that there are two main ES1 profiles - Common and Common Lite ... Common Lite ONLY has fixed point arithmetic, whereas Common has both fixed and floating point.
Additionally, ES1.0 does not support VBOs whereas ES1.1 does.
OpenGL 2.0 vs ES2
Again, OpenGL 2.0 is essentially what ES2 is based on... but specifically, the programmable pipeline part - all fixed function pipeline functions have now been removed. Therefore, as stated above, you need to deal with the following yourself ( there's a short sketch of the matrix side of this after the list ):
- Matrix Stacks for Model, View, Projection and Textures ( though Texture Stack isn't usually used much. )
- Alpha Test ( this can be an absolute killer.. )
- Transform and Lighting ( that's effectively the Matrix Stacks, and lighting is done in either Vertex or Fragment shaders now )
Along with:
- Vertex Attributes
- Shader Management
- Headaches - especially when trying to balance load across the vertex and fragment processors!
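To make the first of those responsibilities a little more concrete, here's a rough sketch of the kind of bookkeeping ES2 leaves to you - shaderProgram and buildModelViewProjection are made-up names, not engine code. Where ES1 tracks the matrices for you behind glMatrixMode and friends, in ES2 you build the model-view-projection matrix yourself and hand it to your shader as a uniform:

GLfloat mvp[16];
buildModelViewProjection(mvp); // hypothetical helper - ES2 gives you no matrix stack, so this is all your own code

// the vertex shader is assumed to declare: uniform mat4 u_mvp;
const GLint mvpLocation(glGetUniformLocation(shaderProgram, "u_mvp"));
glUniformMatrix4fv(mvpLocation, 1, GL_FALSE, mvp);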
The RenderContext Class
Like Window, we'll have a common interface to deal with things.
This is only a simple tutorial to instantiate a Window and Context without actually rendering anything as yet, so these classes are still pretty simple.
As we're wanting to support ES1 and ES2, we're going to do things a bit differently from our Window class as well...
RenderContext.h
#ifndef _RENDER_CONTEXT_H_
#define _RENDER_CONTEXT_H_

namespace GLESGAE
{
	class Window;
	class RenderContext
	{
		public:
			RenderContext() {}
			virtual ~RenderContext() {}
			
			/// Initialise this Context
			virtual void initialise() = 0;
			
			/// Shutdown this Context
			virtual void shutdown() = 0;
			
			/// Refresh this Context
			virtual void refresh() = 0;
			
			/// Bind to a Window
			virtual void bindToWindow(Window* const window) = 0;
	};
}

#endif
Pretty simple.. no crazy tricks as in Window.
FixedFunctionContext.h
#ifndef _FIXED_FUNCTION_CONTEXT_H_
#define _FIXED_FUNCTION_CONTEXT_H_

namespace GLESGAE
{
	class FixedFunctionContext
	{
		public:
			FixedFunctionContext() {}
			virtual ~FixedFunctionContext() {}
	};
}

#endif
This is where things get a bit more interesting... this class is empty as we're not defining anything, but you may be wondering what the point of this actually is?
As stated already, I'm aiming GLESGAE to be multi-platform... some platforms have both Fixed Function and Shader Based contexts, and it's not to say you can't emulate Fixed Function on a Shader Based context either, so we shall separate the two styles and pull them in using multiple inheritance later on.
ShaderBasedContext.h
#ifndef _SHADER_BASED_CONTEXT_H_
#define _SHADER_BASED_CONTEXT_H_

namespace GLESGAE
{
	class ShaderBasedContext
	{
		public:
			ShaderBasedContext() {}
			virtual ~ShaderBasedContext() {}
	};
}

#endif
Again, empty.. but as stated above.. some platforms can support both.
This'll come in handy for when we want to do some development tests on our Desktop machines before pushing over to the Pandora, for instance; or any other platform we write support for.
In the repository, there is a GLXContext class which'll run under Linux; which shows how easy it is to add support for other Contexts. We're interested in the Pandora here though, so we'll generate Contexts suitable for it now.
GLES1 Context Class
Again, we want to be able to be multi platform.. we know where the Pandora keeps its EGL and GLES files, but that _can_ change depending on the platform.. so we need to protect this a bit.
GLES1Context.h
#ifndef _GLES1_CONTEXT_H_ #define _GLES1_CONTEXT_H_ #if defined(PANDORA) #include <EGL/egl.h> #endif #include "RenderContext.h" #include "FixedFunctionContext.h" namespace GLESGAE { class Window; class X11Window; class GLES1Context : public RenderContext, public FixedFunctionContext { public: GLES1Context(); ~GLES1Context(); /// Initialise this Context void initialise(); /// Shutdown this Context void shutdown(); /// Refresh this Context void refresh(); /// Bind us to a Window void bindToWindow(Window* const window); private: X11Window* mWindow; EGLDisplay mDisplay; EGLContext mContext; EGLSurface mSurface; }; } #endif
GLES1 only supports fixed function, so we bring that in so that when we write the Renderer, we call the right functions on it.
Technically, being dependent on X11Window is a bit naughty.. but we can deal with that later...
Nothing crazy here, so we shall continue...
GLES1Context.cpp
#if defined(GLES1) #include <cstdio> #include "GLES1Context.h" #include "../Window/X11Window.h" #if defined(PANDORA) #include <GLES/gl.h> #endif using namespace GLESGAE; GLES1Context::GLES1Context() : mWindow(0) , mDisplay(0) , mContext(0) , mSurface(0) { } GLES1Context::~GLES1Context() { shutdown(); } void GLES1Context::initialise() { // Get the EGL Display.. mDisplay = eglGetDisplay( (reinterpret_cast<EGLNativeDisplayType>(mWindow->getDisplay())) ); if (EGL_NO_DISPLAY == mDisplay) { printf("failed to get egl display..\n"); } // Initialise the EGL Display if (0 == eglInitialize(mDisplay, NULL, NULL)) { printf("failed to init egl..\n"); } // Now we want to find an EGL Surface that will work for us... EGLint eglAttribs[] = { EGL_BUFFER_SIZE, 16 // 16bit Colour Buffer , EGL_NONE }; EGLConfig eglConfig; EGLint numConfig; if (0 == eglChooseConfig(mDisplay, eglAttribs, &eglConfig, 1, &numConfig)) { printf("failed to get context..\n"); } // Create the actual surface based upon the list of configs we've just gotten... mSurface = eglCreateWindowSurface(mDisplay, eglConfig, reinterpret_cast<EGLNativeWindowType>(mWindow->getWindow()), NULL); if (EGL_NO_SURFACE == mSurface) { printf("failed to get surface..\n"); } // Setup the EGL Context EGLint contextAttribs[] = { EGL_CONTEXT_CLIENT_VERSION, 1 , EGL_NONE }; // Create our Context mContext = eglCreateContext (mDisplay, eglConfig, EGL_NO_CONTEXT, contextAttribs); if (EGL_NO_CONTEXT == mContext) { printf("failed to get context...\n"); } // Bind the Display, Surface and Contexts together eglMakeCurrent(mDisplay, mSurface, mSurface, mContext); // Set up our viewport glViewport(0, 0, mWindow->getWidth(), mWindow->getHeight()); } void GLES1Context::shutdown() { eglDestroyContext(mDisplay, mContext); eglDestroySurface(mDisplay, mSurface); eglTerminate(mDisplay); } void GLES1Context::refresh() { // Trigger a buffer swap eglSwapBuffers(mDisplay, mSurface); // Clear the buffers for the next frame. glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); } void GLES1Context::bindToWindow(Window* const window) { // Rememeber the Window we're bound to mWindow = reinterpret_cast<X11Window*>(window); // Set the context as us. mWindow->setContext(this); } #endif
This one can catch you offguard as we #ifdef the entire file to ensure we're not compiling if we're specified a GLX context.
I'm also being lazy here in not actually dealing with the failure operations... technically you really should be...
The other thing to get your head around is there are two Displays here.. there is the EGLDisplay and the X11 Display.
This is just something you need to deal with, as on Android for instance, the "X11 Display" is dalvik's renderer window, but the EGLDisplay calls remain the same. It's for cross-platform compatibility more than anything else.
GLES2 Context Class
ES2 is actually almost exactly the same as ES1 at this point.. just that the class names are different, it pulls in ShaderBasedContext, and what we pass in to the EGLConfig and Context are slightly different.
For completeness sake, here are the two files anyway.
GLES2Context.h
#ifndef _GLES2_CONTEXT_H_ #define _GLES2_CONTEXT_H_ #if defined(PANDORA) #include <EGL/egl.h> #endif #include "RenderContext.h" #include "ShaderBasedContext.h" namespace GLESGAE { class Window; class X11Window; class GLES2Context : public RenderContext, public ShaderBasedContext { public: GLES2Context(); ~GLES2Context(); /// Initialise this Context void initialise(); /// Shutdown this Context void shutdown(); /// Refresh this Context void refresh(); /// Bind us to a Window void bindToWindow(Window* const window); private: X11Window* mWindow; EGLDisplay mDisplay; EGLContext mContext; EGLSurface mSurface; }; } #endif
GLES2Context.cpp
#if defined(GLES2) #include <cstdio> #include "GLES2Context.h" #include "../Window/X11Window.h" #if defined(PANDORA) #include <GLES2/gl2.h> #endif using namespace GLESGAE; GLES2Context::GLES2Context() : mWindow(0) , mDisplay(0) , mContext(0) , mSurface(0) { } GLES2Context::~GLES2Context() { shutdown(); } void GLES2Context::initialise() { // Get the EGL Display.. mDisplay = eglGetDisplay( (reinterpret_cast<EGLNativeDisplayType>(mWindow->getDisplay())) ); if (EGL_NO_DISPLAY == mDisplay) { printf("failed to get egl display..\n"); } // Initialise the EGL Display if (0 == eglInitialize(mDisplay, NULL, NULL)) { printf("failed to init egl..\n"); } // Now we want to find an EGL Surface that will work for us... EGLint eglAttribs[] = { EGL_BUFFER_SIZE, 16 // 16bit Colour Buffer , EGL_RENDERABLE_TYPE, EGL_OPENGL_ES2_BIT // We want an ES2 config , EGL_NONE }; EGLConfig eglConfig; EGLint numConfig; if (0 == eglChooseConfig(mDisplay, eglAttribs, &eglConfig, 1, &numConfig)) { printf("failed to get context..\n"); } // Create the actual surface based upon the list of configs we've just gotten... mSurface = eglCreateWindowSurface(mDisplay, eglConfig, reinterpret_cast<EGLNativeWindowType>(mWindow->getWindow()), NULL); if (EGL_NO_SURFACE == mSurface) { printf("failed to get surface..\n"); } // Setup the EGL context EGLint contextAttribs[] = { EGL_CONTEXT_CLIENT_VERSION, 2 , EGL_NONE }; // Create our Context mContext = eglCreateContext (mDisplay, eglConfig, EGL_NO_CONTEXT, contextAttribs); if (EGL_NO_CONTEXT == mContext) { printf("failed to get context...\n"); } // Bind the Display, Surface and Contexts together eglMakeCurrent(mDisplay, mSurface, mSurface, mContext); // Setup the viewport glViewport(0, 0, mWindow->getWidth(), mWindow->getHeight()); } void GLES2Context::shutdown() { eglDestroyContext(mDisplay, mContext); eglDestroySurface(mDisplay, mSurface); eglTerminate(mDisplay); } void GLES2Context::refresh() { // Trigger a buffer swap eglSwapBuffers(mDisplay, mSurface); // Clear the buffers for the next frame. glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); } void GLES2Context::bindToWindow(Window* const window) { // Rememeber the Window we're bound to mWindow = reinterpret_cast<X11Window*>(window); // Set the context as us. mWindow->setContext(this); } #endif
A Simple Test
Of course, you probably want to see this in action!
It's a bit anti-climactic, but it works at least.. so let's create a little test program ( this is WindowContextTest in the SVN )
main.cpp
#include <cstdlib>

#if defined(LINUX)
	#include "../../Graphics/Window/X11Window.h"
#endif

#if defined(GLX)
	#include "../../Graphics/Context/GLXContext.h"
#elif defined(GLES1)
	#include "../../Graphics/Context/GLES1Context.h"
#elif defined(GLES2)
	#include "../../Graphics/Context/GLES2Context.h"
#endif

using namespace GLESGAE;

int main(void)
{
	#if defined(GLX)
		GLXContext* context(new GLXContext);
	#elif defined(GLES1)
		GLES1Context* context(new GLES1Context);
	#elif defined(GLES2)
		GLES2Context* context(new GLES2Context);
	#endif
	
	#if defined(LINUX)
		X11Window* window(new X11Window);
	#endif
	
	context->bindToWindow(window);
	window->open(800, 480);
	context->initialise();
	window->refresh();
	
	while(1)
		window->refresh();
}
It's not that scary honest! Most of it is just dealing with what version of Window and RenderContext to pull in.
As said, the SVN has a GLXContext for Linux as well... WinAPI support will be added at some point as well.
So, logic wise, we create a Window and a RenderContext.
We then bind the Window to the Context, then open said Window with a specified Width and Height ( 800 x 480 in this case. )
Then we initialise the Context.
Finally, we go into an infinite loop, refreshing the Window ( and in turn, the Context bound to it. )
Simple!
To stop it, just close the Window.. as we're not catching events yet, this will cause it to crash.
Building the Example
In the SVN there are Makefiles already setup for you.. just trigger make -f MakefileES1.pandora or whatever your chosen configuration is, and it'll happily build for you and spit out a GLESGAE.pandora binary for you to run.
Alternatively, if you use CodeLite, there's a Workspace/Project set for you preconfigured.
Gotcha I had an issue about not having -lgcc_s ... I did have /lib/libgcc_s.so.1 so all I had to do was sudo ln -s /lib/libgcc_s.so.1 /lib/libgcc_s.so and it was happy again.
Next Time
We'll be looking at an Event Queue next time.. dealing with the Window Events and some of those lovely Pandora controls!
Then after that, we shall setup the Renderers and get some stuff on screen.
GLESGAE - Event and Input Systems
Fast Track
We're on SVN revision 3 now so, while I finish writing this up, feel free to grab and mess about:
svn co -r 3 http://svn3.xp-dev.com/svn/glesgae/trunk/ glesgae
Introduction
Dealing with Input - and by extension Events - can be a right pain in the backside.
Each platform generally comes with its own set of Events to deal with - even though most of them are pretty common.
Therefore, each platform should generally have its own Event System to deal with them, and a global Engine Event System that the game deals with after the platform specifics have been filtered out.
So what game-side events would we be interested in? Input, for one... our application being closed for whatever reason... and for platform specifics; Android has that Activity Lifecycle you need to be aware of and handle.
In terms of our Pandora, there aren't many platform events we really need to deal with. Our application being closed being perhaps the only immediate one, as well as the lid being shut.
We also need to deal with the touchscreen ( mouse events ) and keyboard though - and that'll come through X along with the other platform events.
Our nubs, face buttons, shoulder buttons and dpad we need to deal with manually, however.
Beware though, just handling Input as standard Events is a bit iffy.. so we'll abstract out some Input interfaces so we can perform some more sensible logic such as if(dpad->Left()) moveLeft(); rather than polling an event queue and pulling off the events manually.
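Purely as a sketch of the shape that sentence is hinting at - these are not the engine's actual input classes ( those come later ), just the style of interface we're aiming for:

// a hypothetical dpad interface; game code then reads like logic rather than event plumbing
class Pad
{
	public:
		virtual ~Pad() {}
		virtual bool Left() const = 0;
		virtual bool Right() const = 0;
		virtual bool Up() const = 0;
		virtual bool Down() const = 0;
		// ... face buttons, shoulders, and so on
};

// elsewhere: if (dpad->Left()) moveLeft();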
The Event System
As said, our Event System will come in two parts - a generic version that a game will use, and a platform specific one that filters interesting events and such like.
We'll start with the generic one first, but we'll switch between them during the rest of the weekend.
Generic Event System
I'm going to model the event system sort of on the Android Activity life cycle - more because that's the weirdest setup you'll likely come across, and it's very painful to take something that doesn't conform in the slightest and try to get it working in this style.
So, we need to think about a generic set of Events that could happen that we'd be interested in. Android has the following in the Activity cycle:
- onCreate
- onStart
- onPause
- onResume
- onRestart
- onStop
- onDestroy
For our purposes, we can simplify this to:
- onStart
- onPause
- onResume
- onDestroy
Gotcha I've missed out onCreate and gone to onStart ... onCreate is not guaranteed to be the first one run! If dalvik has decided to cache your library for whatever reason, this can be skipped. But I digress, this is Android stuff, not Pandora!
You might be sitting there thinking "why bother with the Android stuff if we're doing Pandora related things?"
Well, think of it like this.. we have a lid that we can close. We'd want to trigger an onPause style event when that happens, and an onResume event when it's lifted.
It's also handy to fire an event round during creation and destruction of our app, as we'll be taking the Observer pattern for the Event System, so being able to prod things to start up and shut themselves down is particularly handy. And obviously, an onDestroy event can be triggered via quitting the game, closing the window, or whatever other means.
Event.h
#ifndef _EVENT_H_
#define _EVENT_H_

#include "EventTypes.h"

namespace GLESGAE
{
	class Event
	{
		public:
			Event(const EventType& eventType) : mEventType(eventType) {}
			virtual ~Event() {}
			
			/// Get the type of event this is.. useful for classes which mark themselves as EventObservers for multiple Events
			const EventType& getEventType() const { return mEventType; }
			
		private:
			EventType mEventType;
	};
}

#endif
Event is fairly straightforward. The typical base class for polymorphic nightmares, with a bit of custom RTTI thrown in for good measure.
EventType itself is defined in EventTypes.h, which you can find below:
EventTypes.h
#ifndef _EVENT_TYPES_H_
#define _EVENT_TYPES_H_

#include <string>

namespace GLESGAE
{
	typedef std::string EventType;
	
	namespace SystemEvents
	{
		namespace App
		{
			extern EventType Started;
			extern EventType Paused;
			extern EventType Resumed;
			extern EventType Destroyed;
		}
		
		namespace Window
		{
			extern EventType Opened;
			extern EventType Resized;
			extern EventType Closed;
		}
	}
}

#endif
This is where things start to get a little strange.
We're using namespaces here to separate out the Event Types. This stops us from polluting the main namespace with global spam; and although these are still effectively globals, they're at least organised spam!
Another thing is we've typedef'd EventType to a string. It's a typedef so we can change it to something a bit less heavy to check against later, but strings made the most sense at the time - make the code obvious first, optimise later... or K.I.S.S. - Keep It Simple, Stupid ;)
The externs are, of course, defined in the cpp file. That file is just the same namespaces again with the strings filled in, so I won't bother printing it up here - it's in the SVN if you want it anyway!
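That said, for completeness, here's roughly the shape of that file - a minimal sketch, with illustrative string values rather than whatever the SVN actually uses:

// EventTypes.cpp - illustrative sketch only; the real definitions live in the SVN.
#include "EventTypes.h"

namespace GLESGAE
{
    namespace SystemEvents
    {
        namespace App
        {
            EventType Started("App::Started");
            EventType Paused("App::Paused");
            EventType Resumed("App::Resumed");
            EventType Destroyed("App::Destroyed");
        }
        namespace Window
        {
            EventType Opened("Window::Opened");
            EventType Resized("Window::Resized");
            EventType Closed("Window::Closed");
        }
    }
}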
The Common Event System
As stated, we have a generic Event System and then a platform specific one.
Our generic one is pretty powerful in itself though.. let's have a look at the interface:
EventSystem.h
#ifndef _EVENT_SYSTEM_H_ #define _EVENT_SYSTEM_H_ #include "EventTypes.h" // This defines general purpose Event System logic. // Each of the platform specific headers ( included below ) extend this. #include <map> #include <vector> namespace GLESGAE { class Event; class EventObserver; class EventTrigger; class Window; class CommonEventSystem { public: CommonEventSystem() : mEventObservers() , mEventTriggers() { } virtual ~CommonEventSystem() {} /// Update the Event System. virtual void update() = 0; /// Bind to specified Window. virtual void bindToWindow(Window* const window) = 0; /// Register an Event Type. void registerEventType(const EventType& eventType); /// Register an Event Observer with Event Type. void registerObserver(const EventType& eventType, EventObserver* const observer); /// Deregister an Event Observer from Event Type. void deregisterObserver(const EventType& eventType, EventObserver* const observer); /// Register a Custom Event Trigger. void registerEventTrigger(const EventType& eventType, EventTrigger* const trigger); /// Deregister a Custom Event Trigger. void deregisterEventTrigger(const EventType& eventType, EventTrigger* const trigger); /// Send an Event to all Observers of this type. /// If you want to retain this event beyond it being in the receiving scope, you'll have to copy it. void sendEvent(const EventType& eventType, Event* event); /// Update all the Triggers to send Events if necessary. void updateAllTriggers(); protected: std::map<EventType, std::vector<EventObserver*> > mEventObservers; //!< Outer = Event Type; Inner = Array of Event Observers for this Event Type std::map<EventType, EventTrigger*> mEventTriggers; }; } #if defined(LINUX) #include "X11/X11EventSystem.h" #endif #endif
Oopsie: In my mad dash to get this finished and put up, I've only just noticed a small hiccup in my logic - updateAllTriggers should be protected or private, as we only really want to call it from update, and not let random outside forces prod an event through any time they feel like it!
Fairly weighty class, methinks!
The comments describe pretty much what each function does, but I'll describe a few of the more interesting ones.
Firstly, the class is CommonEventSystem rather than EventSystem.. and then does something strange by pulling in a platform specific header at the bottom. Why is this? So you can platform-agnostically pull in "EventSystem.h" and it'll grab the correct platform Event System for you, without having to deal with it on your side.
It does kind of go against the grain of Class Name = File Name... but for sanity's sake in your application, it's much better, believe me!
Event Systems generally don't work unless they have a handle to something the OS has given them. This is usually the window, which is why we bind the Event System to a Window via the bindToWindow call. It's rather platform specific, so it's set up as a pure virtual to force it to be overridden further up the chain... the derived version can then recast the Window to the platform specific type and get at all the gooey bits - X11Display, etc.
Now we get to the Events handling bit itself.
Generally, we'd be going through the following process:
- registerEventType
- registerEventTrigger
- perhaps multiple registerObservers
So, we'd register an Event type.. this creates space in our Observers array. Optionally, we could ( and I might do this later ) extend the Trigger map to work in the same vein.
We then register an Event Trigger, and bind as many Event Observers to that type as we want.
This basically boils down to Triggers send events, Observers wait for them. Nice and simple!
When building your own classes you can inherit from EventTrigger and EventObserver, and override their respective functions.
The platform specific Input System pulls in EventObserver and registers itself as an Observer of many Events. This is very useful, but it does give you extra overhead in having to figure out what Event you've been given, and recasting it to access the data. This is why EventType has been typedef'd so we can change it later - comparing strings isn't cheap! It does make it pretty obvious what's going on though, so for the time being it's particularly useful. We'll need a Resource Manager or some form of CRC-like encoding class further down the line, and these will all be changed to use that.
EventTrigger.h
#ifndef _EVENT_TRIGGER_H_ #define _EVENT_TRIGGER_H_ namespace GLESGAE { class Event; class EventTrigger { public: EventTrigger() {} virtual ~EventTrigger() {} /// Check if this trigger has an Event ready. /// This MUST return 0 if there is no Event, otherwise it's the event pointer. /// This event WILL be cleaned up by the Event System once all Observers have observed the event. virtual Event* hasEvent() = 0; }; } #endif
Nice and simple, does what it says on the tin.
Make note of the comment though - as there's only the one function, it's checked to see if you return something.. if you do not have an Event when this function is called, you must return 0!
EventObserver.h
#ifndef _EVENT_OBSERVER_H_ #define _EVENT_OBSERVER_H_ namespace GLESGAE { class Event; class EventObserver { public: EventObserver() {} virtual ~EventObserver() {} /// Trigger this Observer to receive an event. virtual void receiveEvent(Event* const event) = 0; }; } #endif
Perhaps even simpler... it's up to the derived class to deal with the event, check its type, and so on.
Gotcha: I haven't quite mentioned this yet as we haven't got to it, but marking it here seems rather important. When receiveEvent is called from the EventSystem, it'll be from the sendEvent function, which takes an unprotected pointer because it WILL delete your event after sending it to all Observers. This is important to note: when your receiveEvent gets triggered, if you want to keep that Event around for whatever reason, you will have to perform a deep copy on it - a shallow copy will get wiped out!
Why do we do this? So you can effectively "fire and forget" your events, and have the system clean up after you.
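To make the flow concrete before we move on, here's a hypothetical example - a made-up HealthChanged event and an observer that deep copies it, built purely from the interfaces above ( every name in it is invented for illustration ):

// Illustrative only - none of this exists in GLESGAE; it just shows the intended usage pattern.
#include "Event.h"
#include "EventObserver.h"
#include "EventTypes.h"

namespace MyGame
{
    // A custom event type ( the actual string would be defined in a .cpp, as with the system events ).
    extern GLESGAE::EventType HealthChanged;

    class HealthChangedEvent : public GLESGAE::Event
    {
        public:
            HealthChangedEvent(const int newHealth) : GLESGAE::Event(HealthChanged), mHealth(newHealth) {}
            int getHealth() const { return mHealth; }

        private:
            int mHealth;
    };

    class HealthBar : public GLESGAE::EventObserver
    {
        public:
            HealthBar() : mLastEvent(0) {}
            ~HealthBar() { delete mLastEvent; }

            void receiveEvent(GLESGAE::Event* const event)
            {
                if (event->getEventType() == HealthChanged) {
                    // Deep copy - the Event System deletes the original once every Observer has seen it.
                    delete mLastEvent;
                    mLastEvent = new HealthChangedEvent(*reinterpret_cast<HealthChangedEvent*>(event));
                }
            }

        private:
            HealthChangedEvent* mLastEvent;
    };
}

// Wiring it up - eventSystem being whichever platform EventSystem we've created:
//   eventSystem->registerEventType(MyGame::HealthChanged);
//   eventSystem->registerObserver(MyGame::HealthChanged, &healthBar);
//   eventSystem->sendEvent(MyGame::HealthChanged, new MyGame::HealthChangedEvent(42));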
The X11 Event System
Now onto the actual Event System we'll be using on our Pandoras.
It comes with its own set of Events, so we'll define them as follows:
X11Events.h
#ifndef _X11_EVENT_TYPES_H_ #define _X11_EVENT_TYPES_H_ #include "../EventTypes.h" #include "../Event.h" namespace X11 { #include <X11/Xlib.h> } namespace GLESGAE { namespace X11Events { namespace Input { namespace Mouse { extern EventType Moved; extern EventType ButtonDown; extern EventType ButtonUp; class MovedEvent : public Event { public: MovedEvent(int x, int y) : Event(Moved) , mX(x) , mY(y) { } /// retrieve the new X co-ord const int getX() const { return mX; } /// retrieve the new Y co-ord const int getY() const { return mY; } private: int mX; int mY; }; class ButtonDownEvent : public Event { public: ButtonDownEvent(unsigned int button) : Event(ButtonDown) , mButton(button) { } /// retrieve the button pressed const unsigned int getButton() const { return mButton; } private: unsigned int mButton; }; class ButtonUpEvent : public Event { public: ButtonUpEvent(unsigned int button) : Event(ButtonUp) , mButton(button) { } /// retrieve the button released const unsigned int getButton() const { return mButton; } private: unsigned int mButton; }; } namespace Keyboard { extern EventType KeyDown; extern EventType KeyUp; class KeyDownEvent : public Event { public: KeyDownEvent(X11::KeySym key) : Event(KeyDown) , mKey(key) { } /// retrieve the key pressed const X11::KeySym getKey() const { return mKey; } private: X11::KeySym mKey; }; class KeyUpEvent : public Event { public: KeyUpEvent(X11::KeySym key) : Event(KeyUp) , mKey(key) { } /// retrieve the key released const X11::KeySym getKey() const { return mKey; } private: X11::KeySym mKey; }; } } } } #endif
You should be used to my mad coding style by now, and this is pretty straight forward anyway.
We define the Event Types as externs as usual ( so we only ever have one copy of each in the .cpp file rather than recreating instances all over the place ), and then define the Events themselves. As they're simple, and effectively need to be fast and inlined, I've defined them all in the header.
The only oddness is that our X11 namespace hack makes a re-appearance; other than that, it's fairly typical.
X11EventSystem.h
#ifndef _X11_EVENT_SYSTEM_H_ #define _X11_EVENT_SYSTEM_H_ namespace GLESGAE { class Window; class X11Window; class EventSystem : public CommonEventSystem { public: EventSystem(); ~EventSystem(); /// Update the Event System void update(); /// Bind to the Window void bindToWindow(Window* const window); private: bool mActive; X11Window* mWindow; int mCurrentPointerX; int mCurrentPointerY; }; } #endif
Not a great deal in here, is there?
We just overload the standard CommonEventSystem interface, and store an X11Window pointer directly rather than a generic Window pointer.
The fun stuff is in the cpp...
X11EventSystem.cpp
#if defined(LINUX) #include <cstdio> #include "../EventSystem.h" #include "../EventTypes.h" #include "../SystemEvents.h" #include "X11Events.h" #include "../../Graphics/Window/X11Window.h" namespace X11 { #include <X11/Xlib.h> } using namespace GLESGAE; EventSystem::EventSystem() : CommonEventSystem() , mActive(true) , mWindow(0) , mCurrentPointerX(0) , mCurrentPointerY(0) { // Register System Events registerEventType(SystemEvents::App::Started); registerEventType(SystemEvents::App::Paused); registerEventType(SystemEvents::App::Resumed); registerEventType(SystemEvents::App::Destroyed); registerEventType(SystemEvents::Window::Opened); registerEventType(SystemEvents::Window::Resized); registerEventType(SystemEvents::Window::Closed); // Register X11 Specific Events registerEventType(X11Events::Input::Mouse::Moved); registerEventType(X11Events::Input::Mouse::ButtonDown); registerEventType(X11Events::Input::Mouse::ButtonUp); registerEventType(X11Events::Input::Keyboard::KeyDown); registerEventType(X11Events::Input::Keyboard::KeyUp); } EventSystem::~EventSystem() { } void EventSystem::bindToWindow(Window* const window) { mWindow = reinterpret_cast<X11Window*>(window); } void EventSystem::update() { // Deal with the pointer first. X11::Window rootReturn; X11::Window childReturn; int rootXReturn; int rootYReturn; int pointerX; int pointerY; unsigned int maskReturn; if (true == X11::XQueryPointer(mWindow->getDisplay(), mWindow->getWindow() , &rootReturn, &childReturn , &rootXReturn, &rootYReturn , &pointerX, &pointerY, &maskReturn)) { if ((pointerX != mCurrentPointerX) && (pointerY != mCurrentPointerY)) sendEvent(X11Events::Input::Mouse::Moved, new X11Events::Input::Mouse::MovedEvent(pointerX, pointerY)); mCurrentPointerX = pointerX; mCurrentPointerY = pointerY; } // Rest of the events... X11::XEvent event; while (X11::XPending(mWindow->getDisplay())) { X11::XNextEvent(mWindow->getDisplay(), &event); switch (event.type) { case Expose: if (event.xexpose.count != 0) break; break; case ConfigureNotify: sendEvent(SystemEvents::Window::Resized, new SystemEvents::Window::ResizedEvent(event.xconfigure.width, event.xconfigure.height)); break; case KeyPress: sendEvent(X11Events::Input::Keyboard::KeyDown, new X11Events::Input::Keyboard::KeyDownEvent(X11::XLookupKeysym(&event.xkey, 0))); break; case KeyRelease: sendEvent(X11Events::Input::Keyboard::KeyUp, new X11Events::Input::Keyboard::KeyUpEvent(X11::XLookupKeysym(&event.xkey, 0))); break; case ButtonRelease: sendEvent(X11Events::Input::Mouse::ButtonUp, new X11Events::Input::Mouse::ButtonUpEvent(event.xbutton.button)); break; case ButtonPress: sendEvent(X11Events::Input::Mouse::ButtonDown, new X11Events::Input::Mouse::ButtonDownEvent(event.xbutton.button)); break; case ClientMessage: if (*X11::XGetAtomName(mWindow->getDisplay(), event.xclient.message_type) == *"WM_PROTOCOLS") { sendEvent(SystemEvents::Window::Closed, new SystemEvents::Window::ClosedEvent); sendEvent(SystemEvents::App::Destroyed, new SystemEvents::App::DestroyedEvent); } else if (event.xclient.data.l[0] == mWindow->getDeleteMessage()) { sendEvent(SystemEvents::Window::Closed, new SystemEvents::Window::ClosedEvent); sendEvent(SystemEvents::App::Destroyed, new SystemEvents::App::DestroyedEvent); } break; default: break; } } } #endif
Now we have some actual code.
First off, we register all the System and X11 specific events we'll be sending. This allows us to create the arrays and set things up so that other systems can observe these events, and we can actually send them.
When we bind the Window, we recast it to our specific type and store it.. nothing else really needed.
Going through the update function, however... is a bit more involved.
I've split it into handling the Pointer co-ordinates first, as they're dealt with slightly differently, and then running through the standard Event Loop to catch the things we're interested in.
The pointer stuff is pretty straight forward - store the current position and if it's moved, send an Event. It's all API gubbins really, and you can read up on it if you like.
One hidden gotcha is XQueryPointer will give you the position of the mouse pointer as long as it's on the same screen as your application - therefore, you can get into a situation where you're receiving co-ordinates that are outside your window! So be careful!
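If that matters for your game, one option is to guard the pointer check before firing the event. A sketch only, assuming the window's width and height are available ( they aren't exposed on our Window class as it stands ); note it also fires when either axis changes, rather than requiring both to change as the code above does:

// Sketch: ignore pointer positions outside the window, and fire on a change in either axis.
if ((pointerX >= 0) && (pointerY >= 0) && (pointerX < windowWidth) && (pointerY < windowHeight)) {
    if ((pointerX != mCurrentPointerX) || (pointerY != mCurrentPointerY))
        sendEvent(X11Events::Input::Mouse::Moved, new X11Events::Input::Mouse::MovedEvent(pointerX, pointerY));
    mCurrentPointerX = pointerX;
    mCurrentPointerY = pointerY;
}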
The Event Queue itself is also API gubbins for the most part - we just run through it and send Events for the parts we're interested in.
Of note is the Client Message where we detect the closure of the window... these are generally handled via the WM_PROTOCOLS set, or via a special delete message atom... so we check for both, just in case.
The Input System
The general Input System is pretty basic, and just defines an interface for the platform specific variants to conform to.
Inputs generally link up with Events.. basic events being Keyboard and Pointers from the Windowing System.
Joysticks and other things can come from X11 as well.. though are not currently supported due to lack of time, motivation and testing equipment.
Controllers
We're again abusing namespaces here to separate things out. This does give us huge amounts of control over generic Controllers like Pads and Joysticks as we can define our own button sets pretty quickly - see the PandoraInputSystem for more details on the Pandora Buttons Set.
Generally, these are all pretty similar, so check the code out if you want more info, for I'll just define the PadController here...
Pad.h
#ifndef _PAD_H_ #define _PAD_H_ #include "Controller.h" #include "ControllerTypes.h" #include <vector> namespace GLESGAE { class InputSystem; namespace Controller { class PadController : public CommonController { friend class GLESGAE::InputSystem; public: PadController(const Controller::Id controllerId, const int buttons) : CommonController(Pad, controllerId) , mButtons(buttons) { } /// Get the value of the specified button const float getButton(const Button button) const { // TODO: actual error checking return mButtons[button]; } /// Get amount of buttons const unsigned int getNumButtons() const { return mButtons.size(); } protected: /// Set Button data void setButton(const Button button, const float data) { // TODO: actual error checking mButtons[button] = data; } private: std::vector<float> mButtons; }; } } #endif
Again, my lack of error checking is shocking, so I shall leave that up to you!
Literally all a Pad is, is an array of buttons. We've defined them as floats in case we have pressure sensitive buttons, but we could get away with booleans for simple digital pads that are just on/off.
We've also protected the set functions, and made the Input System a friend so it can access them.
This is to stop accidental setting of the buttons from random systems, as that'd just confuse everything.
CommonController is just a base class which contains a Type and Id of the Controller.. Type being Pad, Keyboard, etc.. and the Id being a unique identifier for this Controller. Again, read the SVN if you want more details, as it's pretty straight forward.
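For reference, it boils down to something like this - a sketch only, combined into one snippet for brevity; the real definitions are split across ControllerTypes.h and Controller.h in the SVN, and the typedefs here are my assumptions:

// Sketch of the idea only - see ControllerTypes.h and Controller.h in the SVN for the real thing.
namespace GLESGAE
{
    namespace Controller
    {
        typedef unsigned int Id;        // unique identifier per controller ( assumed typedef )
        typedef unsigned int Button;    // button index ( assumed typedef )

        enum Type
        {
            Keyboard
        ,   Pointer
        ,   Joystick
        ,   Pad
        };

        class CommonController
        {
            public:
                CommonController(const Type type, const Id id) : mType(type), mId(id) {}
                virtual ~CommonController() {}

                const Type getType() const { return mType; }
                const Id getId() const { return mId; }

            private:
                Type mType;
                Id mId;
        };
    }
}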
Pandora Input System
The Pandora Input System is mostly the Linux Input System with the addition of using libPND to access the nubs and dpad/face buttons.
Like the Event System, the Inputs get automatically updated for you during the Input System update, so you don't need to poll manually.
We could have abused the Event System further by tagging each action as an event to watch, but that's perhaps a bit overboard.
So let's have a look at the Common Input System first.. it's pretty big but defines everything we'd ever want, really.
InputSystem.h
#ifndef _INPUT_SYSTEM_H_ #define _INPUT_SYSTEM_H_ #include "ControllerTypes.h" namespace GLESGAE { namespace Controller { class KeyboardController; class PointerController; class JoystickController; class PadController; } class CommonInputSystem { public: CommonInputSystem() {} virtual ~CommonInputSystem() {} /// Update the Input System virtual void update() = 0; /// Retreive number of Active Keyboards. virtual const Controller::Id getNumberOfKeyboards() const = 0; /// Retreive number of Active Joysticks. virtual const Controller::Id getNumberOfJoysticks() const = 0; /// Retreive number of Active Pads. virtual const Controller::Id getNumberOfPads() const = 0; /// Retreive number of Active Pointers. virtual const Controller::Id getNumberOfPointers() const = 0; /// Create new Keyboard - will return NULL if no more available. virtual Controller::KeyboardController* const newKeyboard() = 0; /// Create new Joystick - will return NULL if no more available. virtual Controller::JoystickController* const newJoystick() = 0; /// Create new Pad - will return NULL if no more available. virtual Controller::PadController* const newPad() = 0; /// Create new Pointer - will return NULL if no more available. virtual Controller::PointerController* const newPointer() = 0; /// Grab another instance of the specified Keyboard - returns NULL if not created. virtual Controller::KeyboardController* const getKeyboard(const Controller::Id id) = 0; /// Grab another instance of the specified Joystick - returns NULL if not created. virtual Controller::JoystickController* const getJoystick(const Controller::Id id) = 0; /// Grab another instance of the specified Pointer - returns NULL if not created. virtual Controller::PointerController* const getPointer(const Controller::Id id) = 0; /// Grab another instance of the specified Pad - returns NULL if not created. virtual Controller::PadController* const getPad(const Controller::Id id) = 0; /// Destroy a Keyboard. virtual void destroyKeyboard(Controller::KeyboardController* const keyboard) = 0; /// Destroy a Joystick. virtual void destroyJoystick(Controller::JoystickController* const joystick) = 0; /// Destroy a Pad. virtual void destroyPad(Controller::PadController* const pad) = 0; /// Destroy a Pointer. virtual void destroyPointer(Controller::PointerController* const pointer) = 0; }; } #if defined(PANDORA) #include "Pandora/PandoraInputSystem.h" #elif defined(LINUX) #include "Linux/LinuxInputSystem.h" #endif #endif
Again, we abuse the fact that we just need to pull in InputSystem.h and we get the platform specific variant automatically.
There isn't really much to the Input System.. we have functionality for creating, retrieving and deleting Pads, Pointers, Keyboards and Joysticks. We can also query how many there are connected to the system, and call the standard update function.
PandoraInputSystem.h
#ifndef _PANDORA_INPUT_SYSTEM_H_ #define _PANDORA_INPUT_SYSTEM_H_ #include <vector> #include "../ControllerTypes.h" #include "../../Events/EventObserver.h" namespace X11 { #include <X11/Xlib.h> } namespace GLESGAE { namespace Controller { namespace Pandora { extern Controller::Button Up; extern Controller::Button Down; extern Controller::Button Left; extern Controller::Button Right; extern Controller::Button Start; extern Controller::Button Select; extern Controller::Button Pandora; extern Controller::Button Y; extern Controller::Button B; extern Controller::Button X; extern Controller::Button A; extern Controller::Button L1; extern Controller::Button L2; extern Controller::Button R1; extern Controller::Button R2; extern Controller::Id LeftNub; extern Controller::Id RightNub; extern Controller::Id Buttons; } } class Event; class EventSystem; class InputSystem : public CommonInputSystem, public EventObserver { public: InputSystem(EventSystem* const eventSystem); ~InputSystem(); /// Update the Input System void update(); /// Receive an Event void receiveEvent(Event* const event); /// Retreive number of Active Keyboards. const Controller::Id getNumberOfKeyboards() const; /// Retreive number of Active Joysticks. const Controller::Id getNumberOfJoysticks() const; /// Retreive number of Active Pads. const Controller::Id getNumberOfPads() const ; /// Retreive number of Active Pointers. const Controller::Id getNumberOfPointers() const; /// Create new Keyboard - will return NULL if no more available. Controller::KeyboardController* const newKeyboard(); /// Create new Joystick - will return NULL if no more available. Controller::JoystickController* const newJoystick(); /// Create new Pad - will return NULL if no more available. Controller::PadController* const newPad(); /// Create new Pointer - will return NULL if no more available. Controller::PointerController* const newPointer(); /// Grab another instance of the specified Keyboard - returns NULL if not created. Controller::KeyboardController* const getKeyboard(const Controller::Id id); /// Grab another instance of the specified Joystick - returns NULL if not created. Controller::JoystickController* const getJoystick(const Controller::Id id); /// Grab another instance of the specified Pointer - returns NULL if not created. Controller::PointerController* const getPointer(const Controller::Id id); /// Grab another instance of the specified Pad - returns NULL if not created. Controller::PadController* const getPad(const Controller::Id id); /// Destroy a Keyboard. void destroyKeyboard(Controller::KeyboardController* const keyboard); /// Destroy a Joystick. void destroyJoystick(Controller::JoystickController* const joystick); /// Destroy a Pad. void destroyPad(Controller::PadController* const pad); /// Destroy a Pointer. void destroyPointer(Controller::PointerController* const pointer); protected: const Controller::KeyType convertKey(X11::KeySym x11Key); private: Controller::KeyboardController* mKeyboard; Controller::PointerController* mPointer; std::vector<Controller::JoystickController*> mJoysticks; std::vector<Controller::PadController*> mPads; EventSystem* mEventSystem; }; } #endif
Effectively, we just define what's already in the CommonInputSystem, but we also have a bunch of custom buttons, so we define these as well.
We also define the Ids for the Left and Right Nubs, and the Game Button set.
PandoraInputSystem.cpp
I'm not going to print up all of this, as it's rather large.. if you want the full code, check the SVN.
Most of the class does exactly what you think, so we'll take a look at just a few of the functions here.
InputSystem::InputSystem(EventSystem* const eventSystem) : CommonInputSystem() , mKeyboard(0) , mPointer(0) , mJoysticks() , mPads() , mEventSystem(eventSystem) { mJoysticks.push_back(new Controller::JoystickController(Controller::Pandora::LeftNub, 2U, 0U)); // ID, 2 Axes, 0 Buttons mJoysticks.push_back(new Controller::JoystickController(Controller::Pandora::RightNub, 2U, 0U)); // ID, 2 Axes, 0 Buttons mPads.push_back(new Controller::PadController(Controller::Pandora::Buttons, 15U)); // 15 Buttons - Up/Down/Left/Right - Start/Select/Pandora - Y/B/X/A - L1/R1/L2/R2 ( L2/R2 being optional, of course ) pnd_evdev_open(pnd_evdev_dpads); pnd_evdev_open(pnd_evdev_nub1); pnd_evdev_open(pnd_evdev_nub2); }
The Constructor is fairly straight-forward... we set up how many Nubs we have ( marking them as Joystick devices of 2 axes and 0 buttons, effectively ) along with the buttons, and then call the libpnd functions to open up the devices we want.
InputSystem::~InputSystem() { if (0 != mKeyboard) { delete mKeyboard; mKeyboard = 0; } if (0 != mPointer) { delete mPointer; mPointer = 0; } for (std::vector<Controller::JoystickController*>::iterator itr(mJoysticks.begin()); itr != mJoysticks.end(); ++itr) delete (*itr); mJoysticks.clear(); for (std::vector<Controller::PadController*>::iterator itr(mPads.begin()); itr != mPads.end(); ++itr) delete (*itr); mPads.clear(); pnd_evdev_close(pnd_evdev_dpads); pnd_evdev_close(pnd_evdev_nub1); pnd_evdev_close(pnd_evdev_nub2); }
The Destructor tidies up our mess, with some actual checking to ensure we're being sane about it.
void InputSystem::update() { // Use libPND to update Nubs and Buttons... keyboard and pointer come through as events. pnd_evdev_catchup(0); pnd_nubstate_t nubState; // Left Nub if (pnd_evdev_nub_state(pnd_evdev_nub1, &nubState) > 0) { mJoysticks[Controller::Pandora::LeftNub]->setAxis(Controller::Axis::X, static_cast<float>(nubState.x)); mJoysticks[Controller::Pandora::LeftNub]->setAxis(Controller::Axis::Y, static_cast<float>(nubState.y)); } // Right Nub if (pnd_evdev_nub_state(pnd_evdev_nub2, &nubState) > 0) { mJoysticks[Controller::Pandora::RightNub]->setAxis(Controller::Axis::X, static_cast<float>(nubState.x)); mJoysticks[Controller::Pandora::RightNub]->setAxis(Controller::Axis::Y, static_cast<float>(nubState.y)); } // Buttons Controller::Button buttonState(pnd_evdev_dpad_state(pnd_evdev_dpads)); mPads[Controller::Pandora::Buttons]->setButton(Controller::Pandora::Left, (buttonState & pnd_evdev_left)); mPads[Controller::Pandora::Buttons]->setButton(Controller::Pandora::Right, (buttonState & pnd_evdev_right)); mPads[Controller::Pandora::Buttons]->setButton(Controller::Pandora::Up, (buttonState & pnd_evdev_up)); mPads[Controller::Pandora::Buttons]->setButton(Controller::Pandora::Down, (buttonState & pnd_evdev_down)); mPads[Controller::Pandora::Buttons]->setButton(Controller::Pandora::X, (buttonState & pnd_evdev_x)); mPads[Controller::Pandora::Buttons]->setButton(Controller::Pandora::Y, (buttonState & pnd_evdev_y)); mPads[Controller::Pandora::Buttons]->setButton(Controller::Pandora::A, (buttonState & pnd_evdev_a)); mPads[Controller::Pandora::Buttons]->setButton(Controller::Pandora::B, (buttonState & pnd_evdev_b)); mPads[Controller::Pandora::Buttons]->setButton(Controller::Pandora::L1, (buttonState & pnd_evdev_ltrigger)); mPads[Controller::Pandora::Buttons]->setButton(Controller::Pandora::R1, (buttonState & pnd_evdev_rtrigger)); mPads[Controller::Pandora::Buttons]->setButton(Controller::Pandora::Start, (buttonState & pnd_evdev_start)); mPads[Controller::Pandora::Buttons]->setButton(Controller::Pandora::Select, (buttonState & pnd_evdev_select)); mPads[Controller::Pandora::Buttons]->setButton(Controller::Pandora::Pandora, (buttonState & pnd_evdev_pandora)); }
Now then, the update function actually performs a fair bit of logic here.
The cleanliness of abusing namespaces also makes this fairly readable - see, method in my madness!
The first thing we do is effectively poll the pnd device state to get everything.
We then update the nubs one at a time.
Then we move on to the buttons themselves, which are stored in a bitmask - hence the funky use of the ampersand.
This is all hidden out of the user's way though, so all they need to call is const float value(pandoraButtons->getButton(Controller::Pandora::B)); and be done with it.
void InputSystem::receiveEvent(Event* const event) { if (event->getEventType() == X11Events::Input::Keyboard::KeyDown) mKeyboard->setKey(convertKey(reinterpret_cast<X11Events::Input::Keyboard::KeyDownEvent*>(event)->getKey()), true); else if (event->getEventType() == X11Events::Input::Keyboard::KeyUp) mKeyboard->setKey(convertKey(reinterpret_cast<X11Events::Input::Keyboard::KeyUpEvent*>(event)->getKey()), false); else if (event->getEventType() == X11Events::Input::Mouse::Moved) { mPointer->setAxis(Controller::Axis::X, reinterpret_cast<X11Events::Input::Mouse::MovedEvent*>(event)->getX()); mPointer->setAxis(Controller::Axis::Y, reinterpret_cast<X11Events::Input::Mouse::MovedEvent*>(event)->getY()); } else if (event->getEventType() == X11Events::Input::Mouse::ButtonDown) mPointer->setButton(reinterpret_cast<X11Events::Input::Mouse::ButtonDownEvent*>(event)->getButton(), 1.0F); else if (event->getEventType() == X11Events::Input::Mouse::ButtonUp) mPointer->setButton(reinterpret_cast<X11Events::Input::Mouse::ButtonUpEvent*>(event)->getButton(), 0.0F); }
Finally, we actually get to implement a receiveEvent function!
These get sent from our Event System as they're dealt with via X11.. so we sit and wait for an event, and figure out what type it is before acting on it.
As we currently have EventType set to a string, we can't use a switch statement here... however, I'm planning on updating this to use unsigned ints, which will let us switch on the type - much faster than comparing strings!
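Just to show where that's going, here's the sort of thing the chain of compares above could collapse into once EventType is a plain unsigned int - purely hypothetical at this stage, none of these constants exist yet:

// Hypothetical - assumes EventType becomes an unsigned int and these constants are registered somewhere.
switch (event->getEventType()) {
    case MOUSE_MOVED:        /* update pointer axes */  break;
    case MOUSE_BUTTON_DOWN:  /* set pointer button  */  break;
    case KEYBOARD_KEY_DOWN:  /* set key             */  break;
    default: break;
}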
The rest of the class does what you'd think... for example: create a new Keyboard interface, register ourselves as an observer of Keyboard events so we can pick them up, and return the Keyboard interface; or remove the Keyboard interface and deregister ourselves with the event system, and so on.
The only fun function is the one converting the X11 keyboard keysyms to our own Key codes... but that's just API gubbins.
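If you're curious what that gubbins looks like, it's essentially one big lookup. A heavily truncated sketch, where the Controller::KEY_* values other than KEY_Q are assumptions on my part ( the real table is in the SVN ):

// Truncated sketch - needs <X11/keysym.h> for the XK_* constants; the full table lives in the SVN.
const Controller::KeyType InputSystem::convertKey(X11::KeySym x11Key)
{
    switch (x11Key) {
        case XK_a:     return Controller::KEY_A;       // assumed name
        case XK_q:     return Controller::KEY_Q;       // used by the test below
        case XK_space: return Controller::KEY_SPACE;   // assumed name
        // ... and so on for every key we care about ...
        default:       return Controller::KEY_UNKNOWN; // assumed name
    }
}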
A Simple Test
Of course, you probably want to see this in action!
It's still a bit empty and not hugely exciting I'm afraid.. but we shall start fixing that soon!
We are at least cleaning up properly now :)
#include <cstdio> #include <cstdlib> #include "../../Events/EventSystem.h" #include "../../Input/InputSystem.h" #if defined(LINUX) #include "../../Graphics/Window/X11Window.h" #endif #if defined(GLX) #include "../../Graphics/Context/GLXContext.h" #elif defined(GLES1) #include "../../Graphics/Context/GLES1Context.h" #elif defined(GLES2) #include "../../Graphics/Context/GLES2Context.h" #endif #include "../../Input/Keyboard.h" #include "../../Input/Pad.h" using namespace GLESGAE; int main(void) { EventSystem* eventSystem(new EventSystem); InputSystem* inputSystem(new InputSystem(eventSystem)); #if defined(LINUX) X11Window* window(new X11Window); #endif #if defined(GLX) GLXContext* context(new GLXContext); #elif defined(GLES1) GLES1Context* context(new GLES1Context); #elif defined(GLES2) GLES2Context* context(new GLES2Context); #endif eventSystem->bindToWindow(window); context->bindToWindow(window); window->open(800, 480); context->initialise(); window->refresh(); Controller::KeyboardController* myKeyboard(inputSystem->newKeyboard()); #ifdef PANDORA Controller::PadController* pandoraButtons(inputSystem->getPad(Controller::Pandora::Buttons)); #endif while(false == myKeyboard->getKey(Controller::KEY_Q)) { window->refresh(); eventSystem->update(); inputSystem->update(); #ifdef PANDORA if (pandoraButtons->getButton(Controller::Pandora::B)) printf("B! I got a B! Is mummy proud? :D\n"); #endif } delete context; delete window; delete inputSystem; delete eventSystem; return 0; }
Building the Example
In the SVN there are Makefiles already set up for you... just trigger make -f MakefileES1.pandora or whatever your chosen configuration is, and it'll happily build and spit out a GLESGAE.pandora binary for you to run.
Alternatively, if you use CodeLite, there's a Workspace/Project set for you preconfigured.
Gotcha: This time I had an issue with libpnd... I was a bit heavy handed and just did ln -s /usr/lib/libpnd.so.1 /usr/lib/libpnd.so, which is naughty, as we should install the dev package of libpnd instead. This is why I sneakily included a version of one of the headers in the repository - something I shall fix in the next commit.
Next Time
We'll be looking at the Renderers.. we'll have two of them - FixedFunctionRenderer and ShaderBasedRenderer - so depending on how it works out, it may be two articles or a really really heavy one!
Following that, we shall start setting up some engine defines, declare the data pipeline, and perhaps maybe even load up a model!
After that, some State Systems to organise the mess we're starting to develop.
GLESGAE - Making a Mesh
Introduction
Originally, I was just going to dive right in to Shader Based Rendering.
This was a stupid idea... what was I going to render?
So, I've broken up the original behemoth of a Chapter and split out the Vertex Array part, as it's fairly self-contained and could prove handy as an alternative to the existing tutorial on the Wiki.
Vertex Array Overview
In the olden days, OpenGL had Immediate Mode.
This allowed you to do glBegin() .. .. .. glEnd() and effectively whatever points and things you specified would be drawn in that order, immediately, on the screen. Nice and simple.
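If you've never seen it, immediate mode for a single coloured triangle looked something like this ( desktop GL only - GLES never shipped glBegin/glEnd at all ):

// Old-school immediate mode, for contrast - one call per attribute, per vertex, every single frame.
glBegin(GL_TRIANGLES);
    glColor3f(0.0F, 1.0F, 0.0F); glVertex3f(-1.0F,  1.0F, 0.0F);
    glColor3f(1.0F, 0.0F, 0.0F); glVertex3f( 1.0F,  1.0F, 0.0F);
    glColor3f(0.0F, 0.0F, 1.0F); glVertex3f( 1.0F, -1.0F, 0.0F);
glEnd();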
Obviously this was a bit iffy, so Display Lists were developed to take a copy of your glBegin/glEnd logic's final outcome, and instance it all over the place instead of stopping to plot your points again. Magic!
However, this was still slow in that you still had to do the initial glBegin/glEnd logic which, although easy enough, is rather wordy and has a hit on what the Graphics Card is actually doing. This is what Vertex Arrays are designed to fix.
Vertex Arrays describe Meshes.
How they describe them is effectively up to you, but in general they're going to have Vertices, Normals, Texture Co-ordinates, Colours and potentially Indices as well.
These can all be managed in one of two ways - separate Vertex Arrays for each Attribute, or one giant Interleaved Array with relevant offsets and stride values.
In GLESGAE, we use interleaved arrays.
Interleave my What-now?
Interleaving is just a fancy way of squishing all the data into one array. This actually works out faster, as we know for a fact that all our Vertex information is going to be in one long memory chunk, so we just set a pointer somewhere and run through it all, rather than having to jump about all the time to find bits and pieces.
Here's some example vertex data - in particular, the vertex data we will be drawing later on:
float vertexData[24] = {// Position - 12 floats -1.0F, 1.0F, 0.0F, 1.0F, 1.0F, 0.0F, 1.0F, -1.0F, 0.0F, -1.0F, -1.0F, 0.0F, // Colour - 12 floats 0.0F, 1.0F, 0.0F, 1.0F, 0.0F, 0.0F, 0.0F, 0.0F, 1.0F, 1.0F, 1.0F, 1.0F};
As you can see, I'm using three floats to define a positional point, and three floats to define a colour.
With four points we can draw a quad - no, really! - as GLESGAE also deals with index buffers. Here's the one for the quad:
unsigned char indexData[6] = { 0, 1, 2, 2, 3, 0 };
This is stating that we're using point 0, 1 and 2 as Triangle 1, and 2, 3 and 0 as Triangle 2 - making up our quad on the screen.
- Point 0 corresponds to -1.0F, 1.0F, 0.0F
- Point 1 corresponds to 1.0F, 1.0F, 0.0F
- Point 2 corresponds to 1.0F, -1.0F, 0.0F
and so on...
Defining a Format
So let's go create a Vertex Buffer class to tie everything up, and allow us to tell the engine how to draw our vertex data.
VertexBuffer.h
#ifndef _VERTEX_BUFFER_H_ #define _VERTEX_BUFFER_H_ #include <vector> namespace GLESGAE { class VertexBuffer { public: enum FormatType { // Float FORMAT_CUSTOM_4F , FORMAT_CUSTOM_3F , FORMAT_CUSTOM_2F , FORMAT_POSITION_2F , FORMAT_POSITION_3F , FORMAT_POSITION_4F , FORMAT_NORMAL_3F , FORMAT_COLOUR_3F // Not available in GLES1 , FORMAT_COLOUR_4F , FORMAT_TEXTURE_2F , FORMAT_TEXTURE_3F , FORMAT_TEXTURE_4F // Unsigned/Byte , FORMAT_CUSTOM_2B , FORMAT_CUSTOM_3B , FORMAT_CUSTOM_4B , FORMAT_POSITION_2B , FORMAT_POSITION_3B , FORMAT_POSITION_4B , FORMAT_NORMAL_3B , FORMAT_COLOUR_3UB // Not available in GLES1 , FORMAT_COLOUR_4UB , FORMAT_TEXTURE_2B , FORMAT_TEXTURE_3B , FORMAT_TEXTURE_4B // Short , FORMAT_CUSTOM_2S , FORMAT_CUSTOM_3S , FORMAT_CUSTOM_4S , FORMAT_POSITION_2S , FORMAT_POSITION_3S , FORMAT_POSITION_4S , FORMAT_NORMAL_3S , FORMAT_COLOUR_3S // Not available in GLES1 , FORMAT_COLOUR_4S // Not available in GLES1 , FORMAT_TEXTURE_2S , FORMAT_TEXTURE_3S , FORMAT_TEXTURE_4S }; class Format { public: Format(const FormatType type, unsigned int offset) : mType(type) , mOffset(offset) { switch (mType) { case FORMAT_CUSTOM_2F: case FORMAT_POSITION_2F: case FORMAT_TEXTURE_2F: mSize = sizeof(float) * 2; break; case FORMAT_CUSTOM_3F: case FORMAT_POSITION_3F: case FORMAT_NORMAL_3F: case FORMAT_COLOUR_3F: case FORMAT_TEXTURE_3F: mSize = sizeof(float) * 3; break; case FORMAT_CUSTOM_4F: case FORMAT_POSITION_4F: case FORMAT_COLOUR_4F: case FORMAT_TEXTURE_4F: mSize = sizeof(float) * 4; break; case FORMAT_CUSTOM_2B: case FORMAT_POSITION_2B: case FORMAT_TEXTURE_2B: mSize = sizeof(char) * 2; break; case FORMAT_CUSTOM_3B: case FORMAT_POSITION_3B: case FORMAT_NORMAL_3B: case FORMAT_COLOUR_3UB: case FORMAT_TEXTURE_3B: mSize = sizeof(char) * 3; break; case FORMAT_CUSTOM_4B: case FORMAT_POSITION_4B: case FORMAT_COLOUR_4UB: case FORMAT_TEXTURE_4B: mSize = sizeof(char) * 4; case FORMAT_CUSTOM_2S: case FORMAT_POSITION_2S: case FORMAT_TEXTURE_2S: mSize = sizeof(short) * 2; break; case FORMAT_CUSTOM_3S: case FORMAT_POSITION_3S: case FORMAT_NORMAL_3S: case FORMAT_COLOUR_3S: case FORMAT_TEXTURE_3S: mSize = sizeof(short) * 3; break; case FORMAT_CUSTOM_4S: case FORMAT_POSITION_4S: case FORMAT_COLOUR_4S: case FORMAT_TEXTURE_4S: mSize = sizeof(short) * 4; break; default: break; }; } /// Retrieve the type of this Format Identifier const FormatType getType() const { return mType; } /// Retrieve the offset of this Format Identifier const unsigned int getOffset() const { return mOffset; } /// Retrieve the size of this Format Identifier const unsigned int getSize() const { return mSize; } private: FormatType mType; unsigned int mSize; unsigned int mOffset; }; VertexBuffer(unsigned char* const data, const unsigned int size, const std::vector<Format>& format); VertexBuffer(unsigned char* const data, const unsigned int size); VertexBuffer(const VertexBuffer& vertexBuffer); /// Retrieve format details const std::vector<Format>& getFormat() const { return mFormat; } /// Retrieve data const unsigned char* getData() const { return mData; } /// Retrieve size const unsigned int getSize() const { return mSize; } /// Retrieve stride const unsigned int getStride() const { return mStride; } /// Add a Format Identifier void addFormatIdentifier(const FormatType formatType, const unsigned int amount); protected: /// Protected equals operator - use the copy constructor instead. 
VertexBuffer operator=(const VertexBuffer&) { return *this; } private: unsigned char* mData; unsigned int mSize; unsigned int mStride; std::vector<Format> mFormat; }; } #endif
Firstly, I'm being my usual pedantic self and ensuring I can support everything I can get away with. There are a few things which ES1 doesn't support, however, which I have highlighted.
To render anything efficiently - if you can at all get away with it - your meshes should really use the smallest data unit you can: unsigned bytes for Colour values, shorts for Vertices ( bytes are probably a bit too small for large meshes ), and so on. Also be careful in that ES 1 expects 4 colour components rather than 3 - it requires that extra alpha bit!
Ignore the Custom declarations for now, as they're for Shader based rendering, where your Vertex Attributes can be whatever you like and you're not limited to the standard four types.
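As an example of squeezing things down, the colours in our quad could be stored as four unsigned bytes per vertex instead of three floats - 16 bytes rather than 48 - and ES 1 gets the alpha component it insists on into the bargain. Illustrative only:

// Illustrative: the same four vertex colours packed as RGBA unsigned bytes.
unsigned char colourData[16] = {
      0, 255,   0, 255     // green
,   255,   0,   0, 255     // red
,     0,   0, 255, 255     // blue
,   255, 255, 255, 255 };  // white
// ...which would be declared to the engine with:
// vertexBuffer->addFormatIdentifier(VertexBuffer::FORMAT_COLOUR_4UB, 4U);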
Of course, the irony is that as a fair chunk of the code is defined in the header, the meat of the object file is actually pretty small:
VertexBuffer.cpp
#include "VertexBuffer.h" #include <cstring> // for memcpy using namespace GLESGAE; VertexBuffer::VertexBuffer(unsigned char* const data, const unsigned int size, const std::vector<Format>& format) : mData(new unsigned char[size]) , mSize(size) , mFormat(format) , mStride(0U) { std::memcpy(mData, data, size); for (std::vector<Format>::const_iterator itr(mFormat.begin()); itr < mFormat.end(); ++itr) mStride += itr->getSize(); } VertexBuffer::VertexBuffer(unsigned char* const data, const unsigned int size) : mData(new unsigned char[size]) , mSize(size) , mFormat() , mStride(0U) { std::memcpy(mData, data, size); } VertexBuffer::VertexBuffer(const VertexBuffer& vertexBuffer) : mData(vertexBuffer.mData) , mSize(vertexBuffer.mSize) , mFormat(vertexBuffer.mFormat) , mStride(vertexBuffer.mStride) { } void VertexBuffer::addFormatIdentifier(const FormatType formatType, const unsigned int amount) { Format newFormat(formatType, mStride); mStride += newFormat.getSize() * amount; mFormat.push_back(newFormat); }
All we're really doing here is making sure we copy the mesh data properly when we have to, deal with the Format parameters properly, and update our current stride amount when adding a new format identifier. Simples!
IndexBuffer.h
Considering all our Index Buffer has to do is store the order of the vertices to render, there's not much to it.
#ifndef _INDEX_BUFFER_H_ #define _INDEX_BUFFER_H_ namespace GLESGAE { class IndexBuffer { public: enum FormatType { FORMAT_FLOAT // unsupported by ES 1 , FORMAT_UNSIGNED_BYTE , FORMAT_UNSIGNED_SHORT }; IndexBuffer(unsigned char* const data, const unsigned int size, const FormatType format); IndexBuffer(const IndexBuffer& indexBuffer); /// Retrieve format details const FormatType getFormat() const { return mFormat; } /// Retrieve data const unsigned char* getData() const { return mData; } /// Retrieve size const unsigned int getSize() const { return mSize; } protected: /// Protected equals operator - use the copy constructor instead. IndexBuffer operator=(const IndexBuffer&) { return *this; } private: unsigned char* mData; unsigned int mSize; FormatType mFormat; }; } #endif
Again, as with the Vertex Buffer, if you can at all get away with it, use the smallest data unit you can for your indices.
Be advised that ES 1 does not support Float indices so you're best using unsigned short. Actually, I'd recommend unsigned short overall... if you're dealing with meshes large enough to require Floats, you're probably doing it wrong or have pretty specific needs which this set of tutorials probably isn't going to help you with.
IndexBuffer.cpp
#include "VertexBuffer.h" #include <cstring> // for memcpy using namespace GLESGAE; VertexBuffer::VertexBuffer(unsigned char* const data, const unsigned int size, const std::vector<Format>& format) : mData(new unsigned char[size]) , mSize(size) , mFormat(format) , mStride(0U) { std::memcpy(mData, data, size); for (std::vector<Format>::const_iterator itr(mFormat.begin()); itr < mFormat.end(); ++itr) mStride += itr->getSize(); } VertexBuffer::VertexBuffer(unsigned char* const data, const unsigned int size) : mData(new unsigned char[size]) , mSize(size) , mFormat() , mStride(0U) { std::memcpy(mData, data, size); } VertexBuffer::VertexBuffer(const VertexBuffer& vertexBuffer) : mData(vertexBuffer.mData) , mSize(vertexBuffer.mSize) , mFormat(vertexBuffer.mFormat) , mStride(vertexBuffer.mStride) { } void VertexBuffer::addFormatIdentifier(const FormatType formatType, const unsigned int amount) { Format newFormat(formatType, mStride); mStride += newFormat.getSize() * amount; mFormat.push_back(newFormat); }
The object file is just as weedy as the last one.
The Mesh Object
Now, to actually draw this, we're still in need of a few more bits and pieces.. but as we haven't really gotten to them yet, we'll just do some place holders.
Place holder Objects
Generally, Meshes will have a Material definition. This will tell the Graphics System how shiny the object is, for example, and how the lighting affects it. However, how Materials are actually used is a bit dependent upon the Rendering System's context, so they're a bit out of scope for now.
We can therefore get away with the following:
Material.h
#ifndef _MATERIAL_H_ #define _MATERIAL_H_ #include <vector> namespace GLESGAE { class Shader; class Texture; class Material { public: Material(); /// Grab the Shader that's linked to this Material Shader* const getShader() const { return mShader; } /// Set a new Shader on this Material void setShader(Shader* const shader) { mShader = shader; } /// Grab a Texture Texture* const getTexture(unsigned int index) const { return mTextures[index]; } private: Shader* mShader; std::vector<Texture*> mTextures; }; } #endif
You might have noticed the Shader and Texture pointers there.
Luckily, as we've forward declared them up top, we don't really need to define them much more than this, so we can ignore them for now!
Transforms
This one's a bit more complex as generally a Transform Matrix is a 4x4 Matrix, and requires a bit of Math knowledge to fiddle with.
Again, place holder time:
Matrix4.h
#ifndef _MATRIX4_H_ #define _MATRIX4_H_ namespace GLESGAE { class Matrix4 { public: Matrix4(); /// Retrieve a pointer to the matrix float* const getData() { return mMatrix; } /// Set a new matrix void setMatrix(const float matrix[16]); /// Set to Identity void setToIdentity(); /// Set to Zero void setToZero(); private: float mMatrix[16]; }; } #endif
Matrix4.cpp
#include "Matrix4.h" #include <cstring> // for memcpy using namespace GLESGAE; Matrix4::Matrix4() : mMatrix() { setToIdentity(); } void Matrix4::setMatrix(const float matrix[16]) { std::memcpy(mMatrix, matrix, sizeof(float) * 16); } void Matrix4::setToIdentity() { // 0 1 2 3 // 4 5 6 7 // 8 9 10 11 // 12 13 14 15 mMatrix[0] = mMatrix[5] = mMatrix[10] = mMatrix[15] = 1.0F; mMatrix[1] = mMatrix[2] = mMatrix[3] = 0.0F; mMatrix[4] = mMatrix[6] = mMatrix[7] = 0.0F; mMatrix[8] = mMatrix[9] = mMatrix[11] = 0.0F; mMatrix[12] = mMatrix[13] = mMatrix[14] = 0.0F; } void Matrix4::setToZero() { mMatrix[0] = mMatrix[1] = mMatrix[2] = mMatrix[3] = 0.0F; mMatrix[4] = mMatrix[5] = mMatrix[6] = mMatrix[7] = 0.0F; mMatrix[8] = mMatrix[9] = mMatrix[10] = mMatrix[11] = 0.0F; mMatrix[12] = mMatrix[13] = mMatrix[14] = mMatrix[15] = 0.0F; }
Fairly standard stuff.. an Identity matrix just has 1's down the diagonal and 0's elsewhere as shown, and is pretty handy for drawing stuff dead centre. We'll be using this to draw our quad on the screen starting from the centre so it fills the screen with coloured joy. A zero matrix pretty much explains itself.
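As a quick aside on how we'll actually use it: OpenGL matrices are column-major in memory, so once this gets handed over to GL, the translation ends up in elements 12, 13 and 14. Nudging our quad along X would therefore look something like this ( just an illustration - proper transform helpers come later ):

// Illustration: writing a translation straight into the matrix.
Matrix4 transform;                      // starts off as identity
float* const data(transform.getData());
data[12] = 0.5F;                        // move half a unit along X
data[13] = 0.0F;                        // Y translation
data[14] = 0.0F;                        // Z translation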
Mesh.h
Binding all this up into one Mesh object is then fairly straight-forward:
#ifndef _MESH_H_ #define _MESH_H_ namespace GLESGAE { class VertexBuffer; class IndexBuffer; class Material; class Matrix4; class Mesh { public: Mesh(VertexBuffer* const vertexBuffer , IndexBuffer* const indexBuffer , Material* const material , Matrix4* const matrix); ~Mesh(); /// Grab the current Vertex Buffer - read-only const VertexBuffer* const getVertexBuffer() const { return mVertexBuffer; } /// Grab the current Index Buffer - read-only const IndexBuffer* const getIndexBuffer() const { return mIndexBuffer; } /// Grab the current Material - read-only const Material* const getMaterial() const { return mMaterial; } /// Grab the current Transform Matrix - read-only const Matrix4* const getMatrix() const { return mMatrix; } /// Grab an Editable Vertex Buffer VertexBuffer* const editVertexBuffer() { return mVertexBuffer; } /// Grab an Editable Index Buffer IndexBuffer* const editIndexBuffer() { return mIndexBuffer; } /// Grab an Editable Material Material* const editMaterial() { return mMaterial; } /// Grab an Editable Transform Matrix Matrix4* const editMatrix() { return mMatrix; } private: VertexBuffer* mVertexBuffer; IndexBuffer* mIndexBuffer; Material* mMaterial; Matrix4* mMatrix; }; } #endif
Mesh.cpp
#include "Mesh.h" using namespace GLESGAE; Mesh::Mesh(VertexBuffer* const vertexBuffer, IndexBuffer* const indexBuffer, Material* const material, Matrix4* const matrix) : mVertexBuffer(vertexBuffer) , mIndexBuffer(indexBuffer) , mMaterial(material) , mMatrix(matrix) { } Mesh::~Mesh() { //TODO: Resource Management delete mVertexBuffer; delete mIndexBuffer; delete mMaterial; delete mMatrix; }
What could be simpler than that? A bunch of accessors, and a constructor which takes in a pointer to each part that makes up the Mesh.
Do note that we delete everything in the destructor - we're assuming that as soon as you give the Mesh the stuff it needs, it takes full ownership of it.
Making a Mesh
Going back to our earlier example of showing you some vertex data to define a quad, here it is again in a handy function using what we've just done:
Mesh* makeSprite() { float vertexData[24] = {// Position - 12 floats -1.0F, 1.0F, 0.0F, 1.0F, 1.0F, 0.0F, 1.0F, -1.0F, 0.0F, -1.0F, -1.0F, 0.0F, // Colour - 12 floats 0.0F, 1.0F, 0.0F, 1.0F, 0.0F, 0.0F, 0.0F, 0.0F, 1.0F, 1.0F, 1.0F, 1.0F}; unsigned int vertexSize = 24 * sizeof(float); unsigned char indexData[6] = { 0, 1, 2, 2, 3, 0 }; unsigned int indexSize = 6 * sizeof(unsigned char); VertexBuffer* newVertexBuffer = new VertexBuffer(reinterpret_cast<unsigned char*>(&vertexData), vertexSize); newVertexBuffer->addFormatIdentifier(VertexBuffer::FORMAT_POSITION_3F, 4U); newVertexBuffer->addFormatIdentifier(VertexBuffer::FORMAT_COLOUR_3F, 4U); IndexBuffer* newIndexBuffer = new IndexBuffer(reinterpret_cast<unsigned char*>(&indexData), indexSize, IndexBuffer::FORMAT_UNSIGNED_BYTE); Material* newMaterial = new Material; Matrix4* newTransform = new Matrix4; return new Mesh(newVertexBuffer, newIndexBuffer, newMaterial, newTransform); }
Nice and simple, huh? While I'm not using the smallest format for everything, I am doing a reasonably even mix.
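It's worth running the numbers on that once, because it shows what addFormatIdentifier is actually recording. The quad's data is laid out as a block of positions followed by a block of colours, so:

// Offsets recorded by the two addFormatIdentifier calls above:
//   FORMAT_POSITION_3F, 4U : offset  0, advances the running offset by 3 floats * 4 bytes * 4 points = 48
//   FORMAT_COLOUR_3F,   4U : offset 48, advances it by another 48
// Running total: 96 bytes, which matches vertexSize ( 24 * sizeof(float) ).
// The draw code later uses those offsets directly: vertexBuffer->getData() + format.getOffset().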
Checking out the SVN
There isn't actually any useful code for this Chapter that would make it standalone.
You'll just need to bear with me and wait till you get to the Rendering Contexts.
Building the Example
There is no example this time, as there'd be nothing to show!
Instead, jump to either GLESGAE:Fixed Function Rendering Contexts or GLESGAE:Shader Based Contexts for displaying your Mesh in the chosen manner.
Next Time
We'll be looking at rendering this mess... err, mesh.
GLESGAE - Fixed Function Rendering Contexts
Introduction
Fixed Function Rendering is actually pretty easy.
You've a specific set of things you can and cannot do, and also your Meshes must conform to a reasonable standard for the most part.
This makes getting something up and running really quick and easy.
Following on from GLESGAE:Making a Mesh, we now have a Mesh object available to draw... so let's go ahead and do so.
Fast Track
We're on SVN revision 4 now, which includes all the mesh code from the previous guide. svn co -r 4 http://svn3.xp-dev.com/svn/glesgae/trunk/ glesgae
The Fixed Function Context
Our Fixed Function Context is probably still quite empty, so let's start filling it up with some functions:
FixedFunctionContext.h
#ifndef _FIXED_FUNCTION_CONTEXT_H_ #define _FIXED_FUNCTION_CONTEXT_H_ #if defined(GLX) #include "../GLee.h" #elif defined(PANDORA) #include <GLES/gl.h> #endif namespace GLESGAE { class Material; class Mesh; class FixedFunctionContext { /** The quickest way for a Fixed Function pipeline to work, is to have all data match up to a common format. As such, we use global state to deal with Vertex Attributes. This means that you really need to order your data correctly so you're doing as few state switches as possible when rendering - the context just renders, it doesn't organise for you. Additionally, dealing with multiple texture co-ordinates is a pain as we can only really deal with them as and when we get them, and store how many we've enabled/disabled since last time. Also, with a Fixed Function pipeline, each Texture must have Texture Co-ordinates, so be mindful of that with your data! **/ public: FixedFunctionContext(); virtual ~FixedFunctionContext(); /// Enable Vertex Positions void enableFixedFunctionVertexPositions(); /// Disable Vertex Positions void disableFixedFunctionVertexPositions(); /// Enable Vertex Colours void enableFixedFunctionVertexColours(); /// Disable Vertex Colours void disableFixedFunctionVertexColours(); /// Enable Vertex Normals void enableFixedFunctionVertexNormals(); /// Disable Vertex Normals void disableFixedFunctionVertexNormals(); protected: /// Draw a Mesh using the Fixed Function Pipeline void drawMeshFixedFunction(Mesh* const mesh); /// Setup texturing - check if the requested texture unit is on and bind a texture from the material. void setupFixedFunctionTexturing(unsigned int* textureUnit, const Material* const material); /// Disable Texture Units void disableFixedFunctionTexturing(const unsigned int currentTextureUnit); private: bool mFixedFunctionTexUnits[8]; // 8 Texture Units sounds like they'd be enough to me! unsigned int mFixedFunctionLastTexUnit; // Last Texture Unit we were working on, in case it's the same. }; } #endif
FixedFunctionContext.cpp
#include "FixedFunctionContext.h" #include "../IndexBuffer.h" #include "../Material.h" #include "../Mesh.h" #include "../Texture.h" #include "../VertexBuffer.h" using namespace GLESGAE; FixedFunctionContext::FixedFunctionContext() : mFixedFunctionTexUnits() , mFixedFunctionLastTexUnit(0U) { // Mark all texture co-ordinate arrays as offline. for (unsigned int index(0U); index < 8U; ++index) mFixedFunctionTexUnits[index] = false; } FixedFunctionContext::~FixedFunctionContext() { } void FixedFunctionContext::drawMeshFixedFunction(Mesh* const mesh) { const IndexBuffer* const indexBuffer(mesh->getIndexBuffer()); const VertexBuffer* const vertexBuffer(mesh->getVertexBuffer()); const Material* const material(mesh->getMaterial()); unsigned int currentTextureUnit(0U); const std::vector<VertexBuffer::Format>& meshFormat(vertexBuffer->getFormat()); for (std::vector<VertexBuffer::Format>::const_iterator itr(meshFormat.begin()); itr < meshFormat.end(); ++itr) { switch (itr->getType()) { // Position case VertexBuffer::FORMAT_POSITION_2F: glVertexPointer(2, GL_FLOAT, 0, vertexBuffer->getData() + itr->getOffset()); break; case VertexBuffer::FORMAT_POSITION_3F: glVertexPointer(3, GL_FLOAT, 0, vertexBuffer->getData() + itr->getOffset()); break; case VertexBuffer::FORMAT_POSITION_4F: glVertexPointer(4, GL_FLOAT, 0, vertexBuffer->getData() + itr->getOffset()); break; case VertexBuffer::FORMAT_POSITION_2S: glVertexPointer(2, GL_SHORT, 0, vertexBuffer->getData() + itr->getOffset()); break; case VertexBuffer::FORMAT_POSITION_3S: glVertexPointer(3, GL_SHORT, 0, vertexBuffer->getData() + itr->getOffset()); break; case VertexBuffer::FORMAT_POSITION_4S: glVertexPointer(4, GL_SHORT, 0, vertexBuffer->getData() + itr->getOffset()); break; case VertexBuffer::FORMAT_POSITION_2B: glVertexPointer(2, GL_BYTE, 0, vertexBuffer->getData() + itr->getOffset()); break; case VertexBuffer::FORMAT_POSITION_3B: glVertexPointer(3, GL_BYTE, 0, vertexBuffer->getData() + itr->getOffset()); break; case VertexBuffer::FORMAT_POSITION_4B: glVertexPointer(4, GL_BYTE, 0, vertexBuffer->getData() + itr->getOffset()); break; // Normal case VertexBuffer::FORMAT_NORMAL_3F: glNormalPointer(GL_FLOAT, 0, vertexBuffer->getData() + itr->getOffset()); break; case VertexBuffer::FORMAT_NORMAL_3S: glNormalPointer(GL_SHORT, 0, vertexBuffer->getData() + itr->getOffset()); break; case VertexBuffer::FORMAT_NORMAL_3B: glNormalPointer(GL_BYTE, 0, vertexBuffer->getData() + itr->getOffset()); break; // Colour case VertexBuffer::FORMAT_COLOUR_4F: glColorPointer(4, GL_FLOAT, 0, vertexBuffer->getData() + itr->getOffset()); break; case VertexBuffer::FORMAT_COLOUR_3F: glColorPointer(3, GL_FLOAT, 0, vertexBuffer->getData() + itr->getOffset()); break; case VertexBuffer::FORMAT_COLOUR_4S: glColorPointer(4, GL_SHORT, 0, vertexBuffer->getData() + itr->getOffset()); break; case VertexBuffer::FORMAT_COLOUR_3S: glColorPointer(3, GL_SHORT, 0, vertexBuffer->getData() + itr->getOffset()); break; case VertexBuffer::FORMAT_COLOUR_4UB: glColorPointer(4, GL_UNSIGNED_BYTE, 0, vertexBuffer->getData() + itr->getOffset()); break; case VertexBuffer::FORMAT_COLOUR_3UB: glColorPointer(3, GL_UNSIGNED_BYTE, 0, vertexBuffer->getData() + itr->getOffset()); break; // Textures and Co-ordinates case VertexBuffer::FORMAT_TEXTURE_2F: setupFixedFunctionTexturing(¤tTextureUnit, material); glTexCoordPointer(2, GL_FLOAT, 0, vertexBuffer->getData() + itr->getOffset()); break; case VertexBuffer::FORMAT_TEXTURE_3F: setupFixedFunctionTexturing(¤tTextureUnit, material); glTexCoordPointer(3, GL_FLOAT, 0, 
vertexBuffer->getData() + itr->getOffset()); break; case VertexBuffer::FORMAT_TEXTURE_4F: setupFixedFunctionTexturing(¤tTextureUnit, material); glTexCoordPointer(4, GL_FLOAT, 0, vertexBuffer->getData() + itr->getOffset()); break; case VertexBuffer::FORMAT_TEXTURE_2S: setupFixedFunctionTexturing(¤tTextureUnit, material); glTexCoordPointer(2, GL_SHORT, 0, vertexBuffer->getData() + itr->getOffset()); break; case VertexBuffer::FORMAT_TEXTURE_3S: setupFixedFunctionTexturing(¤tTextureUnit, material); glTexCoordPointer(3, GL_SHORT, 0, vertexBuffer->getData() + itr->getOffset()); break; case VertexBuffer::FORMAT_TEXTURE_4S: setupFixedFunctionTexturing(¤tTextureUnit, material); glTexCoordPointer(4, GL_SHORT, 0, vertexBuffer->getData() + itr->getOffset()); break; case VertexBuffer::FORMAT_TEXTURE_2B: setupFixedFunctionTexturing(¤tTextureUnit, material); glTexCoordPointer(2, GL_BYTE, 0, vertexBuffer->getData() + itr->getOffset()); break; case VertexBuffer::FORMAT_TEXTURE_3B: setupFixedFunctionTexturing(¤tTextureUnit, material); glTexCoordPointer(3, GL_BYTE, 0, vertexBuffer->getData() + itr->getOffset()); break; case VertexBuffer::FORMAT_TEXTURE_4B: setupFixedFunctionTexturing(¤tTextureUnit, material); glTexCoordPointer(4, GL_BYTE, 0, vertexBuffer->getData() + itr->getOffset()); break; default: break; }; } disableFixedFunctionTexturing(currentTextureUnit); // Disable any excess texture units switch (indexBuffer->getFormat()) { case IndexBuffer::FORMAT_FLOAT: glDrawElements(GL_TRIANGLES, indexBuffer->getSize(), GL_FLOAT, indexBuffer->getData()); break; case IndexBuffer::FORMAT_UNSIGNED_BYTE: glDrawElements(GL_TRIANGLES, indexBuffer->getSize(), GL_UNSIGNED_BYTE, indexBuffer->getData()); break; case IndexBuffer::FORMAT_UNSIGNED_SHORT: glDrawElements(GL_TRIANGLES, indexBuffer->getSize(), GL_UNSIGNED_SHORT, indexBuffer->getData()); break; default: break; }; mFixedFunctionLastTexUnit = currentTextureUnit; } void FixedFunctionContext::enableFixedFunctionVertexPositions() { glEnableClientState(GL_VERTEX_ARRAY); } void FixedFunctionContext::disableFixedFunctionVertexPositions() { glDisableClientState(GL_VERTEX_ARRAY); } void FixedFunctionContext::enableFixedFunctionVertexColours() { glEnableClientState(GL_COLOR_ARRAY); } void FixedFunctionContext::disableFixedFunctionVertexColours() { glDisableClientState(GL_COLOR_ARRAY); } void FixedFunctionContext::enableFixedFunctionVertexNormals() { glEnableClientState(GL_NORMAL_ARRAY); } void FixedFunctionContext::disableFixedFunctionVertexNormals() { glDisableClientState(GL_NORMAL_ARRAY); } void FixedFunctionContext::setupFixedFunctionTexturing(unsigned int* textureUnit, const Material* const material) { if (false == mFixedFunctionTexUnits[*textureUnit]) { // This texture unit isn't currently enabled mFixedFunctionTexUnits[*textureUnit] = true; glClientActiveTexture(GL_TEXTURE0 + *textureUnit); glEnable(GL_TEXTURE_2D); glEnableClientState(GL_TEXTURE_COORD_ARRAY); } else { if (*textureUnit != mFixedFunctionLastTexUnit) { // This texture unit is enabled but it's not the current one glActiveTexture(GL_TEXTURE0 + *textureUnit); } } Texture* const texture(material->getTexture(*textureUnit)); glBindTexture(GL_TEXTURE_2D, texture->getId()); *textureUnit++; } void FixedFunctionContext::disableFixedFunctionTexturing(const unsigned int currentTextureUnit) { unsigned int delta(mFixedFunctionLastTexUnit); while (delta > currentTextureUnit) { // We're using less texture units than we need glClientActiveTexture(GL_TEXTURE0 + delta); glDisable(GL_TEXTURE_2D); 
glDisableClientState(GL_TEXTURE_COORD_ARRAY); --delta; } }
Most of this should be fairly straightforward... but let's go through some of it anyway.
Enabling/Disabling Vertex Attributes
A Vertex Attribute is generally one of the following:
- Vertex Position
- Vertex Normal
- Vertex Colour
- Vertex Texture Co-ordinate
We need to enable/disable these based on what our Vertex Format is.
It's also rather costly to enable/disable these at will, so generally we'll want all our meshes to correspond to a set standard. Therefore, the enable/disableFixedFunctionVertex* functions affect your global rendering state, and are public to be accessed when needed.
Additionally, enabling and disabling Texture Units is just as costly, so we have to store our own state here too.
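To make that concrete, here's a minimal sketch of the state-caching idea - not the engine's actual code, and mColoursEnabled is a hypothetical cached flag - showing how a redundant GL call gets skipped entirely:

void setVertexColoursEnabled(const bool enabled)
{
    if (enabled == mColoursEnabled)
        return; // already in the requested state - no GL call needed

    if (enabled)
        glEnableClientState(GL_COLOR_ARRAY);
    else
        glDisableClientState(GL_COLOR_ARRAY);

    mColoursEnabled = enabled; // remember what we set, so we can skip it next time
}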
Rendering via Vertex Arrays
The real meat of the class is the draw function.
It's not really that scary when you sit down and pick it apart, though! Really!
Most of it is just me being pedantic about supporting as wide a range of vertex formats as possible.
So let's take one strip out - in particular, the stuff we need to draw our Mesh described in the previous Chapter:
void FixedFunctionContext::drawMeshFixedFunction(Mesh* const mesh)
{
    const IndexBuffer* const indexBuffer(mesh->getIndexBuffer());
    const VertexBuffer* const vertexBuffer(mesh->getVertexBuffer());
    const Material* const material(mesh->getMaterial());
    unsigned int currentTextureUnit(0U);

    const std::vector<VertexBuffer::Format>& meshFormat(vertexBuffer->getFormat());
    for (std::vector<VertexBuffer::Format>::const_iterator itr(meshFormat.begin()); itr < meshFormat.end(); ++itr) {
        switch (itr->getType()) {
            // Position
            case VertexBuffer::FORMAT_POSITION_3F: glVertexPointer(3, GL_FLOAT, 0, vertexBuffer->getData() + itr->getOffset()); break;
            case VertexBuffer::FORMAT_COLOUR_3F: glColorPointer(3, GL_FLOAT, 0, vertexBuffer->getData() + itr->getOffset()); break;
            default: break;
        };
    }
So, a bit more concise and easier to read now!
We don't have any normal data, so we don't draw a normal. We do have Positional data, represented by 3 Floats, so we mark a glVertexPointer to the current iterator position offset from the vertex Buffer's data pointer - which is our big fat interleaved array of goodies.
The iterator itself runs through the Vertex Buffer's format array - the one we described and set up in the newSprite function of the last guide - so it works through each Format descriptor, finds the matching case in the switch, and acts accordingly: setting the glVertexPointer, the glColorPointer, and so on.
    disableFixedFunctionTexturing(currentTextureUnit); // Disable any excess texture units

    switch (indexBuffer->getFormat()) {
        case IndexBuffer::FORMAT_UNSIGNED_BYTE: glDrawElements(GL_TRIANGLES, indexBuffer->getSize(), GL_UNSIGNED_BYTE, indexBuffer->getData()); break;
        default: break;
    };

    mFixedFunctionLastTexUnit = currentTextureUnit;
}
Although we don't deal with texturing yet, I've put the code in to handle most of it already. In this case, we're checking whether we need to enable or disable more texture units than we had the previous frame. Pretty straightforward stuff... leaving texture units enabled when you don't need them can cause oddness, for example, and obviously not enabling enough isn't going to help much either.
Anyway, we get on to running through our Index Buffer to see what type it is, and then call glDrawElements specifying the Mesh as a set of triangles, how many indices to draw, the type, and where the data resides. This then draws all the Attributes we've defined based upon the indices specified in our Index buffer. Pretty nifty, huh? We don't need to know or care what comes in any more as long as it's been wrapped properly in a Mesh object.
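As a quick illustration of what the Index Buffer is doing for the quad from the previous chapter - raw GL calls only, nothing engine specific - the six indices reference just four unique vertices to build two triangles:

// Vertex order in the buffer: 0 = top-left, 1 = top-right, 2 = bottom-right, 3 = bottom-left.
const unsigned char quadIndices[6] = { 0, 1, 2,   // first triangle
                                       2, 3, 0 }; // second triangle, sharing an edge with the first
// glDrawElements walks the indices in order and looks each vertex up in the bound arrays,
// so the shared corners are only stored once in the Vertex Buffer.
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_BYTE, quadIndices);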
Multi-Coloured Quad Goodness
Of course, we're still missing stuff - the transform matrices, for example - but we'll get there in due course. We can now display our quad on the screen with the following example program.
#include <cstdio> #include <cstdlib> #include "../../Graphics/GraphicsSystem.h" #include "../../Graphics/Context/FixedFunctionContext.h" #include "../../Events/EventSystem.h" #include "../../Input/InputSystem.h" #include "../../Input/Keyboard.h" #include "../../Input/Pad.h" #include "../../Graphics/Mesh.h" #include "../../Graphics/VertexBuffer.h" #include "../../Graphics/IndexBuffer.h" #include "../../Graphics/Material.h" #include "../../Maths/Matrix4.h" using namespace GLESGAE; void updateRender(Mesh* const mesh); Mesh* makeSprite(); int main(void) { EventSystem* eventSystem(new EventSystem); InputSystem* inputSystem(new InputSystem(eventSystem)); GraphicsSystem* graphicsSystem(new GraphicsSystem(GraphicsSystem::FIXED_FUNCTION_RENDERING)); if (false == graphicsSystem->initialise("GLESGAE Fixed Function Test", 800, 480, 16, false)) { //TODO: OH NOES! WE'VE DIEDED! return -1; } Mesh* mesh(makeSprite()); FixedFunctionContext* const fixedContext(graphicsSystem->getFixedContext()); if (0 != fixedContext) { fixedContext->enableFixedFunctionVertexPositions(); fixedContext->enableFixedFunctionVertexColours(); } eventSystem->bindToWindow(graphicsSystem->getWindow()); Controller::KeyboardController* myKeyboard(inputSystem->newKeyboard()); while(false == myKeyboard->getKey(Controller::KEY_ESCAPE)) { eventSystem->update(); inputSystem->update(); graphicsSystem->beginFrame(); graphicsSystem->drawMesh(mesh); graphicsSystem->endFrame(); } delete graphicsSystem; delete inputSystem; delete eventSystem; delete mesh; return 0; } Mesh* makeSprite() { float vertexData[24] = {// Position - 12 floats -1.0F, 1.0F, 0.0F, 1.0F, 1.0F, 0.0F, 1.0F, -1.0F, 0.0F, -1.0F, -1.0F, 0.0F, // Colour - 12 floats 0.0F, 1.0F, 0.0F, 1.0F, 0.0F, 0.0F, 0.0F, 0.0F, 1.0F, 1.0F, 1.0F, 1.0F}; unsigned int vertexSize = 24 * sizeof(float); unsigned char indexData[6] = { 0, 1, 2, 2, 3, 0 }; unsigned int indexSize = 6 * sizeof(unsigned char); VertexBuffer* newVertexBuffer = new VertexBuffer(reinterpret_cast<unsigned char*>(&vertexData), vertexSize); newVertexBuffer->addFormatIdentifier(VertexBuffer::FORMAT_POSITION_3F, 4U); newVertexBuffer->addFormatIdentifier(VertexBuffer::FORMAT_COLOUR_3F, 4U); IndexBuffer* newIndexBuffer = new IndexBuffer(reinterpret_cast<unsigned char*>(&indexData), indexSize, IndexBuffer::FORMAT_UNSIGNED_BYTE); Material* newMaterial = new Material; Matrix4* newTransform = new Matrix4; return new Mesh(newVertexBuffer, newIndexBuffer, newMaterial, newTransform); }
Building the Example
In the SVN there are Makefiles already set up for you... just trigger make -f MakefileES1.pandora - or whatever your chosen configuration is - and it'll happily build and spit out a GLESGAE.pandora binary for you to run.
Alternatively, if you use CodeLite, there's a Workspace/Project set for you preconfigured.
Next Time
Let's render the same thing again, but through a Shader based system.
GLESGAE - The Shader Based Context
Fast Track
We're on SVN revision 4 now, which includes all the rendering stuff and the mesh code from previous guides.
svn co -r 4 http://svn3.xp-dev.com/svn/glesgae/trunk/ glesgae
Introduction
This chapter was originally much bigger than it is now, as I jumped straight from Events to Shader Based Rendering due to there already being some Fixed Function stuff on the Wiki. However, I also needed to create a Mesh object and some foundations, so I split things up and did everything else first.
Therefore, it is highly advisable to have at least read GLESGAE:Making a Mesh before continuing. Reading GLESGAE:Fixed Function Rendering Contexts wouldn't kill you either.
Platform Specific Madness
As an aside, this is where platform specifics can bite your bum.
On my netbook with Intel GMA 950, my default gl.h does not include shader support, so I had to pull in GLee instead, which rips it out from the driver itself, and gives you handy checking abilities to ensure you're not trying to use something which isn't there.
Technically, GLee and GLEW do the same thing, so it's a matter of taste as to which one you use really, but they're immensely handy for when you can't be bothered doing all the actual extension-o-rama with OpenGL yourself.
You can get GLee from: http://elf-stone.com/glee.php
The Shader
In a shader-based rendering system, the shader is one of the most important parts of the renderer - if not the most important part.
What do I mean by this? Well, your shader will be what defines what gets sent to the GPU - vertex positions, colours, texture co-ordinates, etc...
If the shader isn't expecting it, it can cause OpenGL to crash viciously.
Therefore, it's important to have the whole concept of a shader completely defined from the start - what it does, and how you use it.
Attributes and Uniforms
A shader comprises two main types of modifiable data that we can send to it from the engine - attributes and uniforms.
Attributes are used only in the Vertex Shader, and are then converted to Varyings which are passed to the Fragment Shader.
Attributes are your vertex descriptors as previously mentioned; position, colour, co-ordinates, etc... whereas uniforms are any random bit of data that you want your shader to process; time for example.
Now, obviously we should know what our shader is expecting as we wrote it, but it's much more convenient to pull this information back out of the shader as we load it up.
This allows us to just push any old shader in, and our rendering context will configure itself accordingly.
Of course, we still have to send the data over, and if we don't feed our shader what it needs, it can cause rather bizarre effects - or worse, crashes.
Before we get that far though, we need to load the shader up.
Writing our First Shaders
In the GL ES variant of GLSL you're allowed vertex and fragment shaders. When combined, this creates a shader program.
Vertex Shaders deal with manipulating geometry; pushing vertices around on the screen.
Fragment Shaders deal with manipulating fragments - or pixels; such as texturing or colouring.
Vertex Shaders deal with attributes, modifying these as varyings to pass to the Fragment shader; both of which can also deal with uniforms.
A Shader Program can comprise many of these Vertex and Fragment shaders, but must have only one main() function for each of the Vertex and Fragment parts.
To make matters trickier than they need be, GLSL is not file-based like HLSL or Cg, so you need to manage common functionality yourself rather than define a set of common functions in one shader file and include it wherever it's needed.
Believe me, this is an arse to deal with properly and you really need to come up with some form of effects descriptor to handle it correctly... something beyond our scope for the moment.
Anyway, once we load up a Vertex and Fragment shader, we link them together to form a Shader Program. We can then pull out all the attributes and uniforms which have not been optimised out.
Read that last line again, as it's rather important. The GLSL shader compiler will automatically optimise out attributes and uniforms which are not used, so although you may define them, and expect them to be there; if they've not been used, they won't be!
This is why I much prefer to check the Shader Program to see what it has, rather than assuming what I'm going to get.
It's easier to understand what's going on with an example:
A Simple Vertex Shader
attribute vec4 a_position;
attribute vec4 a_colour;

varying vec4 v_colour;

void main()
{
    gl_Position = a_position;
    v_colour = a_colour;
}
This shader expects two Attributes - a_position and a_colour. This means it expects a Vertex Position and a Vertex Colour for every vertex it receives.
It also expects them as vec4s - four-component float vectors, in other words. Simple enough?
We don't do anything to the position, and just pass it directly to OpenGL via the special gl_Position variable.
We also don't do anything to the colour, but we do want to pass it along to the Fragment Shader, so we create a varying, and make that equal to our attribute.
A Simple Fragment Shader
varying vec4 v_colour;

uniform sampler2D s_texture;
uniform float u_random;

void main()
{
    gl_FragColor = v_colour;
}
Our Fragment shader is even simpler: we just set our special gl_FragColor variable to the varying that has come over from the Vertex Shader, and we're done!
I've also included two uniforms in here, a standard float - u_random, and a texture sampler - s_texture. These don't get used, so will be optimised out by the compiler when it links these two shaders together.
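If you were to query them by hand, you'd see exactly that. A minimal sketch - shaderProgram here is assumed to be the id of the linked program:

// u_random was declared but never used, so the compiler has stripped it out,
// and the location query comes back as -1.
const GLint randomLocation = glGetUniformLocation(shaderProgram, "u_random");
if (-1 == randomLocation) {
    // Optimised out (or misspelled) - don't assume a uniform survived just because you declared it.
}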
A quick gotcha is that of precision... some GL ES implementations (the Pandora's, for instance) require a default precision to be set, while others have one pre-defined. Desktop GLSL doesn't support default precision either, so you're left with the fun task of marking every sodding thing with lowp, mediump and highp if you run into it... or adding it into your shader compile chain (which we will do later - I've done a trick in the example instead.)
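For reference, the "compile chain" approach would look roughly like this - a sketch only, reusing the GLES2 define from the build setup; the example program below uses an #ifdef inside the source string instead:

#include <string>

// Prepend a default precision line to fragment source when building for GL ES,
// so the same shader text also compiles on the desktop.
std::string prepareFragmentSource(const std::string& source)
{
#if defined(GLES2)
    return std::string("precision mediump float;\n") + source;
#else
    return source;
#endif
}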
That wasn't so hard was it?
The Shader Loader
We have our two shaders, now we need to compile them and make them do something.
For this, we shall create a new class so we can store all our fun Shader uniforms and attributes. We shall call this Shader, due to lack of imagination ( you can call it Bob, if you prefer. )
Shader.h
#ifndef _SHADER_H_ #define _SHADER_H_ #include <string> #include <vector> #if defined(GLX) #include "GLee.h" #elif defined(PANDORA) #include <GLES2/gl2.h> #endif namespace GLESGAE { class Shader { public: Shader(); ~Shader(); /// Create a shader from source void createFromSource(const std::string& vertex, const std::string& fragment); /// Create a shader from file void createFromFile(const std::string& vertex, const std::string& fragment); /// Get Attribute Location const GLint getAttribute(const std::string& attribute) const; /// Get Uniform Location const GLint getUniform(const std::string& uniform) const; /// Get Program Id const GLuint getProgramId() const { return mShaderProgram; } protected: /// Actually load and compile the shader source const GLuint loadShader(const std::string& shader, const GLenum type); /// Clear out any shader stuff we may currently have - useful for forcing a recompile of the shader void resetShader(); private: std::vector<std::pair<std::string, GLint> > mUniforms; std::vector<std::pair<std::string, GLint> > mAttributes; GLuint mVertexShader; GLuint mFragmentShader; GLuint mShaderProgram; }; } #endif
Pretty straight forward really.
While Desktops can have Geometry shaders as well, we're aiming for compatibility with GL ES for the most part - which does not have Geometry shaders - so we'll only deal with Vertex and Fragment shaders.
We also store two arrays - one of uniforms and another of attributes - so we know which location each one is stored at when we come to bind and use this shader.
Shader.cpp
#include "Shader.h" using namespace GLESGAE; Shader::Shader() : mUniforms() , mAttributes() , mVertexShader(GL_INVALID_VALUE) , mFragmentShader(GL_INVALID_VALUE) , mShaderProgram(GL_INVALID_VALUE) { } Shader::~Shader() { resetShader(); } void Shader::createFromFile(const std::string& vertex, const std::string& fragment) { // TODO: implement } void Shader::createFromSource(const std::string& vertex, const std::string& fragment) { mVertexShader = loadShader(vertex, GL_VERTEX_SHADER); mFragmentShader = loadShader(fragment, GL_FRAGMENT_SHADER); if ((GL_INVALID_VALUE == mVertexShader) || (GL_INVALID_VALUE == mFragmentShader)) { // TODO: check that either shader has come back as GL_INVALID_VALUE and scream! return; } mShaderProgram = glCreateProgram(); glAttachShader(mShaderProgram, mVertexShader); glAttachShader(mShaderProgram, mFragmentShader); glLinkProgram(mShaderProgram); GLint isLinked; glGetProgramiv(mShaderProgram, GL_LINK_STATUS, &isLinked); if (false == isLinked) { GLint infoLen(0U); glGetProgramiv(mShaderProgram, GL_INFO_LOG_LENGTH, &infoLen); if (infoLen > 1) { char* infoLog(new char[infoLen]); glGetShaderInfoLog(mShaderProgram, infoLen, NULL, infoLog); // TODO: something bad happened.. print the infolog and die. delete [] infoLog; } // TODO: Die.. something bad has happened resetShader(); return; } { // Find all Uniforms. GLint numUniforms; GLint maxUniformLen; glGetProgramiv(mShaderProgram, GL_ACTIVE_UNIFORMS, &numUniforms); glGetProgramiv(mShaderProgram, GL_ACTIVE_UNIFORM_MAX_LENGTH, &maxUniformLen); char* uniformName(new char[maxUniformLen]); for (GLint index(0); index < numUniforms; ++index) { GLint size; GLenum type; GLint location; glGetActiveUniform(mShaderProgram, index, maxUniformLen, NULL, &size, &type, uniformName); location = glGetUniformLocation(mShaderProgram, uniformName); std::pair<std::string, GLint> parameter; parameter.first = std::string(uniformName); parameter.second = location; mUniforms.push_back(parameter); } delete [] uniformName; } { // Find all Attributes GLint numAttributes; GLint maxAttributeLen; glGetProgramiv(mShaderProgram, GL_ACTIVE_ATTRIBUTES, &numAttributes); glGetProgramiv(mShaderProgram, GL_ACTIVE_ATTRIBUTE_MAX_LENGTH, &maxAttributeLen); char* attributeName(new char[maxAttributeLen]); for (GLint index(0); index < numAttributes; ++index) { GLint size; GLenum type; GLint location; glGetActiveAttrib(mShaderProgram, index, maxAttributeLen, NULL, &size, &type, attributeName); location = glGetAttribLocation(mShaderProgram, attributeName); std::pair<std::string, GLint> parameter; parameter.first = std::string(attributeName); parameter.second = location; mAttributes.push_back(parameter); } delete [] attributeName; } } const GLuint Shader::loadShader(const std::string& shader, const GLenum type) { GLuint newShader(glCreateShader(type)); const char* shaderSource(shader.c_str()); glShaderSource(newShader, 1, &shaderSource, NULL); glCompileShader(newShader); GLint isCompiled; glGetShaderiv(newShader, GL_COMPILE_STATUS, &isCompiled); if (!isCompiled) { GLint infoLen(0); glGetShaderiv(newShader, GL_INFO_LOG_LENGTH, &infoLen); if (infoLen > 1) { char* infoLog(new char[infoLen]); glGetShaderInfoLog(newShader, infoLen, NULL, infoLog); delete [] infoLog; } glDeleteShader(newShader); // TODO: catch this... 
this is bad return GL_INVALID_VALUE; } return newShader; } const GLint Shader::getAttribute(const std::string& attribute) const { for (std::vector<std::pair<std::string, GLint> >::const_iterator itr(mAttributes.begin()); itr < mAttributes.end(); ++itr) { if (attribute == itr->first) return itr->second; } // TODO: catch this! obviously this is bad! return GL_INVALID_VALUE; } const GLint Shader::getUniform(const std::string& uniform) const { for (std::vector<std::pair<std::string, GLint> >::const_iterator itr(mUniforms.begin()); itr < mUniforms.end(); ++itr) { if (uniform == itr->first) return itr->second; } // TODO: catch this! obviously this is bad! return GL_INVALID_VALUE; } void Shader::resetShader() { if (GL_INVALID_VALUE != mVertexShader) { glDetachShader(mShaderProgram, mVertexShader); glDeleteShader(mVertexShader); } if (GL_INVALID_VALUE != mFragmentShader) { glDetachShader(mShaderProgram, mFragmentShader); glDeleteShader(mFragmentShader); } if (GL_INVALID_VALUE != mShaderProgram) glDeleteProgram(mShaderProgram); mUniforms.clear(); mAttributes.clear(); mVertexShader = GL_INVALID_VALUE; mFragmentShader = GL_INVALID_VALUE; mShaderProgram = GL_INVALID_VALUE; }
Lots of code, but again, it's pretty straightforward... I'd highly recommend the OpenGL ES 2.0 Programming Guide if you want more detail. It should be by your side while doing any ES work anyway, seriously!
Shader Context Additions
Now that we have a Shader object, we need to put the infrastructure in place to actually use it. This means ShaderBasedContext is getting updated a bit.
ShaderBasedContext.h
#ifndef _SHADER_BASED_CONTEXT_H_ #define _SHADER_BASED_CONTEXT_H_ #if defined(GLX) #include "../GLee.h" #elif defined(PANDORA) #include <GLES2/gl2.h> #endif namespace GLESGAE { class Mesh; class Shader; class ShaderBasedContext { public: ShaderBasedContext(); virtual ~ShaderBasedContext(); protected: /// Reset all attribute location links void resetAttributes(); /// Draw a Mesh using the Shader Based Pipeline void drawMeshShaderBased(Mesh* const mesh); /// Bind a shader void bindShader(const Shader* const shader); private: const Shader* mCurrentShader; GLuint a_position; GLuint a_colour; GLuint a_normal; GLuint a_texCoord0; GLuint a_texCoord1; GLuint a_custom0; GLuint a_custom1; GLuint a_custom2; }; } #endif
Not much to this, really...
One oddity is that I'm caching the default set of attributes - a_position, a_colour, a_normal, a_texCoord0, a_texCoord1, a_custom0, a_custom1 and a_custom2. This is so we don't need to re-grab them for every object if they share the same shader.
To make it more generic, you could just store an array of GLuints instead, and run through them in order. I shall leave that as a reader exercise for the moment.
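One possible take on that exercise, as a sketch rather than the engine's actual code - the enum, the names table and the mAttributeLocations member are all hypothetical:

enum AttributeSlot {
    ATTR_POSITION, ATTR_COLOUR, ATTR_NORMAL,
    ATTR_TEXCOORD0, ATTR_TEXCOORD1,
    ATTR_CUSTOM0, ATTR_CUSTOM1, ATTR_CUSTOM2,
    ATTR_COUNT
};

GLint mAttributeLocations[ATTR_COUNT]; // replaces a_position and friends

void resetAttributeArray(const Shader* const shader)
{
    static const char* attributeNames[ATTR_COUNT] = {
        "a_position", "a_colour", "a_normal",
        "a_texCoord0", "a_texCoord1",
        "a_custom0", "a_custom1", "a_custom2"
    };

    for (unsigned int index(0U); index < ATTR_COUNT; ++index) {
        mAttributeLocations[index] = shader->getAttribute(attributeNames[index]);
        if (GL_INVALID_VALUE != mAttributeLocations[index])
            glEnableVertexAttribArray(static_cast<GLuint>(mAttributeLocations[index]));
    }
}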
ShaderBasedContext.cpp
#include "ShaderBasedContext.h" #include "../IndexBuffer.h" #include "../Material.h" #include "../Mesh.h" #include "../Texture.h" #include "../VertexBuffer.h" #include "../Shader.h" #include <cstdio> using namespace GLESGAE; ShaderBasedContext::ShaderBasedContext() : mCurrentShader(0) , a_position(GL_INVALID_VALUE) , a_colour(GL_INVALID_VALUE) , a_normal(GL_INVALID_VALUE) , a_texCoord0(GL_INVALID_VALUE) , a_texCoord1(GL_INVALID_VALUE) , a_custom0(GL_INVALID_VALUE) , a_custom1(GL_INVALID_VALUE) , a_custom2(GL_INVALID_VALUE) { } ShaderBasedContext::~ShaderBasedContext() { mCurrentShader = 0; } void ShaderBasedContext::bindShader(const Shader* const shader) { if (mCurrentShader != shader) { mCurrentShader = shader; glUseProgram(shader->getProgramId()); resetAttributes(); } } void ShaderBasedContext::resetAttributes() { a_position = GL_INVALID_VALUE; a_colour = GL_INVALID_VALUE; a_normal = GL_INVALID_VALUE; a_texCoord0 = GL_INVALID_VALUE; a_texCoord1 = GL_INVALID_VALUE; a_custom0 = GL_INVALID_VALUE; a_custom1 = GL_INVALID_VALUE; a_custom2 = GL_INVALID_VALUE; if (0 != mCurrentShader) { a_position = mCurrentShader->getAttribute("a_position"); a_colour = mCurrentShader->getAttribute("a_colour"); a_normal = mCurrentShader->getAttribute("a_normal"); a_texCoord0 = mCurrentShader->getAttribute("a_texCoord0"); a_texCoord1 = mCurrentShader->getAttribute("a_texCoord1"); a_custom0 = mCurrentShader->getAttribute("a_custom0"); a_custom1 = mCurrentShader->getAttribute("a_custom1"); a_custom2 = mCurrentShader->getAttribute("a_custom2"); } if (GL_INVALID_VALUE != a_position) glEnableVertexAttribArray(a_position); if (GL_INVALID_VALUE != a_colour) glEnableVertexAttribArray(a_colour); if (GL_INVALID_VALUE != a_normal) glEnableVertexAttribArray(a_normal); if (GL_INVALID_VALUE != a_texCoord0) glEnableVertexAttribArray(a_texCoord0); if (GL_INVALID_VALUE != a_texCoord1) glEnableVertexAttribArray(a_texCoord1); if (GL_INVALID_VALUE != a_custom0) glEnableVertexAttribArray(a_custom0); if (GL_INVALID_VALUE != a_custom1) glEnableVertexAttribArray(a_custom1); if (GL_INVALID_VALUE != a_custom2) glEnableVertexAttribArray(a_custom2); } void ShaderBasedContext::drawMeshShaderBased(Mesh* const mesh) { const IndexBuffer* const indexBuffer(mesh->getIndexBuffer()); const VertexBuffer* const vertexBuffer(mesh->getVertexBuffer()); const Material* const material(mesh->getMaterial()); unsigned int currentTextureUnit(0U); bindShader(material->getShader()); const std::vector<VertexBuffer::Format>& meshFormat(vertexBuffer->getFormat()); for (std::vector<VertexBuffer::Format>::const_iterator itr(meshFormat.begin()); itr < meshFormat.end(); ++itr) { switch (itr->getType()) { // Position case VertexBuffer::FORMAT_POSITION_2F: glVertexAttribPointer(a_position, 2, GL_FLOAT, GL_FALSE, 0, vertexBuffer->getData() + itr->getOffset()); break; case VertexBuffer::FORMAT_POSITION_3F: glVertexAttribPointer(a_position, 3, GL_FLOAT, GL_FALSE, 0, vertexBuffer->getData() + itr->getOffset()); break; case VertexBuffer::FORMAT_POSITION_4F: glVertexAttribPointer(a_position, 4, GL_FLOAT, GL_FALSE, 0, vertexBuffer->getData() + itr->getOffset()); break; case VertexBuffer::FORMAT_POSITION_2S: glVertexAttribPointer(a_position, 2, GL_SHORT, GL_FALSE, 0, vertexBuffer->getData() + itr->getOffset()); break; case VertexBuffer::FORMAT_POSITION_3S: glVertexAttribPointer(a_position, 3, GL_SHORT, GL_FALSE, 0, vertexBuffer->getData() + itr->getOffset()); break; case VertexBuffer::FORMAT_POSITION_4S: glVertexAttribPointer(a_position, 4, GL_SHORT, GL_FALSE, 0, 
vertexBuffer->getData() + itr->getOffset()); break; case VertexBuffer::FORMAT_POSITION_2B: glVertexAttribPointer(a_position, 2, GL_BYTE, GL_FALSE, 0, vertexBuffer->getData() + itr->getOffset()); break; case VertexBuffer::FORMAT_POSITION_3B: glVertexAttribPointer(a_position, 3, GL_BYTE, GL_FALSE, 0, vertexBuffer->getData() + itr->getOffset()); break; case VertexBuffer::FORMAT_POSITION_4B: glVertexAttribPointer(a_position, 4, GL_BYTE, GL_FALSE, 0, vertexBuffer->getData() + itr->getOffset()); break; // Normal case VertexBuffer::FORMAT_NORMAL_3F: glVertexAttribPointer(a_normal, 3, GL_FLOAT, GL_FALSE, 0, vertexBuffer->getData() + itr->getOffset()); break; case VertexBuffer::FORMAT_NORMAL_3S: glVertexAttribPointer(a_normal, 3, GL_SHORT, GL_FALSE, 0, vertexBuffer->getData() + itr->getOffset()); break; case VertexBuffer::FORMAT_NORMAL_3B: glVertexAttribPointer(a_normal, 3, GL_BYTE, GL_FALSE, 0, vertexBuffer->getData() + itr->getOffset()); break; // Colour case VertexBuffer::FORMAT_COLOUR_4F: glVertexAttribPointer(a_colour, 4, GL_FLOAT, GL_FALSE, 0, vertexBuffer->getData() + itr->getOffset()); break; case VertexBuffer::FORMAT_COLOUR_3F: glVertexAttribPointer(a_colour, 3, GL_FLOAT, GL_FALSE, 0, vertexBuffer->getData() + itr->getOffset()); break; case VertexBuffer::FORMAT_COLOUR_4S: glVertexAttribPointer(a_colour, 4, GL_SHORT, GL_FALSE, 0, vertexBuffer->getData() + itr->getOffset()); break; case VertexBuffer::FORMAT_COLOUR_3S: glVertexAttribPointer(a_colour, 3, GL_SHORT, GL_FALSE, 0, vertexBuffer->getData() + itr->getOffset()); break; case VertexBuffer::FORMAT_COLOUR_4UB: glVertexAttribPointer(a_colour, 4, GL_UNSIGNED_BYTE, GL_FALSE, 0, vertexBuffer->getData() + itr->getOffset()); break; case VertexBuffer::FORMAT_COLOUR_3UB: glVertexAttribPointer(a_colour, 3, GL_UNSIGNED_BYTE, GL_FALSE, 0, vertexBuffer->getData() + itr->getOffset()); break; default: break; }; } switch (indexBuffer->getFormat()) { case IndexBuffer::FORMAT_FLOAT: glDrawElements(GL_TRIANGLES, indexBuffer->getSize(), GL_FLOAT, indexBuffer->getData()); break; case IndexBuffer::FORMAT_UNSIGNED_BYTE: glDrawElements(GL_TRIANGLES, indexBuffer->getSize(), GL_UNSIGNED_BYTE, indexBuffer->getData()); break; case IndexBuffer::FORMAT_UNSIGNED_SHORT: glDrawElements(GL_TRIANGLES, indexBuffer->getSize(), GL_UNSIGNED_SHORT, indexBuffer->getData()); break; default: break; }; }
If you've come from the Fixed Function guide, the above will look rather familiar. The only real difference is that we specify an attribute handle from the Shader to bind each Vertex Attribute Pointer to, but other than that, what was said there still applies here. If you haven't read it, or have forgotten, you might want to - GLESGAE:Fixed Function Rendering Contexts
One thing we don't deal with yet is texturing.
Texturing is a bit weird on Shader systems, so we're holding off on that just now.. but we'll sort it out soon.
Feeding The Shader
We're not going to be setting up a uniform system just yet.. so our shaders are going to be rather basic for now.
We do have the ability to pull uniforms out the shader and bind stuff to them as it stands, however it's far better to have some sort of system that can take care of the grunt work for you.
We'll be looking at a uniform system soon.
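For the impatient, feeding a uniform by hand looks roughly like this - a sketch that would live inside the context after bindShader() has made the program current; u_time and elapsedSeconds are made-up names purely for illustration:

const GLint timeLocation(mCurrentShader->getUniform("u_time"));
if (GL_INVALID_VALUE != timeLocation)
    glUniform1f(timeLocation, elapsedSeconds); // push a single float across to the shader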
A Simple Test
Our test app is getting a bit cluttered now...
#include <cstdio> #include <cstdlib> #include "../../Graphics/GraphicsSystem.h" #include "../../Graphics/Context/FixedFunctionContext.h" #include "../../Events/EventSystem.h" #include "../../Input/InputSystem.h" #include "../../Input/Keyboard.h" #include "../../Input/Pad.h" #include "../../Graphics/Mesh.h" #include "../../Graphics/VertexBuffer.h" #include "../../Graphics/IndexBuffer.h" #include "../../Graphics/Material.h" #include "../../Graphics/Shader.h" #include "../../Maths/Matrix4.h" using namespace GLESGAE; void updateRender(Mesh* const mesh); Mesh* makeSprite(Shader* const shader); Shader* makeSpriteShader(); int main(void) { EventSystem* eventSystem(new EventSystem); InputSystem* inputSystem(new InputSystem(eventSystem)); GraphicsSystem* graphicsSystem(new GraphicsSystem(GraphicsSystem::SHADER_BASED_RENDERING)); if (false == graphicsSystem->initialise("GLESGAE Shader Test", 800, 480, 16, false)) { //TODO: OH NOES! WE'VE DIEDED! return -1; } Mesh* mesh(makeSprite(makeSpriteShader())); eventSystem->bindToWindow(graphicsSystem->getWindow()); Controller::KeyboardController* myKeyboard(inputSystem->newKeyboard()); while(false == myKeyboard->getKey(Controller::KEY_ESCAPE)) { eventSystem->update(); inputSystem->update(); graphicsSystem->beginFrame(); graphicsSystem->drawMesh(mesh); graphicsSystem->endFrame(); } delete graphicsSystem; delete inputSystem; delete eventSystem; delete mesh; return 0; } Mesh* makeSprite(Shader* const shader) { float vertexData[32] = {// Position - 16 floats -1.0F, 1.0F, 0.0F, 1.0F, 1.0F, 1.0F, 0.0F, 1.0F, 1.0F, -1.0F, 0.0F, 1.0F, -1.0F, -1.0F, 0.0F, 1.0F, // Colour - 16 floats 0.0F, 1.0F, 0.0F, 1.0F, 1.0F, 0.0F, 0.0F, 1.0F, 0.0F, 0.0F, 1.0F, 1.0F, 1.0F, 1.0F, 1.0F, 1.0F}; unsigned int vertexSize = 32 * sizeof(float); unsigned char indexData[6] = { 0, 1, 2, 2, 3, 0 }; unsigned int indexSize = 6 * sizeof(unsigned char); VertexBuffer* newVertexBuffer = new VertexBuffer(reinterpret_cast<unsigned char*>(&vertexData), vertexSize); newVertexBuffer->addFormatIdentifier(VertexBuffer::FORMAT_POSITION_4F, 4U); newVertexBuffer->addFormatIdentifier(VertexBuffer::FORMAT_COLOUR_4F, 4U); IndexBuffer* newIndexBuffer = new IndexBuffer(reinterpret_cast<unsigned char*>(&indexData), indexSize, IndexBuffer::FORMAT_UNSIGNED_BYTE); Material* newMaterial = new Material; newMaterial->setShader(shader); Matrix4* newTransform = new Matrix4; return new Mesh(newVertexBuffer, newIndexBuffer, newMaterial, newTransform); } #if defined(GLX) #include "../../Graphics/GLee.h" #endif Shader* makeSpriteShader() { std::string vShader = "attribute vec4 a_position; \n" "attribute vec4 a_colour; \n" "varying vec4 v_colour; \n" "void main() \n" "{ \n" " gl_Position = a_position; \n" " v_colour = a_colour; \n" "} \n"; std::string fShader = #ifdef GLES2 "precision mediump float; \n" // setting the default precision for Pandora #endif "varying vec4 v_colour; \n" "void main() \n" "{ \n" " gl_FragColor.grb = v_colour.rgb;\n" "} \n"; #ifndef GLES1 Shader* newShader(new Shader()); newShader->createFromSource(vShader, fShader); return newShader; #else return 0; #endif }
Notice that the ES1 stuff has been ripped out?
ES1 does not support shaders, so there's no point in having that cluttering up our file. I've also removed the Pandora buttons from the Input example, and am just checking for the Escape key being pressed.
Additionally, this is near enough exactly the same as the Fixed Function example, just with a shader being created and bound to the Material as well.
Building the Example
In the SVN there are Makefiles already set up for you... just trigger make -f MakefileES2.pandora - or whatever your chosen configuration is - and it'll happily build and spit out a GLESGAE.pandora binary for you to run.
Alternatively, if you use CodeLite, there's a Workspace/Project set for you preconfigured.
Next Time
Textures, followed by drawing via Vertex Buffer Object (VBO), then we Meet the Matrices.
GLESGAE - The Transform Stack
Introduction
I was going to just blindly march onto getting transform matrices running so you could position and move things about on screen.
That presents a few problems though, as it requires a bit more knowledge of how OpenGL deals with things, and I'm trying to explain stuff as plainly as possible. It also didn't really touch on the fact that there are a few different Transform Stacks to deal with.
Well, I say stack, but in general only the Fixed Function pipeline retains the stack - the Shader Based pipeline can do whatever it likes... and generally, to keep things simple, we only ever use one or two levels of the "stacks" anyway.
The added bonus of going through all this, is that we can also implement cameras at the same time that we get things to move about on screen, so it's a kind of win-win situation here!
Meet the Stacks
In a Fixed Function pipeline, you have four main Transform Stacks, each corresponding to View, Projection, Model and Texture manipulation.
These literally are stacks, in that you can push additional matrices on top of them, and they will be concatenated in sequence for the final draw of whatever object you're currently rendering. This is particularly useful in the case of the Model Stack, as it maps naturally onto a hierarchical view of modelling.
The stereotypical example being an arm... you'd draw the shoulder, then the upper arm, elbow, lower arm, wrist and hand. When you rotate the shoulder object, the rest of the objects would move and rotate correctly with it. Well, "correctly" is a bit of a misnomer as there'd be no limits on the joint, so you could have it spin round and round like crazy if you wanted at this point... but that's a matter of Physics which we'll get to at some point!
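In raw fixed function terms, that arm hierarchy is just nested pushes and pops on the Model stack. A rough sketch to illustrate the idea - the draw*() calls and the transform arrays here are hypothetical:

glMatrixMode(GL_MODELVIEW);
glPushMatrix();
    glMultMatrixf(shoulderTransform);     // everything below inherits the shoulder's transform
    drawShoulder();
    glPushMatrix();
        glMultMatrixf(elbowTransform);    // relative to the shoulder
        drawLowerArm();
        glPushMatrix();
            glMultMatrixf(handTransform); // relative to the elbow
            drawHand();
        glPopMatrix();
    glPopMatrix();
glPopMatrix();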
The Texture Stack is also particularly useful for doing quick and dirty effects, such as scrolling textures across models. A good example being indicator arrows following the mesh to direct the player where to go. Just whack a transform matrix that indicates this onto the stack when you want to draw the texture, and OpenGL will do the rest for you.
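A quick sketch of that scrolling trick - scrollOffset is a hypothetical float you'd advance a little every frame before drawing the textured mesh:

glMatrixMode(GL_TEXTURE);
glLoadIdentity();
glTranslatef(scrollOffset, 0.0F, 0.0F); // slide the texture co-ordinates horizontally
glMatrixMode(GL_MODELVIEW);             // switch back so the rest of the renderer is unaffected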
The View and Projection Stacks deal with the camera. Generally you have two camera types; 2D - or Orthographic - and 3D - generally termed Perspective. These are defined with specific View and Projection matrices, and we'll be implementing these two camera types for both Fixed Function and Shader Based systems very soon.
Gotcha: I've mentioned a separate View Transform Stack. This generally gets multiplied with the Model Stack to become the ModelView Stack, which it's more commonly referenced as. Confused? The View Transform deals with the clipping of the final projection to the screen. Think of it in terms of a Camera; the view finder cuts things out - this is the View transform; the lens can warp, zoom and otherwise manipulate how the scene looks - this is the Projection matrix; and the scene itself, being composed of objects, each have their own Model transform. I reference it as a stack because you could, for example, be taking a picture through a car or train window. That window is another View transform... so by dealing with Views in a stack as everything else, it just works a bit better. It is simpler, honest!
Model Transforms
We'll look at Model transforms first, as these are the easiest and make the most sense.
Every object in the world has a position and rotation. They also have a scale, but usually we leave this alone; the art assets should generally already be the correct size anyway.
These three things get concatenated into a Transform Matrix.. which is usually a 4x4 matrix.
We can actually split these up and deal with them separately if we like - a process called decomposing - whereby we split this 4x4 matrix into its components: a 3x3 rotation matrix, a position vector, and a scale vector. In general, they sort of correspond to this:
Rotation00 * ScaleX,  Rotation01,           Rotation02,           PositionX
Rotation10,           Rotation11 * ScaleY,  Rotation12,           PositionY
Rotation20,           Rotation21,           Rotation22 * ScaleZ,  PositionZ
0,                    0,                    0,                    1
Gotcha: It's never easy, is it? There are two main conventions to matrices - Row-Major and Column-Major. The above matrix is Column-Major... IE: our Position Vector is in the last column. If you transpose this, you swap it around and it becomes Row-Major and our Position Vector therefore goes in the bottom row, where we've currently got [0, 0, 0, 1]. OpenGL tends to use Column-Major, whereas DirectX uses Row-Major for instance.
Gotcha: Picking Scale out of a Rotation Matrix can be very painful... it's something I like to avoid as much as possible, and you should too! Ensure your art assets are either the correct size, or additionally store a separate scale vector so you can divide it back out to recover the original rotation matrix should you need it.
Anyway, every object in the world will generally have one of these. It's much more efficient to store a single Transform matrix rather than separate Position and Rotation data, as OpenGL and DirectX are going to be dealing with 4x4 matrices anyway... so you'd have to compose a 4x4 every time you want to draw, which all mounts up to become something that's not trivially light processing!
I'm not going to get into creating rotation matrices, and rotation objects in space just yet... that'll be later on.. we just need to know what a Transform matrix is, and what it's composed of just now.
View Matrices
View Matrices are a bit more fun.
Rather than just being independent things in space, a View Matrix requires an eye to see through. This does make sense, as you view through an eye - or a window, camera lens, etc.. - and what you can't see gets clipped out of view. They still exist, though. Like a certain cat, just because you can't see inside the box, doesn't mean it's not necessarily there, or when you're swinging a Wiimote about; just because you can't see that expensive lamp behind you, doesn't mean you're not about to smash it to pieces by swinging back.
We also need two other vectors - an Up and Centre vector. This gives us three vectors; an Eye Vector, a Centre Vector, and an Up Vector.
Computing a View Matrix isn't all that hard, it's just a bit lengthy, and again, we store this as a 4x4 Matrix.
Matrix4 createViewMatrix(const Vector3& eye, const Vector3& centre, const Vector3& up)
{
    Vector3 forwardVector(centre - eye);
    forwardVector.normalise();

    Vector3 rightVector(forwardVector.cross(up));
    rightVector.normalise();

    // We recompute the Up vector to make sure that everything is exactly perpendicular to one another.
    // Otherwise, there can be rounding errors and things go a bit mad!
    // We can also do this to assert and check that our up vector actually matches up properly.
    Vector3 upVector(rightVector.cross(forwardVector));

    Matrix4 viewMatrix;
    viewMatrix(0, 0) = rightVector.x; viewMatrix(0, 1) = upVector.x; viewMatrix(0, 2) = -forwardVector.x; viewMatrix(0, 3) = 0.0F;
    viewMatrix(1, 0) = rightVector.y; viewMatrix(1, 1) = upVector.y; viewMatrix(1, 2) = -forwardVector.y; viewMatrix(1, 3) = 0.0F;
    viewMatrix(2, 0) = rightVector.z; viewMatrix(2, 1) = upVector.z; viewMatrix(2, 2) = -forwardVector.z; viewMatrix(2, 3) = 0.0F;
    viewMatrix(3, 0) = 0.0F;          viewMatrix(3, 1) = 0.0F;       viewMatrix(3, 2) = 0.0F;             viewMatrix(3, 3) = 1.0F;

    // We then need to translate our eye back to the origin, and use this as effectively the View's position.
    // An eye does have a position in space, after all! As does a Window.
    viewMatrix(3, 0) = (viewMatrix(0, 0) * -eye.x + viewMatrix(1, 0) * -eye.y + viewMatrix(2, 0) * -eye.z);
    viewMatrix(3, 1) = (viewMatrix(0, 1) * -eye.x + viewMatrix(1, 1) * -eye.y + viewMatrix(2, 1) * -eye.z);
    viewMatrix(3, 2) = (viewMatrix(0, 2) * -eye.x + viewMatrix(1, 2) * -eye.y + viewMatrix(2, 2) * -eye.z);

    return viewMatrix;
}
So, that's a nice bit of code, but what does it actually *do*?
Well, our Eye vector is the position of the View... our Camera Lens, our Window, our Player's Eyes. As we know, we can decompose our Object's Model Transform and grab its position, so it should be easy enough to understand what this is.
Our Up Vector is a unit vector (meaning its length is exactly 1) which tells us which way is up. In general, Y is up. Sometimes, Z is up. It depends on which side of Mathematics you live on, really. As I tend to think in 2D with a bit of depth, Y is up for me and easier for me to think in, so my Up vector is [ 0, 1, 0 ]. Of course, this can change if you're floating about in space, or if you tilt your head back to look up - your eye's up vector has changed, as it's effectively pointing straight out the top of your head, which you've now tilted, so it's probably pointing towards the wall behind you now.
The Centre Vector is another odd one. It's calculated by taking the Eye - or Position - Vector and adding the Front Vector to it.
In other words; look forward. Your eye position is in your head ( I'd assume, anyway! ) and your Front Vector is dead ahead. Without turning your head, look to the right. Your eye position hasn't changed, but your forward vector has - it's now to the right. Now, hold up a piece of paper and imagine it's a window ( or hold up a piece of glass or whatever instead. ) Imagine this is your eyes and make it "look" forward. It should effectively just be flat to you. Now make it look right as you would do with your eyes by rotating it. This is exactly what's going on here!
This is also why we translate our eye back to the origin... as think about it, you don't know you're looking through a Window unless you step back and you can see the Window. We've stepped back to observe our Viewing area, and now we want to view through it, so we move it towards us. Our perfect example being a camera view finder; we look at it from the outside to see how much it's going to clip out, then we place our eye to it to view through it.
Simples!
Extra Fun
The centre vector can actually be used in another way.
Generally, you want a camera to face "forward"... however, you may also want a "tracking" camera, in which case you replace the centre vector with the position of the object you want to track.
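A minimal sketch of the two styles using the createViewMatrix function from earlier - it assumes Vector3 has the obvious three-float constructor and operator+, and the actual values are made up:

const Vector3 eye(0.0F, 2.0F, 5.0F);
const Vector3 up(0.0F, 1.0F, 0.0F);

// "Forward" camera: the centre is just the eye pushed along its forward vector.
const Vector3 forward(0.0F, 0.0F, -1.0F);
const Matrix4 forwardView(createViewMatrix(eye, eye + forward, up));

// "Tracking" camera: the centre is the position of whatever we want to keep in view.
const Vector3 playerPosition(3.0F, 0.0F, -10.0F);
const Matrix4 trackingView(createViewMatrix(eye, playerPosition, up));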
We'll write some examples soon so you can mess about and see all this in action, don't worry!
Projection Fiddling
Projection Matrices are what give the world a semblance of depth, and can be used to distort the world. I'll show you the two classical projection matrices - one for 2d, and one for 3d - and you can have fun mucking about with them to create your own if you feel like it.
An amusing irony I've found is that the 3d projection matrix is often a bit easier to understand and deal with, so we'll go through that one first.
Matrix4 create3dProjectionMatrix(const float nearClip, const float farClip, const float fov, const float aspectRatio)
{
    const float radians = fov / 2.0F * PI / 180.0F;
    const float zDelta = farClip - nearClip;
    const float sine = sin(radians);
    const float cotangent = cos(radians) / sine;

    Matrix4 projectionMatrix;
    projectionMatrix(0, 0) = cotangent / aspectRatio;
    projectionMatrix(0, 1) = 0.0F;
    projectionMatrix(0, 2) = 0.0F;
    projectionMatrix(0, 3) = 0.0F;
    projectionMatrix(1, 0) = 0.0F;
    projectionMatrix(1, 1) = cotangent;
    projectionMatrix(1, 2) = 0.0F;
    projectionMatrix(1, 3) = 0.0F;
    projectionMatrix(2, 0) = 0.0F;
    projectionMatrix(2, 1) = 0.0F;
    projectionMatrix(2, 2) = -(farClip + nearClip) / zDelta;
    projectionMatrix(2, 3) = -1.0F;
    projectionMatrix(3, 0) = 0.0F;
    projectionMatrix(3, 1) = 0.0F;
    projectionMatrix(3, 2) = -2.0F * nearClip * farClip / zDelta;
    projectionMatrix(3, 3) = 0.0F;

    return projectionMatrix;
}
Simple, really.
We need a near clip, a far clip, a field of view, and an aspect ratio.
Our near clip is how far forward our view begins. Ever noticed how in some games, if you get too close to a wall, it disappears? That's usually caused by the near clip. Conversely, things that "pop" into the screen as you view far ahead are caused by the far clip being too close. While it would be tempting to set the near clip as close as we can and the far clip as large as possible, this causes a lot of issues depending on the resolution of our depth buffer.
If you look at two objects far away in the distance, it can be tricky to tell what's in front of the other. Your graphics card has the same issue. However, whereas our perception can just make things look a bit strange, the graphics card gets confused. It has a finite amount of precision to deal with this, and you can sometimes get "Z-Fighting" whereby two objects are effectively "fighting" to get in front of each other, and the screen flickers between them. It's very annoying. Therefore, give your far and near clip ranges sensible values!
The field of view controls how wide an angle ( in degrees ) you can see through your viewing matrix. For example, looking through a keyhole has quite a narrow field of view as it's more or less looking dead ahead. Your eyes, on the other hand, have quite a wide field of view as even when you look dead ahead, you can generally make out shapes to the left and right of you without having to look in that direction. Now, while you can have it anywhere between 0 and 360, you probably want it somewhere between 30 and 120.
Finally, our aspect ratio is the screen's aspect ratio. This can be either the actual physical device's aspect ratio, or perhaps some in-game monitor. Either way, it's just width / height. Easy.
The calculations themselves are fairly straightforward. If you don't quite understand them, feel free to pick up a maths book - it's a bit beyond what I want to discuss here; I just want to tell you what's going on, not prove it! But with the above code, you'll get yourself a nice 3d projection matrix.
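As a usage sketch, typical values for the Pandora's 800x480 screen might look like this - tune the clips and field of view to suit your own scene:

const float nearClip(0.1F);
const float farClip(100.0F);              // keep the range sensible to avoid Z-fighting
const float fov(60.0F);                   // in degrees; a keyhole would be nearer 30, a fish-eye 90+
const float aspectRatio(800.0F / 480.0F);
const Matrix4 projection(create3dProjectionMatrix(nearClip, farClip, fov, aspectRatio));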
Now onto the 2d one...
Matrix4 create2dProjectionMatrix(const float left, const float bottom, const float right, const float top, const float nearClip, const float farClip)
{
    const float xScale = 2.0F / (right - left);
    const float yScale = 2.0F / (top - bottom);
    const float zScale = -2.0F / (farClip - nearClip);
    const float x = -(right + left) / (right - left);
    const float y = -(top + bottom) / (top - bottom);
    const float z = -(farClip + nearClip) / (farClip - nearClip);

    Matrix4 projectionMatrix;
    projectionMatrix(0, 0) = xScale;
    projectionMatrix(0, 1) = 0.0F;
    projectionMatrix(0, 2) = 0.0F;
    projectionMatrix(0, 3) = 0.0F;
    projectionMatrix(1, 0) = 0.0F;
    projectionMatrix(1, 1) = yScale;
    projectionMatrix(1, 2) = 0.0F;
    projectionMatrix(1, 3) = 0.0F;
    projectionMatrix(2, 0) = 0.0F;
    projectionMatrix(2, 1) = 0.0F;
    projectionMatrix(2, 2) = zScale;
    projectionMatrix(2, 3) = 0.0F;
    projectionMatrix(3, 0) = x;
    projectionMatrix(3, 1) = y;
    projectionMatrix(3, 2) = z;
    projectionMatrix(3, 3) = 1.0F;

    return projectionMatrix;
}
Looks easier, doesn't it? Why I sometimes have difficulties getting my head around this one, I'm not sure... probably because it doesn't quite fit any camera analogy as neatly as the 3d view-projection stuff does.
Anyway, we know about the near clip and far clip, and the same rules apply to the 2d projection matrix.
What we're interested in here are the left, top, bottom, and right values.
As you can see from the calculations, these control the scale, so generally these are unit values: Left and Bottom being 0, and Top and Right being 1.
This makes sense as these are normalised values... and OpenGL's 2d origin is on the bottom left corner of your screen. Therefore, Right is 1.0, as that's "full right".. you can't get further than that. Of course, you might want to muck about with the scale, and therefore doubling the projection - so you only see half of what's there - would mean you set Top and Right to 2.0 instead of 1.0. Or to shrink it in half - 0.5.
That's really all there is to it.
Putting It All Together
So, how does all this work? How do we get from drawing an object at x, y, z to it appearing on the screen somewhere?
Well, if we're just drawing one object - that has no parent or hierarchical transforms - directly to the screen, we need one Transform matrix, one View matrix and one Projection matrix. All this combined becomes our ModelViewProjection matrix, and it's literally all multiplied in that order - model * view * projection.
If our Object does have parent transformations, we concatenate them first, so it's finalTransform = parentTransform * objectTransform; and we effectively just go through the chain from the bottom up (so that parentTransform would've been its own finalTransform, and so on...) till we get to our object. Then we multiply in the view and projection matrices.
Again, as I stated, the view and projection matrices can be stacks.. this works in exactly the same manner as the object transforms - we multiply bottom up to get the final ViewMatrix and final ProjectionMatrix to multiply with; finalTransform * finalView * finalProjection.
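In code, that boils down to something like the following sketch - it assumes Matrix4 has an operator* that multiplies in the order written:

const Matrix4 finalTransform(parentTransform * objectTransform);                   // walk the hierarchy first
const Matrix4 modelViewProjection(finalTransform * viewMatrix * projectionMatrix); // then apply view and projection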
Seriously, that's it.
It isn't complicated, it's just a bit strange and you have to think a bit to fully understand what's going on. But once you sit down and break it apart, it's quite straight forward, I think.
Now to actually write some code into the Render Systems to draw things properly!
Checking out the SVN
There isn't actually any useful code for this Chapter that would make it standalone.
Building the Example
There is no example this time, as there'd be nothing to show!
Instead, jump to either GLESGAE:Fixed Function Transformations or GLESGAE:Shader Based Transformations for dealing with the Transform stacks in the correct manner.
Next Time
We're going to actually implement some Transform operations.
GLESGAE - Fixed Function Transformations
Introduction
The previous part is essential reading before you get here: GLESGAE:The Transform Stack
I won't be going over what's already been covered, I'll instead jump into the technical implementation.
While we'll be doing a lot of the same things as the Shader-Based transformation setup, I've split it up as ES 1 does have specific Matrix Stacks internally that you need to push onto. ES 2 does not. So the implementations do end up a bit different.
Fast Track
We're on SVN revision 5 now. This includes everything in this article, plus the extra bits of maths we need.
svn co -r 5 http://svn3.xp-dev.com/svn/glesgae/trunk/ glesgae
Meet the Stacks
Deja Vu, perhaps?
OpenGL ES 1 ( and indeed, OpenGL 1.4 ) has a set of Matrix Modes, which are set via glMatrixMode, so that any matrix manipulations you do affect that set of matrices alone.
For our purposes, we want to mess with the following: GL_PROJECTION, GL_MODELVIEW, and GL_TEXTURE.
Gotcha: Hold the phone, we didn't talk about the Texture Stack in the last article! That's because it _only_ has any relevance in a Fixed Function pipeline as shaders are free to do as they please. Don't worry, we shall go through it in this article. Additionally, we've got a combined Model and View Matrix to deal with here. Life is hard.
We'll deal with the actual screen view and projection bits first, as they go directly into the Graphics System itself.
A View To The World
We shall be creating the Camera Object today.
This shall be a separate object which deals with all its own matrices, so we can have multiple cameras all independent of one another. Particularly handy for doing effects such as Render To Texture and other portal-style effects (reflections in mirrors, for example.)
So, we take the three functions we devised in the previous article, and slap them into our Camera object.
We also need it to store its own Transform matrix, as well as the View and Projection matrices unique to it, plus various other little bits and pieces required to calculate these matrices. We therefore end up with an interface looking somewhat like this:
#ifndef _CAMERA_H_ #define _CAMERA_H_ #include "../Maths/Vector3.h" #include "../Maths/Matrix4.h" namespace GLESGAE { class Camera { public: enum CameraType { CAMERA_2D , CAMERA_3D }; Camera(const CameraType& type); /// Get the Type of Camera this is const CameraType getType() { return mType; } /// Update the camera's View and Projection matrices while looking ahead. void update(); /// Update the camera's View and Projection matrices while looking at something. void update(const Vector3& target); /// Set Near Clip void setNearClip(const float nearClip) { mNearClip = nearClip; } /// Set Far Clip void setFarClip(const float farClip) { mFarClip = farClip; } /// Set 2d Parameters void set2dParams(const float left, const float bottom, const float right, const float top) { m2dLeft = left; m2dRight = right; m2dTop = top; m2dBottom = bottom; } /// Set 3d Parameters void set3dParams(const float fov, const float aspectRatio) { mFov = fov; mAspectRatio = aspectRatio; } /// Set the Transform Matrix void setTransformMatrix(const Matrix4& transformMatrix) { mTransformMatrix = transformMatrix; } /// Set the View Matrix void setViewMatrix(const Matrix4& viewMatrix) { mViewMatrix = viewMatrix; } /// Set the Projection Matrix void setProjectionMatrix(const Matrix4& projectionMatrix) { mProjectionMatrix = projectionMatrix; } /// Get the Transform Matrix Matrix4& getTransformMatrix() { return mTransformMatrix; } /// Get the View Matrix Matrix4& getViewMatrix() { return mViewMatrix; } /// Get the Projection Matrix Matrix4& getProjectionMatrix() { return mProjectionMatrix; } /// Get Near Clip float getNearClip() const { return mNearClip; } /// Get Far Clip float getFarClip() const { return mFarClip; } /// Get Field of View float getFov() const { return mFov; } /// Create a viewMatrix static Matrix4 createViewMatrix(const Vector3& eye, const Vector3& centre, const Vector3& up); /// Create a 2d projection matrix static Matrix4 create2dProjectionMatrix(const float left, const float bottom, const float right, const float top, const float nearClip, const float farClip); /// Create a 3d projection matrix static Matrix4 create3dProjectionMatrix(const float nearClip, const float farClip, const float fov, const float aspectRatio); private: CameraType mType; float mNearClip; float mFarClip; float m2dTop; float m2dBottom; float m2dLeft; float m2dRight; float mFov; float mAspectRatio; Matrix4 mTransformMatrix; Matrix4 mViewMatrix; Matrix4 mProjectionMatrix; }; } #endif
Nice and straight forward so far. We're just storing some matrices and values, providing access to them, and throwing our view and projection matrix creation functions on it.
We do have two update functions though - one which takes a target, and another which just updates itself. This is so we can have this camera follow a spline but look at something else, for example. Though this of course only makes much sense in a 3d world.. trying to use it on a 2d camera makes for some interesting results!
We specify what type of Camera this is in the constructor. This is so that when we trigger the update function, it regenerates the correct projection matrix for us. I've also put default values in the constructor so that in effect, you can just do Camera* camera(new Camera(Camera::CAMERA_2D)); and be done with it; call camera->update(); if you're moving it about, then just pass it in to the Graphics System.
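As a sketch of how that slots into the main loop from the earlier examples - the setCamera call on the Graphics System is assumed here; the actual plumbing is in the repository:

Camera* camera(new Camera(Camera::CAMERA_2D));
graphicsSystem->setCamera(camera);

while (false == myKeyboard->getKey(Controller::KEY_ESCAPE)) {
    camera->update();              // regenerate the View and Projection matrices if we've moved it
    eventSystem->update();
    inputSystem->update();
    graphicsSystem->beginFrame();
    graphicsSystem->drawMesh(mesh);
    graphicsSystem->endFrame();
}

delete camera;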
Of course, full code is in the repository.. and as there's a lot of it, that's where it'll stay rather than being plastered here.
However, the update function is interesting, so let's go through it as you've already seen the view and projection matrix functions and they're no different ( bar some minor changes as I updated my Vector and Matrix classes. )
void Camera::update()
{
    Matrix3 rotation;
    Vector3 eye;
    mTransformMatrix.decompose(&rotation, &eye);

    const Vector3 target(eye + rotation.getFrontVector());
    const Vector3 up(rotation.getUpVector());
    mViewMatrix = createViewMatrix(eye, target, up);

    switch (mType) {
        case CAMERA_2D:
            mProjectionMatrix = create2dProjectionMatrix(m2dLeft, m2dBottom, m2dRight, m2dTop, mNearClip, mFarClip);
            break;
        case CAMERA_3D:
            mProjectionMatrix = create3dProjectionMatrix(mNearClip, mFarClip, mFov, mAspectRatio);
            break;
        default:
            break;
    };
}
Really nothing to it, is there?
The other update function, which takes a target vector, is exactly the same; minus the fact that target is already calculated.
All we do, is decompose our transform into position and rotation, then grab a few vectors out of the rotation matrix.
As our matrices are column-major, our right, up and front vectors correspond to the first, second and third columns of the rotation matrix. We then just pass in all our variables to generate our view and projection matrices for the correct mode and we're sorted.
Models and Projections and Views, oh my!
With our Camera object created, we need to pass it through to our Fixed Function Rendering Context. As we've abstracted everything out, this involves passing it to the Graphics System, then through to the Platform Context ( GLES1RenderContext or GLXRenderContext for instance ) and finally to the FixedFunctionRenderContext itself. This is just trivial interface work, and it's in the repository if you're curious.
What we're really interested in, is what goes into the Fixed Function context itself...
A Camera is a funny thing, in that generally it'll always be moving. Therefore, there is absolutely no point in optimising the calls to check if the camera has moved. With this in mind, we can write our setCamera function as follows:
void FixedFunctionContext::setFixedFunctionCamera(Camera* const camera)
{
    glMatrixMode(GL_PROJECTION);
    glLoadMatrixf(camera->getProjectionMatrix().getData());

    glMatrixMode(GL_MODELVIEW);
    glLoadMatrixf(camera->getViewMatrix().getData());

    mCamera = camera;
}
Nice and simple.
Now, as you'll notice, I've not stored any of this in a stack. I'm just manipulating the top matrices in the Projection and ModelView stacks. While I did go on about having a stack in the last article, the truth of the matter is that relying on OpenGL to do your matrices properly can get you into bother. The GL specs only guarantee a minimum of two matrices in the Projection and Texture stacks, and sixteen in the ModelView stack. This might be enough for you, or it might not. Either way, dealing with them ourselves is far preferable.
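To show how little work "dealing with them ourselves" actually is, here's a minimal CPU-side stack. This is just a sketch - the MatrixStack name isn't part of the engine, and it assumes our Matrix4 default-constructs to identity:
<pre>
#include <vector>
#include "Matrix4.h"	// assuming our Maths Matrix4, which default-constructs to identity

class MatrixStack
{
	public:
		MatrixStack() : mStack(1U) {}	// start off with a single identity matrix on top

		/// Duplicate the top matrix - the equivalent of glPushMatrix.
		void push() { mStack.push_back(mStack.back()); }

		/// Throw the top matrix away - the equivalent of glPopMatrix.
		void pop() { if (mStack.size() > 1U) mStack.pop_back(); }

		/// Grab the top matrix to read or fiddle with.
		Matrix4& top() { return mStack.back(); }

	private:
		std::vector<Matrix4> mStack;	// grows as deep as we need, no GL-imposed limits
};
</pre>
No arbitrary depth limits, and we can see exactly what's on it at any point.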
Our Camera object is also rather simple, and there's nothing stopping us from creating multiple Cameras dotted about the place, and switching the views around as and when we like. We also have the ability to go in and fiddle with the matrices once the Cameras have updated themselves, anyway... so stack manipulation isn't really a big deal for us here. We can safely ignore it.
Anyway, now we have set our Projection and View matrices, we need to concatenate the Model's matrix over the top of the View matrix so that our meshes render where we think they will.
This is much easier than it sounds, don't worry!
In our drawMeshFixedFunction function, we just add the following little bits.
We pull the transform matrix out of the Mesh at the same time as we pull out the Index and Vertex Buffers, and the Material.
Then, when we're about to draw the object itself, we change the whole bottom segment to this:
<pre>
glPushMatrix();
glMultMatrixf(transform->getTranspose().getData());

disableFixedFunctionTexturing(currentTextureUnit); // Disable any excess texture units

switch (indexBuffer->getFormat()) {
	case IndexBuffer::FORMAT_FLOAT:
		glDrawElements(GL_TRIANGLES, indexBuffer->getSize(), GL_FLOAT, indexBuffer->getData());
		break;
	case IndexBuffer::FORMAT_UNSIGNED_BYTE:
		glDrawElements(GL_TRIANGLES, indexBuffer->getSize(), GL_UNSIGNED_BYTE, indexBuffer->getData());
		break;
	case IndexBuffer::FORMAT_UNSIGNED_SHORT:
		glDrawElements(GL_TRIANGLES, indexBuffer->getSize(), GL_UNSIGNED_SHORT, indexBuffer->getData());
		break;
	default:
		break;
};

mFixedFunctionLastTexUnit = currentTextureUnit;

glPopMatrix();
</pre>
All we've done is add three lines.. very important lines, however!
We tell OpenGL to give us a new matrix, pushing the current one down a level. This is our View matrix. We then multiply this with our current Transform matrix - or more specifically, the Transpose of our current Transform matrix. Now we draw as normal. Finally, we pop our modified Transform matrix off the stack, so we're back to the View matrix sitting on top again.
So, why the transpose of the Transform matrix? It's to do with matrix multiplication. If we multiply two matrices A and B to get C, the value C(3, 4) is row 3 of A multiplied by column 4 of B. So this means that potentially we're fiddling about in the wrong areas of the matrices. You can test this by removing the getTranspose() call and then trying to translate the mesh - it sort of does this weird warping thing instead! Not ideal. See this wiki page for more info, along with a nice illustration of the matrix multiplication I've described - http://en.wikipedia.org/wiki/Matrix_multiplication#Illustration
Anyway, yes.. we transpose it, which effectively flips the values about so that A(2, 3) becomes A(3, 2) and then when we multiply it, the rows and columns match up to the correct values - so we multiply the position with the position, rather than part of the rotation.
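If you'd like to see the row-times-column rule spelt out in code, here's a bare-bones multiply of two 4x4 column-major arrays. This is purely illustrative - it's not the engine's Matrix4 implementation:
<pre>
// C(row, col) is the dot product of row 'row' of A with column 'col' of B.
// With column-major storage, element (row, col) lives at index (col * 4 + row).
void multiply4x4(const float* const a, const float* const b, float* const c)
{
	for (unsigned int row(0U); row < 4U; ++row) {
		for (unsigned int col(0U); col < 4U; ++col) {
			float sum(0.0F);
			for (unsigned int k(0U); k < 4U; ++k)
				sum += a[k * 4U + row] * b[col * 4U + k];
			c[col * 4U + row] = sum;
		}
	}
}
</pre>
Swap which matrix is transposed and you can see exactly which elements end up being mixed together - which is the warping effect described above.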
Right, what about the Texture matrices?
That's even simpler than dealing with the camera matrices.
<pre>
void FixedFunctionContext::setFixedFunctionTextureMatrix(Matrix4* const matrix)
{
	glMatrixMode(GL_TEXTURE);
	glLoadMatrixf(matrix->getData());
	glMatrixMode(GL_MODELVIEW);
}
</pre>
Now, this only has any effect on a Fixed Function pipeline, really. We also can't really use it for now as we have no Texture support.. but literally, all we're doing is switching the matrix mode to the Texture matrix, replacing it with our specified matrix, and returning to the ModelView.
We'll have a play with the texture matrix when we get to loading up textures and displaying them.
Surprisingly, that's us done.
We just need to fiddle with the example code a bit so it does something a bit more useful, now.
== Cameras are Fun ==
I'm going to paste the full example code here, then we'll go through it.. and I'll highlight the interesting little gotchas involved.
#include <cstdio> #include <cstdlib> #include "../../Graphics/GraphicsSystem.h" #include "../../Graphics/Context/FixedFunctionContext.h" #include "../../Events/EventSystem.h" #include "../../Input/InputSystem.h" #include "../../Input/Keyboard.h" #include "../../Input/Pad.h" #include "../../Graphics/Camera.h" #include "../../Graphics/Mesh.h" #include "../../Graphics/VertexBuffer.h" #include "../../Graphics/IndexBuffer.h" #include "../../Graphics/Material.h" #include "../../Maths/Matrix4.h" using namespace GLESGAE; void controlCamera(Camera* const camera, Controller::KeyboardController* const keyboard); Mesh* makeSprite(); int main(void) { EventSystem* eventSystem(new EventSystem); InputSystem* inputSystem(new InputSystem(eventSystem)); GraphicsSystem* graphicsSystem(new GraphicsSystem(GraphicsSystem::FIXED_FUNCTION_RENDERING)); if (false == graphicsSystem->initialise("GLESGAE Fixed Function Test", 640, 480, 16, false)) { //TODO: OH NOES! WE'VE DIEDED! return -1; } Mesh* mesh(makeSprite()); Camera* camera(new Camera(Camera::CAMERA_3D)); camera->getTransformMatrix().setPosition(Vector3(0.0F, 0.0F, -5.0F)); #ifndef GLES2 FixedFunctionContext* const fixedContext(graphicsSystem->getFixedContext()); if (0 != fixedContext) { fixedContext->enableFixedFunctionVertexPositions(); fixedContext->enableFixedFunctionVertexColours(); } #endif eventSystem->bindToWindow(graphicsSystem->getWindow()); Controller::KeyboardController* myKeyboard(inputSystem->newKeyboard()); while(false == myKeyboard->getKey(Controller::KEY_ESCAPE)) { controlCamera(camera, myKeyboard); eventSystem->update(); inputSystem->update(); graphicsSystem->beginFrame(); graphicsSystem->setCamera(camera); graphicsSystem->drawMesh(mesh); graphicsSystem->endFrame(); } delete graphicsSystem; delete inputSystem; delete eventSystem; delete mesh; return 0; } Mesh* makeSprite() { float vertexData[32] = {// Position - 16 floats -1.0F, 1.0F, 0.0F, 1.0F, 1.0F, 1.0F, 0.0F, 1.0F, 1.0F, -1.0F, 0.0F, 1.0F, -1.0F, -1.0F, 0.0F, 1.0F, // Colour - 16 floats 0.0F, 1.0F, 0.0F, 1.0F, 1.0F, 0.0F, 0.0F, 1.0F, 0.0F, 0.0F, 1.0F, 1.0F, 1.0F, 1.0F, 1.0F, 1.0F}; unsigned int vertexSize = 32 * sizeof(float); unsigned char indexData[6] = { 0, 1, 2, 2, 3, 0 }; unsigned int indexSize = 6 * sizeof(unsigned char); VertexBuffer* newVertexBuffer = new VertexBuffer(reinterpret_cast<unsigned char*>(&vertexData), vertexSize); newVertexBuffer->addFormatIdentifier(VertexBuffer::FORMAT_POSITION_4F, 4U); newVertexBuffer->addFormatIdentifier(VertexBuffer::FORMAT_COLOUR_4F, 4U); IndexBuffer* newIndexBuffer = new IndexBuffer(reinterpret_cast<unsigned char*>(&indexData), indexSize, IndexBuffer::FORMAT_UNSIGNED_BYTE); Material* newMaterial = new Material; Matrix4* newTransform = new Matrix4; return new Mesh(newVertexBuffer, newIndexBuffer, newMaterial, newTransform); } void controlCamera(Camera* const camera, Controller::KeyboardController* const keyboard) { Vector3 newPosition; camera->getTransformMatrix().getPosition(&newPosition); if (true == keyboard->getKey(Controller::KEY_ARROW_DOWN)) newPosition.z() -= 0.01F; if (true == keyboard->getKey(Controller::KEY_ARROW_UP)) newPosition.z() += 0.01F; if (true == keyboard->getKey(Controller::KEY_ARROW_LEFT)) newPosition.x() -= 0.01F; if (true == keyboard->getKey(Controller::KEY_ARROW_RIGHT)) newPosition.x() += 0.01F; camera->getTransformMatrix().setPosition(newPosition); camera->update(); }
We haven't really changed much from the stock example still.
We've added some camera controls, so that when we press the arrow keys, our camera translates in the x and z axes.
We also start our camera -5 units from the centre of the world... and this is one of the gotchas. Why is this a gotcha? Make your right hand into a fist, point your index finger straight in front of you, your thumb directly up and your middle finger to the left. This is what your camera is like in OpenGL. Now, move your whole hand to the left. If you imagine you were looking down it, you would see your objects moving right. Fair enough. But our camera seems to go in the opposite direction! This is because OpenGL's right vector points in a different direction from where ours does.. this is the Right Handed and Left Handed systems biting us in the bum. You can either get used to it and handle it within your code, or you can go back to the Camera's view matrix creation and negate the right vector when we set its values into the matrix.
Oh and a quick note... DirectX and OpenGL disagree on which way the right vector points ( and which way to point down the Z axis! ).. so converting between the two handedness conventions is always a good skill to have when dealing with multiple APIs.
Now for the second gotcha.
Change the camera to a 2D camera and run the example again.
Why is the quad larger than the full screen?
Look back at the mesh description we have:
<pre>
float vertexData[32] = {// Position - 16 floats
	-1.0F, 1.0F, 0.0F, 1.0F,
	1.0F, 1.0F, 0.0F, 1.0F,
	1.0F, -1.0F, 0.0F, 1.0F,
	-1.0F, -1.0F, 0.0F, 1.0F,
	// Colour - 16 floats
	0.0F, 1.0F, 0.0F, 1.0F,
	1.0F, 0.0F, 0.0F, 1.0F,
	0.0F, 0.0F, 1.0F, 1.0F,
	1.0F, 1.0F, 1.0F, 1.0F};
</pre>
Our mesh points are between -1.0F and 1.0F.. which is a full screen quad when we're rendering in 3d mode, and our origin is the centre of the screen. ( OK, well technically even that is wrong - it would be -0.5 and 0.5... but back before we had view and projection matrices, it did match up. )
In 2d mode, the origin is at the bottom left corner of the screen.. so -1.0 ( or -0.5 for the nit picky ) is way beyond the left of the screen where you can't see it. So, if we change this to between 0.0 and 1.0, we should have it working ( or 0.0 and -1.0 in our case due to the right vector nonsense. )
You might be wondering why I haven't "fixed" the right hand vector to actually point in the correct direction.. this is because I haven't decided on a loadable mesh format yet... some mesh formats have Z as up, instead of Y for example. Some use positive Z and others use negative Z. Granted, we should define what the engine uses, and then convert whatever formats we load up to our format, but as I haven't really decided what our format actually is yet, I'm leaving things open! At any rate, you know exactly what's going on and why, and you know where to fix it if it annoys you.
I also suggest reading through the Shader Based Transforms article.. as we explain a bit more of what the camera is up to there: GLESGAE:Shader Based Transformations
== Building the Example ==
In the SVN there are Makefiles already set up for you.. just trigger make -f MakefileES1.pandora or whatever your chosen configuration is, and it'll happily build for you and spit out a GLESGAE.pandora binary for you to run.
== Next Time ==
We're going to do much the same for the Shader Based pipeline.. this requires a bit more work to our shader system, but we can reuse our Camera object as-is.
= GLESGAE - Shader Based Transformations =
== Introduction ==
Whereas most Fixed Function vs Shader Based stuff does go off in different directions, I'm going to have to ask you to read the Fixed Function Transformations article first: GLESGAE:Fixed Function Transformations
Reason being, we went through some of the Camera code, and the whole Right Vector issue... both of which are still relevant; the right vector issue remaining until we sort out a proper mesh format - which'll happen when we switch over to VBOs.
== Fast Track ==
We'll be moving to SVN revision 6... svn co -r 6 http://svn3.xp-dev.com/svn/glesgae/trunk/ glesgae
== Meet the... lack of Stacks? ==
ES 2.0 has no Matrix Stacks.
You don't have to deal with switching matrix mode, and pushing and popping anything on a stack - they don't exist.
You're expected to calculate the final matrix and send it over to a shader for processing, and this is where the ubiquitous mvp transform comes in - the ModelViewProjection big daddy matrix.
Shaders are good in that they offload work from the CPU to the GPU, but there are still tasks the CPU is better at performing; or indeed, as in this case, where it's the only thing that can access the memory required. If we were to send over the three matrices separately and have the GPU combine them in whatever variants it needed, it would actually be slower than concatenating them once on the CPU and sending the result over pre-built. In this case it comes down to sending one matrix rather than three. Such is the balancing act you have to perform when dealing with shader based rendering.
But first, we need to write a uniform system....
== Shader Uniform Management ==
As we've gone over before when writing the initial Shader Render Context, the Shader is king of the renderer.
It describes what attributes we're going to use, as well as what it does to them before they get to the screen.
Uniforms are used to give extra information from the engine to the shader, so that we can do more things with it.
There are several ways in which we can deal with this, but my preferred method is to have a bunch of updaters - one for each uniform we're interested in. This way, we keep code duplication down, and we decouple what's actually in the shaders from the engine. We can go all out and have funky caching mechanisms to ensure we only update when we really need to... but for us, we'll just go with a simple system where each shader runs through a map of updaters to update its own uniforms. Keeps things nice and straightforward. You can also learn more about a method such as this from Game Engine Gems 2, if you're interested, which also describes deferred OpenGL calls amongst other handy gems of information.
So, let's create our Shader Uniform Updater interface:
<pre>
#ifndef _SHADER_UNIFORM_UPDATER_H_
#define _SHADER_UNIFORM_UPDATER_H_

#if defined(GLX)
	#include "GLee.h"
#elif defined(GLES1)
	#if defined(PANDORA)
		#include <GLES/gl.h>
	#endif
#elif defined(GLES2)
	#if defined(PANDORA)
		#include <GLES2/gl2.h>
	#endif
#endif

namespace GLESGAE
{
	class Camera;
	class Material;
	class Matrix4;

	class ShaderUniformUpdater
	{
		public:
			virtual ~ShaderUniformUpdater() {}

			/// Pure virtual to ensure you overload the update function!
			virtual void update(const GLint uniformId, const Camera* const camera, const Material* const material, const Matrix4* const transform) = 0;

		protected:
			/// Protected constructor to ensure you derive from this, and don't create empty updaters.
			ShaderUniformUpdater() {}
	};
}

#endif
</pre>
Another nice and simple class. See a pattern here? Engine code should be simple and get out the way, so that you can extend it and do what you need when writing your application or game!
Anyway... the only slight oddity here is that we do define an include for GLES1 as a "just in case" measure. If the header gets pulled in by accident, it'll still compile. We're not doing anything particularly fancy here anyway, as we just define an interface method that takes in a GLint, Camera, Material and Transform.
We then add a map to the Shader Based Render Context, and a couple of functions:
<pre>
public:
	/// Add a uniform updater
	void addUniformUpdater(const std::string& uniformName, ShaderUniformUpdater* const updater);

	/// Clear uniform updaters
	void clearUniformUpdaters() { mUniformUpdaters.clear(); }

protected:
	/// Update all uniforms
	void updateUniforms(Material* const material, Matrix4* const transform);

private:
	std::map<std::string, ShaderUniformUpdater*> mUniformUpdaters;
</pre>
Obviously stick them in the right parts of the file.
We make the add and clear functions public so that we can access them directly, and protect the updateUniforms call as we'll be handling this in-class. Of course, our actual map itself is private, out the way of meddling hands.
The addUniformUpdater call does exactly what you think it does.. it just adds the updater to the map with the uniformName as the key. Again, I'm being lazy and not checking anything, but you really should... and we shall have a clean-up day soon which'll involve writing a logging system and error checking macros. Makes things easier when certain platforms log things in peculiar ways, or when asserting could actually destroy the call stack before you get to see it!
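If you want a belt-and-braces version in the meantime, a checked addUniformUpdater is only a couple of lines. This is just a sketch - it uses a plain printf as a stand-in until we have that logging system:
<pre>
#include <cstdio>	// temporary - for the printf warning below

void ShaderBasedContext::addUniformUpdater(const std::string& uniformName, ShaderUniformUpdater* const updater)
{
	// Warn if we're about to stomp over an existing updater for this uniform.
	if (mUniformUpdaters.find(uniformName) != mUniformUpdaters.end())
		printf("addUniformUpdater: replacing updater for '%s'\n", uniformName.c_str());

	mUniformUpdaters[uniformName] = updater;
}
</pre>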
The updateUniforms function is relatively straightforward too.. I'll print it up here though and we can go through it:
<pre>
void ShaderBasedContext::updateUniforms(Material* const material, Matrix4* const transform)
{
	std::vector<std::pair<std::string, GLint> > uniforms(mCurrentShader->getUniformArray());
	for (std::vector<std::pair<std::string, GLint> >::iterator itr(uniforms.begin()); itr < uniforms.end(); ++itr)
		mUniformUpdaters[itr->first]->update(itr->second, mCamera, material, transform);
}
</pre>
We take the material and transform in because we don't actually store these as member data. We do store the Camera as member data, however, so we can access this from wherever we like. Again, naughty points for not checking pointers.. you should! Same for ensuring that the uniform updater we're looking for actually exists.
But in essence, all we do here is run through the uniform array on our current shader, match up with an updater, and call its update function.
And we call this function just after our bindShader call in the drawMesh function.
== Wait, what Camera?! ==
I've mentioned storing the Camera as member data, except we haven't written any code to pass it through yet!
We shall do that now.
<pre>
public:
	/// Set the Camera
	void setShaderBasedCamera(Camera* const camera) { mCamera = camera; }

private:
	Camera* mCamera;
</pre>
Done!
What, you were expecting something a bit more substantial like in the Fixed Function Context?
Remember, in a Shader Based Context, you have to do everything through shaders.. your entire rendering pipeline is flexible and up to you to decide what to do with it. There are no matrix stacks to deal with, no fixed light count, and no easy way out. If you want to render something, you push it through a shader, and feed that shader with enough information to make it do what you want... which in our case will be the Camera matrices which we had pushed back on to the Projection and ModelView stacks last time.
Of course, you'll need to set the Graphics System to call the correct setCamera function, but this is trivial, and in the repository for the curious.
== The MVP Uniform ==
Now we need to send our Camera matrices to our shader.
As we've mentioned, we'll precompute this so we don't have to send three matrices over all the time.
<pre>
#ifndef _MVP_UNIFORM_UPDATER_H_
#define _MVP_UNIFORM_UPDATER_H_

#include "../../Graphics/ShaderUniformUpdater.h"

class MVPUniformUpdater : public GLESGAE::ShaderUniformUpdater
{
	public:
		MVPUniformUpdater() : GLESGAE::ShaderUniformUpdater() {}

		void update(const GLint uniformId, const GLESGAE::Camera* const camera, const GLESGAE::Material* const material, const GLESGAE::Matrix4* const transform);
};

#endif
</pre>
You might wonder where on earth this file is, considering the include definition is a bit mad.
This updater lives in the example folder - away from the core engine. Uniform updaters are generally app specific, so we store them outside the engine. Granted, a ModelViewProjection updater is probably going to be the same for every application, but note the "probably" .. if we didn't have the ability to change it, we'd likely need to hack around it if we needed to!
Anyway, our object file is nice and simple, as it just contains the one method:
#include "MVPUniformUpdater.h" #if defined(GLX) #include "../../Graphics/GLee.h" #elif defined(GLES2) #if defined(PANDORA) #include <GLES2/gl2.h> #endif #endif #include "../../Maths/Matrix4.h" #include "../../Graphics/Camera.h" #include "../../Graphics/Material.h" using namespace GLESGAE; void update(const GLint uniformId, Camera* const camera, Material* const material, Matrix4* const transform) { const Matrix4& view(camera->getViewMatrix()); const Matrix4& projection(camera->getProjectionMatrix()); const Matrix4 modelViewProjection(transform->getTranspose() * view * projection); glUniformMatrix4fv(uniformId, 1U, false, modelViewProjection.data()); }
While we technically don't need the GL includes again as our base ShaderUniformUpdater class pulls them in, it's good practice to show what files we are actually using.. so we'll pull them in again.
Again, this file lives in our Example folder, hence the odd include paths.
There isn't much here that should be unfamiliar to you; we grab the view and projection matrix, then multiply them all together for the concatenated ModelViewProjection matrix.
The only odd bit - and the most important in this case - is the glUniformMatrix4fv call.
OpenGL has a bunch of uniform functions to send over matrices, vectors, and single variables of float, byte, int, bool, etc... it can also set "samplers" ( used for texturing, which we'll get to soon enough ) and attributes, as we've seen before when setting up the Render Context. I suggest you read either the OpenGL or OpenGL ES spec sheets from Khronos, or either the Red Book or the OpenGL ES 2.0 Programming Guide for more information on what they do, and what they are. In this case, we're sending over a single 4x4 matrix of float values - hence the "Matrix4fv" notation.
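For a flavour of that family of calls, here are a few of the common shapes. The uniform locations ( timeUniform and friends ) and the modelViewProjection matrix are made up purely for the example:
<pre>
// Single float and int uniforms.
glUniform1f(timeUniform, 0.016F);
glUniform1i(samplerUniform, 0);	// samplers take the index of a texture unit

// Vectors - the 'v' variants take a count and a pointer.
const float tint[4] = { 1.0F, 0.5F, 0.25F, 1.0F };
glUniform4fv(tintUniform, 1, tint);

// Matrices - count, transpose flag ( must be GL_FALSE on ES 2.0 ), data pointer.
glUniformMatrix4fv(mvpUniform, 1, GL_FALSE, modelViewProjection.getData());
</pre>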
Now we just update the example code, and we're done!
#include <cstdio> #include <cstdlib> #include "../../Graphics/GraphicsSystem.h" #include "../../Graphics/Context/ShaderBasedContext.h" #include "../../Events/EventSystem.h" #include "../../Input/InputSystem.h" #include "../../Input/Keyboard.h" #include "../../Input/Pad.h" #include "../../Graphics/Camera.h" #include "../../Graphics/Mesh.h" #include "../../Graphics/VertexBuffer.h" #include "../../Graphics/IndexBuffer.h" #include "../../Graphics/Material.h" #include "../../Graphics/Shader.h" #include "../../Maths/Matrix4.h" #include "MVPUniformUpdater.h" using namespace GLESGAE; void controlCamera(Camera* const camera, Controller::KeyboardController* const keyboard); Mesh* makeSprite(Shader* const shader); Shader* makeSpriteShader(); int main(void) { EventSystem* eventSystem(new EventSystem); InputSystem* inputSystem(new InputSystem(eventSystem)); GraphicsSystem* graphicsSystem(new GraphicsSystem(GraphicsSystem::SHADER_BASED_RENDERING)); if (false == graphicsSystem->initialise("GLESGAE Shader Based Test", 800, 480, 16, false)) { //TODO: OH NOES! WE'VE DIEDED! return -1; } graphicsSystem->getShaderContext()->addUniformUpdater("u_mvp", new MVPUniformUpdater); Mesh* mesh(makeSprite(makeSpriteShader())); Camera* camera(new Camera(Camera::CAMERA_2D)); camera->getTransformMatrix().setPosition(Vector3(0.0F, 0.0F, -5.0F)); eventSystem->bindToWindow(graphicsSystem->getWindow()); Controller::KeyboardController* myKeyboard(inputSystem->newKeyboard()); while(false == myKeyboard->getKey(Controller::KEY_ESCAPE)) { controlCamera(camera, myKeyboard); eventSystem->update(); inputSystem->update(); graphicsSystem->beginFrame(); graphicsSystem->setCamera(camera); graphicsSystem->drawMesh(mesh); graphicsSystem->endFrame(); } delete graphicsSystem; delete inputSystem; delete eventSystem; delete mesh; return 0; } Mesh* makeSprite(Shader* const shader) { float vertexData[32] = {// Position - 16 floats -1.0F, 1.0F, 0.0F, 1.0F, 1.0F, 1.0F, 0.0F, 1.0F, 1.0F, -1.0F, 0.0F, 1.0F, -1.0F, -1.0F, 0.0F, 1.0F, // Colour - 16 floats 0.0F, 1.0F, 0.0F, 1.0F, 1.0F, 0.0F, 0.0F, 1.0F, 0.0F, 0.0F, 1.0F, 1.0F, 1.0F, 1.0F, 1.0F, 1.0F}; unsigned int vertexSize = 32 * sizeof(float); unsigned char indexData[6] = { 0, 1, 2, 2, 3, 0 }; unsigned int indexSize = 6 * sizeof(unsigned char); VertexBuffer* newVertexBuffer = new VertexBuffer(reinterpret_cast<unsigned char*>(&vertexData), vertexSize); newVertexBuffer->addFormatIdentifier(VertexBuffer::FORMAT_POSITION_4F, 4U); newVertexBuffer->addFormatIdentifier(VertexBuffer::FORMAT_COLOUR_4F, 4U); IndexBuffer* newIndexBuffer = new IndexBuffer(reinterpret_cast<unsigned char*>(&indexData), indexSize, IndexBuffer::FORMAT_UNSIGNED_BYTE); Material* newMaterial = new Material; newMaterial->setShader(shader); Matrix4* newTransform = new Matrix4; return new Mesh(newVertexBuffer, newIndexBuffer, newMaterial, newTransform); } void controlCamera(Camera* const camera, Controller::KeyboardController* const keyboard) { Vector3 newPosition; camera->getTransformMatrix().getPosition(&newPosition); if (true == keyboard->getKey(Controller::KEY_ARROW_DOWN)) newPosition.z() -= 0.01F; if (true == keyboard->getKey(Controller::KEY_ARROW_UP)) newPosition.z() += 0.01F; if (true == keyboard->getKey(Controller::KEY_ARROW_LEFT)) newPosition.x() -= 0.01F; if (true == keyboard->getKey(Controller::KEY_ARROW_RIGHT)) newPosition.x() += 0.01F; camera->getTransformMatrix().setPosition(newPosition); camera->update(); } #if defined(GLX) #include "../../Graphics/GLee.h" #endif Shader* makeSpriteShader() { std::string vShader = 
"attribute vec4 a_position; \n" "attribute vec4 a_colour; \n" "varying vec4 v_colour; \n" "uniform mat4 u_mvp; \n" "void main() \n" "{ \n" " gl_Position = u_mvp * a_position; \n" " v_colour = a_colour; \n" "} \n"; std::string fShader = #ifdef GLES2 "precision mediump float; \n" #endif "varying vec4 v_colour; \n" "void main() \n" "{ \n" " gl_FragColor.grb = v_colour.rgb; \n" "} \n"; #ifndef GLES1 Shader* newShader(new Shader()); newShader->createFromSource(vShader, fShader); return newShader; #else return 0; #endif }
Most of this you should be familiar with now.. the bits we're interested in are in the vertex shader:
<pre>
std::string vShader =
	"attribute vec4 a_position;            \n"
	"attribute vec4 a_colour;              \n"
	"varying vec4 v_colour;                \n"
	"uniform mat4 u_mvp;                   \n"
	"void main()                           \n"
	"{                                     \n"
	"    gl_Position = u_mvp * a_position; \n"
	"    v_colour = a_colour;              \n"
	"}                                     \n";
</pre>
We've added a new uniform - a mat4, or 4x4 matrix - called u_mvp, and then we multiply this by the position attribute we get passed in. This effectively multiplies each vertex of the mesh by the ModelViewProjection matrix, which is pretty much what the fixed function pipeline does for you.
Of course, the other function of relevance is graphicsSystem->getShaderContext()->addUniformUpdater("u_mvp", new MVPUniformUpdater); which actually adds the uniform updater in the first place, else nothing works!
== Cameras are More Fun ==
If you run it with the 2d camera, you get the giant quad from last time ( albeit with different colours as we swizzle the RGB values in the fragment shader; just like we did in the Shader Render Context example. ) We know why this is, so we can safely ignore it.
Our camera's "right" is still in the wrong direction. Again, we know this, and again we can ignore it.
However... set the camera to be 3D and run again.
Where's the quad?
It's behind you! ... no, seriously, it is.. press the up arrow for a bit and it'll come into view.
You've got the right to go "what the smeggity smeg is going on?!" about now.
What's happened is that our camera is now pointing down the Z axis in the opposite direction from what it was in the fixed function pipeline. How? We haven't touched the camera code! And we're doing everything the Fixed Function pipeline does... right?
Actually, no...
== Matrix Bashing ==
OpenGL multiplies in reverse order from what you expect.
So our model * view * projection matrix, actually ends up projection * view * model.
Well actually, again, it doesn't... remember that in the Fixed Function pipeline there's a bunch of stacks, and importantly, your model and view matrices all sit on the same stack. If you multiply these in reverse order from top to bottom, you DO get model * view ( or model * model * model * view, etc... ), which means our final transform is really projection * modelview. Remembering that in matrix land, A * B is not necessarily B * A, we have extended fun: if we just go ahead and shove that into our MVP updater, we still don't get anything on screen due to transpose issues.
Therefore, our "fixed" MVP updater is actually this:
<pre>
void MVPUniformUpdater::update(const GLint uniformId, const Camera* const camera, const Material* const material, const Matrix4* const transform)
{
	const Matrix4& view(camera->getViewMatrix());
	const Matrix4& projection(camera->getProjectionMatrix());
	const Matrix4 modelView((*transform).getTranspose() * view.getTranspose());
	const Matrix4 modelViewProjection(projection.getTranspose() * modelView);

	glUniformMatrix4fv(uniformId, 1U, false, modelViewProjection.getTranspose().getData());
}
</pre>
Lots of transposing, so you're right to automatically assume that this is not ideal, and probably rather heavy to calculate.
However, we can get rid of the final transpose by changing the vertex shader to gl_Position = a_position * u_mvp; because, again, A*B doesn't equal B*A and this also holds true when multiplying against vectors.
So, what can we do, instead?
If we don't transpose all the matrices and have them in the order *we* think is right ( model * view * projection ), the camera system is in left handed mode - the opposite of what OpenGL expects - but our Z behaves the way we think it should ( positive down the Z axis as things get further away. )
However, if we do transpose them, the camera system is in right handed mode, which is what OpenGL expects. You can test this: if you use gluPerspective and glFrustum in place of my camera code, you end up with the same result as transposing all these matrices.
So.. to ensure that our Z points the way we want in both Fixed Function and Shader Based contexts, we need to go fix the Fixed Function Context:
<pre>
void FixedFunctionContext::setFixedFunctionCamera(Camera* const camera)
{
	glMatrixMode(GL_PROJECTION);
	glLoadMatrixf(camera->getProjectionMatrix().getData());
	glMatrixMode(GL_MODELVIEW);

	Matrix4& viewMatrix(camera->getViewMatrix());
	viewMatrix(0U, 2U) = -viewMatrix(0U, 2U);
	viewMatrix(1U, 2U) = -viewMatrix(1U, 2U);
	viewMatrix(2U, 2U) = -viewMatrix(2U, 2U);
	viewMatrix(3U, 2U) = -viewMatrix(3U, 2U);
	glLoadMatrixf(viewMatrix.getData());

	mCamera = camera;
}
</pre>
And now our Fixed Function and Shader Pipelines match up!
Whew! Fun times, eh?
If you have issues understanding what's going on here, I suggest you check the code out and play about.
The engine is still nice and simple, so you should be easily able to mess about, break things, and fix things again to see how they work.
== Building the Example ==
In the SVN there are Makefiles already set up for you.. just trigger make -f MakefileES2.pandora or whatever your chosen configuration is, and it'll happily build for you and spit out a GLESGAE.pandora binary for you to run.
== Next Time ==
We're going to make a start on loading up textures and displaying them on our lovely test quad, then make a start at turning our current vertex array format into a vertex buffer object format.
= GLESGAE - Dealing with Textures =
== Introduction ==
Textures are nice and simple to deal with.
Well... when they're in the format you want, they're nice and simple!
There are many many texture compression formats, from ETC1 and PVRTC to the usual S3TC set of DXT1, DXT3 and DXT5.
Not every platform supports the same set of compression formats either.
Our Pandoras will support ETC1 due to it being an OpenGL ES standard, and PVRTC due to the PowerVR chipset.
It'll also support uncompressed textures such as RGB and RGBA - and these are what we'll load up today in the good ol' BMP format.
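If you're curious which compressed formats a particular device actually exposes, you can peek at the extension string once a context is up. A quick sketch, assuming the usual GL headers from Texture.h are already pulled in:
<pre>
#include <cstdio>
#include <cstring>

// Call this with a live GL context - glGetString returns null otherwise.
void printCompressionSupport()
{
	const char* const extensions(reinterpret_cast<const char*>(glGetString(GL_EXTENSIONS)));
	if (0 == extensions)
		return;

	if (0 != strstr(extensions, "GL_OES_compressed_ETC1_RGB8_texture"))
		printf("ETC1 supported\n");
	if (0 != strstr(extensions, "GL_IMG_texture_compression_pvrtc"))
		printf("PVRTC supported\n");
}
</pre>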
== Fast Track ==
We're now on to SVN revision 7... svn co -r 7 http://svn3.xp-dev.com/svn/glesgae/trunk/ glesgae
== Loading a BMP ==
There are oodles of pages and documentation on the BMP format... so we'll just get this done quickly.
Technically, this breaks our original design choice of having a pipeline feed us data as we're manually loading up data and converting it ourselves.. however, to get something up and running quickly, this'll do.
We still have a lot to get through before dealing with our data pipeline, and it's much more interesting to get something working now in a state we can upgrade later, than write screeds of code we can't test!
So yes, our quick BMP loader.
=== Texture.h ===
<pre>
#ifndef _TEXTURE_H_
#define _TEXTURE_H_

#if defined(GLX)
	#include "GLee.h"
#elif defined(PANDORA)
	#if defined(GLES1)
		#include <GLES/gl.h>
	#elif defined(GLES2)
		#include <GLES2/gl2.h>
	#endif
#endif

#include <string>

namespace GLESGAE
{
	class Texture
	{
		public:
			enum TextureFormat
			{ INVALID_FORMAT
			, RGBA
			, RGB
			};

			Texture() : mId(), mData(0), mWidth(), mHeight(), mType(INVALID_FORMAT) {}

			/// Load as BMP
			void loadBMP(const std::string& fileName);

			/// Retrieve this Texture's GL id
			GLuint getId() const { return mId; }

			/// Get Width
			unsigned int getWidth() const { return mWidth; }

			/// Get Height
			unsigned int getHeight() const { return mHeight; }

		protected:
			/// Create GL Id
			void createGLid();

		private:
			GLuint mId;
			unsigned char* mData;
			unsigned int mWidth;
			unsigned int mHeight;
			TextureFormat mType;
	};
}

#endif
</pre>
Not much going on here.. we're storing a data pointer, the GLuint reference, the width and height, and the type of format we've loaded - be it RGB or RGBA.
To the meat!
=== Texture.cpp ===
#include "Texture.h" #include <cstdio> using namespace GLESGAE; void Texture::loadBMP(const std::string& fileName) { FILE *file; unsigned long size; // size of the image in bytes. unsigned short int planes; // number of planes in image (must be 1) unsigned short int bpp; // number of bits per pixel (must be 24) // make sure the file is there. if ((file = fopen(fileName.c_str(), "rb"))==NULL) return; // seek through the bmp header, up to the width/height: fseek(file, 18, SEEK_CUR); if (1 != fread(&mWidth, 4, 1, file)) return; // read the height if (1 != fread(&mHeight, 4, 1, file)) return; // read the planes if (1 != fread(&planes, 2, 1, file)) return; if (1 != planes) // Only supporting single layer BMP just now return; // read the bpp if (1 != fread(&bpp, 2, 1, file)) return; if (24 == bpp) { size = mWidth * mHeight * 3U; // RGB mType = RGB; } else if (32 == bpp) { size = mWidth * mHeight * 4U; // RGBA mType = RGBA; } // seek past the rest of the bitmap header. fseek(file, 24, SEEK_CUR); // read the data. mData = new unsigned char[size]; if (mData == 0) return; if (1 != fread(mData, size, 1, file)) return; if (24 == bpp) { for (unsigned int index(0U); index < size; index += 3U) { // reverse all of the colors. (bgr -> rgb) unsigned char temp(mData[index]); mData[index] = mData[index + 2U]; mData[index + 2U] = temp; } } else if (32 == bpp) { for (unsigned int index(0U); index < size; index += 4U) { // reverse all of the colors. (bgra -> rgba) unsigned char temp(mData[index]); mData[index] = mData[index + 2U]; mData[index + 2U] = temp; } } createGLid(); delete [] mData; } void Texture::createGLid() { glGenTextures(1, &mId); glBindTexture(GL_TEXTURE_2D, mId); // Enable some filtering glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR); // Load up our data into the texture reference switch (mType) { case RGB: glTexImage2D(GL_TEXTURE_2D, 0U, GL_RGB, mWidth, mHeight, 0U, GL_RGB, GL_UNSIGNED_BYTE, mData); break; case RGBA: glTexImage2D(GL_TEXTURE_2D, 0U, GL_RGBA, mWidth, mHeight, 0U, GL_RGBA, GL_UNSIGNED_BYTE, mData); break; case INVALID_FORMAT: default: break; }; }
Honestly, there's not much going on here.
We load up the file, we skip the header to the width and height info and read them out.
Then, we check how many layers there are and ensure there's only the one.
After that, we take the Bits Per Pixel value out.. we can load up RGB - or 24bit - and RGBA - or 32bit - and we check for these and adjust our size accordingly as 24bit has 3 components - R, G and B - and 32bit has 4 components - R, G, B and A.
We skip the rest of the header as we're not interested in it, then load up the actual data itself.
Now the fun bit.
BMP actually stores information in BGR/BGRA format. We need it as RGB/RGBA so we need to swizzle the texture.
We do this by running through the texture, and literally swapping the values about manually.
Once that's done, we trigger GL to actually create the Texture ID and upload the texture to it.
Check the GL guide for what's going on here and the parameters, as there's quite a few.
Once GL has our texture though, we can delete our copy of the data from memory.
== Updating Material ==
Materials are generally the objects which define how things get drawn. How shiny they are, their colours, etc... and of course, their textures.
We therefore need to add some code to Material so we can add and fiddle with our textures!
<pre>
/// Add a Texture
void addTexture(Texture* const texture) { mTextures.push_back(texture); }

/// Grab a Texture
Texture* const getTexture(unsigned int index) const { return mTextures[index]; }

/// Grab amount of Textures we have
unsigned int getTextureCount() const { return mTextures.size(); }
</pre>
And that's all we need, really.
I had already done bits of Material's texture support, so it contains a vector of Texture* objects, and all it really needed was addTexture and getTextureCount.
The whole file is in SVN anyway.
== Mesh Fiddling ==
Of course, sending a texture over is nice and all, but we need to tell GL how on Earth to draw the thing - and that's where Texture Co-ordinates come in. These map the 2D image to the vertex points on your mesh.
Let's look at our little quad as it stands:
<pre>
float vertexData[32] = {// Position - 16 floats
	-1.0F, 1.0F, 0.0F, 1.0F,
	1.0F, 1.0F, 0.0F, 1.0F,
	1.0F, -1.0F, 0.0F, 1.0F,
	-1.0F, -1.0F, 0.0F, 1.0F,
	// Colour - 16 floats
	0.0F, 1.0F, 0.0F, 1.0F,
	1.0F, 0.0F, 0.0F, 1.0F,
	0.0F, 0.0F, 1.0F, 1.0F,
	1.0F, 1.0F, 1.0F, 1.0F};
</pre>
So we're going from top left, to top right, to bottom right, then bottom left in our Position. Let's remove the Colour values and replace them with some texture co-ordinates to match up.
<pre>
float vertexData[24] = {// Position - 16 floats
	-1.0F, 1.0F, 0.0F, 1.0F,
	1.0F, 1.0F, 0.0F, 1.0F,
	1.0F, -1.0F, 0.0F, 1.0F,
	-1.0F, -1.0F, 0.0F, 1.0F,
	// Tex Coords - 8 floats
	1.0F, 1.0F,		// top right
	0.0F, 1.0F,		// top left
	0.0F, 0.0F,		// bottom left
	1.0F, 0.0F};	// bottom right
</pre>
Hold on, we're going the wrong way with the Texture Co-ordinates! Or are we?
The other fun thing about BMP is that it stores the image upside down, so I'm compensating for that here.
We also only store two floats per vertex, as texture co-ordinates are normalised X/Y values into the texture... this means we need to change the data description as well:
<pre>
VertexBuffer* newVertexBuffer = new VertexBuffer(reinterpret_cast<unsigned char*>(&vertexData), vertexSize);
newVertexBuffer->addFormatIdentifier(VertexBuffer::FORMAT_POSITION_4F, 4U);
newVertexBuffer->addFormatIdentifier(VertexBuffer::FORMAT_TEXTURE_2F, 4U);	// replacing the Colour one we used to have.
</pre>
We also need to add a Texture to the Material here, so we'll modify the function to take this in, and then add it to the Material:
<pre>
Mesh* makeSprite(Shader* const shader, Texture* const texture)
{
	...
	...
	Material* newMaterial = new Material;
	newMaterial->setShader(shader);
	newMaterial->addTexture(texture);

	Matrix4* newTransform = new Matrix4;
</pre>
We dummy out the shader stuff on Fixed Function pipelines, so we can safely use it without issue.
== Shaders and Textures ==
We'll get the harder system done and out the way first...
Our shaders are going to need updated to deal with the fact we'll be sending in Texture Co-ordinates and a Texture itself.
So let's go and do just that:
<pre>
std::string vShader =
	"attribute vec4 a_position;            \n"
	"attribute vec2 a_texCoord0;           \n"
	"varying vec2 v_texCoord0;             \n"
	"uniform mat4 u_mvp;                   \n"
	"void main()                           \n"
	"{                                     \n"
	"    gl_Position = u_mvp * a_position; \n"
	"    v_texCoord0 = a_texCoord0;        \n"
	"}                                     \n";

std::string fShader =
#ifdef GLES2
	"precision mediump float;                                \n"
#endif
	"varying vec2 v_texCoord0;                               \n"
	"uniform sampler2D s_texture0;                           \n"
	"void main()                                             \n"
	"{                                                       \n"
	"    gl_FragColor = texture2D(s_texture0, v_texCoord0);  \n"
	"}                                                       \n";
</pre>
In the vertex shader, we're expecting a vec2 - a float vector of two components - with the name a_texCoord0. We then just copy it into the varying without touching it to pass it through to the fragment shader.
The fragment shader picks this up, and looks for a uniform of the special 2D Sampler type called s_texture0.
Then the interesting bit; we call a built-in function - texture2D - with our texture uniform and texture co-ordinate, and pass it to the fragment colour. That's it.. we'll get textures assuming we push in the bits we need.. so let's do that!
Remember our Uniform System?
Well, we've added a new uniform here, so we need another updater:
<pre>
#ifndef _TEXTURE0_UNIFORM_UPDATER_H_
#define _TEXTURE0_UNIFORM_UPDATER_H_

#include "../../Graphics/ShaderUniformUpdater.h"

class Texture0UniformUpdater : public GLESGAE::ShaderUniformUpdater
{
	public:
		Texture0UniformUpdater() : GLESGAE::ShaderUniformUpdater() {}

		void update(const GLint uniformId, const GLESGAE::Camera* const camera, const GLESGAE::Material* const material, const GLESGAE::Matrix4* const transform);
};

#endif
</pre>
Pretty empty header file.. most of these will be like this really.
#include "Texture0UniformUpdater.h" #if defined(GLX) #include "../../Graphics/GLee.h" #elif defined(GLES2) #if defined(PANDORA) #include <GLES2/gl2.h> #endif #endif #include "../../Maths/Matrix4.h" #include "../../Graphics/Camera.h" #include "../../Graphics/Material.h" #include "../../Graphics/Texture.h" using namespace GLESGAE; void Texture0UniformUpdater::update(const GLint uniformId, const Camera* const camera, const Material* const material, const Matrix4* const transform) { Texture* const texture(material->getTexture(0U)); glUniform1i(texture->getId(), 0U); }
Actually, this isn't much different!
We're only doing one thing of note here - setting the sampler uniform to 0, which points it at texture unit 0 ( this is texture 0, after all... ) The texture itself gets bound to that unit by the context, which we'll sort out in a moment.
Pretty straight forward so far, isn't it?
However, we have no texture support in the Shader Based Context.. so we're best adding that now.
== Shader Based Context Fiddling ==
We're going to do just a little bit of fiddling to the Shader Based Context to get it rendering our texture just now.
We will need to revisit this soon, but we need to get VBOs in first as that changes the renderer again.
So, in the header file, we'll add a new function and a new member variable:
<pre>
public:
	/// Update Textures
	void updateTextures(const Material* const material);

private:
	GLenum mLastTextureUnit;
</pre>
We store the last texture unit we fiddled with as an optimisation. It's bad form to constantly turn texture units on and off, especially if you only just messed with the same one in the previous frame!
Now let's look at that updateTextures function:
<pre>
void ShaderBasedContext::updateTextures(const Material* const material)
{
	const unsigned int textureCount(material->getTextureCount());
	for (unsigned int currentTexture(0U); currentTexture < textureCount; ++currentTexture) {
		GLenum currentTextureUnit(GL_TEXTURE0 + currentTexture);
		if (currentTextureUnit != mLastTextureUnit) {
			glActiveTexture(currentTextureUnit);
			mLastTextureUnit = currentTextureUnit;
		}

		Texture* const texture(material->getTexture(currentTexture));
		glBindTexture(GL_TEXTURE_2D, texture->getId());
	}
}
</pre>
All we're doing here, is running through all the textures we may have on our material, and ensuring we have enough texture units available for them.
If we've already activated the texture unit, we leave it alone, but we always bind the texture as it may have changed. Nice and simple.
We call this function in the draw loop.. around here will do:
<pre>
bindShader(material->getShader());
updateUniforms(material, transform);
updateTextures(material);
</pre>
And we can remove the variable that was storing currentTextureUnit in this function, as we don't need it any more.
Unfortunately, we didn't add in the Texture Co-ordinate bits to our big switch statement, and due to the whole attribute binding part, it's a bit trickier than we'd like.
We're only going to quickly push it in to get something showing, but we'll need to reinvestigate this area next time for VBOs anyway.
Either way, we just need to add the following to the switch statement:
<pre>
// Texture
case VertexBuffer::FORMAT_TEXTURE_2F:
	glVertexAttribPointer(a_texCoord0, 2, GL_FLOAT, GL_FALSE, 0, vertexBuffer->getData() + itr->getOffset());
	break;
</pre>
And the last little bit that we need, which is rather important, is adding this to the constructor:
glEnable(GL_TEXTURE_2D);
Which enables texturing in the first place!
== Fixed Function Requirements ==
Amusingly, we've already done everything we need to do for the Fixed Function pipeline to render Textures.
That was easy, wasn't it?
However, we do need to update our mesh description in main.cpp, so we'll do that:
<pre>
#ifndef GLES2
	FixedFunctionContext* const fixedContext(graphicsSystem->getFixedContext());
	if (0 != fixedContext) {
		fixedContext->enableFixedFunctionVertexPositions();
	}
#endif
</pre>
That's us.. we only had to remove the Vertex Colours description as we're not sending them over any more, and Texture Co-ordinates are handled slightly differently from everything else, so we catch them dynamically when the mesh hits the context.
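For the curious, "catching them dynamically" amounts to the standard client-state texture co-ordinate calls when the Format switch hits a texture entry - something along these lines, though treat it as a sketch of the idea rather than the exact repository code:
<pre>
// Texture Co-ordinates - enable the client state for this unit and point GL at the data.
case VertexBuffer::FORMAT_TEXTURE_2F:
	glClientActiveTexture(GL_TEXTURE0 + currentTextureUnit);
	glEnableClientState(GL_TEXTURE_COORD_ARRAY);
	glTexCoordPointer(2, GL_FLOAT, 0, vertexBuffer->getData() + itr->getOffset());
	++currentTextureUnit;
	break;
</pre>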
All source is in the SVN if you need a reminder as to what we've done.
== A Concatenated Example ==
And now let's edit our example code to pull in our texture and display it.
Most of this should be familiar, and we've already changed it but for completeness sake:
#include <cstdio> #include <cstdlib> #include "../../Graphics/GraphicsSystem.h" #include "../../Graphics/Context/FixedFunctionContext.h" #include "../../Graphics/Context/ShaderBasedContext.h" #include "../../Events/EventSystem.h" #include "../../Input/InputSystem.h" #include "../../Input/Keyboard.h" #include "../../Input/Pad.h" #include "../../Graphics/Camera.h" #include "../../Graphics/Mesh.h" #include "../../Graphics/VertexBuffer.h" #include "../../Graphics/IndexBuffer.h" #include "../../Graphics/Material.h" #include "../../Graphics/Shader.h" #include "../../Graphics/Texture.h" #include "../../Maths/Matrix4.h" #include "MVPUniformUpdater.h" #include "Texture0UniformUpdater.h" using namespace GLESGAE; void controlCamera(Camera* const camera, Controller::KeyboardController* const keyboard); Mesh* makeSprite(Shader* const shader, Texture* const texture); Shader* makeSpriteShader(); int main(void) { EventSystem* eventSystem(new EventSystem); InputSystem* inputSystem(new InputSystem(eventSystem)); GraphicsSystem* graphicsSystem(new GraphicsSystem(GraphicsSystem::FIXED_FUNCTION_RENDERING)); if (false == graphicsSystem->initialise("GLESGAE Texturing Test", 800, 480, 16, false)) { //TODO: OH NOES! WE'VE DIEDED! return -1; } #ifndef GLES2 FixedFunctionContext* const fixedContext(graphicsSystem->getFixedContext()); if (0 != fixedContext) { fixedContext->enableFixedFunctionVertexPositions(); } #endif #ifndef GLES1 ShaderBasedContext* const shaderContext(graphicsSystem->getShaderContext()); if (0 != shaderContext) { shaderContext->addUniformUpdater("u_mvp", new MVPUniformUpdater); shaderContext->addUniformUpdater("s_texture0", new Texture0UniformUpdater); } #endif Texture* texture(new Texture()); texture->loadBMP("Texture.bmp"); Mesh* mesh(makeSprite(makeSpriteShader(), texture)); Camera* camera(new Camera(Camera::CAMERA_3D)); camera->getTransformMatrix().setPosition(Vector3(0.0F, 0.0F, 5.0F)); eventSystem->bindToWindow(graphicsSystem->getWindow()); Controller::KeyboardController* myKeyboard(inputSystem->newKeyboard()); while(false == myKeyboard->getKey(Controller::KEY_ESCAPE)) { controlCamera(camera, myKeyboard); eventSystem->update(); inputSystem->update(); graphicsSystem->beginFrame(); graphicsSystem->setCamera(camera); graphicsSystem->drawMesh(mesh); graphicsSystem->endFrame(); } delete graphicsSystem; delete inputSystem; delete eventSystem; delete mesh; return 0; } Mesh* makeSprite(Shader* const shader, Texture* const texture) { float vertexData[32] = {// Position - 16 floats -1.0F, 1.0F, 0.0F, 1.0F, 1.0F, 1.0F, 0.0F, 1.0F, 1.0F, -1.0F, 0.0F, 1.0F, -1.0F, -1.0F, 0.0F, 1.0F, // Tex Coords - 8 floats 1.0F, 1.0F, // top right 0.0F, 1.0F, // top left 0.0F, 0.0F, // bottom left 1.0F, 0.0F}; // bottom right unsigned int vertexSize = 25 * sizeof(float); unsigned char indexData[6] = { 0, 1, 2, 2, 3, 0 }; unsigned int indexSize = 6 * sizeof(unsigned char); VertexBuffer* newVertexBuffer = new VertexBuffer(reinterpret_cast<unsigned char*>(&vertexData), vertexSize); newVertexBuffer->addFormatIdentifier(VertexBuffer::FORMAT_POSITION_4F, 4U); newVertexBuffer->addFormatIdentifier(VertexBuffer::FORMAT_TEXTURE_2F, 4U); IndexBuffer* newIndexBuffer = new IndexBuffer(reinterpret_cast<unsigned char*>(&indexData), indexSize, IndexBuffer::FORMAT_UNSIGNED_BYTE); Material* newMaterial = new Material; newMaterial->setShader(shader); newMaterial->addTexture(texture); Matrix4* newTransform = new Matrix4; return new Mesh(newVertexBuffer, newIndexBuffer, newMaterial, newTransform); } void controlCamera(Camera* const camera, 
Controller::KeyboardController* const keyboard) { Vector3 newPosition; camera->getTransformMatrix().getPosition(&newPosition); if (true == keyboard->getKey(Controller::KEY_ARROW_DOWN)) newPosition.z() -= 0.01F; if (true == keyboard->getKey(Controller::KEY_ARROW_UP)) newPosition.z() += 0.01F; if (true == keyboard->getKey(Controller::KEY_ARROW_LEFT)) newPosition.x() -= 0.01F; if (true == keyboard->getKey(Controller::KEY_ARROW_RIGHT)) newPosition.x() += 0.01F; camera->getTransformMatrix().setPosition(newPosition); camera->update(); } #if defined(GLX) #include "../../Graphics/GLee.h" #endif Shader* makeSpriteShader() { std::string vShader = "attribute vec4 a_position; \n" "attribute vec2 a_texCoord0; \n" "varying vec2 v_texCoord0; \n" "uniform mat4 u_mvp; \n" "void main() \n" "{ \n" " gl_Position = u_mvp * a_position; \n" " v_texCoord0 = a_texCoord0; \n" "} \n"; std::string fShader = #ifdef GLES2 "precision mediump float; \n" #endif "varying vec2 v_texCoord0; \n" "uniform sampler2D s_texture0; \n" "void main() \n" "{ \n" " gl_FragColor = texture2D(s_texture0, v_texCoord0); \n" "} \n"; #ifndef GLES1 Shader* newShader(new Shader()); newShader->createFromSource(vShader, fShader); return newShader; #else return 0; #endif }
If we're compiling for ES1 we need to ensure we're using FIXED_FUNCTION_RENDERING, else with ES2 it'll be SHADER_BASED_RENDERING.
Other than that, they should be exactly identical!
You might wonder why we don't get alpha working properly.. we haven't set any blending modes up.. we'll get to that, but for now, we have textures and the reign of terror of the coloured quad is at an end! All hail Mr Smiley Face!
== Building the Example ==
In the SVN there are Makefiles already set up for you.. just trigger make -f MakefileES2.pandora or whatever your chosen configuration is, and it'll happily build for you and spit out a GLESGAE.pandora binary for you to run.
== Next Time ==
We've one more bit of core functionality to add - VBOs.. and that'll be next.
= GLESGAE - Making another Mesh - Vertex Buffer Objects =
== Introduction ==
We've been using Vertex Arrays up till now as they're nice and quick to get up and going with.
However, they have a penalty in that their data is still stored client side, and must be sent to the OpenGL server to be processed before drawing. This is where Vertex Buffer Objects come into play as they've already been processed and sit server-side, with only an identifier being kept in the client.
We will therefore convert our current Vertex Array implementation into a Vertex Buffer Object - VBO - implementation to gain some speed.
== Fast Track ==
We're now on to SVN revision 8... svn co -r 8 http://svn3.xp-dev.com/svn/glesgae/trunk/ glesgae
== Creating the Buffers ==
The creation of VBOs is really just a logical step onwards from what we already have. We need data to put into a VBO in the first place, and we already have this within our two Buffer classes, so wrapping it within a VBO becomes immensely easy.
We need to add a new variable to both classes to store the VBO Id:
unsigned int mVboId;
Along with an accessor:
unsigned int getVboId() const { return mVboId; }
And that's pretty much it apart from the constructors.
=== Vertex Buffer VBO constructors ===
Our Vertex Buffer class contains two main constructors - one for when we know the format, and another for generating it on the fly.
Both of these must be edited to initialise our mVboId to 0, and then have the following code in the function body:
<pre>
glGenBuffers(1U, &mVboId);
glBindBuffer(GL_ARRAY_BUFFER, mVboId);
glBufferData(GL_ARRAY_BUFFER, mSize, mData, GL_STATIC_DRAW);
</pre>
And that's it. Our Vertex Buffer automatically generates a VBO for us.
=== Index Buffer VBO constructor ===
This is much the same, apart from being an ELEMENT buffer, so the code changes to:
<pre>
glGenBuffers(1U, &mVboId);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, mVboId);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, mSize, mData, GL_STATIC_DRAW);
</pre>
Nice and easy!
== Rendering VBOs ==
We're lucky in that our current Vertex Array method is pretty close to how a VBO is actually used.. we therefore only need to make very small changes to our Fixed Function and Shader Based contexts.
We need to bind the correct VBO before we start sending over any vertex or index information, and this is done by the glBindBuffer( ... ); call again.. so for the Vertex Buffer it would be:
glBindBuffer(GL_ARRAY_BUFFER, vertexBuffer->getVboId());
This goes just before the giant Format switch in both Contexts. The Index Buffer is bound in much the same way, just before the Index Format switch:
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indexBuffer->getVboId());
This just leaves tidying up how we send the information over.
VBOs are already sent over and live server-side in the GL context, therefore any reference to the data client side is null and void.
Therefore, where we may have something like:
glColorPointer(4, GL_UNSIGNED_BYTE, 0, vertexBuffer->getData() + itr->getOffset());
In the Fixed Function Context, we need to change this to:
glColorPointer(4, GL_UNSIGNED_BYTE, 0, reinterpret_cast<char*>(itr->getOffset()));
As the data has already been sent over, the final parameter is now treated as a byte offset into the bound buffer rather than a pointer to client data ( hence the cast to a char pointer. )
The final draw command which deals with the indices is changed in much the same manner.
Where we currently may have:
glDrawElements(GL_TRIANGLES, indexBuffer->getSize(), GL_UNSIGNED_SHORT, indexBuffer->getData());
It needs to be:
glDrawElements(GL_TRIANGLES, indexBuffer->getSize(), GL_UNSIGNED_SHORT, 0);
As again, the buffer is already sitting in the GL Server ready to be used - we don't need to send anything else over.
And that's it.
We now render using VBOs.
== Whoa, hold up! What was this GL_STATIC_DRAW thing? ==
When we were creating our buffers, we specified GL_STATIC_DRAW in the last parameter of glBufferData. This is to indicate to GL that this is a static buffer - we won't be changing this - but we'll be drawing it many many times.
You also have GL_DYNAMIC_DRAW for when you are going to be changing the contents of the buffer... but take note that as soon as you call glBufferData with a data pointer, that data is copied to the server as it is at that instance.. so if you mess with the data in your program and don't update it with OpenGL, it'll draw whatever you originally set it to! So be careful!
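If you do go down the GL_DYNAMIC_DRAW route, the usual way to push fresh contents up is glBufferSubData on the bound buffer. A quick sketch, assuming our Vertex Buffer exposes a getSize() accessor much like the Index Buffer does:
<pre>
// Re-upload the client-side data after we've fiddled with it.
// glBufferSubData overwrites part ( or all ) of the buffer without reallocating it.
void updateVertexBuffer(VertexBuffer* const vertexBuffer)
{
	glBindBuffer(GL_ARRAY_BUFFER, vertexBuffer->getVboId());
	glBufferSubData(GL_ARRAY_BUFFER, 0, vertexBuffer->getSize(), vertexBuffer->getData());
}
</pre>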
There are a few other parameters you can specify instead, but I suggest you read the Red Book for more information on them.
== Building the Example ==
There is no example this time, as it's just changing how we store and render our Vertex and Index Buffers, so it's all internal code.
== Next Time ==
We take a look at how to manage the mass of objects we now have, and the importance of getting it right.
= GLESGAE - Managing Resources Overview =
== Introduction ==
Now we have some basic graphics rendering going on, we need to look at managing the resources we're starting to assemble before we start on anything more complicated - such as rendering 3D models, processing some logic so they do something, loading in more textures, and so on...
Sadly, this is where things get complicated, as we're now out of the realms of simple demos and into what can literally make or break an engine.
Too many really good engines end up unusable because their resource management is poor or non-existent, and relies too much on the programmer to handle everything when they may not be aware of what is going on under the surface with their resources. Freeing something that you may be finished with, but that something internal isn't, can cause massive problems.
We're also programming in C++ so we have no automatic garbage collection and therefore need to deal with everything ourselves.
Of course, the biggest resource to manage is Memory, which also happens to be the most complicated resource to deal with, as there are many ways in which to make a complete mess. A lot of engines overload the default new and delete operators so they can ensure that everything goes through their own memory manager. Other engines provide a large Singleton Memory Manager which you ask for memory from, and you then use the space it gives back in whatever manner you need.
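To give a flavour of the overloading approach, here's a bare-bones sketch that routes global new and delete through a simple allocation counter - nothing like a real memory manager ( and not thread safe ), but it shows where the hook point is:
<pre>
#include <cstdlib>
#include <new>

static std::size_t totalAllocations(0U);	// crude bookkeeping, purely for illustration

// Global operator new/delete - every 'new Foo' in the program now passes through here.
void* operator new(std::size_t size)
{
	++totalAllocations;
	if (void* const pointer = std::malloc(size))
		return pointer;
	throw std::bad_alloc();
}

void operator delete(void* pointer) throw()
{
	std::free(pointer);
}
</pre>
A real manager would also overload the array forms, track sizes and sources, and hand out memory from its own pools rather than straight malloc.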
Another resource which tends to be forgotten about is logic itself. How you manage your states is just as important as how you manage your memory, as making a mess of how your logic gets called can create drastic performance bottlenecks. Threading isn't a silver bullet here either: if your logic hasn't been separated properly, you end up with all manner of deadlocks, threads overwriting memory all over the place, synchronisation issues, and other hellish nightmares you never want to have. Proper management of logic and states makes things much easier, as anything that's self-contained can be thrown on a thread and processed in parallel.
Finally, there's the art of debugging. It's much better to code defensively and assert as often as you can than to be left tracking down bugs near the end of the project. As such, having some helper functions to test assertions, break if something goes wrong, and log warnings at various levels can be your lifeline in ensuring your logic is sane and any issues are caught during development, rather than by the end user!
== The Resource Manager ==
The first thing we'll be looking at in this section will be The Resource Manager. This is our all important gateway into maintaining the potentially vast amount of data that we'll be pushing through the engine.
Our Resource Manager is going to perform two primary tasks - File IO and the actual runtime management. This requires some careful fiddling, as we need to ensure our Tools use the same data headers as the game. As all of our Tools will be using the same engine as the game, we'll be fine; it is something to be aware of, however, should you want to write Tools outside of the engine. If you want to output files for your game that the Resource Manager will load up, you'll need to ensure the file formats are the same.
The Resource Manager itself will manage multiple Resource Banks. These banks will only manage one type of data; a perfect example of using templates to reuse code! While this might seem restrictive, it's incredibly handy and very optimised. For example, when we start dealing with Entities we'll be able to just set a pointer on the start of the Resource Bank for our Entity set, and just let it run through them all. They'll all be in contiguous memory, so the processor will be able to cache it much more efficiently and speed through them all. That is, of course, assuming that each Entity contains all the data it needs to process directly, and it doesn't require the processor to go fetching pointers to other data elsewhere.
Each Resource will be of a template class, which itself acts like a smart pointer. What is a smart pointer? Well, whereas you could just new or malloc a random pointer anywhere you like, you need to also delete or free it when you're finished. By using a smart pointer, the deletion is taken care of for you. This is especially important for Resources as you may not be aware of something else that is using the Resource at the same time - especially in a threaded environment - and if you go and delete it while something else is depending on it, you've got yourself a fun bug hunt!
So let's not have bug hunts, and use a smart pointer.
== The State Manager ==
Then, we will be looking at The State Manager. Each major area of an application is usually defined as a specific state. For example, you may have an Intro State, a Menu State and a Game State. The Intro leads into the Menu, which leads into the Game. The Game then might jump back to the Menu and loop back and forth until something quits. By separating the logic into these groupings, it makes it much more obvious what is going on while coding.
We'll also be employing the use of a State Stack. This will allow us to push and pop States onto one another; such as pushing a Pause State onto our Game State when we hit pause, and popping it back off when we un-pause. This allows us to keep all the information for the Game State active - but not processed - while the Pause State is in effect.
As you may have worked out from the previous paragraph, each State will be managing its own objects as well. This means that when we pop a State from the stack, it should clean itself up and release all the resources it took to initialise itself and run.
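As a minimal sketch of the stack idea - the State interface here is hypothetical, the real State Manager comes later in this section:
#include <vector>

// Hypothetical State interface - the real one is built later in this section.
class State
{
public:
    virtual ~State() {}
    virtual void update(const float deltaTime) = 0;
    virtual void onPop() = 0;   // release anything this State allocated
};

class StateStack
{
public:
    ~StateStack() { while (false == mStates.empty()) pop(); }

    void push(State* const state) { mStates.push_back(state); }

    void pop()
    {
        State* state(mStates.back());
        mStates.pop_back();
        state->onPop();
        delete state;
    }

    // Only the top-most State is processed; anything underneath stays alive but idle.
    void update(const float deltaTime)
    {
        if (false == mStates.empty())
            mStates.back()->update(deltaTime);
    }

private:
    std::vector<State*> mStates;
};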
== The Memory Manager ==
We'll have a look at implementing a Memory Manager, and wire up the State and Resource Managers to use it.
Our Memory Manager is going to be of the basic heap variety: you ask it for a heap of memory, and you are free to use that heap as you please. This can create some odd syntax - you need to allocate some space from your heap, then construct your object into that space - but it lets us give anything that needs memory its own little portion to do with as it pleases. As most libraries have their own memory management system that you can overload, this idea works perfectly: we just generate a large heap for the library, wire up its allocation utils to use it, and keep it out of the way of everything else.
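The "odd syntax" mentioned above boils down to placement new. A minimal sketch, assuming a hypothetical Heap class and a Bullet object to put into it:
#include <new>      // placement new
#include <cstdlib>

// Hypothetical fixed-size heap - a dumb bump allocator that ignores alignment
// and per-object freeing, purely to show the placement new syntax.
class Heap
{
public:
    explicit Heap(const unsigned int size)
    : mBuffer(static_cast<char*>(std::malloc(size)))
    , mOffset(0U)
    , mSize(size)
    {
    }

    ~Heap() { std::free(mBuffer); }

    void* allocate(const unsigned int size)
    {
        if (mOffset + size > mSize)
            return 0;

        void* pointer(mBuffer + mOffset);
        mOffset += size;
        return pointer;
    }

private:
    char* mBuffer;
    unsigned int mOffset;
    unsigned int mSize;
};

// The odd syntax in action: grab raw space, then construct the object into it.
// Heap heap(1024U * 1024U);
// Bullet* bullet(new (heap.allocate(sizeof(Bullet))) Bullet());
// ...
// bullet->~Bullet();  // objects placed this way need their destructor called by hand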
== The Debug and Logger System ==
As mentioned, we should always code as defensively as possible - checking everything we do to ensure we are working with valid data before acting upon it.
When we need more information as to what's going on, we need to log out to a file or console to see the internals as we don't always have the luxury of running in a debugger - and indeed when threads collide, it's not always apparent which thread caused the issue! So a means of logging information is necessary.
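A rough sketch of the kind of helpers meant here - the macro names are hypothetical, and the real Logger class turns up in the next part:
#include <cstdio>
#include <cassert>

// Log levels: DEBUG messages compile away unless the DEBUG define is set.
#if defined(DEBUG)
    #define GAE_DEBUG(message) std::printf("DEBUG: %s\n", (message))
#else
    #define GAE_DEBUG(message)
#endif

#define GAE_WARNING(message) std::printf("WARNING: %s\n", (message))

// Check a condition, log it, and break if it fails.
#define GAE_ASSERT(condition, message) \
    do { \
        if (!(condition)) { \
            std::printf("ASSERT FAILED: %s\n", (message)); \
            assert(condition); \
        } \
    } while (false)

// Usage:
// GAE_ASSERT(0 != texture, "Texture failed to load");
// GAE_DEBUG("Window created");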
== File Formats and Data Structures ==
Before we start loading things up, we need to actually start declaring structures that we can load. This invariably boils down to fixed structures for faster loading, or variable structures for flexibility at the cost of parsing and slower load times. There are additional pros and cons for each method, but that's effectively what it comes down to.
We won't be creating File Formats and Data Structures for everything under the Sun as that's rather dependent upon what the application is up to, but we will be generating some rules to ensure that we can load and manage them properly, as well as define them properly as and when we need them.
== The Thread Manager ==
Oh yes, there will be threads.
Threading is a black art in that there are many places that it can go wrong - even with the simplest of code. However, threading also gives us the benefit of running code in parallel which generally gives us an instant speed boost.
We'll primarily be using mutexes to ensure our code locks and unlocks shared data as required, and we shall also discuss what kind of code we should thread, what code we should leave alone, and how to write code that's as thread safe as possible. This will all be combined into a Thread Manager to deal with the operation and management of threads, ensuring they're all cleaned up on exit and providing utility functions for our mutex handling.
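On the Pandora we're sitting on Linux, so this will most likely end up wrapping pthreads. A minimal sketch of the sort of mutex wrapper meant here ( a hypothetical class, not the final Thread Manager ):
#include <pthread.h>

// Hypothetical mutex wrapper around pthreads - lock around any shared data access.
class Mutex
{
public:
    Mutex() { pthread_mutex_init(&mMutex, 0); }
    ~Mutex() { pthread_mutex_destroy(&mMutex); }

    void lock() { pthread_mutex_lock(&mMutex); }
    void unlock() { pthread_mutex_unlock(&mMutex); }

private:
    pthread_mutex_t mMutex;
};

// Usage: guard the shared structure while a worker thread may be touching it too.
// sceneMutex.lock();
// sceneList.push_back(newMesh);
// sceneMutex.unlock();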
== The Model Loader ==
Once the boring stuff is out of the way, we'll start having some fun again. We shall be creating a model loader that will manage our own model format.
Of course, to get to our model format, we'll need to write a tool to do so, and this will be where everything discussed previously should fall together.
We'll be writing our tool to deal with both the COLLADA format and the Alias|Wavefront Obj format, so we should have everything covered.
== The Logic Loop ==
Finally, we'll combine all the knowledge we've learned into one big demo for the end of this section and discuss the main loop at the same time.
We'll be creating ourselves a quick and simple little Model Viewer, something rather important considering we'll have created our own Model Format, so will need our own tool to view our own stuff!
== Conclusion ==
This will be a big section, and it covers some very scary bits and pieces, all to do with managing resources - be it data or code.
However, it puts us in good stead to discover the ways in which our logic can be processed, and how to manage that logic through the use of Entities and Scripting.
We'll also be able to generate file formats and data structures for anything we feel like, know how to load them and save them, and all sorts of fun stuff that forms the crux of our engine.
= GLESGAE - The Resource Manager =
== Overview ==
Resources are everything that gets loaded, or generated, in your application. These can be anything from media objects - such as sound and textures - to more abstract things like level data, entities, or even the graphics system itself! So we need some way to describe what a Resource is, and a set of common functionality for dealing with them. Additionally, we probably want to manage these things, and loading/saving them would be a nicety too, if possible.
This section is all about that. We'll be building up a Resource Manager ( which has taken a fair amount of beating from myself to get right ) and a set of classes to describe Resources and Resource Banks - groups, if you will - that can be used to store and categorise our Resources.
== Quick Start ==
We're up to revision 9 on the SVN now: svn co https://svn.xp-dev.com/svn/glesgae/trunk -r 9 ( though it's not currently live... - Stuckie 7th Feb 2012 )
For the brave, there's also the bleedin' edge git repository available: https://github.com/stuckie/glesgae though this may not always compile on anything but Linux.
== What Is A Resource Manager? ==
Traditionally, there's usually just the one lone Resource Manager. This is generally tied in to the system's core file i/o and manages the loading and saving of pretty much everything. For systems such as Android where the entire GL Context can be lost at any time, having a Resource Manager which knows what has been loaded up - and more importantly in this case, how to reload it - is almost a requirement to stave off insanity.
Depending upon your view on Resources, the Manager can also tend to categorise Resources for easier access; for example keeping all Resources for rendering a Mesh close to each other so that the Renderer isn't hopping all over the memory to access things.
== The GLESGAE Resource Manager ==
Our Resource Manager is going to be optional. It can be used, or it can be ignored. This is mostly because we haven't really written a set of file i/o utilities yet, so we're still either generating everything in code or having Texture load up its own bitmaps... but also because there are times when a full blown Resource Manager is just too much for what we want to do. Having options is always nice.
Our Manager is going to manage Resource Banks - or groups of Resources, if you will - and will be primarily responsible for the creation and management of these banks. This makes things much easier to deal with, as once we're finished with file i/o and have all our formats defined, we can just tell the Resource Manager to load up huge blocks of data and categorise them as needed.
== Resource Banks ==
As stated, our Resource Banks will effectively act as fancy arrays of data. Each Bank is templated to a specific type of Resource, and can contain various groups, categorised into whatever we like. This means that we can have one Bank of Mesh objects and group them per Model, Level, or whatever... so when we pull out a group to iterate over, everything in it is specific to that item. This will become important when we get on to Entities and Components, as we'll be wanting to turn Components on and off, and the fastest way of handling on and off objects is simply not to keep the off objects in the same array as the on ones... effectively, a Resource Bank where the groups are Active and Inactive.
== The Resource ==
The Resource itself is a relatively simple template class. It follows a smart pointer-style setup, in that it tracks how many instances of itself there are, and only deletes the actual data it's holding when all instances are gone. It also allows you to recast the Resource into other things - very handy, because a Resource of a base class is a wholly different type from a Resource of a class derived from that base class, so being able to recast back and forth is particularly useful - especially if the derived class has functionality the base does not.
== Smart Pointers ==
Smart Pointers are so named because they track their own instantiation, and only completely kill themselves when all instances are removed.
The Boost Library has shared_ptr, which is what our Resource is partly modelled on. You may be wondering why I didn't just keep Resource simple, and use smart pointers throughout anyway... this is because I do believe that while smart pointers are great for debugging, they're a bit large for normal use - and especially on more constrained hardware. So, I created Resource to imitate the bits I need.
== Building our Resource ==
Our Resource Class needs to be a template. We have to be able to new our object directly into it, and then effectively forget about it. We also need to be able to track it and find it again if we stash it somewhere. Of course, we should also be able to ignore most of this functionality if need be.
=== The Hiccup ===
The current Resource code that's in the SVN is broken... or at least, led to a more broken system. As such, we're going to be a bit confusing here and talk about the fixed system instead. The fixed system requires a few utility classes, however, so we'll have a quick overview of them first.
=== The HashString ===
A HashString is effectively a number that's been generated from a string. As string operations can generally be rather expensive, we convert strings to numbers - comparing two numbers is much faster than comparing two strings, for instance. There's a caveat to this, however, in that the conversion is a one-way process. For the most part that's not a problem; it just makes debugging a bit more interesting when printing out a HashString gives "60263687" instead of "Bob" ( for example ).
There are many functions that can be used for HashStrings - CRC, MD5, and so on - though some have a bigger runtime hit than others. The one I settled upon is a slightly modified version of djb2. You can find it, some others, and more information here: http://www.cse.yorku.ca/~oz/hash.html
The HashString class can be found in the repository.
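For reference, plain djb2 looks roughly like this - the repository version is slightly modified, so treat this as a sketch of the idea rather than the exact class:
// djb2 string hash ( http://www.cse.yorku.ca/~oz/hash.html ) - turns a string
// into a number so comparisons become a single integer compare.
unsigned long hashString(const char* string)
{
    unsigned long hash(5381UL);
    int character(0);

    while (0 != (character = *string++))
        hash = ((hash << 5) + hash) + character;    // hash * 33 + character

    return hash;
}

// Usage:
// const unsigned long bobHash(hashString("Bob")); // prints as a number, not "Bob"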
=== The Logger ===
I've also resurrected an old Logger class I wrote for a previous engine. It can spit out HTML-formatted text, plain text, or write to the console, and has INFO, DEBUG and ERROR log levels - with DEBUG messages only showing if the DEBUG define is set. It could do with some sprucing up, but it's fine for our purposes for now.
The Logger class can also be found in the repository.
=== The Base Resource ===
As Resource itself is a templated class, we can't easily store a pointer to it. So, we do our usual trick of having a Base class that we can grab a pointer to, and redirect to the Template class anything that doesn't require the actual type.
We also store our Location information here, so we can find Resources again should we organise them into Groups and Banks.
#ifndef _BASE_RESOURCE_H_
#define _BASE_RESOURCE_H_

#include "../GLESGAETypes.h"
#include "../Utils/HashString.h"

namespace GLESGAE
{
    namespace Resources
    {
        typedef HashString Type;
        typedef unsigned int Id;
        typedef unsigned int Count;
        typedef unsigned int Group;

        struct Locator
        {
            Id bank;
            Type type;
            Group group;
            Id resource;

            Locator()
            : bank(INVALID)
            , type(INVALID_HASHSTRING)
            , group(INVALID)
            , resource(INVALID)
            {
            }
        };

        // System Resources
        extern Type Camera;
        extern Type Controller;
        extern Type IndexBuffer;
        extern Type Material;
        extern Type Matrix2;
        extern Type Matrix3;
        extern Type Matrix4;
        extern Type Mesh;
        extern Type Shader;
        extern Type ShaderUniformUpdater;
        extern Type Texture;
        extern Type Timer;
        extern Type Vector2;
        extern Type Vector3;
        extern Type Vector4;
        extern Type VertexBuffer;
    }

    class BaseResourceBank;
    class BaseResource
    {
    public:
        virtual ~BaseResource();

        /// Get the Type of this Resource
        Resources::Type getType() const { return mType; }

        /// Get which Group this Resource belongs to
        Resources::Group getGroup() const { return mGroup; }

        /// Get the Id of this Resource
        Resources::Id getId() const { return mId; }

        /// Get the Instance Count of this Resource
        Resources::Count getCount() const { return *mCount; }

    protected:
        /// Set the count
        void setCount(const Resources::Count& count) { *mCount = count; }

        /// Protected constructor as this is a derived class only
        BaseResource(const Resources::Group group, const Resources::Type type, const Resources::Id id);

        /// Overloaded Copy Constructor, so we keep track of how many instances we have.
        BaseResource(const BaseResource& resource);

        /// Overloaded Assignment Operator, so we can keep track of everything.
        BaseResource& operator=(const BaseResource& resource)
        {
            mGroup = resource.mGroup;
            mType = resource.mType;
            mId = resource.mId;
            mCount = resource.mCount;
            return *this;
        }

        Resources::Group mGroup;
        Resources::Type mType;
        Resources::Id mId;
        mutable Resources::Count* mCount;
    };
}

#endif
So, what have we got in here....
We have a Locator struct. This'll be for sending into a Group or Bank so we can find a Resource again if need be. The more information we can fill in, the quicker the search will be.
We also define some handy types - Type, Id, Group and Count - in the Resources namespace, and the class stores members of those types itself; mCount is marked as mutable so we can change it in const functions, the reason for which will become apparent in a minute!
Finally, we extern a bunch of Types for the various system types we have in the engine... this is so we don't need to recalculate their HashStrings at run time - something which can turn out to be a costly procedure!
=== The Resource ===
With our BaseResource defined, we need our actual Resource class itself, which is where the magic happens.
#ifndef _RESOURCE_H_
#define _RESOURCE_H_

#include "BaseResource.h"
#include <cassert>

namespace GLESGAE
{
    template <typename T_Resource> class ResourceBank;

    template <typename T_Resource> class Resource : public BaseResource
    {
        // Resource Bank is a friend to access purge.
        friend class ResourceBank<T_Resource>;
        // It's a friend to itself to deal with the recast functionality.
        template <typename T_ResourceCast> friend class Resource;

    public:
        /// Dummy Constructor for creation of empty Resources.
        Resource()
        : BaseResource(INVALID, INVALID_HASHSTRING, INVALID)
        , mResource(0)
        {
        }

        ~Resource() { remove(); }

        /// Constructor for taking ownership over raw pointers.
        explicit Resource(T_Resource* const resource)
        : BaseResource(INVALID, INVALID_HASHSTRING, INVALID)
        , mResource(resource)
        {
            instance();
        }

        /// Pointer Operator overload to return the actual resource.
        T_Resource* operator-> () { return mResource; }

        /// Const Pointer Operator overload.
        T_Resource* operator-> () const { return mResource; }

        /// Dereference Operator overload to return the actual resource.
        T_Resource& operator* () { return *mResource; }

        /// Const Dereference operator overload.
        const T_Resource& operator* () const { return *mResource; }

        /// Recast Copy into another Resource.
        template <typename T_ResourceCast> Resource<T_ResourceCast> recast()
        {
            Resource<T_ResourceCast> newResource(mGroup, mType, mId, reinterpret_cast<T_ResourceCast*>(mResource));
            delete newResource.mCount;
            newResource.mCount = mCount;
            instance();
            return newResource;
        }

        /// Recast Copy into another Resource.
        template <typename T_ResourceCast> const Resource<T_ResourceCast> recast() const
        {
            Resource<T_ResourceCast> newResource(mGroup, mType, mId, reinterpret_cast<T_ResourceCast*>(mResource));
            delete newResource.mCount;
            newResource.mCount = mCount;
            instance();
            return newResource;
        }

        /// Overloaded Copy Constructor, so we keep track of how many instances we have.
        Resource(const Resource& resource)
        : BaseResource(resource)
        , mResource(resource.mResource)
        {
            instance();
        }

        /// Overloaded Assignment Operator to ensure we keep track of everything properly.
        Resource& operator=(const Resource& resource)
        {
            if (this != &resource) { // if someone's being daft and assigning ourselves, do nothing else we're likely to commit suicide.
                remove();
                BaseResource::operator=(resource);
                mResource = resource.mResource;
                instance();
            }

            return *this;
        }

        /// Overloaded Equals Operator for pointer checking.
        bool operator==(const Resource& resource) const { return (mResource == resource.mResource); }

        /// Overloaded Equals Operator for 0 pointer checking.
        bool operator==(const void* rhs) const { return (reinterpret_cast<void*>(mResource) == rhs); }

        /// Overloaded Not Equals Operator for pointer checking.
        bool operator!=(const Resource& resource) const { return !(*this == resource); }

        /// Overloaded Not Equals Operator for 0 pointer checking.
        bool operator!=(const void* rhs) const { return !(*this == rhs); }

        /// Increase the instance count of this Resource.
        /// Be exceptionally careful with using this manually, you will need to call remove manually as well!
        /// This is however, useful for anything that has to be sent a raw pointer which may leave C-scope.
        /// For example, Physics Engines and Scripting Languages.
        void instance() const
        {
            assert(mCount);
            ++(*mCount);
        }

        /// Remove an instance count of this Resource, and if there are no more instances, purge it.
        /// Calling this manually should be used with caution, and only on a Resource which has been manually instanced.
        /// Otherwise, you will get into a situation whereby you've deleted something which still has a reference.
        /// Again, this is primarily useful for Physics Engines and Scripting Languages only.
        void remove()
        {
            assert(mCount);
            if ((*mCount) > 0U)
                --(*mCount);
            if ((*mCount) == 0U)
                purge();
        }

    protected:
        /// Protected Constructor so we can't create Managed Resources all over the place.
        explicit Resource(const Resources::Group group, const Resources::Type type, const Resources::Id id, T_Resource* const resource)
        : BaseResource(group, type, id)
        , mResource(resource)
        {
            instance();
        }

        /// Delete the actual resource.
        void purge()
        {
            if (0 != mResource) {
                delete mResource;
                mResource = 0;
            }

            if (0 != mCount) {
                delete mCount;
                mCount = 0;
            }
        }

    private:
        T_Resource* mResource;
    };
}

#endif
Now, this is a beast of a class, so let's slowly walk through what we're doing here.
We have a bunch of constructors that do various things.
If we want to create an empty resource, for example, we can do something like Resource<Material> myMaterial; and myMaterial will effectively be a null pointer. This has many benefits as we can pre-allocate arrays of them, and not fill them in as yet, and use them as class variables that get filled later on and not necessarily in the constructor.
We can also feed a new Resource an already existing pointer: Material* myRawMaterial(new Material); Resource<Material> myMaterial(myRawMaterial); and Resource now manages myRawMaterial. The caveat here is that we should not delete myRawMaterial, as Resource will do it automatically for us when myMaterial goes out of scope.
We have overloaded the copy constructor and assignment operator so we can keep track of how many instances we have.
We also overload the pointer operator to give direct access to our data within, and do something slightly special with our equals operator, in that one of them takes a const void pointer. This is so that we can check whether our data is null or not.
Additionally, we can instance and remove ourselves if need be - sort of like the Obj-C retain and release ideology - but this can cause issues so should only be used if you know what you're doing!
Finally, we have a couple of special functions known as recast... which we use to recast one Resource type to another. This is primarily for recasting up or down a class hierarchy - such as a Base Class to a Derived Class - as a Resource<BaseClass> is a completely different type from a Resource<DerivedClass>, even if DerivedClass is derived from BaseClass. This comes in handy when DerivedClass offers additional functionality not found on BaseClass, but is platform specific.
And that's it really.. not that much of a scary class, it just does a lot of things.
You'll also see that the reason we made mCount mutable back in BaseResource is that we need to modify it in our const recast and instance functions.
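Pulling that together, standalone usage looks something like this sketch - Sprite and AnimatedSprite are made-up classes, purely to show the ownership, copying and recast behaviour:
#include "Resources/Resource.h"    // adjust the path to wherever Resource.h lives in your tree

// Hypothetical classes purely for illustration.
class Sprite { public: virtual ~Sprite() {} };
class AnimatedSprite : public Sprite { public: void animate() {} };

void resourceExample()
{
    // Resource takes ownership of the raw pointer - no manual delete needed.
    GLESGAE::Resource<Sprite> sprite(new AnimatedSprite);

    // Copies just bump the instance count; both point at the same data.
    GLESGAE::Resource<Sprite> another(sprite);

    // Recast to get at derived-class functionality.
    GLESGAE::Resource<AnimatedSprite> animated(sprite.recast<AnimatedSprite>());
    animated->animate();

    // Empty Resources behave like null pointers until assigned.
    GLESGAE::Resource<Sprite> empty;
    if (empty == 0)
        empty = sprite;
}   // everything cleans itself up as the Resources go out of scope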
== Building our Resource Banks ==
Resource Banks act as Managers for groups of Resources - effectively, glorified arrays of specific types. They're again split into a Base class and a derived Template class, so let's look at the Base class first.
=== The Base Resource Bank ===
#ifndef _BASE_RESOURCE_BANK_H_
#define _BASE_RESOURCE_BANK_H_

#include "BaseResource.h"

namespace GLESGAE
{
    class BaseResourceBank
    {
        friend class BaseResource;
        friend class ResourceManager;

    public:
        virtual ~BaseResourceBank() {}

        /// Get the Type of this Resource Bank
        Resources::Type getType() const { return mType; }

        /// Get the Id of this Resource Bank
        Resources::Id getId() const { return mId; }

        /// Create a new resource group.
        virtual Resources::Group newGroup() = 0;

        /// Remove Group
        virtual void removeGroup(const Resources::Group groupId) = 0;

    protected:
        /// Protected constructor as this is a derived class only
        BaseResourceBank(const Resources::Id id, const Resources::Type type)
        : mId(id)
        , mType(type)
        {
        }

    private:
        // No Copying Allowed
        BaseResourceBank(const BaseResourceBank&);
        BaseResourceBank& operator=(const BaseResourceBank&);

        Resources::Id mId;
        Resources::Type mType;
    };
}

#endif
Quite a simple class, and it bears much resemblance to BaseResource in that it stores an Id and a Type and not much else. One thing it does do is provide interface ( pure virtual ) functions to create and remove groups, which we will override next.
=== The Resource Bank ===
#ifndef _RESOURCE_BANK_H_
#define _RESOURCE_BANK_H_

#include "Resource.h"
#include "BaseResourceBank.h"

#include <vector>
#include <cassert>

namespace GLESGAE
{
    template <typename T_Resource> class ResourceBank : public BaseResourceBank
    {
        // Resource is a friend to access instance.
        friend class Resource<T_Resource>;

    public:
        ResourceBank(const Resources::Id id, const Resources::Type type)
        : BaseResourceBank(id, type)
        , mResources()
        {
        }

        ~ResourceBank();

        /// Create a new resource group.
        Resources::Group newGroup();

        /// Get an entire Resource group.
        const std::vector<Resource<T_Resource> >& getGroup(const Resources::Group groupId) const;

        /// Remove Group
        void removeGroup(const Resources::Group groupId);

        /// Add a single Resource manually, and return the Resource version.
        Resource<T_Resource>& add(const Resources::Group groupId, const Resources::Type typeId, T_Resource* const resource);

        /// Add a group of Resources, and return the Group Id
        Resources::Group addGroup(const std::vector<Resource<T_Resource> >& resourceGroup);

        /// Get a Resource immediately
        Resource<T_Resource>& get(const Resources::Group groupId, const Resources::Id resourceId);

    private:
        // Scary stuff... an array ( which we can access via Group Id ) holding another array.
        // The second array holds the actual resource, where the id is its array index.
        std::vector<std::vector<Resource<T_Resource> > > mResources;
    };

    template <typename T_Resource>
    ResourceBank<T_Resource>::~ResourceBank()
    {
        for (unsigned int index(0U); index < mResources.size(); ++index)
            removeGroup(index);
    }

    template <typename T_Resource>
    Resources::Group ResourceBank<T_Resource>::newGroup()
    {
        std::vector<Resource<T_Resource> > resourceArray;
        mResources.push_back(resourceArray);

        return mResources.size() - 1U;
    }

    template <typename T_Resource>
    const std::vector<Resource<T_Resource> >& ResourceBank<T_Resource>::getGroup(const Resources::Group groupId) const
    {
        // TODO: Scream if that groupId isn't valid, or doesn't exist...
        assert(groupId != GLESGAE::INVALID);

        return mResources[groupId];
    }

    template <typename T_Resource>
    void ResourceBank<T_Resource>::removeGroup(const Resources::Group groupId)
    {
        // TODO: Scream if that groupId isn't valid or doesn't exist...
        if (groupId == GLESGAE::INVALID)
            return;

        std::vector<Resource<T_Resource> >& resourceArray(mResources[groupId]);
        for (typename std::vector<Resource<T_Resource> >::iterator itr(resourceArray.begin()); itr < resourceArray.end(); ++itr) {
            if (itr->getCount() > 1U) {
                // TODO: scream bloody mary that there's still something using this resource.
            }
        }

        resourceArray.clear();
    }

    template <typename T_Resource>
    Resource<T_Resource>& ResourceBank<T_Resource>::add(const Resources::Group groupId, const Resources::Type typeId, T_Resource* const resource)
    {
        std::vector<Resource<T_Resource> >& resourceArray(mResources[groupId]);
        const Resources::Id resourceId(resourceArray.size());
        resourceArray.push_back(Resource<T_Resource>(groupId, typeId, resourceId, resource));

        return resourceArray[resourceId];
    }

    template <typename T_Resource>
    Resources::Group ResourceBank<T_Resource>::addGroup(const std::vector<Resource<T_Resource> >& resourceGroup)
    {
        const Resources::Group groupId(mResources.size());
        mResources.push_back(resourceGroup);

        return groupId;
    }

    template <typename T_Resource>
    Resource<T_Resource>& ResourceBank<T_Resource>::get(const Resources::Group groupId, const Resources::Id resourceId)
    {
        assert(groupId != GLESGAE::INVALID);
        assert(resourceId != GLESGAE::INVALID);

        return mResources[groupId][resourceId];
    }
}

#endif
The actual Resource Bank itself is a bit more complicated, as it does all the work.
You'll also notice the use of assert everywhere; this is good defensive programming and should be used often! The lack of it is why the Resource system ended up in such a mess in the first place - I wasn't using enough asserts to catch things, and memory was being overwritten all over the place.
Anyway, the basic crux of the Resource Bank is that it stores an array of arrays.
These are indexed firstly by the groupId to find the correct group, followed by the resourceId to find the correct Resource. If you remember our Locator struct we defined in BaseResource, that's how we can find things when needed.
One interesting thing is that we also take in a Type and an Id for the Resource Bank. This is because we can have many Resource Banks attached to the Resource Manager, so each needs its own Id. The Type is used for double checking: even though each Bank is templated to a specific class/struct type, we need to know its HashString Type to be able to search for Banks without needing to know their template type.
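A quick sketch of using a Bank on its own - Enemy is a made-up struct, and I'm assuming an Enemy Type constant has been declared and defined alongside the system Resources in BaseResource:
#include "Resources/ResourceBank.h"    // adjust the path to suit your tree

// Hypothetical resource type, purely for illustration.
struct Enemy { int health; };

// Assumed to be declared and defined alongside the engine's system Resources,
// in the same way as Resources::Camera, Resources::Mesh and friends.
namespace GLESGAE { namespace Resources { extern Type Enemy; } }

void resourceBankExample()
{
    using namespace GLESGAE;

    // The Resource Manager normally hands out the Bank Id; fake it as 0 here.
    ResourceBank<Enemy> enemyBank(0U, Resources::Enemy);

    // One group per level, say.
    const Resources::Group levelOne(enemyBank.newGroup());

    // The Bank takes ownership of the raw pointer and hands back a managed Resource.
    Resource<Enemy>& grunt(enemyBank.add(levelOne, Resources::Enemy, new Enemy));
    grunt->health = 100;

    // A Locator remembers where it lives, so we can find it again later.
    Resources::Locator where;
    where.type = grunt.getType();
    where.group = grunt.getGroup();
    where.resource = grunt.getId();

    // ...and pull it back out by group and resource id.
    Resource<Enemy>& found(enemyBank.get(where.group, where.resource));
    found->health -= 10;
}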
== Building our Resource Manager ==
Our last piece of the system is the Resource Manager itself.
This isn't much different from the Resource Bank above, in that it's more a wrapper around managing arrays.
=== The Resource Manager ===
#ifndef _RESOURCE_MANAGER_H_
#define _RESOURCE_MANAGER_H_

#include "ResourceBank.h"

#include <string>
#include <map>
#include <cassert>

namespace GLESGAE
{
    class ResourceManager
    {
        friend class BaseResource;

    public:
        ResourceManager()
        : mResourceBanks()
        {
        }

        ~ResourceManager()
        {
            assert(mResourceBanks.empty());
        }

        /// Create a new Resource Bank of the specified type
        template <typename T_Resource> ResourceBank<T_Resource>& createBank(const Resources::Type bankType);

        /// Retrieve a Resource Bank
        template <typename T_Resource> ResourceBank<T_Resource>& getBank(const Resources::Id bankId, const Resources::Type bankType);

        /// Delete a Resource Bank
        template <typename T_Resource> void removeBank(const Resources::Id bankId, const Resources::Type bankType);

        /// Load Resource Bank from Disk
        template <typename T_Resource> void loadBankResources(const std::string& bankSet, const Resources::Id bankId, const Resources::Type bankType);

        /// Save Resource Bank to Disk
        template <typename T_Resource> void saveBankResources(const std::string& bankSet, const Resources::Id bankId, const Resources::Type bankType);

    private:
        // A map of resource bank Ids to the resource bank pointers themselves.
        std::map<Resources::Id, BaseResourceBank*> mResourceBanks;
    };

    template <typename T_Resource>
    ResourceBank<T_Resource>& ResourceManager::createBank(const Resources::Type bankType)
    {
        // TODO: check bank doesn't already exist.
        Resources::Id bankId = mResourceBanks.size();
        mResourceBanks[bankId] = new ResourceBank<T_Resource>(bankId, bankType);

        return *(reinterpret_cast<ResourceBank<T_Resource>*>(mResourceBanks[bankId]));
    }

    template <typename T_Resource>
    ResourceBank<T_Resource>& ResourceManager::getBank(const Resources::Id bankId, const Resources::Type)
    {
        // TODO: check bank actually exists.
        assert(bankId != INVALID);

        return *(reinterpret_cast<ResourceBank<T_Resource>*>(mResourceBanks[bankId]));
    }

    template <typename T_Resource>
    void ResourceManager::removeBank(const Resources::Id bankId, const Resources::Type /*typeId*/)
    {
        std::map<Resources::Id, BaseResourceBank*>::iterator bank(mResourceBanks.find(bankId));
        if (bank != mResourceBanks.end()) {
            // TODO: Check that the bankType matches up!
            delete reinterpret_cast<ResourceBank<T_Resource>*>(bank->second);
            mResourceBanks.erase(bank); // erase the entry so the destructor's empty() check can pass
        }
        // TODO: Error that we can't find this bank!
    }

/*
    template <typename T_Resource>
    void ResourceManager::loadBankResources(const std::string& bankSet, const Resources::Id bankId, const Resources::Type bankType)
    {
    }

    template <typename T_Resource>
    void ResourceManager::saveBankResources(const std::string& bankSet, const Resources::Id bankId, const Resources::Type bankType)
    {
    }
*/
}

#endif
There are a few oddities with this class.
Firstly, it's a bit incomplete ( the load/save functionality ) as we still have to write proper File utilities.
Secondly, the destructor doesn't actually wipe out any Resource Banks that may still be left around. The reason for this is that you really should be doing it manually - we do assert if the map isn't empty, however... and later revisions will go over this class again to add the load/save functionality, as well as report which banks have been left behind for the user to fix and clean up.
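Tying the three classes together, end-to-end usage looks roughly like this sketch - again assuming the made-up Enemy struct and Type constant from the Bank example above:
#include "Resources/ResourceManager.h"    // adjust the path to suit your tree

void resourceManagerExample()
{
    using namespace GLESGAE;

    ResourceManager resourceManager;

    // Create a Bank for Enemies; the Manager assigns the Bank Id.
    ResourceBank<Enemy>& enemyBank(resourceManager.createBank<Enemy>(Resources::Enemy));
    const Resources::Id bankId(enemyBank.getId());

    // Fill a group, as before.
    const Resources::Group levelOne(enemyBank.newGroup());
    enemyBank.add(levelOne, Resources::Enemy, new Enemy);

    // Elsewhere, grab the same Bank back through the Manager...
    ResourceBank<Enemy>& sameBank(resourceManager.getBank<Enemy>(bankId, Resources::Enemy));
    Resource<Enemy>& grunt(sameBank.get(levelOne, 0U));
    grunt->health = 50;

    // ...and remember to remove the Bank yourself - the destructor asserts if we forget.
    resourceManager.removeBank<Enemy>(bankId, Resources::Enemy);
}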
== Conclusion ==
A bit of a long one this time... but we needed to get the entire Resource System out in one go.
As we can see, we can use Resource<T> on its own, use it in conjunction with just the Resource Banks, or use the entire Resource System.
We have a helper struct to be able to pass around where Resources can be found if we need to find them elsewhere. This may seem a bit useless for now, but as it's a POD-style struct, we can use these for when we do start to load and save the Resource Banks.
Next up, we'll be looking into the State System, and then onto some Scripting.