Game Concept: Technical

Started by Kay, Sep 02, 2004, 07:38 PM


Kay

Quote
That's a set of standard "modes" with fixed behaviors for each, with the AI switching modes according to certain conditions like the chasing/fleeing ghosts in "Pac-Man," right?

Not really. The core of it is a list of goals. The main program runs a function called Go() for each AI. Go() picks a goal from its list, tries to execute it, then does other stuff like sense its environment. Goals are things like AddGoal("say","How are you?") or AddGoal("walk","n",1). They can be added by anything, so I could for instance write a system command that forcibly adds a goal to an AI to say something.
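To make that concrete, here's a minimal Python sketch of the goal-list idea; AddGoal() and Go() are the names from my code, but the goal storage, the do_* handlers, and the example goals are just illustration, not the real thing.

class AI:
    def __init__(self, name):
        self.name = name
        self.goals = []                 # pending goals as (verb, args) tuples

    def AddGoal(self, verb, *args):
        self.goals.append((verb, args))

    def Go(self):
        # Called by the main loop each tick: pick a goal, try to execute it,
        # then do other stuff like sensing the environment.
        if self.goals:
            verb, args = self.goals.pop(0)
            handler = getattr(self, "do_" + verb, None)
            if handler:
                handler(*args)
        self.SenseEnvironment()         # placeholder for the sensing step

    def do_say(self, text):
        print("%s says: %s" % (self.name, text))

    def do_walk(self, direction, distance):
        print("%s walks %s, %d step(s)" % (self.name, direction, distance))

    def SenseEnvironment(self):
        pass

npc = AI("Niss")
npc.AddGoal("say", "How are you?")
npc.AddGoal("walk", "n", 1)
npc.Go()    # says "How are you?"
npc.Go()    # walks north one step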

AI gets "say" goal, AI chooses "say" goal, AI executes goal by sending message "say foo" to its Controller, Controller sends message to World, World handles command by broadcasting message "foo" to the Views of all characters in the room, Views either display text on the screen or call an AI's SenseInput() function depending on whether they're NPCs, Johnson passes to Smith and... Touchdown!

My AI is like the print function, only much more complicated.

The dialogue system is kind of cheating because it always generates a Say goal. (Sing it like an old video game startup sound: "Say-goal...") As a result, if AI #1 says something and AI #2 hears it, 2 gets a SenseInput about it, recognizes it as dialogue, generates a response, talks, triggers 1 to respond, etc. I don't know what to do about this.

Currently messing with a 2D emotion system and how to display it, using some graphics from "La Pucelle." Douglas Adams said that if a robot were given the ability to be happy or bored, it could figure out the rest itself. Good enough theory, though modern cog. sci. people favor a six-dimensional system of human emotion.

(Using this for unhappy, not excited/bored)
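(For the record, a two-axis emotion value is tiny to represent; this is just an illustration of the happy/bored axes, not the project's code.)

class Emotion:
    def __init__(self):
        self.valence = 0.0    # -1.0 = unhappy .. +1.0 = happy
        self.arousal = 0.0    # -1.0 = bored   .. +1.0 = excited

    def nudge(self, dv, da):
        # Clamp both axes to the [-1, 1] range.
        self.valence = max(-1.0, min(1.0, self.valence + dv))
        self.arousal = max(-1.0, min(1.0, self.arousal + da))

mood = Emotion()
mood.nudge(-0.2, -0.1)    # e.g. hearing the same topic yet again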

Threed

I think your method is called the "reactive paradigm," wherein the stimulus determines the reaction.

Finite-state-machine AI is based on the idea that you can only be in one state at a time. Some states are transient, and once they end the AI falls back to the last state it was in: you might change the state of an AI currently in DefendState to MoveState, and once it has finished moving it falls back to the previous state, provided that nothing nearby (enemies, etc.) triggers a new state like AttackState or BustAGrooveState. (There's a rough sketch of this fallback idea after the examples below.)

"Enemy sighted! Moving to intercept."
(MoveState. There are counters to determine how far the AI will go and whether it will get bored or not.)

"Enemy is within attacking range. Moving to attack!"
(AttackState. Attack while the enemy is within range.)

"Enemy is moving away! Moving to intercept!"
(MoveState, triggered by the enemy moving out of range.)

Essentially, using "states" means they control the flow of the AI by triggering other states, though if the user ever figures out what values determine a state change, they can probably trigger it at will if it's as easy as wielding another weapon and turning six degrees to the left.
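A minimal sketch of that fallback behavior, treating the states as a stack (the state names come from the examples above; everything else is invented):

class StateMachine:
    def __init__(self, default_state):
        self.stack = [default_state]      # bottom of the stack is the default

    def push(self, state):
        self.stack.append(state)          # a transient state interrupts the current one

    def finish_current(self):
        if len(self.stack) > 1:
            self.stack.pop()              # fall back to whatever was interrupted

    @property
    def current(self):
        return self.stack[-1]

ai = StateMachine("DefendState")
ai.push("MoveState")        # "Enemy sighted! Moving to intercept."
ai.push("AttackState")      # "Enemy is within attacking range."
ai.finish_current()         # enemy moves away: back to MoveState
print(ai.current)           # MoveState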

---

How do you handle concurrency in tasks without hardcoding responses? AKA, "I need to pick up fruit: so, I need to move six feet forward, and -- oh crap, someone just attacked me!" and the issue of reactivity? What happens when the AI runs out of goals: are new ones randomly generated, or what?


Quote
As a result, if AI #1 says something and AI #2 hears it, 2 gets a SenseInput about it, recognizes it as dialogue, generates a response, talks, triggers 1 to respond, etc. I don't know what to do about this.

Well, for one, you should probably use weighted values to determine the response to SenseInput(), the same way you would in a state machine. If AI chatter is causing too much of a headache, start weighting values: if AI #1 has a low threshold for yakking, increment, say, a "boredom" counter by 1 every time the AI says something. Then, once boredom > 5 or something similar, trigger another AI goal (this would normally be a state change in a state machine); it could be something as simple as walking away, stopping talking, or just randomly attacking something. Or it could say something offensive that would hopefully make AI #2 know its role!
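A sketch of that counter idea; AddGoal() is from Kay's system, while the threshold, the generate_reply() helper, and the fallback goals are made up for illustration.

BOREDOM_LIMIT = 5

def on_heard_dialogue(ai, speaker, text):
    # Every exchange makes the AI a little more bored of talking.
    ai.boredom += 1
    if ai.boredom > BOREDOM_LIMIT:
        # Past the threshold, do something else instead of replying:
        # walk away, stop talking, or just randomly attack something.
        ai.AddGoal("walk", "away", 3)
        ai.boredom = 0
    else:
        ai.AddGoal("say", generate_reply(ai, speaker, text))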

Threed

Alternatively, you could use a random element to determine the odds of the AI doing something "else" in response to the stimulus instead of the normal response.
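Something as simple as this, say (the 20% odds and the goal names are arbitrary):

import random

def respond(ai, stimulus):
    if random.random() < 0.2:
        ai.AddGoal("wander")                 # one time in five, do something "else"
    else:
        ai.AddGoal("say", "I heard that.")   # the normal response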

I think you'd need to start using abstract classes if you did either of my suggestions, though, so you could still have different AI models.

Kay

bust_a_groove_state = True

Quote
How do you handle concurrency in tasks without hardcoding responses? AKA, "I need to pick up fruit: so, I need to move six feet forward, and -- oh crap, someone just attacked me!" and the issue of reactivity? What happens when the AI runs out of goals: are new ones randomly generated, or what?

By my model:
-AI has goal "pick up fruit." It's not in reach. Add goal "walk towards fruit."
-Start executing goal "walk towards fruit."
-Enemy attacks! Receive sensory inputs (self, "feels", "pain"), ("ninja","is","description").
-Next iteration of ProcessSensoryInputs: "pain" triggers a new goal with higher priority than "walk towards fruit," though I don't know what that goal would be.

Each goal has a priority, and it picks the highest one, so attacking ninjas could pre-empt fruit collection.
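In sketch form (the priorities and goal names are invented for the example):

goals = [
    (2, ("pickup", "fruit")),
    (3, ("walk", "towards fruit")),
]

def add_goal(priority, verb, *args):
    goals.append((priority, (verb,) + args))

def pick_goal():
    # Highest priority wins; ties broken arbitrarily.
    return max(goals, key=lambda g: g[0]) if goals else None

add_goal(10, "fight", "ninja")   # added when the "pain" input comes in
print(pick_goal())               # (10, ('fight', 'ninja')) pre-empts the fruit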

When there are no goals, the AI sits there until its sensory input or physical needs trigger some. Currently I have PhysNeeds calling AddGoal("wander"), so every so often the AIs consider randomly moving around. When I hooked PhysNeeds back up, I found that the program crashed because the AIs were trying to call some forgotten eating code in response to their draining energy... that was cool. I eventually got it working, so every so often they whip some Tofu (description: "evil") out of nowhere and eat it, recovering energy. If I changed their Pulse rate or had movement drain their energy faster, they'd eat more often.
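The PhysNeeds hookup amounts to something like this (the thresholds and odds are invented; only AddGoal() matches the real code):

import random

def phys_needs(ai):
    # Called once per Pulse: drain energy, then add goals as needs come up.
    ai.energy -= 1
    if ai.energy < 20:
        ai.AddGoal("eat")          # whip some Tofu out of nowhere
    elif random.random() < 0.1:
        ai.AddGoal("wander")       # every so often, consider milling about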

Re: dialogue, yes. I'll probably have to increment boredom (decrement "alert") or something like that to make them shut up.

Will release a demo after trying another feature and figuring out why Niss keeps yakking about the "machine" topic.

The emoticon system makes me daydream about having an AI do a LiveJournal... It'd be full of entries like "I spent all day staring at an acorn!" though. I know it can write text files and make Net connections. Maybe it can send e-mails and be a true Spambot?

A productive day! Lots of coding plus actually understanding some more of Civil Procedure law!

I am, by the way, going to Midwest Furfest next weekend.

(Reaction to Civil Procedure and coding errors caused by indentation)

Kay

New demo is up!
NISS Home Page

Here the AIs are moving around some, and you can talk with them and order them around... some. Still cruddy, but improving.


Kay

AI
With the practical goal of a playable game in mind, this is the order in which I'd get game AI working if I had a specific graphics engine to work with:

-AI Level 1: Standard RPG villagers. Features: "Stand there," "wander randomly," "wander in fixed areas," "move in fixed pattern"; single-response dialogue system. Equivalent to most console RPGs even today.
-Level 2: Morrowind-type. Features: "Stand there," "wander," "chase/follow target," "avoid"; topic-based dialogue system (lets you ask about various topics linked to NPCs' race/class/etc. and PC's reputation). Have basically built this level in Python and some of Level 3. Level 2 is probably good enough for finished game, to be improved upon as time permits?
-Level 3: NISS-type. Features: The above plus "daily work schedule," "seek food/entertainment/etc.," "converse with other NPCs"; some sort of building-block dialogue system to have more complex conversations without having to parse arbitrary English sentences.
-Level 4: Insert-apocalyptic-name-type. Features: The above plus the option to parse any dialogue the PC types; link to speech recognition/synthesis software?! More advanced than we can really expect; less important than the rest of the game.

Pending the discussion about graphics engines, when I next work on my code I'll focus on using a crude graphical world again so I can make Level 1 characters. That is, I'll turn off some AI features and get the MVC architecture working with a 2D world instead of text-based rooms.

These might be obvious, but in some other departments we'd want:

Graphics
-Graphics Level 1: Placeholder graphics; sprites of some kind walking around in a 2D world.
-Level 2: Height differences, ie. buildings sticking up from the ground, characters walking up stairs and slopes. Portals linking game areas, ie. doors to interiors if they're to be separate areas.
-Level 3: Objects. Barrels, boxes, shrubs, lampposts, chests, trees. A "script" system that notifies objects when they're hit/grabbed/etc. so they can run certain actions; data on each object (like weight) so PC can try to pick up, eat, or break anything in sight! Objects act as obstructions to block paths and to walk on; you can build a staircase out of crates. Real art for characters and world.
-Level 4: Fancy camerawork, lighting effects, cool backgrounds.

Gameplay
-Gameplay Level 1: You can walk around.
-Level 2: Basic menus (save/load; use item; talk). Movement through portals (eg. to/from world map); activation of game scripts linked to stepping on certain spots (eg. stepping through the castle gate triggers a scene where someone appears and talks to you).
-Level 3: Interact with objects and characters (hit/take/use/talk/throw?/others?)
-Level 4: Customize character: name/title/race/sex/clothes/equipment. Store data on these and on PC's reputation and history.

Writing/World-Building
-Writing Level 1: Figure out the basic theme and plot! Eg. are we using the weather-control device as a central plot and gameplay mechanic?
-Level 2: Pick locations to build, start developing specific plot events. Build Spartan but playable game areas.
-Level 3: Game areas that look nice; plot events written and worked into game.
-Level 4: Subplots and side-quests; polishing.

Suule

Quote
Graphics
-Graphics Level 1: Placeholder graphics; sprites of some kind walking around in a 2D world.
-Level 2: Height differences, ie. buildings sticking up from the ground, characters walking up stairs and slopes. Portals linking game areas, ie. doors to interiors if they're to be separate areas.
-Level 3: Objects. Barrels, boxes, shrubs, lampposts, chests, trees. A "script" system that notifies objects when they're hit/grabbed/etc. so they can run certain actions; data on each object (like weight) so PC can try to pick up, eat, or break anything in sight! Objects act as obstructions to block paths and to walk on; you can build a staircase out of crates. Real art for characters and world.
-Level 4: Fancy camerawork, lighting effects, cool backgrounds.
Level 1: No problem. I'm currently working on it.
Level 2: (WORKING) 2D - I think that the system used in Adventure Game Studio with a few mods could just be it. The portals and such won't be a problem either. (DESIGNING) 2D Isometric - Height differences can be a bother here since we'll have to do different layers that overlap one another (like Fallout: BOS).
Level 3: (DESIGNING) I think we should adapt something from visual programming, like 'event handling'. Every object should have data fields 'On***' (give/hit/grabbed/picked up/etc.) with a pointer to a script that should be run.
Level 4: I haven't thought of any yet (except maybe lighting effects or backgrounds).

Well, we should do an Alpha first with, let's say, a 'Stick Man' that walks around the screen using the mouse.

Oh and a very important question: WHAT resolution should we use?

Quote
-Gameplay Level 1: You can walk around.
-Level 2: Basic menus (save/load; use item; talk). Movement through portals (eg. to/from world map); activation of game scripts linked to stepping on certain spots (eg. stepping through the castle gate triggers a scene where someone appears and talks to you).
-Level 3: Interact with objects and characters (hit/take/use/talk/throw?/others?)
-Level 4: Customize character: name/title/race/sex/clothes/equipment. Store data on these and on PC's reputation and history.

Level 1: Working on it
Level 2: Menus: I think here we should adapt some of the joys of a SCUMM-like interface. The 'hot spots' and 'transitions' should be coded in Level 1, I think, while the scripting language that supports them should be done in Level 2.
Level 3: As I said: do event handles for scripts, 'On***'; for example, OnUse stores a number, 321, which is the number of the script that should be triggered when the object is used (see the sketch after this list).
Level 4: That'll be the worst part. I can take something from my Roguelike and convert it to C++.
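Sketched in Python for brevity (script number 321 and the handler body are just examples), the 'On***' idea is something like:

def script_321(obj, actor):
    print("%s creaks open for %s." % (obj["name"], actor))

# Script table: number -> function to run.
scripts = {321: script_321}

# Each object stores a script number per event field.
chest = {"name": "old chest", "OnUse": 321, "OnHit": None}

def fire_event(obj, event, actor):
    script_id = obj.get(event)
    if script_id is not None:
        scripts[script_id](obj, actor)

fire_event(chest, "OnUse", "Kay")   # runs script 321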

Quote
Writing/World-Building
-Writing Level 1: Figure out the basic theme and plot! Eg. are we using the weather-control device as a central plot and gameplay mechanic?
-Level 2: Pick locations to build, start developing specific plot events. Build Spartan but playable game areas.
-Level 3: Game areas that look nice; plot events written and worked into game.
-Level 4: Subplots and side-quests; polishing.
Level 1: I presented the basic plot outline in the second topic.
Level 2: I'm doing some location sketches. The problem with them will be what size the locations should be. Give me some basic ideas on their size (should the fixed-length backgrounds be scrolled? Will the isometric areas be rather big or small?).
Level 3, 4: We can discuss that after doing a workable alpha.

Kay

Level 1 AI
I whipped up a demo of one this morning. It's state-based and has only "standthere" and "wander" states at the moment, but could be hooked into a graphics engine to demo characters walking around.
http://www.xepher.net/~kschnee/aidemo.txt
(From latest main-AI dialogue demo: "Tofu is an evil thing." I'll know it's time to stop the main AI project when my diagrams of it start looking like the kabbalistic sephiroth.)

Graphics
@Suule: "Adventure Game Studio" doesn't look very good; I hope you just mean in that general style (and only for some scenes) rather than actually using that engine. I think we want at least 800x600 resolution with at least 16-bit colors. We could go higher, but at that point the quality of the art matters more than the resolution. (Bad 1024x768 is slower than bad 800x600 without being prettier.)

@Threed & Suule:
It's great that you're already working on an engine, Suule, but is it going to be able to do Cool Stuff? It seems like there are some substantial things like camera tilting/rotation and automatic lighting that are at least hard with homebrew. The most obvious way I (at least) can think of to do isometrics would limit the camera angles to the four diagonal directions, and would force artists to draw wall textures in little parallelograms. And what language is that in?

:blink: I just found the SDL page's link to Python bindings. SDL + Python = Pygame! It sounds like using SDL to write an engine that works with Python would mean basically doing what I've already done poorly: make our own multi-layer tiled graphics engine. But then I guess we'd be doing that in OGRE too. For that, should we be looking at the "Yake Engine" or "Crazy Eddie's GUI" or any of the other pre-made things under OGRE? I note that PyOGRE seems to rely on .NET.
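For scale, the tile-blitting core of such an engine isn't much code in Pygame. A bare-bones sketch (two flat-colored 32x32 tiles, no layers or scrolling), not a real engine:

import pygame

pygame.init()
screen = pygame.display.set_mode((800, 600))
TILE = 32
grass = pygame.Surface((TILE, TILE))
grass.fill((40, 160, 60))
water = pygame.Surface((TILE, TILE))
water.fill((50, 80, 200))
tilemap = [[0] * 25 for _ in range(18)]   # 25x18 grid of grass tiles
tilemap[5][5] = 1                         # one water tile

running = True
while running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False
    for y, row in enumerate(tilemap):
        for x, tile in enumerate(row):
            screen.blit(water if tile else grass, (x * TILE, y * TILE))
    pygame.display.flip()
pygame.quit()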

What do you think about tile size, or how many tiles "wide" the screen will be? If we've got a mobile camera we can zoom in and out, but still have to pick a texture size and a size for characters. I suggest making the 3D blocks 1/3 or even 1/4 the height of their width and length, so that a character can climb stairs that look like a plausible height. Have to think about vertical texture size, though; wouldn't want to have a wooden wall that obviously has the same texture printed on it four times per tile-sized section.

I'll have limited Net access from Fri. noon to Mon. afternoon, but will try to check in.  

Threed

PyOGRE is not .NET dependent; it just requires the Visual Studio 2003 C++ compiler - which is a free download from Microsoft. PyGame is a Python wrapper for SDL with a few things built on top; it's a bit of a higher-level abstraction that deals with concepts like sprites instead of raw images, automatically handles movies, dadda dadda dadda.

Crazy Eddie's GUI is an abstract GUI system that happens to have implementations under OGRE, Axiom, and some other engines. Yake, last I checked, was C++ based, using Lua as a scripting language.

Suule

Quote
Graphics
@Suule: "Adventure Game Studio" doesn't look very good; I hope you just mean in that general style (and only for some scenes) rather than actually using that engine. I think we want at least 800x600 resolution with at least 16-bit colors. We could go higher, but at that point the quality of the art matters more than the resolution. (Bad 1024x768 is slower than bad 800x600 without being prettier.)

@Threed & Suule:
It's great that you're already working on an engine, Suule, but is it going to be able to do Cool Stuff? It seems like there are some substantial things like camera tilting/rotation and automatic lighting that are at least hard with homebrew. The most obvious way I (at least) can think of to do isometrics would limit the camera angles to the four diagonal directions, and would force artists to draw wall textures in little parallelograms. And what language is that in?

Yeah, it doesn't. It has some good ideas but NASTY code optimizations. I'm thinking of carrying those ideas over into our 2D engine and rewriting them to make the code optimized and therefore fast. And what's more, I need the engine myself, because when I started designing an adventure game (based on my old ideas) none of the game creators would suit my needs.

GRAPHICS:
The ONLY bit depth I would accept would be 32-bit. 'Cause it's faster (instead of operating on 5-bit and 6-bit data nibbles we use full bytes and an alpha channel, so we don't have to deal with software-coded transparency) and easier to implement on modern-day cards. 800x600 is fine and I was gonna propose it myself if it hadn't already been suggested. Most of today's games use 800x600 as the default.

In isometric mode, I thought of using OpenGL instead of 2D isometric graphics, since it'll be faster... I tell you what. What if we SPLIT the engine into various parts and linked them together? We need 2 engines that have a similar object-handling/scripting system but different views. Since we'll all be basing the project on SDL (OGRE uses SDL, right?), the only thing that would link our parts would be that we use the same data structures for objects. Threed can take care of the isometric part, while I code the indoor/flat BG part. How does that sound? And the language I would use would be, of course, C++ (Borland compiler).

Kay

As far as I can tell, OGRE isn't built atop SDL. It's written in C++, can use Direct3D and OpenGL for its drawing, and can be commanded by various languages.

800x600 32-bit graphics: Sounds fine.

Daydream: If we have code that casts dynamic light/shadows, and can poll what the light level is at the player's position, that's an easy way to add a bit of "Thief"-style stealth gameplay.

Split engine: Hmm. How are we going to coordinate code? Say you or Threed builds a playable stickman-controlled-by-player demo. How do I get my own AI code into that?

Class design
With NISS and the dumb-AI demo I posted earlier today, I made the PC and NPCs instances of the same class, with the main difference between them being that the PCs have "controlled_by_human = True" and that causes all the learning and decision-making code to be skipped over. I'd advise doing something similar: creating a generic Character class to hold the basic physical attributes like location, name, and race, so that AIs can be a subclass of that. To interact with the world, my AIs need some way to send commands to the world ("step north", "say 'Hello'", "use saxophone"), and some way to sense it. ("Can I see an object of type 'weapon?'" "What's the description of the objects in my field of view?") Currently they're using Views and Controllers that pass text messages back and forth, and when a NISS executes a walk-north command it's actually sending out the message "move north" to the world.
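As a sketch of what I'm advising (the method names and the placeholder decision are illustrative, not the actual code):

class Character:
    def __init__(self, name, race, location):
        self.name = name
        self.race = race
        self.location = location
        self.controlled_by_human = False

    def send_command(self, world, command):
        # Everything acts on the world by sending it text commands.
        world.handle(self, command)          # e.g. "move north", "say 'Hello'"

class AICharacter(Character):
    def update(self, world):
        if self.controlled_by_human:
            return                           # skip the learning/decision code
        # Placeholder decision-making: a real AI would pick a goal here.
        self.send_command(world, "move north")

player = Character("Hero", "human", (0, 0))
player.controlled_by_human = True
villager = AICharacter("Villager", "human", (3, 2))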

I'll dust off and post some of my item code, though it's not great and I don't have time at the moment to fully comment it. The Cliff's Notes version of the structure is:
-Items have names, types (possibly several), short descriptions that AIs can read, long descriptions that sound cool to humans, possibly other descriptions that come up when the object is "felt" or "tasted" etc., and an attached script #.
-The script # gets called whenever anything interacts with the object, and possibly based on a timer. The running script gets passed the calling object's identity, and the reason it was called (like time or character "Bob" performing action "eat" on it). It then does cases. ("If called because of eating, set the eater's status to "Poisoned.")
-Objects have a weight #. If characters have a Strength rating, this gives you an automatic way to rule that an Ox can throw stuff that a Mouse can't even lift.
-It'd be a good idea to have other tags like "flammable," "fragile," (or Durability, or HP) and "magnetic." Then you don't need a list of what objects can interact with what others; instead a flame spell "naturally" sets appropriate stuff on fire.
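In skeleton form, something like this (the field names are shorthand, and the example item is just for show):

class Item:
    def __init__(self, name, types, short_desc, long_desc,
                 script_id=None, weight=1, tags=()):
        self.name = name
        self.types = list(types)        # e.g. ["food"], ["weapon", "tool"]
        self.short_desc = short_desc    # terse, for AIs to read
        self.long_desc = long_desc      # flavorful, for humans
        self.script_id = script_id      # script called on interaction or a timer
        self.weight = weight            # compared against Strength to lift or throw
        self.tags = set(tags)           # "flammable", "fragile", "magnetic", ...

tofu = Item("Tofu", ["food"], "tofu (evil)",
            "A quivering block of evil Tofu.",
            script_id=17, weight=1, tags=("edible",))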

The philosophy behind that tagging is in this great interview with one of "Deus Ex"'s creators: building everything as objects that can be interacted with in standard ways creates the possibility of players finding fun, unexpected ways to use the environment. (Ideally, if I saw a tree I'd be able to climb it by "grabbing" or "pulling" it, cut it down by attacking with a weapon, or carve initials in it.)

Quote
...I had an opportunity to see the demo for an upcoming game... [Sounds like "Morrowind." -K] The game seems to feature an extensive set of player tools and powers. However, most of them are purely related to inflicting damage. The rest of the environment is modeled in a very simple way. The game uses a traditional paper RPG-style 'spell' system, which should allow for a great number of interesting player expressions, even if you restrict your thinking to the tactical arena. So, during the demo, I inquired about types of spells that, in paper RPG's, are often exploited in interesting ways beyond toe-to-toe combat. For instance: Can the player freeze the water pool (in the cave featured as part of the demo) as a way of creating an alternate path around an enemy? Can the player levitate a lightweight enemy up off the ground and thus get by it without resorting to violence? Can the player take the form of a harmless ambient animal and sneak past the goblin? Can the player create fake sound-generating entities that distract the enemy? I believe the answer to all these questions is "no." The game was designed around pre-planned, emulated relationships between objects. Had the game been designed around a more flexible simulation, these sorts of interactions might have just worked, even if they had never occurred to the designers. (All of this still might be possible in the special case emulation model, but would run the risk of a great deal of inconsistency, would require tons of work and would not as likely produce emergent results.) Had the game been built around more thoroughly simulated game systems, creating more interesting (less combat-centric) tools would have been easier - the game's possibility space would have been greatly enlarged.

Running around a town throwing stuff, breaking stuff, climbing rooftops, etc. sounds fun even in the absence of a story!

From another interview, re: outdoor environments:
Quote
...wide, empty landscapes with little density of interest. Games like Ultima VII did an okay job of providing something interesting every few hundred yards in the Great Forest (of Yew?). But other, similar RPG's have provided all this wilderness space and allowed the player to walk forever without providing for anything interesting (and often the player has to backtrack out of some cul-de-sac). And a few clones of Sid Meier's Pirates have, for some reason, assumed that sailing endlessly through empty oceans might be fun. Powerful outdoor terrain features have to be thought through, like everything else--they affect lots of things like the optimum player movement speed, which is also tied to other aspects of the game, like the pacing of play. It's not like you can just throw a switch and generate miles of terrain...

Kay

Item code posted here. The most useful parts to look at might be the definition of the Item class, and the generic script at the end. (And yeah, I was messing with in-game objects that could do things to the real computer.)

Off to the con tomorrow!

@Threed: Lua? Doesn't that require a grass skirt?

Threed

Lua is a scripting language that depends mostly, or in large part, on a host C++ engine to provide its data structures, which can then be manipulated through Lua.

Code whoring: Suule is obviously proficient in C++ (ANSI99 if he's a youngin'), Kay's forte obviously lies with Python, and mine is typically C# or Boo (AKA "Python for .NET"). One of these is a language, one of these is a slim language-dependent platform, and one of these is a bulky language-agnostic platform. They each favor different coding styles and conventions: templates in C++; tuples and type inference in Python; tuples, type inference, dadda dadda dadda. Kay's never going to touch C++ - I can tell by the knee-jerk reaction Kay's had to Java and C#, because in terms of complexity it goes (least to greatest) Python --> C#/Java --> C++. I write C++ code, but only as a complete and utter last resort - it's either Python or Boo or C#, 'cuz I hate dealing with all that mundane shit like pointer arithmetic. Suule... well, I take it he's a hardcore C++ ninja fellah.

Getting Python and C++ to work together (PROPERLY) is very hard; though the Python interpreter is embeddable into C++ code, it requires careful consideration and standards on how you're going to expose C++ data to the Python interpreter, and the person doing the Python scripting will require YOUR SOUL -- uh, er, they'll require close communication, because in large part how the Python code is structured depends entirely on what's exposed to it by the C++ code. And then, of course, you've gotta deal with returning values from Python to C++, of which I have no idea how that works; never had to bother. (Won't mention C#/Boo, 'cuz I'm biased. Does writing that make me more biased, or less? Maybe if I said I was 'drugged up on root beer and pizza,' would that count?)

Alternatively, it can be written entirely in Python, and thanks to Python's modular approach, speed-intensive modules can be written in C++ and invoked from Python, summoning great DEMONS FROM BEYOND TO CONSUME versatility when you wanna be that way.

*has no real comment towards either, just wanted to make sure everyone understood the issue at hand*

Temple of Elemental Evil used Python for some of its logic - the game was buggy, but only because it did not go through QA enough times towards the end. Technology-wise, an awesome game; grab it for 9.99, download Patch 2 and the Circle of Eight patches to make the game playable. It's pretty fun then. Oh, shit, is this a tangent?


Quote
-It'd be a good idea to have other tags like "flammable," "fragile," (or Durability, or HP) and "magnetic." Then you don't need a list of what objects can interact with what others; instead a flame spell "naturally" sets appropriate stuff on fire.

For example:

"All flammable GameObjects implement the IFlammable interface, which requires they expose a public method signature called "Burnination(duration as int, damage as int); thus, when you call FlameAttack(player as GameObject, damage, duration), the FlameAttack() method will check to see if player implements IFlammable; if so, call Burnination(), then the standard Attack() methods; else, just call the standard Attack() method."

That?

Threed

Now, having actually read S's reply, I think I can comment some: (=D)

Quote
...We need 2 engines that have a similar object-handling/scripting system but different views. Since we'll all be basing the project on SDL (OGRE uses SDL, right?), the only thing that would link our parts would be that we use the same data structures for objects. Threed can take care of the isometric part, while I code the indoor/flat BG part. How does that sound? And the language I would use would be, of course, C++ (Borland compiler).

I think he's proposing two completely separate engines that share a common file format and scripting language like Lua or Python or whatever. Then, whenever the scene -- no, I take that back, I have no idea what he's getting at. I thought it was coding two separate engines and just switching between them dynamically, or two separate rendering engines that then overlay one rendered bitmap over another.

I have no idea what I've even said.

Kay

(I'm at a con... Shouldn't I be having more fun? Oh well, it's still Friday. I got so frustrated trying to dig the hotel info out of Firefox's cache that I wrote a quick Python program to search every one of those unlabeled, buried cache files -- and still didn't find it.)

Dialogue Concerning the Two Chief World Systems
I think what Suule meant was to have two systems for drawing backgrounds -- isometric or pre-rendered -- but it sounds like that might also require a different system for movement. If all we need for pre-rendered scenes is to blit a large image to the screen, that's easy in any language, I think.

What about tile-based versus pixel-accurate movement? That is, can you move a fraction of a tile? Half-tile movement could be a compromise, but I don't know how hard that'd be to program. I was thinking of the gameplay as a bit like Zelda, with a basically tile-based world but with the ability to move and throw stuff anywhere. If we have movement in strict increments of one tile, gameplay is probably limited to more of a "Final Fantasy" style and more dependent on plot. Even with strict tile increments we could still have lifting, throwing, and lighting-based stealth, though. How hard is it to do precise movement?

Languages
I'm biased towards using live snakes as the basis for the code. It sounds like we can plug in high-speed ninja C++ or assembly modules for any special needs we have, and for other purposes Pygame lets us use SDL.

Quote"All flammable GameObjects implement the IFlammable interface, which requires they expose a public method signature called "Burnination(duration as int, damage as int); thus, when you call FlameAttack(player as GameObject, damage, duration), the FlameAttack() method will check to see if player implements IFlammable; if so, call Burnination(), then the standard Attack() methods; else, just call the standard Attack() method." That?
Uh... yes?
I thought of it as, "All flammable GameObjects have flag self.flammable = True. When you create fire, a Fire(range, temperature) message is sent to all GameObjects in range; if an object's flammable is True, it catches fire."
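A sketch of that flag-and-broadcast idea (the function names, the distance check, and the ignition threshold are invented for illustration):

def distance(a, b):
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

def cast_fire(world_objects, origin, fire_range, temperature):
    # Broadcast Fire(range, temperature) to everything nearby; each object
    # decides for itself based on its own flags.
    for obj in world_objects:
        if distance(origin, obj.position) <= fire_range and getattr(obj, "flammable", False):
            obj.on_fire = temperature > 100   # arbitrary ignition threshold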