The A.I. logic of F.E.A.R.

The most common techniques applied to game A.I. (at least as of 2006) are A* for pathfinding and Finite State Machines (FSMs) for controlling character behavior. F.E.A.R. also used an FSM, yet in a game where the A.I. takes cover, throws grenades, communicates with its squad and a lot more, the FSM had only three states.

[Image: State Machine of F.E.A.R.]

The relevant states are “Goto”, which moves a character to a given position, and “Animate”, which tells the character which animation to play. That is all there is inside the State Machine; the complex part is creating the conditions that decide where the character should move, which animation it should play, and how.
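To make that concrete, here is a minimal sketch in Python, with hypothetical names rather than the game’s actual code, of how small such a state machine can be once all of the decision-making lives elsewhere:

```python
from dataclasses import dataclass

@dataclass
class Goto:
    destination: tuple  # world position the character should move toward

@dataclass
class Animate:
    animation: str      # name of the animation clip to play

class CharacterFSM:
    def __init__(self):
        self.state = None

    def set_state(self, state):
        # The planner, not the FSM, decides when this transition happens.
        self.state = state

    def update(self):
        if isinstance(self.state, Goto):
            print(f"moving toward {self.state.destination}")
        elif isinstance(self.state, Animate):
            print(f"playing animation '{self.state.animation}'")
```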

This decision-making logic is not implemented directly in the State Machine but in an external ‘planning system’, which gives the A.I. the knowledge it needs to decide on its own when to transition to a state.


FSMs vs. Planning in A.I. logic

An FSM tells the character how to behave in every situation. The planning system, on the other hand, gives the A.I. an objective and a set of actions, and the A.I. chooses the best sequence of actions to reach that objective. Some of those actions have ‘pre-conditions’ that must be met before the character’s A.I. considers taking them, and each action affects the world in some manner.
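One common way to model this, sketched here in Python with hypothetical names rather than F.E.A.R.’s actual code, is to describe the world as a set of symbolic facts and each action as a set of pre-conditions plus a set of effects:

```python
from dataclasses import dataclass

WorldState = frozenset  # a set of facts such as "has_key" or "door_open"

@dataclass
class Action:
    name: str
    preconditions: frozenset  # facts that must already hold for the action to be available
    effects: frozenset        # facts that become true once the action has run
    cost: float = 1.0

    def available(self, state: WorldState) -> bool:
        return self.preconditions <= state

    def apply(self, state: WorldState) -> WorldState:
        return state | self.effects
```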

[Image: Example]

As an example, suppose the character is trapped in a room with a locked wooden door, and the A.I.’s goal is to escape the room. There are two actions the character could take: find the door’s key, or get an axe and break the door. The ‘get an axe’ action, however, has a pre-condition that must be met before it becomes a possibility, for example that the character has ‘x’ points of ‘strength’ to pick up the axe. The best choice would be to find the key first. If the character can’t find the key for any reason (it was stolen, for example), the next choice is to pick up the axe; if the pre-condition is met, the character can pick up the axe, break the door and thereby fulfil the goal. If the pre-condition is not met, the character needs to take another action to satisfy it first, otherwise it will never reach the goal. In a world where the character has both the key and the axe, the best action for meeting the goal is using the key.

The goal is a desired state of the world that the A.I. wants to reach. The A.I. reaches that goal through a sequence of actions; each action might have a pre-condition for it to be available, and each action has an ‘effect’ that drives the A.I. closer to the goal.
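Putting the pieces together, a minimal planner can be a cheapest-first search over world states; this is a sketch under the assumptions above, with hypothetical fact names taken from the locked-room example, not F.E.A.R.’s actual implementation:

```python
import heapq

# Cheapest-first search over world states, reusing the Action sketch above.
def plan(state, goal, actions):
    frontier = [(0.0, 0, state, [])]  # (cost so far, tie-breaker, world state, action names)
    seen = set()
    counter = 1
    while frontier:
        cost, _, current, steps = heapq.heappop(frontier)
        if goal <= current:
            return steps                       # all goal facts hold: plan found
        if current in seen:
            continue
        seen.add(current)
        for action in actions:
            if action.available(current):
                heapq.heappush(frontier, (cost + action.cost, counter,
                                          action.apply(current), steps + [action.name]))
                counter += 1
    return None                                # no sequence of actions reaches the goal

# The locked-room example from the text, with hypothetical fact names:
actions = [
    Action("find_key",   frozenset(),              frozenset({"has_key"}),   cost=1.0),
    Action("use_key",    frozenset({"has_key"}),   frozenset({"door_open"}), cost=1.0),
    Action("grab_axe",   frozenset({"is_strong"}), frozenset({"has_axe"}),   cost=2.0),
    Action("break_door", frozenset({"has_axe"}),   frozenset({"door_open"}), cost=2.0),
]

print(plan(frozenset(), frozenset({"door_open"}), actions))
# -> ['find_key', 'use_key']   (cheapest plan; 'grab_axe' is blocked by its pre-condition anyway)
```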

[Image: Screenshot from F.E.A.R.]

The previous case study illustrates the first of three benefits of a planning system. The first benefit is the ability to decouple goals and actions, to allow different types of characters to satisfy goals in different ways. The second benefit of a planning system is facilitation of layering simple behaviors to produce complex observable behavior. The third benefit is empowering characters with dynamic problem solving abilities. – Jeff Orkin

If you have different characters with the same set of goals but different actions, they will try to satisfy the same goals in different ways. This system of goals and actions is modular: it is easy to keep each character’s A.I. logic uncluttered, and easy to implement new characters by combining actions from existing ones while keeping a similar, if not identical, set of goals.
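Continuing the hypothetical sketch above, handing two character types the same goal but different action lists is enough to make them satisfy it in different ways:

```python
# Two hypothetical character archetypes sharing the same goal but owning
# different action sets; each plans its own way to get the door open.
open_door = frozenset({"door_open"})

scout_actions = [
    Action("find_key", frozenset(),            frozenset({"has_key"}),   cost=1.0),
    Action("use_key",  frozenset({"has_key"}), frozenset({"door_open"}), cost=1.0),
]

brute_actions = [
    Action("grab_axe",   frozenset(),            frozenset({"has_axe"}),   cost=1.0),
    Action("break_door", frozenset({"has_axe"}), frozenset({"door_open"}), cost=1.0),
]

print(plan(frozenset(), open_door, scout_actions))  # ['find_key', 'use_key']
print(plan(frozenset(), open_door, brute_actions))  # ['grab_axe', 'break_door']
```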

Each action has a cost assigned to it, so the A.I. chooses the ‘cheapest’ action that satisfies the goal, as long as it meets the pre-conditions for taking that action. As a side-note, I would say that, to feel more ‘natural’, choosing the cheapest action should be a probability rather than a rule, especially in situations where the character is under stress and needs to think quickly; a sketch of this idea follows below.
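Under the same assumptions as the earlier sketches, that side-note could look like a softmax over negative costs, where a higher ‘stress’ value flattens the distribution so the character occasionally picks a plausible but sub-optimal action:

```python
import math
import random

def pick_action(available_actions, stress=0.0):
    # Hypothetical sketch: lower cost -> higher probability of being picked,
    # and higher 'stress' flattens the distribution so sub-optimal but
    # plausible actions slip through more often.
    temperature = 1.0 + stress
    weights = [math.exp(-a.cost / temperature) for a in available_actions]
    return random.choices(available_actions, weights=weights, k=1)[0]
```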

Paper
