The Structure of Agents
An intelligent agent is a combination of Agent
Program and Architecture.
Intelligent Agent = Agent Program + Architecture
The agent program is a function that implements the agent's mapping from percepts to actions. There is a variety of basic agent-program designs, reflecting the kind of information made explicit and used in the decision process. The designs vary in efficiency, compactness, and flexibility, and the appropriate design depends on the nature of the environment.
The architecture is the computing device used to run the agent program.
Four basic kinds of agent programs perform this mapping:
1. Simple reflex agents
2. Model-based reflex agents
3. Goal-based agents
4. Utility-based agents
We then explain in general terms how to convert all these into learning agents.
1. Simple reflex agents
The simplest kind of agent is the simple reflex agent. It responds directly to percepts, i.e. it selects actions on the basis of the current percept alone, ignoring the rest of the percept history. Condition-action rules allow the agent to make the connection from percept to action.
Condition-action rule: if condition then action
In the schematic diagram, a rectangle denotes the current internal state of the agent's decision process, and an oval represents the background information used in the process.
The agent program, which is also very simple, is
shown in the following figure.
function SIMPLE-REFLEX-AGENT(percept) returns an action
  static: rules, a set of condition-action rules
  state ← INTERPRET-INPUT(percept)
  rule ← RULE-MATCH(state, rules)
  action ← RULE-ACTION[rule]
  return action
INTERPRET-INPUT – generates an abstracted description of the current state from the percept.
RULE-MATCH – returns the first rule in the set of rules that matches the given state description.
RULE-ACTION – returns the action associated with the selected rule, which is then executed.
The agent in the figure will work only "if the correct decision can be made on the basis of only the current percept – that is, only if the environment is fully observable".
Example: in a medical diagnosis system, a condition-action rule might be: if the patient has reddish-brown spots then start the treatment for measles.
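The pseudocode above can be sketched directly in Python. This is a minimal illustration rather than a standard implementation: the interpret_input helper, the representation of rules as (condition, action) pairs, and the measles rule are assumptions made for the example.

def interpret_input(percept):
    """INTERPRET-INPUT: build an abstracted state description from the percept."""
    return percept  # here the percept is already a usable state description

def simple_reflex_agent(percept, rules):
    """Select an action from the current percept only, ignoring percept history."""
    state = interpret_input(percept)
    for condition, action in rules:      # RULE-MATCH: first matching rule wins
        if condition(state):
            return action                # RULE-ACTION: action attached to that rule
    return None                          # no rule matched

# Condition-action rule from the medical-diagnosis example above.
rules = [
    (lambda s: s.get("spots") == "reddish-brown", "start treatment for measles"),
]

print(simple_reflex_agent({"spots": "reddish-brown"}, rules))
# -> start treatment for measles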
2. Model-based reflex agents (agents that keep track of the world)
The most effective way to handle partial observability is for the agent to keep track of the part of the world it can't see now. That is, the agent combines the current percept with the old internal state to generate an updated description of the current state.
This updating requires two kinds of knowledge in the agent program. First, we need some information about how the world evolves independently of the agent. Second, we need some information about how the agent's own actions affect the world.
This knowledge, whether implemented in simple Boolean circuits or in complete scientific theories, is called a model of the world. An agent that uses such a model is called a model-based agent.
The above figure shows the structure of the reflex agent with internal state, showing how the current percept is combined with the old internal state to generate the updated description of the current state.
function REFLEX-AGENT-WITH-STATE(percept) returns an action
  static: state, a description of the current world state
          rules, a set of condition-action rules
          action, the most recent action, initially none
  state ← UPDATE-STATE(state, action, percept)
  rule ← RULE-MATCH(state, rules)
  action ← RULE-ACTION[rule]
  return action
A model-based reflex agent. It keeps track of the
current state of the world using an internal model. It then chooses an action
in the same way as the reflex agent.
UPDATE-STATE – responsible for creating the new internal state description by combining the percept, the most recent action, and the current state description.
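As a rough Python sketch of REFLEX-AGENT-WITH-STATE: the closure holding the internal state, the update_state model, and the vacuum-world rules below are assumptions made for illustration, not a standard implementation.

def model_based_reflex_agent(update_state, rules):
    """Return an agent function that keeps an internal state across percepts."""
    state = {}            # description of the current world state
    last_action = None    # the most recent action, initially none

    def agent(percept):
        nonlocal state, last_action
        # UPDATE-STATE: combine the old state, the last action, and the new percept
        state = update_state(state, last_action, percept)
        for condition, action in rules:   # RULE-MATCH
            if condition(state):
                last_action = action      # RULE-ACTION
                return action
        last_action = None
        return None

    return agent

# Illustrative model: the new state is the old state overwritten by the percept.
def update_state(state, action, percept):
    new_state = dict(state)
    new_state.update(percept)
    return new_state

rules = [(lambda s: s.get("dirty"), "suck"),
         (lambda s: True, "move")]

vacuum = model_based_reflex_agent(update_state, rules)
print(vacuum({"location": "A", "dirty": True}))   # suck
print(vacuum({"location": "A", "dirty": False}))  # move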
3. Goal-based agents
An agent knows the description of the current state, but it also needs some sort of goal information that describes situations that are desirable. The action selected for the current state depends on the goal state.
The goal-based agent is also more flexible when there is more than one possible destination: once a new destination is specified, the goal-based agent comes up with a new behavior. Search and planning are the subfields of AI devoted to finding action sequences that achieve the agent's goals.
Although the goal-based agent appears less efficient, it is more flexible because the knowledge that supports its decisions is represented explicitly and can be modified. The goal-based agent's behavior can easily be changed to go to a different location.
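A small Python sketch of the goal-based idea: the agent searches for an action sequence that reaches whatever goal it is given, so changing the goal changes the behavior without rewriting any rules. The toy road map and the use of breadth-first search here are assumptions for illustration.

from collections import deque

def goal_based_plan(start, goal, successors):
    """Breadth-first search for an action sequence from start to goal."""
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        state, plan = frontier.popleft()
        if state == goal:
            return plan
        for action, next_state in successors(state):
            if next_state not in visited:
                visited.add(next_state)
                frontier.append((next_state, plan + [action]))
    return None  # goal unreachable

# Toy road map: switching the goal from "C" to "D" changes the behavior.
roads = {"A": [("go to B", "B")],
         "B": [("go to C", "C"), ("go to D", "D")],
         "C": [], "D": []}
successors = lambda s: roads.get(s, [])

print(goal_based_plan("A", "C", successors))  # ['go to B', 'go to C']
print(goal_based_plan("A", "D", successors))  # ['go to B', 'go to D']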
4. Utility-based agents (Utility refers to "the quality of being useful")
A utility-based agent tries to reach the goal state with high-quality behavior (utility); that is, if more than one action sequence exists to reach the goal state, the sequence that is more reliable, safer, quicker, and cheaper than the others is selected.
A utility function maps a state (or sequence of
states) onto a real number, which describes the associated degree of happiness.
The utility function can be used in two different cases. First, when there are conflicting goals, only some of which can be achieved (e.g., speed and safety), the utility function specifies the appropriate tradeoff. Second, when the agent aims for several goals, none of which can be achieved with certainty, the likelihood of success can be weighed against the importance of the goals.
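As a minimal Python sketch, a utility function maps each candidate route (a summary of a state sequence) to a real number, and the agent picks the candidate with the highest utility. The routes and the weights trading speed against safety and cost are invented for illustration.

def utility(route):
    """Map a candidate route to a real number; higher means more desirable."""
    speed_term  = -route["time_minutes"]          # quicker is better
    safety_term = -50.0 * route["accident_risk"]  # safer is better (tradeoff weight)
    cost_term   = -route["toll_cost"]             # cheaper is better
    return speed_term + safety_term + cost_term

routes = [
    {"name": "highway",  "time_minutes": 30, "accident_risk": 0.20, "toll_cost": 5},
    {"name": "backroad", "time_minutes": 45, "accident_risk": 0.05, "toll_cost": 0},
]

best = max(routes, key=utility)
print(best["name"])  # the route with the best speed/safety/cost tradeoff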
Learning agents
Learning allows the agent to operate in initially unknown environments and to become more competent than its initial knowledge alone would allow.
A learning agent can be divided into four conceptual components:
Learning element – This is responsible for
making improvements. It uses the feedback from the critic on how the agent is
doing and determines how the performance element should be modified to do
better in the future.
Performance element – responsible for selecting external actions; it is what we previously considered the entire agent: it takes in percepts and decides on actions.
Critic –
It tells the learning element how well the agent is doing with respect to a
fixed performance standard.
Problem generator – It is responsible for
suggesting actions that will lead to new and informative experiences.
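A rough Python sketch of how these four components might fit together: the reward-based critic, the fixed performance standard of 0.0, and the rule-dropping learning element are illustrative assumptions, not a standard design.

import random

class LearningAgent:
    def __init__(self, rules, actions):
        self.rules = rules          # used by the performance element
        self.actions = actions      # actions the problem generator may suggest
        self.standard = 0.0         # fixed performance standard used by the critic

    def performance_element(self, percept):
        """Take in a percept and decide on an action (the 'old' whole agent)."""
        for condition, action in self.rules:
            if condition(percept):
                return action
        return None

    def critic(self, reward):
        """Tell the learning element how well the agent did against the standard."""
        return reward - self.standard

    def learning_element(self, feedback, action):
        """Modify the performance element so that it does better in the future."""
        if feedback < 0:  # drop rules that led to below-standard outcomes
            self.rules = [(c, a) for c, a in self.rules if a != action]

    def problem_generator(self):
        """Suggest an exploratory action that may lead to informative experiences."""
        return random.choice(self.actions)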
In summary, agents have a variety of
components, and those components can be represented in many ways within the
agent program, so there appears to be great variety among learning methods.
Learning in intelligent agents can be summarized as a process of modification
of each component of the agent to bring the components into closer agreement
with the available feedback information, thereby improving the overall
performance of the agent (All agents can improve their performance through
learning).