What is it about?

An Artificial Intelligence (AI) agent that plans to accomplish tasks in a (partially) unknown environment needs to learn and revise a model of the environment online, by executing actions and perceiving their effects in the environment. The environment model should abstract away the details of the environment that are irrelevant to the achievement of the agent's goals. In real-world environments, an agent is typically equipped with sensors that return low-level perceptions of the environment (e.g., an RGB camera that returns an image of the agent's view). This paper provides a general architecture for an AI agent that plans to learn an environment model by interacting with the environment and abstracting sensory data.
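As a rough illustration of this kind of architecture, the sketch below shows a perceive-abstract-plan-act loop in which an agent revises its environment model online from the abstracted effects of its actions. It is a minimal sketch under stated assumptions, not the paper's method: all names (EnvironmentModel, Agent, Corridor, the abstract and plan functions) are illustrative and are not taken from the original article.

```python
# Hypothetical sketch of an online perceive-abstract-plan-learn loop.
# None of these classes or functions come from the paper; they only
# illustrate the general idea described in the summary above.

from dataclasses import dataclass, field


@dataclass
class EnvironmentModel:
    """Abstract model of the environment, revised online from observations."""
    transitions: dict = field(default_factory=dict)  # (state, action) -> observed next state

    def update(self, state, action, next_state):
        # Revise the model with the effect actually perceived in the environment.
        self.transitions[(state, action)] = next_state


class Agent:
    def __init__(self, abstract, plan, goal):
        self.abstract = abstract  # maps low-level perceptions (e.g. images) to abstract states
        self.plan = plan          # plans a sequence of actions from the current model
        self.goal = goal
        self.model = EnvironmentModel()

    def run(self, env, max_steps=100):
        state = self.abstract(env.perceive())
        for _ in range(max_steps):
            if state == self.goal:
                return True
            actions = self.plan(self.model, state, self.goal)
            if not actions:
                return False
            action = actions[0]
            env.execute(action)                          # act in the environment
            next_state = self.abstract(env.perceive())   # perceive the action's effect
            self.model.update(state, action, next_state) # revise the model online
            state = next_state
        return False


if __name__ == "__main__":
    # Toy 1-D corridor: the "low-level perception" is a pixel row with the
    # agent's cell marked; abstraction reduces it to the agent's cell index.
    class Corridor:
        def __init__(self, length=5):
            self.length, self.pos = length, 0

        def perceive(self):
            return [1 if i == self.pos else 0 for i in range(self.length)]

        def execute(self, action):
            self.pos = max(0, min(self.length - 1, self.pos + action))

    abstract = lambda pixels: pixels.index(1)
    plan = lambda model, s, goal: [1] if s < goal else [-1]  # trivial greedy planner
    agent = Agent(abstract, plan, goal=4)
    print("goal reached:", agent.run(Corridor()))
```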


Read the Original

This page is a summary of: Planning and learning to perceive in partially unknown environments, Intelligenza Artificiale (the international journal of the AIxIA), October 2024, IOS Press. DOI: 10.3233/ia-240036.
