Grundlagen der KI at TU München | Flashcards & Summaries


# Study materials for Grundlagen der KI at TU München

Access free flashcards, summaries, practice exercises, and past exams for your Grundlagen der KI course at TU München.

Q:

You understand the difference between omniscience, learning, and autonomy.

A:

Omniscient agent

An omniscient agent knows the actual outcome of its actions, which is impossible in reality.

Example: Just imagine you know the outcome of betting money on something.

A rational agent (unlike an omniscient agent) maximizes expected performance.

Learning

Rational agents are able to learn from perception, i.e., they improve their knowledge of the environment over time.

Autonomy

In AI, a rational agent is considered more autonomous if it is less dependent on prior knowledge and uses newly learned abilities instead.
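The "maximizes expected performance" point can be made concrete with a small sketch (mine, not the course's): without omniscience, all a rational agent can do is compare actions by their expected performance. The betting numbers below are made up.

```python
def expected_performance(outcomes):
    """outcomes: list of (probability, performance) pairs."""
    return sum(p * value for p, value in outcomes)

def rational_choice(actions):
    """actions: dict mapping an action name to its possible outcomes."""
    return max(actions, key=lambda a: expected_performance(actions[a]))

# Hypothetical bet: it *might* pay off, but its expected performance is
# about -8, so the rational agent declines -- without having to know the
# actual outcome in advance, as an omniscient agent would.
actions = {
    "bet":     [(0.1, 100.0), (0.9, -20.0)],
    "decline": [(1.0, 0.0)],
}
print(rational_choice(actions))  # -> decline
```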

Q:

You know how to categorize task environments and can evaluate the difficulty of given tasks.

Fully observable vs. partially observable

A:

An environment is fully observable if the agent can detect the complete state of the environment, and partially observable otherwise.

Example: The vacuum-cleaner world is partially observable since the robot only knows whether the current square is dirty.

Many games are fully observable.

Q:

You know how to categorize task environments and can evaluate the difficulty of given tasks.

Single-agent vs. multi-agent

A:

An environment is a multi-agent environment if it contains several agents, and a single-agent environment otherwise.

Example: The vacuum-cleaner world is a single agent environment. A chess game is a two-agent environment.

Q:

You know how to categorize task environments and can evaluate the difficulty of given tasks.

Deterministic vs. stochastic

A:

An environment is deterministic if its next state is fully determined by its current state and the action of the agent (outcome of an action is known), and stochastic otherwise.

Example: The automated taxi driver environment is stochastic since the behavior of other traffic participants is unpredictable. The output of a calculator is deterministic.

Q:

You know how to categorize task environments and can evaluate the difficulty of given tasks.

Episodic vs. sequential

A:

An environment is episodic if the actions taken in one episode (in which the agent senses and acts) do not affect later episodes, and sequential otherwise.

Example: Detecting defective parts on a conveyor belt is episodic. Chess and automated taxi driving are sequential.

Q:

You know how to categorize task environments and can evaluate the difficulty of given tasks.

Static vs. dynamic

A:

An environment is static if it changes only through the actions of the agent (i.e., it does not change while the agent is deliberating), and dynamic otherwise.

Example: The automated taxi driver environment is dynamic. A crossword puzzle / chess is static.

Q:

You know how to categorize task environments and can evaluate the difficulty of given tasks.

Known vs. unknown

A:

An environment is known if the agent knows the outcomes (or outcome probabilities) of its actions, and unknown otherwise. In the latter case, the agent has to learn the environment first.

Example: An agent that knows all the rules of the card game it plays is in a known environment. Autonomous driving is also known, since the outcome probabilities are known.
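The six distinctions above can be collected into a small sketch (the class and field names are my own, not from the course). The property values follow the examples given on the cards where stated, and my own reading otherwise.

```python
from dataclasses import dataclass

@dataclass
class TaskEnvironment:
    """The six task-environment properties from the flashcards."""
    name: str
    fully_observable: bool
    single_agent: bool
    deterministic: bool
    episodic: bool
    static: bool
    known: bool

vacuum_world = TaskEnvironment(
    name="vacuum-cleaner world",
    fully_observable=False,  # the robot only senses the current square
    single_agent=True,
    deterministic=True,
    episodic=False,          # cleaning a square affects later states
    static=True,
    known=True,
)

taxi = TaskEnvironment(
    name="automated taxi driver",
    fully_observable=False,
    single_agent=False,      # other traffic participants are agents too
    deterministic=False,     # their behavior is unpredictable
    episodic=False,
    static=False,            # traffic changes while the taxi deliberates
    known=True,              # outcome probabilities are known
)
```

Comparing the two instances side by side makes it easy to argue which task is harder: the taxi environment is partially observable, multi-agent, stochastic, sequential, and dynamic all at once.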

Q:

You know the major categories of agents and can group a given agent into one of them.

A:

Four categories with increasing generality:

• simple reflex agents,
• reflex agents with state,
• goal-based agents,
• utility-based agents.

All these can be turned into learning agents.
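As an illustration of the least general category, here is a minimal simple reflex agent for the vacuum-cleaner world, a common textbook example (this is a sketch of mine, not the course's code). It maps the current percept directly to an action through condition-action rules, with no internal state, goals, or utility.

```python
def simple_reflex_vacuum_agent(percept):
    """Condition-action rules for a two-square vacuum world.

    percept: (location, status) with location in {"A", "B"}
    and status in {"Dirty", "Clean"}.
    """
    location, status = percept
    if status == "Dirty":   # rule 1: clean a dirty square
        return "Suck"
    if location == "A":     # rule 2: otherwise move to the other square
        return "Right"
    return "Left"

print(simple_reflex_vacuum_agent(("A", "Dirty")))  # -> Suck
print(simple_reflex_vacuum_agent(("A", "Clean")))  # -> Right
```

Because the agent ignores percept history, it cannot tell whether the other square still needs cleaning; adding that memory would turn it into a reflex agent with state.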

Q:

You understand how real-world problems can often be posed as a pure search problem.

A:

Examples of Real-World Problems

• Route-finding problem: Airline travel planning, routing video streams in computer networks, etc.
• Touring problem: How to best visit a number of places, e.g., in the map of Romania?
• Layout of digital circuits: How to best place components and their connections on a circuit board?
• Robot navigation: Similar to the route-finding problem, but in a continuous space.
• Automatic assembly sequencing: In which order should a product be assembled?
• Protein design: What sequence of amino acids will fold into a three-dimensional protein?

Q:

You can apply the most important uninformed search techniques:

Depth-Limited Search

A:

Depth-Limited Search: Idea

• Shortcoming of depth-first search: it does not terminate in infinite state spaces, because it can descend along an infinite path forever.
• Solution: Introduce a depth limit l. If l is chosen too low, the search becomes incomplete.
• New issue: How should the depth limit be chosen?
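The idea can be sketched as a recursive procedure (my own implementation, not the course's code). Returning a distinct "cutoff" value lets the caller distinguish "no solution within the limit" from "no solution at all", which matters precisely because the limit may have been chosen too low.

```python
def depth_limited_search(state, goal_test, successors, limit):
    """Return a path to a goal state, None on failure, or "cutoff"
    if the depth limit was reached somewhere in the search."""
    if goal_test(state):
        return [state]
    if limit == 0:
        return "cutoff"
    cutoff_occurred = False
    for child in successors(state):
        result = depth_limited_search(child, goal_test, successors, limit - 1)
        if result == "cutoff":
            cutoff_occurred = True
        elif result is not None:
            return [state] + result
    return "cutoff" if cutoff_occurred else None

# Tiny example chain: 0 -> 1 -> 2 -> 3, where 3 is the goal.
succ = {0: [1], 1: [2], 2: [3], 3: []}
print(depth_limited_search(0, lambda s: s == 3, lambda s: succ[s], limit=3))
# -> [0, 1, 2, 3]
print(depth_limited_search(0, lambda s: s == 3, lambda s: succ[s], limit=2))
# -> cutoff
```

Iterative deepening sidesteps the limit-selection issue by rerunning this search with limit 0, 1, 2, ... until a solution is found.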

Q:

You understand the difference between informed and uninformed search.

A:

Uninformed search

• No additional information besides the problem statement (states, initial state, actions, transition model, goal test) is provided.
• Uninformed search can only generate successor states and check whether a state is a goal state.

Informed search

• Strategies can assess whether one state is more promising than another for reaching a goal.
• Informed search uses measures (heuristics) that estimate the distance to a goal.
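The contrast can be illustrated with greedy best-first search, a standard informed strategy that always expands the frontier node with the smallest heuristic value h (an estimate of the distance to the goal); an uninformed strategy like breadth-first search would instead expand in plain FIFO order. The graph and h values below are made up for illustration; the code is a sketch, not the course's.

```python
import heapq

def greedy_best_first(start, goal, neighbors, h):
    """Expand the frontier node with the smallest heuristic value h."""
    frontier = [(h(start), start, [start])]   # priority queue ordered by h
    visited = set()
    while frontier:
        _, state, path = heapq.heappop(frontier)
        if state == goal:
            return path
        if state in visited:
            continue
        visited.add(state)
        for nxt in neighbors(state):
            if nxt not in visited:
                heapq.heappush(frontier, (h(nxt), nxt, path + [nxt]))
    return None

# Made-up graph and heuristic: h guides the search through A first.
graph = {"S": ["A", "B"], "A": ["G"], "B": ["G"], "G": []}
h_values = {"S": 3, "A": 1, "B": 2, "G": 0}
print(greedy_best_first("S", "G", lambda s: graph[s], h_values.get))
# -> ['S', 'A', 'G']
```

An uninformed strategy has no h to consult, so with the same interface it could only pick successors blindly; the heuristic is exactly the "additional information beyond the problem statement" that makes the search informed.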

Q:

You can recall the definition and understand the basic concept of rational agents.

A:

Rationality

A system is rational if it does the “right thing”, i.e., achieves ideal performance (though a performance measure is not always available).

Rational Agent

For each possible percept sequence, a rational agent should select an action that is expected to maximize its performance measure, given the prior percept sequence and its built-in knowledge.

