Grundlagen der KI at TU München | Flashcards & Summaries

Study materials for Grundlagen der KI at TU München

Access free flashcards, summaries, practice exercises, and past exams for your Grundlagen der KI course at TU München.

Q:

You understand the difference between omniscience, learning, and autonomy.

A:

Omniscient agent

An omniscient agent knows the actual outcome of its actions, which is impossible in reality.

Example: Just imagine you know the outcome of betting money on something.

A rational agent (unlike an omniscient agent) maximizes expected performance.
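The betting example can be made concrete: a rational agent picks the bet with the highest expected net gain, whereas an omniscient agent would simply pick the bet it knows will win. A minimal sketch; the two bets and their probabilities are invented for illustration:

```python
# Hypothetical bets: (probability of winning, payoff if won, stake).
bets = {
    "safe": (0.9, 10, 5),    # 90% chance to win 10, stake 5
    "risky": (0.1, 200, 5),  # 10% chance to win 200, stake 5
}

def expected_value(p_win: float, payoff: float, stake: float) -> float:
    """Expected net gain of a bet: p_win * payoff - stake."""
    return p_win * payoff - stake

# A rational agent maximizes EXPECTED performance; it cannot know the
# actual outcome in advance the way an omniscient agent would.
best = max(bets, key=lambda name: expected_value(*bets[name]))
print(best)  # "risky" under these made-up numbers (EV 15 vs. 4)
```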


Learning

Rational agents are able to learn from perception, i.e., they improve their knowledge of the environment over time.


Autonomy

In AI, a rational agent is considered more autonomous if it is less dependent on prior knowledge and uses newly learned abilities instead.

Q:

You know how to categorize task environments and can evaluate the difficulty of given tasks


Fully observable vs. partially observable

A:

An environment is fully observable if the agent can detect the complete state of the environment, and partially observable otherwise.

Example: The vacuum-cleaner world is partially observable since the robot only knows whether the current square is dirty.

Many games, by contrast, are fully observable.

Q:

You know how to categorize task environments and can evaluate the difficulty of given tasks


Single-agent vs. multi-agent

A:

An environment is a multi-agent environment if it contains several agents, and a single-agent environment otherwise.

Example: The vacuum-cleaner world is a single agent environment. A chess game is a two-agent environment.

Q:

You know how to categorize task environments and can evaluate the difficulty of given tasks


Deterministic vs. stochastic

A:

An environment is deterministic if its next state is fully determined by its current state and the action of the agent (outcome of an action is known), and stochastic otherwise.

Example: The automated taxi driver environment is stochastic since the behavior of other traffic participants is unpredictable. The outcome of a calculator is deterministic.
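The distinction can be sketched as two toy transition functions; the environments and the 0.8 success probability below are invented for illustration:

```python
import random

# Deterministic: the next state is fully determined by (state, action),
# like a calculator's output.
def calc_step(state: int, action: str) -> int:
    return state + 1 if action == "inc" else state - 1

# Stochastic: the same (state, action) pair can yield different outcomes,
# like a taxi whose move can be blocked by unpredictable traffic.
def taxi_step(position: int, action: str, rng: random.Random) -> int:
    if action == "forward" and rng.random() < 0.8:  # made-up probability
        return position + 1
    return position  # blocked by other traffic participants

# The deterministic step always produces the identical outcome:
assert calc_step(3, "inc") == calc_step(3, "inc") == 4
```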

Q:

You know how to categorize task environments and can evaluate the difficulty of given tasks


Episodic vs. sequential

A:

An environment is episodic if the actions taken in one episode (in which the agent senses and acts) do not affect later episodes, and sequential otherwise.

Example: Detecting defective parts on a conveyor belt is episodic. Chess and automated taxi driving are sequential.

Q:

You know how to categorize task environments and can evaluate the difficulty of given tasks


Static vs. dynamic

A:

An environment is static if it only changes through the actions of the agent, and dynamic otherwise.

Example: The automated taxi driver environment is dynamic. A crossword puzzle or a chess game is static.

Q:

You know how to categorize task environments and can evaluate the difficulty of given tasks


Known vs. unknown

A:

An environment is known if the agent knows the outcomes (or outcome probabilities) of its actions, and unknown otherwise. In the latter case, the agent has to learn the environment first.

Example: An agent that knows all the rules of the card game it is playing is in a known environment. Autonomous driving is also known, since the outcome probabilities are available.

Q:

You know major categories of types of agents and can group an agent into one of them.

A:

Four categories with increasing generality:

  • simple reflex agents,
  • reflex agents with state,
  • goal-based agents,
  • utility-based agents.

All these can be turned into learning agents.
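The simplest category can be illustrated with the vacuum-cleaner world; this is a sketch of a simple reflex agent, assuming the usual two-square setup with locations A and B:

```python
def simple_reflex_vacuum_agent(percept: tuple[str, str]) -> str:
    """A simple reflex agent: the action depends only on the CURRENT
    percept (location, status), not on percept history or internal state."""
    location, status = percept
    if status == "Dirty":
        return "Suck"
    return "Right" if location == "A" else "Left"

# The agent reacts to the current percept alone:
assert simple_reflex_vacuum_agent(("A", "Dirty")) == "Suck"
assert simple_reflex_vacuum_agent(("A", "Clean")) == "Right"
```

A reflex agent with state would additionally keep an internal model updated from past percepts; goal- and utility-based agents go further and consider future consequences of their actions.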

Q:

You understand how real world problems can often be posed as a pure search problem.

A:

Examples of Real-World Problems

  • Route-Finding problem: Airline travel planning, video streams in computer networks, etc.
  • Touring problem: How to best visit a number of places, e.g., in the map of Romania?
  • Layout of digital circuits: How to best place components and their connections on a circuit board?
  • Robot navigation: Similar to the route-finding problem, but in a continuous space.
  • Automatic assembly sequencing: In which order should a product be assembled?
  • Protein design: What sequence of amino acids will fold into a three-dimensional protein?

Q:

You can apply the most important uninformed search techniques: 


Depth-Limited Search

A:

Depth-Limited Search: Idea

  • Shortcoming of depth-first search: it does not terminate in infinite state spaces, because it can follow a single infinite branch forever without backtracking.
  • Solution: Introduce a depth limit l (which may be chosen too low, so that the solution is missed).
  • New issue: How to choose the depth limit?
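The idea can be sketched as a recursive implementation in the style of the standard textbook algorithm; the toy state space below (an infinite binary tree, on which plain depth-first search would not terminate) is invented for illustration:

```python
def depth_limited_search(state, goal_test, successors, limit):
    """Depth-limited search. Returns a path (list of states) to a goal,
    the string 'cutoff' if the depth limit was reached somewhere,
    or None if the subtree contains no goal at all."""
    if goal_test(state):
        return [state]
    if limit == 0:
        return "cutoff"
    cutoff_occurred = False
    for child in successors(state):
        result = depth_limited_search(child, goal_test, successors, limit - 1)
        if result == "cutoff":
            cutoff_occurred = True
        elif result is not None:
            return [state] + result
    return "cutoff" if cutoff_occurred else None

# Infinite binary tree n -> 2n, 2n+1 rooted at 1; search for the state 5.
def succ(n):
    return [2 * n, 2 * n + 1]

print(depth_limited_search(1, lambda n: n == 5, succ, 2))  # [1, 2, 5]
```

Distinguishing 'cutoff' from failure matters: it tells the caller whether raising the limit could still help, which is exactly what iterative deepening exploits.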

Q:

You understand the difference between informed and uninformed search.

A:

Uninformed search

  • No additional information besides the problem statement (states, initial state, actions, transition model, goal test) is provided.
  • Uninformed search can only generate successor states and check whether a state is a goal state.


Informed search

  • Informed strategies can judge whether one state is more promising than another for reaching a goal.
  • Informed search uses heuristic measures that estimate the distance to a goal.
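The difference can be sketched with greedy best-first search, one of the simplest informed strategies: it orders the frontier by a heuristic h (estimated distance to the goal), which an uninformed strategy does not have. The toy number-line problem and its heuristic are invented for illustration:

```python
import heapq

def greedy_best_first(start, goal, successors, h):
    """Informed search: always expand the frontier state with the
    smallest heuristic value h(state). An uninformed strategy has
    no h and must expand states blindly (e.g., by depth or order)."""
    frontier = [(h(start), start, [start])]
    explored = set()
    while frontier:
        _, state, path = heapq.heappop(frontier)
        if state == goal:
            return path
        if state in explored:
            continue
        explored.add(state)
        for child in successors(state):
            heapq.heappush(frontier, (h(child), child, path + [child]))
    return None

# Walk on the integers; the heuristic is the distance |goal - n|.
def succ(n):
    return [n - 1, n + 1]

path = greedy_best_first(0, 3, succ, lambda n: abs(3 - n))
print(path)  # [0, 1, 2, 3] - the heuristic steers expansion toward 3
```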

Q:

You can recall the definition and understand the basic concept of rational agents.

A:

Rationality

A system is rational if it does the “right thing”, i.e., achieves ideal performance (though performance measures are not always available).


Rational Agent

For each possible percept sequence, a rational agent should select an action that is expected to maximize its performance measure, given the prior percept sequence and its built-in knowledge.

  • 499857 flashcards
  • 11041 students
  • 471 study materials

