Pattern Recognition at TU München

Flashcards and summaries for Pattern Recognition at TU München



Example flashcards for Pattern Recognition at TU München on StudySmarter:

Pattern Recognition

How is the accuracy defined?

accuracy = (TP + TN) / (TP + FP + TN + FN)
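As a minimal sketch (the confusion-matrix counts below are made up for illustration), the definition translates directly into code:

```python
def accuracy(tp, tn, fp, fn):
    """Share of all predictions that are correct."""
    return (tp + tn) / (tp + fp + tn + fn)

# Hypothetical counts: 40 true positives, 45 true negatives,
# 5 false positives, 10 false negatives.
acc = accuracy(tp=40, tn=45, fp=5, fn=10)  # (40 + 45) / 100 = 0.85
```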

Pattern Recognition

What does specificity describe? How is it defined?

Specificity describes how reliably negative samples are labeled as such:

specificity = TN / (TN + FP)
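A one-line sketch with hypothetical counts:

```python
def specificity(tn, fp):
    """Fraction of actual negatives that are correctly labeled negative."""
    return tn / (tn + fp)

# Hypothetical counts: 90 of 100 actual negatives labeled correctly.
spec = specificity(tn=90, fp=10)  # 0.9
```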

Pattern Recognition

In what other ways can we look at a classifier's performance?

Another way of looking at classifier performance is the predictive value of a label.
As the name implies, this metric describes the probability of a sample actually belonging to class X if it was classified as such:
PPV = Positive Predictive Value = TP / (TP + FP)
NPV = Negative Predictive Value = TN / (TN + FN)
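Both predictive values can be sketched the same way (the example counts are made up):

```python
def ppv(tp, fp):
    """Probability that a sample predicted positive is actually positive."""
    return tp / (tp + fp)

def npv(tn, fn):
    """Probability that a sample predicted negative is actually negative."""
    return tn / (tn + fn)

# Hypothetical counts: 40 of 50 positive predictions were truly positive.
print(ppv(40, 10))  # 0.8
```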

Pattern Recognition

How is the F1-measure defined?

F1 measure = (2 × precision × recall) / (precision + recall)
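The F1-measure is the harmonic mean of precision and recall; a minimal sketch:

```python
def f1_measure(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# When precision equals recall, F1 equals that same value.
f = f1_measure(0.75, 0.75)  # 0.75
```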

Pattern Recognition

What is k-fold cross-validation, and why do we use it?

With e.g. k = 5, the data are split into 5 equal pieces. In the first fold, pieces 1–4 are used for training and piece 5 for testing; in the second fold, piece 4 is used for testing and 1–3, 5 for training; etc.
• every data point is tested exactly once
• but still expensive

We use it to generalize well on new data and to avoid overfitting on the available training data.
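The splitting scheme described above can be sketched without any ML library (the function name is made up for this example):

```python
def k_fold_indices(n, k):
    """Split indices 0..n-1 into k folds; yield (train, test) index lists.

    Each index appears in exactly one test fold, so every data point is
    tested exactly once across the k folds.
    """
    folds = []
    start = 0
    for i in range(k):
        size = n // k + (1 if i < n % k else 0)  # distribute the remainder
        folds.append(list(range(start, start + size)))
        start += size
    for i in range(k):
        test = folds[i]
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        yield train, test

splits = list(k_fold_indices(10, 5))  # 5 folds of 2 test points each
```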

Pattern Recognition

What methods other than k-fold cross-validation can we use to avoid overfitting and generalize well on new data?

  1. Random split: Randomly sample a certain proportion (e.g. 50%, 70%) of the data set as the training set, use the remainder for testing.
    • simple
    • potentially wasteful
  2. Leave-one-out, leave-one-pair-out (LOO/LOPO): Training is performed on all data but one point (or one pair of 1 positive, 1 negative sample). This is repeated for every possible test set.
    • very expensive
    • maximizes available training data
  3. Bootstrapping: Samples the training set randomly with replacement; points never drawn (out-of-bag) can serve as the test set.
    • allows estimation of bias and variance
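A minimal sketch of a bootstrap split (method 3 above; function name is made up):

```python
import random

def bootstrap_split(data, seed=0):
    """Draw len(data) samples with replacement as the training set;
    points that were never drawn (out-of-bag) form the test set."""
    rng = random.Random(seed)
    train = [rng.choice(data) for _ in data]
    oob = [x for x in data if x not in train]
    return train, oob

train, test = bootstrap_split(list(range(20)))
```

On average roughly a third of the points end up out-of-bag, which is what makes repeated bootstrap splits useful for estimating bias and variance.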

Pattern Recognition

How do bias and variance errors get introduced?

Error due to excessive model complexity (the model's sensitivity to the particular training sample it sees) is called the variance error. Error due to overly simple or wrong model assumptions is called the bias error.

Pattern Recognition

Can you give an example of a classifier with high bias and high variance?

High bias means the data is being underfitted: the decision boundary is usually not complex enough. High variance comes from overfitting: the decision boundary is more complex than it should be.

A classifier with both high bias and high variance fits a complex decision boundary that still misclassifies the training set in several places.
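A tiny self-contained illustration (the toy data is made up): a constant predictor underfits (high bias, large training error), while a 1-nearest-neighbour predictor memorizes the training set (high variance, zero training error):

```python
# Hypothetical 1-D toy data: y roughly follows x, plus noise.
data = [(0, 0.1), (1, 1.2), (2, 1.9), (3, 3.2), (4, 3.8)]

mean_y = sum(y for _, y in data) / len(data)

def high_bias_predict(x):
    # Always predicts the mean: too simple, ignores any trend (underfits).
    return mean_y

def high_variance_predict(x):
    # 1-nearest-neighbour: memorizes the training set, fits the noise (overfits).
    return min(data, key=lambda p: abs(p[0] - x))[1]

def train_error(predict):
    # Mean squared error on the training data.
    return sum((predict(x) - y) ** 2 for x, y in data) / len(data)
```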

Pattern Recognition

What are the advantages and disadvantages of using Naive Bayes for spam detection?

  • Disadvantages: Naive Bayes rests on the assumption that features are conditionally independent, which does not hold in many real-world scenarios. It therefore sometimes oversimplifies the problem and delivers sub-par performance; this assumption can also lead to underfitting.
  • Advantages: Naive Bayes is very efficient: it can be trained in a single pass over the data, is fast to execute, and parallelizes easily. It works well with little data and many features, such as a bag-of-words representation of text. Because of the independence assumption, the number of parameters is small and constant with respect to the data size (unlike, for example, decision trees), so there is less risk of overfitting.

Pattern Recognition

What three types of outliers exist?

  • Point outliers are single data points that lie far from the rest of the distribution.
  • Contextual outliers can be noise in the data, such as punctuation symbols in text analysis or background noise in speech recognition.
  • Collective outliers can be subsets of novelties in the data, such as a signal that may indicate the discovery of a new phenomenon.
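For point outliers, a simple sketch (function name and threshold are illustrative choices, not a canonical method) flags values far from the mean in standard-deviation units:

```python
import statistics

def point_outliers(xs, z_thresh=3.0):
    """Flag values more than z_thresh sample standard deviations from the mean."""
    mu = statistics.mean(xs)
    sd = statistics.stdev(xs)
    return [x for x in xs if abs(x - mu) / sd > z_thresh]

# Thirty inliers plus one far-away point:
flagged = point_outliers(list(range(30)) + [500])
```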

Pattern Recognition

Describe how isolation trees detect outliers.

  • To build a tree, the algorithm repeatedly picks a random feature from the feature space and a random split value between that feature's minimum and maximum, partitioning the observations in the training set.
  • To build the forest, an ensemble of such trees is constructed and path lengths are averaged over all trees.
  • For prediction, an observation is compared against the split value at each node and passed to one of the two child nodes, where another random comparison is made. The number of splits needed to isolate an instance is called its "path length".
  • As expected, outliers have shorter path lengths than the rest of the observations.
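The 1-D sketch below (helper names are made up; a real isolation forest works on many features and subsamples the data) illustrates the core idea: random splits isolate a far-away point in far fewer steps than a typical point.

```python
import random

def isolation_path_length(x, data, rng, max_depth=50):
    """Number of random splits needed to isolate x within data (1-D sketch)."""
    depth = 0
    while len(data) > 1 and depth < max_depth:
        lo, hi = min(data), max(data)
        if lo == hi:
            break
        split = rng.uniform(lo, hi)
        # Keep only the side of the split that contains x.
        data = [v for v in data if (v < split) == (x < split)]
        depth += 1
    return depth

def avg_path_length(x, data, n_trees=200, seed=0):
    """Average isolation depth over an ensemble of random trees (the 'forest')."""
    rng = random.Random(seed)
    return sum(isolation_path_length(x, data, rng) for _ in range(n_trees)) / n_trees

# Inliers in [0, 1) plus one far outlier: the outlier is usually
# isolated by the very first split.
points = [i / 100 for i in range(100)] + [10.0]
```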

Pattern Recognition

Briefly describe how Naive Bayes works. Where is it normally applied?

Naive Bayes is a supervised learning algorithm for classification: the task is to find the class of an observation (data point) given the values of its features. A Naive Bayes classifier uses Bayes' theorem to calculate the posterior probability of each class given the observed features. It assumes the features (e.g. the words of a text) are independent, which makes the class posterior easy to compute.

It is typically used for:

  • Real-time prediction
  • Text classification / spam filtering
  • Recommendation systems
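The mechanism can be sketched as a tiny bag-of-words spam classifier with add-one (Laplace) smoothing; the corpus and helper names below are made up for illustration:

```python
import math
from collections import Counter

# Tiny hypothetical training corpus: (words, label) pairs.
train = [
    ("win money now".split(), "spam"),
    ("cheap money offer".split(), "spam"),
    ("meeting schedule today".split(), "ham"),
    ("project meeting notes".split(), "ham"),
]

def fit(train):
    """Count class frequencies and per-class word frequencies."""
    class_counts = Counter(label for _, label in train)
    word_counts = {c: Counter() for c in class_counts}
    for words, label in train:
        word_counts[label].update(words)
    vocab = {w for words, _ in train for w in words}
    return class_counts, word_counts, vocab

def predict(words, model):
    """Pick the class with the highest posterior (log prior + log likelihoods)."""
    class_counts, word_counts, vocab = model
    n = sum(class_counts.values())
    best, best_lp = None, -math.inf
    for c, cc in class_counts.items():
        lp = math.log(cc / n)  # log prior
        total = sum(word_counts[c].values())
        for w in words:
            # Add-one smoothing avoids zero probability for unseen words.
            lp += math.log((word_counts[c][w] + 1) / (total + len(vocab)))
        if lp > best_lp:
            best, best_lp = c, lp
    return best

model = fit(train)
print(predict("money offer".split(), model))  # "spam" on this toy corpus
```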

