Old Exam Questions at Universität Tübingen | Flashcards & Summaries


Q:

Explain how the strength of TMS stimulation is usually determined (when using TMS to study cognition). Why do we do this and what is the implicit assumption here?

A:

The strength of TMS stimulation is usually determined by selecting an area of the brain where TMS application results in a clearly observable effect (e.g. a hand twitch after stimulation of the 'hand area') and then adjusting the strength of the TMS stimulation until this behavioural consequence of TMS (e.g. the hand twitch) is present approximately 50% of the time. We do this to ensure that the stimulation strength is high enough to elicit neuronal activity in the brain but not higher than necessary, to minimize side effects (such as an epileptic seizure). A strong assumption made here is that the TMS stimulation strength needed to elicit neuronal activity at the selected area generalises to the rest of the brain (i.e. the same stimulation strength is equally capable of eliciting neural activity at other locations).

Q:

Y = Xβ + ε ... what is represented by the variables of the general linear model in the case of a single-subject analysis?

A:

The vector Y contains the measured time series of BOLD signals in a given voxel. The matrix X contains the predictors that together should explain the time course of the measured data in Y. The vector β contains the weights for all predictors in the matrix X. Each β corresponds to one predictor and expresses its contribution to the whole BOLD time series. ε contains the residuals, i.e. the differences between the measured time course in Y and the time course predicted from the combination of X and β.
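The estimation can be sketched in a few lines of NumPy. The boxcar regressor, noise level, and β values below are made-up illustration values, not from any real dataset; β̂ is obtained by ordinary least squares (in a real analysis the task regressor would additionally be convolved with a haemodynamic response function):

```python
import numpy as np

# Toy single-subject GLM: one boxcar task regressor plus a constant term.
n_scans = 100
box = np.zeros(n_scans)
box[20:40] = 1.0
box[60:80] = 1.0                                 # "task on" periods
X = np.column_stack([box, np.ones(n_scans)])     # design matrix (predictors)

rng = np.random.default_rng(0)
true_beta = np.array([2.0, 100.0])               # task effect and baseline
Y = X @ true_beta + rng.normal(0, 0.5, n_scans)  # simulated voxel time series

# Least-squares estimate of beta and the resulting residuals (epsilon).
beta_hat, *_ = np.linalg.lstsq(X, Y, rcond=None)
residuals = Y - X @ beta_hat
```

Here `beta_hat[0]` recovers the simulated task effect and `residuals` corresponds to ε, the part of Y not explained by Xβ̂.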

Q:

What is the most important risk for the subject associated with TMS? How can we minimize this risk?

A:

The most important risk is an epileptic seizure. We can minimize this by 1) excluding subjects who have a prior history or family history of epilepsy or seizures and 2) adhering to established safety guidelines concerning the strength, frequency and duration of stimulation.

Q:

What is a participant bias, how can it affect your experiment, and how can you control for it?

A:

Our participants always know that they are part of an experiment. Thus, they behave in a way that they suppose would be good for the experiment, based on their perception of and experience with the respective experimental tasks. They develop their own ideas and hypotheses while doing the task. Either they catch on to the true purpose and, consciously or unconsciously, produce the expected results, or they develop wrong beliefs about a presumed goal of the experiment and start to behave accordingly. Thus, instead of the experimental manipulations themselves, the beliefs of the participants would primarily influence the result. To avoid such effects you should give away as little information about the true purpose of the experiment as possible. Furthermore, you can ask the participants after the experiment about their personal ideas and hypotheses and thereby see whether there were any systematic effects across the participant group. Finally, between-subject designs would be a good choice because less information about the experimental manipulations is available to the individual participant.

Q:

Conducting a BOLD-fMRI experiment, you typically measure an anatomical and a functional set of data. Crucial steps in data processing are normalisation, motion correction, and between-modality co-registration. Please describe these steps of data processing in the right order. Which of the two data sets is processed and what problem is addressed by the respective steps?

A:

Preprocessing starts with motion correction of the functional data series. Participants in fMRI experiments usually cannot prevent at least small movements during the measurement. Such movements could cause artifacts in the final analysis and must be corrected. Next, we coregister the functional and the anatomical images. As the two data acquisitions take place at different times, they could potentially be in spatial disagreement. To later combine the signals detected in the functional analysis with the high-resolution anatomical images, we ensure that both are in spatial agreement. Finally, we normalise all images: we estimate the differences between a participant's anatomical scan and a standard template, first warp the anatomical image to match the template, and then apply the same transformation to all functional images.

Q:

Discuss the aspect of "causality" in different methods (lesion-behaviour mapping, fMRI, TMS). Given that we now have methods like fMRI and TMS to study the functional neuroanatomy of the brain without having to rely on (rare) patients, why is lesion-behaviour mapping still informative?

A:

Neuroimaging methods like fMRI can only reveal correlations, not causality. Both TMS and lesion-behaviour mapping can reveal causality, meaning that (a lack of) brain activity in an area causes (a lack of) the behaviour of interest. However, following TMS, cognitive effects tend to be very small. The full-blown cognitive deficits like aphasia, apraxia, neglect etc. seen in lesion patients are not seen following TMS. Moreover, TMS is anatomically limited: many areas of the brain are not accessible to TMS. This is not a limitation for lesion-behaviour mapping. As such, lesion-behaviour mapping remains an invaluable tool.

Q:

The statistical analysis of an individual's fMRI data results in a 3-D map of a statistical parameter (e.g. a map of t-values) covering the whole measured volume (usually the whole brain). These parameters represent the contribution of a specified regressor to the measured signal within each voxel. Why do we 'threshold' such parameter maps and what is the rationale behind the decision for a specific threshold value? What are the pitfalls of thresholding? Please explain the "multiple comparison problem" within this context.

A:

We threshold the data to distinguish between voxels that show more or less activity with regard to the task under investigation. This decision is usually made based on the logic of null-hypothesis significance testing (NHST). Thus, we determine a type-1 error probability that we are willing to accept for our voxel-wise tests of regression parameters and only look at those voxels whose parameters result in a type-1 error probability lower than that threshold value. One critical issue of thresholding is that the difference between significant and non-significant voxels might be rather small in descriptive terms, so a lot of interesting (and important) data might vanish below the threshold. If the threshold is chosen based on NHST, we must be aware that the accepted type-1 error probability applies to each and every voxel. Thus, to ensure that the analysis of the whole brain (or a group of voxels) shows a single false-positive voxel no more often than determined by the threshold, we must adjust the type-1 error probability to the number of individual tests being conducted (i.e. the number of voxels).
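A small simulation illustrates the problem. All 1,000 simulated "voxels" below contain pure noise (the null hypothesis is true everywhere), yet roughly 5% of the uncorrected one-sample t-tests come out significant. The numbers (1,000 voxels, n = 20 observations per voxel) are arbitrary illustration values:

```python
import random

def count_false_positives(n_voxels=1000, alpha_crit=2.093, n=20, seed=0):
    """All voxels contain pure noise; count how many one-sample t-tests
    nevertheless exceed the two-sided critical value (df = 19, alpha = .05)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_voxels):
        x = [rng.gauss(0, 1) for _ in range(n)]
        m = sum(x) / n
        var = sum((v - m) ** 2 for v in x) / (n - 1)
        t = m / (var / n) ** 0.5
        if abs(t) > alpha_crit:
            hits += 1
    return hits

print(count_false_positives())  # close to 50, i.e. ~5% of 1000 null voxels
```

A Bonferroni correction would instead test each voxel at α = 0.05 / 1000, which makes even a single false positive anywhere in the map unlikely.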

Q:

Explain the term "Reliability". What were the two empirical methods to determine the reliability of a measure that we talked about? Explain the term "Validity". Name two empirical methods to demonstrate the validity of a measurement.

A:

Reliability describes the repeatability of a measurement's result. You can measure the same person at different time points with the same experimenter (test-retest reliability), and you can check the inter-rater reliability by letting different raters rate the same experiment/measurement independently, e.g. by showing the same video to several raters. Validity describes the meaningfulness of a measurement. It asks the question: does the experiment really measure what it is designed to measure? You can assess convergent validity by looking at the correlation of your measurement with another established measurement of the construct you want to measure; a high correlation is needed here. You can also look at divergent validity: here, you check whether there is a low correlation with another established measurement that measures something that does not belong to the construct you want to measure.
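Both test-retest reliability and convergent validity boil down to a correlation between two lists of scores. A minimal sketch, using made-up scores for eight hypothetical participants (the numbers are purely illustrative):

```python
def pearson_r(x, y):
    """Pearson correlation between two equally long score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical scores of the same eight participants at two time points:
t1 = [12, 15, 9, 20, 14, 11, 17, 13]
t2 = [13, 14, 10, 19, 15, 10, 18, 12]
r = pearson_r(t1, t2)  # high r indicates good test-retest reliability
```

For divergent validity you would run the same computation against a measure of an unrelated construct and expect `r` to be low.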

Q:

What do we understand as sequence effects in within-subject designs? What is a Latin square design and what are its characteristics with respect to the control of sequence effects?

A:

In within-subject designs participants experience multiple conditions in a row. Even in short experiments many things can change over the course of such an experiment and affect the experimental conditions differently depending on their position in the temporal sequence. People might get bored or tired over time. They get used to the general experimental setting, e.g. the room, the screen, the position of the buttons etc. They might learn specific things about the experiment from preceding conditions that improve performance in later conditions. Similarly, people might develop wrong ideas about the purpose of the experiment and because of that perform worse later. A Latin square is an experimental design that allows the systematic investigation of a sub-sample of the possible experimental sequences when the total number of possible sequences cannot be completely covered (roughly, in experiments with more than 5 conditions). Using the Latin square we produce a group of sequences in which each experimental condition appears exactly once in each position of the sequence and, in a balanced Latin square, each experimental condition precedes and follows each other condition equally often.
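A balanced Latin square can be generated with the classic zig-zag construction (a Williams design; the balance of first-order carry-over effects holds exactly for an even number of conditions):

```python
def balanced_latin_square(n):
    """Return n condition sequences (one per participant group) in which
    each condition occupies each serial position exactly once and, for
    even n, immediately precedes every other condition exactly once."""
    # First row follows the zig-zag pattern 0, 1, n-1, 2, n-2, ...
    first, lo, hi = [0], 1, n - 1
    while len(first) < n:
        first.append(lo); lo += 1
        if len(first) < n:
            first.append(hi); hi -= 1
    # Remaining rows are cyclic shifts of the first row.
    return [[(c + r) % n for c in first] for r in range(n)]

for row in balanced_latin_square(4):
    print(row)
# [0, 1, 3, 2]
# [1, 2, 0, 3]
# [2, 3, 1, 0]
# [3, 0, 2, 1]
```

In the four sequences above, every condition appears once in every position, and every ordered pair of adjacent conditions occurs exactly once across the group.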

Q:

Simmons et al. (Psychol Sci, 2011) published an interesting work addressing the validity of hypothesis tests. In this article they coined the term "researcher degrees of freedom". What did they mean with that term? Give two examples for such degrees of freedom.

A:

Once the data have been acquired, data analysis starts. In principle, the whole statistical analysis, usually finishing with inferential statistics, should have been determined before the first measurement and simply be carried out now. However, data analysis often includes "a bit of exploration". In typical (non-preregistered) studies, data analysis is not supervised by any external party, so researchers are free to conduct a data analysis that deviates from their original plans. To be clear, any such deviation from the original plan invalidates the logic of NHST. One example would be a post-hoc separation of a large sample into two subgroups because the researcher observed a difference between, e.g., male and female participants. Another would be to continue measuring beyond the originally planned sample size until a significant outcome ends the repeated sampling (optional stopping).
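The second example, optional stopping, can be quantified with a short simulation. All data below are generated under a true null hypothesis; testing after every batch of added subjects and stopping at the first significant z-value inflates the nominal 5% false-positive rate well above 5%. All parameter values (starting n, batch size, maximum n) are arbitrary illustration choices:

```python
import math
import random

def optional_stopping_fpr(n_sims=2000, start_n=20, max_n=100, step=10,
                          z_crit=1.96, seed=1):
    """Simulate H0-true data (mean 0, sd 1); run a z-test after every
    `step` added subjects and stop at the first significant result.
    Returns the fraction of simulations that end 'significant'."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_sims):
        data = [rng.gauss(0, 1) for _ in range(start_n)]
        while True:
            n = len(data)
            z = (sum(data) / n) * math.sqrt(n)  # known sd = 1
            if abs(z) > z_crit:
                hits += 1                       # false positive
                break
            if n >= max_n:
                break                           # give up, correct decision
            data += [rng.gauss(0, 1) for _ in range(step)]
    return hits / n_sims
```

Running `optional_stopping_fpr()` yields a false-positive rate clearly above the nominal 5%, because the researcher gets many chances to cross the significance threshold by chance.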

Q:

When performing a voxel-wise statistical lesion-behaviour mapping analysis, we typically perform the same test in many voxels simultaneously. Explain how this affects our probability of a false positive and briefly explain the 3 methods used to counteract this.

A:

If each test has the typical false positive rate of 5%, then performing the same test at 100 voxels will result, purely by chance, in about 5 (false positive) significant test results. That is, as more and more voxels are assessed, it becomes more and more likely that we will obtain a significant result in at least one voxel purely by chance. Instead of controlling the probability of a false positive for each voxel, we usually want to control the probability of at least one false positive in all voxels tested. This is known as controlling the family-wise error rate and can be done in the following ways:

1) Bonferroni correction: divide the acceptable false positive probability (usually 5%) by the number of tests performed. This ensures that your probability of at least one false positive across all voxels tested is 5%.

2) False Discovery Rate (FDR) correction: control the proportion of false positives among found positives, i.e. an FDR-corrected threshold of p (actually q) = 0.05 means that 5% of your individual significant findings (voxels) might be false positives (but not more).

3) Permutation thresholding: use permutation testing to assess whether the difference in behaviour between patient groups can be attributed to the voxel status label (lesioned/non-lesioned) or not. We calculate the maximum test statistic with the correct assignment of patients to voxel status labels, then randomly scramble the assignment many times, calculating the maximum test statistic for each of these permutations. Ultimately, we can plot, for each maximum test statistic, how frequently we obtained it in the permutations (i.e. under the null hypothesis that the voxel status label is meaningless). Using this maximum-statistic distribution, we obtain a cut-off value that is surpassed in less than 5% of the permutations. If our original maximum test statistic, obtained with the correct voxel status labels, exceeds this cut-off, it had a probability of less than 5% of occurring under the null hypothesis. As we always take the maximum statistic, we ultimately choose a test-statistic cut-off that is exceeded anywhere in the brain in less than 5% of permutations, i.e. our false positive control is as strong as with the more conservative Bonferroni correction.
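The maximum-statistic permutation scheme can be sketched in plain Python. For simplicity this sketch uses the absolute difference in group means as the test statistic instead of a t-value, and the data layout (one behaviour score per patient, a binary lesion map per patient) is a made-up toy format:

```python
import random

def max_stat_threshold(behaviour, lesion_maps, n_perm=500, alpha=0.05, seed=0):
    """behaviour: one score per patient.
    lesion_maps: per-patient lists of 0/1 voxel lesion status.
    Returns (observed max statistic, permutation-based FWE cut-off)."""
    rng = random.Random(seed)
    n_vox = len(lesion_maps[0])

    def max_stat(scores):
        best = 0.0
        for v in range(n_vox):
            les = [s for s, p in zip(scores, lesion_maps) if p[v] == 1]
            spa = [s for s, p in zip(scores, lesion_maps) if p[v] == 0]
            if les and spa:
                d = abs(sum(les) / len(les) - sum(spa) / len(spa))
                best = max(best, d)
        return best  # maximum statistic over all voxels

    # Null distribution: scramble the patient-to-score assignment.
    null = []
    for _ in range(n_perm):
        shuffled = behaviour[:]
        rng.shuffle(shuffled)
        null.append(max_stat(shuffled))
    null.sort()
    cutoff = null[int((1 - alpha) * n_perm)]
    return max_stat(behaviour), cutoff

# Toy data: lesions in voxel 0 strongly lower the behaviour score.
behaviour = [-3.0 + 0.1 * (i % 3) for i in range(10)] + \
            [3.0 + 0.1 * (i % 3) for i in range(10)]
lesion_maps = [[1, i % 2, (i + 1) % 2, (i * 3) % 2, i % 2] for i in range(10)] + \
              [[0, (i + 1) % 2, i % 2, i % 2, (i + 1) % 2] for i in range(10)]
observed, cutoff = max_stat_threshold(behaviour, lesion_maps)
```

Because voxel 0 genuinely separates the groups, `observed` exceeds the permutation cut-off, while any statistic below the cut-off could plausibly have arisen anywhere in the map by chance.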

Q:

List the pros and cons of both CT and MRI imaging to visualise lesion location

A:

CT pros: inexpensive, universally tolerated, good for visualising haemorrhagic stroke.
CT cons: subjects the patient to radiation, poor tissue/image contrast, poor spatial resolution.
MRI pros: good tissue/image contrast, good spatial resolution, no radiation, many different protocols so that scanning can be tailored to the situation and patient characteristics (i.e. flexible).
MRI cons: often requires an additional imaging session, not universally tolerated (e.g. patients with metal implants).
