
Next, I will switch to cluster-randomized cross-over trials. I will describe the corresponding planning using the example of a pragmatic trial from our work in the 2nd funding period of the CSCC.


I will summarize my talk with a discussion of the pros and cons of such a design and the related analysis issues for the given research question. Venue: Seminarraum IMBS. Time: 11:00 (c.t.).

Hierarchical testing is a simple and popular procedure for testing multiple null hypotheses that requires no adjustment of the significance levels. The hypotheses are ranked according to a pre-specified order, and a null hypothesis is tested only if all hypotheses ranked before it could be rejected.
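As a concrete illustration, the fixed-sequence procedure just described can be sketched in a few lines (a minimal sketch with hypothetical p-values, not the speaker's implementation):

```python
def fixed_sequence_test(p_values, alpha=0.025):
    """Hierarchical (fixed-sequence) testing: hypotheses are tested in their
    pre-specified order, each at the full level alpha, and testing stops at
    the first hypothesis that cannot be rejected."""
    decisions = []
    for p in p_values:
        if p <= alpha:
            decisions.append(True)   # reject and move on to the next hypothesis
        else:
            break                    # stop: all later hypotheses are retained untested
    decisions += [False] * (len(p_values) - len(decisions))
    return decisions

# ordered p-values for, e.g., a primary and three secondary hypotheses
print(fixed_sequence_test([0.004, 0.019, 0.210, 0.001]))
# → [True, True, False, False]
```

Note that the fourth hypothesis is retained although its p-value is very small: it is never tested because the third hypothesis in the sequence failed.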

A common application of hierarchical testing is the testing of efficacy in a primary and a secondary endpoint. As with other stepwise multiple testing procedures, the confidence intervals associated with hierarchical testing have the deficiency that, once a null hypothesis has been rejected, they often provide no information beyond the hypothesis test itself.

For example, when hierarchically testing one-sided, upper-bounded null hypotheses, the lower confidence bound for the first-ranked hypothesis sticks to the boundary of the null hypothesis whenever that hypothesis is rejected. To avoid this difficulty, we propose a modification of the hierarchical test that leads to confidence intervals which always provide additional information.

The modification entails no loss of power for testing the first-ranked null hypothesis and can be designed so that the loss of power for the remaining hypotheses is only small.
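The "sticking" of the lower bound can be shown with a stylized numerical sketch (my own simplification of the standard stepwise intervals, not the proposed modified procedure): as long as testing proceeds, each rejected hypothesis only receives the trivial bound 0.

```python
Z_975 = 1.959964  # 97.5% standard normal quantile

def stepwise_lower_bounds(estimates, std_errors, alpha_z=Z_975):
    """Stylized one-sided lower confidence bounds accompanying a
    fixed-sequence test of H_i: theta_i <= 0.  While testing proceeds,
    rejected hypotheses only get the trivial bound 0; the informative
    marginal bound is reported only at the first non-rejection."""
    bounds = []
    for est, se in zip(estimates, std_errors):
        marginal = est - alpha_z * se   # the informative marginal bound
        if marginal <= 0:               # first non-rejection: report it and stop
            bounds.append(marginal)
            break
        bounds.append(0.0)              # rejected: bound sticks to the null boundary
    return bounds

# both hypotheses rejected: the marginal bounds would be about 0.30 and 0.10,
# but the stepwise intervals only report 0 for each
print(stepwise_lower_bounds([0.5, 0.3], [0.102, 0.102]))
# → [0.0, 0.0]
```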


It can be implemented via an extension of the graphical test procedures of Bretz et al. We illustrate the method with an example and present the results of simulation studies on the performance of the procedure. We also give an outlook on other informative confidence intervals, namely for the fallback procedure and the Bonferroni-Holm test.

Linear regression is an important and frequently used tool in statistics; however, its validity and interpretability rely on strong model assumptions. While robust estimates of the covariance matrix of the regression coefficients extend the validity of hypothesis tests and confidence intervals, a clear and simple interpretation of the regression coefficients is lacking when the mean structure of the model is mis-specified.
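A minimal numpy sketch of such a robust covariance estimate (the HC0 "sandwich" form; function names and data are my own illustration):

```python
import numpy as np

def ols_with_sandwich(X, y):
    """OLS coefficients together with the heteroscedasticity-robust HC0
    'sandwich' covariance estimate (X'X)^{-1} X' diag(e_i^2) X (X'X)^{-1}."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    bread = np.linalg.inv(X.T @ X)
    meat = X.T @ (X * resid[:, None] ** 2)
    return beta, bread @ meat @ bread

rng = np.random.default_rng(0)
x = rng.normal(size=500)
# heteroscedastic errors: the classical OLS variance formula would be invalid
y = 1.0 + 2.0 * x + rng.normal(size=500) * (1.0 + np.abs(x))
X = np.column_stack([np.ones_like(x), x])
beta, cov = ols_with_sandwich(X, y)
print(beta, np.sqrt(np.diag(cov)))  # coefficients and robust standard errors
```

The resulting standard errors remain valid under heteroscedasticity, but, as the abstract notes, they do not by themselves restore a simple interpretation of the coefficients when the mean structure is wrong.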

To overcome this deficiency, we suggest a new, intuitive and mathematically rigorous interpretation of the linear regression coefficients that is independent of specific model assumptions. The interpretation relies on a new population-based measure of association. The idea is to quantify how much the unconditional population mean of the dependent variable Y can be changed by changing the distribution of the independent variable X in the population. We show that, with square-integrable observations and a suitable standardization of the distributional changes, the maximum change in the mean of Y is well defined and equals zero if and only if the conditional mean of Y given X is independent of X.

Restriction to linear functions for the distributional changes in X provides the link to linear regression. It leads to a conservative approximation of the newly defined and generally non-linear measure of association. The conservative linear approximation can then be estimated by linear regression. We show how the new interpretation can be extended to multiple regression and how far and in which sense it leads to an adjustment for confounding. We illustrate the utility and limitations of the new interpretation by examples and in a simulation study, and we point to perspectives for new analysis strategies.
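A toy sketch of the basic idea (my own illustration, not the authors' estimator): under the fitted linear approximation, shifting the whole distribution of X by an amount delta changes the implied population mean of Y by slope * delta.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=2000)
y = 0.8 * x + rng.normal(size=2000)

slope, intercept = np.polyfit(x, y, 1)   # linear approximation fitted by OLS

# shift the distribution of X by delta and compare the mean of the fitted
# linear predictor before and after the shift
delta = 1.0
mean_before = np.mean(slope * x + intercept)
mean_after = np.mean(slope * (x + delta) + intercept)
print(mean_after - mean_before)  # equals slope * delta
```

For a general, non-linear conditional mean this linear quantity is only a conservative approximation of the maximal achievable change described above.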

Stefan Winter, Institut für Klinische Pharmakologie, Robert-Bosch-Krankenhaus, Stuttgart. Time: 11:00 (c.t.).


Pharmacogenomics seeks to explain the interindividual variability in drug disposition and response due to genetic variation. Efforts such as the Human Genome Project or the International HapMap Project have facilitated the identification of factors and mechanisms affecting patient outcome or drug toxicity. Pharmacogenomic investigations comprise genome-wide association studies (GWAS) as well as pathway or candidate gene approaches. Known challenges arising in such studies include study design, confounding (population stratification), analysis and replication strategies, multiple testing issues, and lack of reproducibility of results across studies.

Based on selected examples from pharmacogenomics research, several of these issues will be addressed.

Silke Szymczak, Institut für Klinische Molekularbiologie, Christian-Albrechts-Universität zu Kiel. Date: 4.

Machine learning methods, and in particular random forests (RFs), are promising approaches for classification based on high-dimensional omics data sets.

Variable importance measures allow variables to be ranked according to their predictive power. However, no clear criteria exist for determining how many variables should be selected for downstream analyses.
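To make the ranking idea concrete, here is a model-agnostic sketch of a permutation-based importance measure (a simplification; RF implementations compute this per tree on out-of-bag samples):

```python
import numpy as np

def permutation_importance(predict, X, y, n_repeats=10, seed=0):
    """Increase in mean squared error when one variable's values are
    shuffled: important variables cause a large increase, irrelevant
    variables almost none."""
    rng = np.random.default_rng(seed)
    mse = lambda a, b: np.mean((a - b) ** 2)
    baseline = mse(y, predict(X))
    importance = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        scores = []
        for _ in range(n_repeats):
            Xp = X.copy()
            Xp[:, j] = rng.permutation(Xp[:, j])   # destroy variable j's signal
            scores.append(mse(y, predict(Xp)))
        importance[j] = np.mean(scores) - baseline
    return importance

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 3))
y = 2.0 * X[:, 0] + rng.normal(scale=0.1, size=300)   # only variable 0 matters
model = lambda X: 2.0 * X[:, 0]                        # a toy 'perfect' predictor
imp = permutation_importance(model, X, y)
print(imp)  # large for variable 0, (near) zero for variables 1 and 2
```

The ranking is clear, but — as the abstract points out — the measure alone gives no cutoff for how many top-ranked variables to keep.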


We evaluated several variable selection procedures based on methylation and gene expression data sets from the Gene Expression Omnibus and The Cancer Genome Atlas. RFs were used to estimate probabilities instead of binary classifications, and variables were selected based on a permutation-based approach (PERM), recursive feature elimination (RFE), and our new method using recurrent relative variable importance measurements (r2VIM).

Each data set was repeatedly split into training and test sets, and comparisons were based on the number of selected variables and the mean squared error (MSE) of an RF built on the selected variables only. We also permuted the phenotype to generate data sets under the null hypothesis. In all analyses, MSEs were similar, but PERM and r2VIM declared more than a hundred variables as important, while RFE selected only a handful. The latter approach was also the only one that selected more variables under the null hypothesis than on the original data.

PERM and r2VIM selected the same variables when predicting gender differences; however, in the more complex problem of glioma subtypes, PERM identified additional important variables. Run time was comparable for RFE and r2VIM, whereas PERM was 20 times slower. In conclusion, r2VIM is a sensible choice for variable selection with RFs in high-dimensional data sets. However, PERM should be preferred if false negative results must be avoided and computational resources are not limited.
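The core of the r2VIM selection rule can be sketched as follows (a loose sketch of the idea; the importance vectors would come from repeated RF runs with different seeds, and the details here are a simplification):

```python
import numpy as np

def r2vim_select(importance_runs, factor=1.0):
    """Sketch of recurrent relative variable importance (r2VIM): in each RF
    run, importances are divided by the absolute value of the smallest
    (negative) importance, which serves as an estimate of the importance a
    null variable can reach by chance; variables whose relative importance
    is at least `factor` in every run are selected."""
    selected = None
    for imp in importance_runs:
        rel = np.asarray(imp) / abs(min(imp))
        keep = rel >= factor
        selected = keep if selected is None else selected & keep
    return np.flatnonzero(selected)

# hypothetical importance scores for four variables from two RF runs
runs = [np.array([5.2, 0.1, -0.5, 1.4]),
        np.array([4.8, -0.4, 0.2, 0.3])]
print(r2vim_select(runs))  # only variable 0 passes the threshold in both runs
```

Requiring the threshold to be met in every run is what stabilizes the selection against the randomness of a single forest.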

Johannes Krisam, Institut für Medizinische Biometrie und Informatik, Universität Heidelberg.

The planning stage of a clinical trial investigating a potentially targeted therapy commonly involves a high degree of uncertainty as to whether the treatment is efficacious in the whole population or only (or more so) in a subgroup. Recently developed adaptive designs allow a mid-course efficacy assessment of the total population and a pre-defined subgroup, and a selection of the most promising target population based on the interim results.

Predictive biomarkers are frequently employed in these trials in order to identify a subset of patients more likely to benefit from a drug. The performance of the applied subset selection rule is crucial for the overall characteristics of the design. We investigate the performance of various subgroup selection rules to be applied in adaptive two-stage designs. Methods are presented that allow an evaluation of the operating characteristics of rules for selecting the target population, thus enabling an appropriate strategy to be chosen.

The comparison includes optimal decision rules which take uncertainty about the assumed treatment effects into account by modeling the uncertainty about the parameters through prior distributions (Krisam and Kieser). Furthermore, alternative selection rules proposed in the literature are considered. We evaluate the performance of these selection rules in adaptive enrichment designs by investigating Type I error rate and power. Both the case of a perfect and of an imperfect biomarker are considered.
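For illustration, one simple interim selection rule of the kind compared in such evaluations is an "epsilon rule" on the observed interim effects (the thresholds are hypothetical, and this is not the optimal Bayes rule of Krisam and Kieser):

```python
def select_population(effect_full, effect_sub, epsilon=0.1, futility=0.0):
    """Illustrative interim decision for an adaptive enrichment design:
    continue with the subgroup when its estimated effect clearly exceeds
    that of the full population, with the full population when the reverse
    holds, with both when the estimates are close, and stop for futility
    when neither looks effective."""
    if max(effect_full, effect_sub) <= futility:
        return "stop"
    if effect_sub - effect_full > epsilon:
        return "subgroup"
    if effect_full - effect_sub > epsilon:
        return "full"
    return "both"

print(select_population(0.05, 0.40))  # → "subgroup"
print(select_population(0.40, 0.35))  # → "both"
```

The operating characteristics of such a rule (selection probabilities, Type I error, power) depend strongly on epsilon, which is exactly why a systematic comparison as described above is needed.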

Date: 6.

Predictive biomarkers, especially gene-based ones, are usually sought in order to predict risks associated with diseases.


However, using only main effects frequently leads to biased models. Biological knowledge indicates that gene-gene interactions may play an important role in the genetic etiology underlying diseases. Yet we still lack standard tools for considering interactions in high-dimensional statistical models. There are several approaches to this task. Here, we present a modular approach that can increase the power to detect interactions and is easily extensible with respect to its components. First, we set the background with respect to notions such as statistical and biological interactions, epistasis, and association.

After that, existing approaches for screening two-way interactions are briefly described. As a guiding distinction, we differentiate between knowledge-based and data-driven screening, with emphasis on the latter. Then we motivate our approach: the main aim is to combine techniques in order to find interpretable interactions at the gene level, even if the underlying variables show weak main effects.

We use a regularized regression technique for evaluating the relevance of main effects and interactions, random forests (RF) for screening interactions, and refinements for improving the performance of RF, especially to stabilize results and to find interactions involving variables with weak main effects. All techniques and the resulting final approach are described in a non-technical manner. A cross-validated simulation study for evaluating the effect of interactions was conducted.

The scenarios are briefly described, and the results are presented with a focus on sensitivity. Besides the performance of our proposal, we also show the contribution of each component to the detection of interactions.


The results show that RF can provide relevant interaction information, even without strong marginal effects. Pre-processing in the form of orthogonalization is important for achieving the best interaction detection performance. Introducing correlations makes interaction detection more difficult; however, the pre-selection of interactions is affected less than the final selection, indicating a certain robustness of RF.
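One common form of such an orthogonalization is to residualize the product term on the main effects, so that the interaction feature carries no main-effect information (a minimal sketch; the authors' exact pre-processing may differ):

```python
import numpy as np

def orthogonalized_interaction(x1, x2):
    """Regress the product x1*x2 on an intercept and the two main effects
    and keep the residual: an interaction feature that is (in-sample)
    orthogonal to x1 and x2."""
    prod = x1 * x2
    Z = np.column_stack([np.ones_like(x1), x1, x2])
    coef, *_ = np.linalg.lstsq(Z, prod, rcond=None)
    return prod - Z @ coef

rng = np.random.default_rng(2)
x1, x2 = rng.normal(size=(2, 500))
w = orthogonalized_interaction(x1, x2)
# the orthogonalized term is numerically uncorrelated with both main effects
print(np.corrcoef(w, x1)[0, 1], np.corrcoef(w, x2)[0, 1])
```

Any signal a screening method finds in `w` can then be attributed to the interaction itself rather than to the main effects hidden in the raw product.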

The results also show that a moderate variable inclusion frequency of an interaction term can already be informative. Results on real data (not shown) indicate that the interactions found might be useful, but that their effects are rather weak. We considered only two-way interactions in our study. It is important to extend our approach to higher-order interactions and to integrate biological knowledge into the screening process.

Whilst the genetic analysis of other major psychiatric phenotypes has recently been very successful, this has been much less the case for major depression (MDD).

In my talk I will discuss possible reasons, notably large gene-environment and gene-gene interaction components in individual risk, which are not taken into account in genetic studies as usually performed. This is supported by recent functional work, on MDD per se but also on related phenotypes.

By taking these factors into account in the analysis, substantial gains in power can be achieved.

Daniela Adolf, StatConsult Gesellschaft für klinische und Versorgungsforschung mbH, Magdeburg. Date: 9.

Functional magnetic resonance imaging (fMRI) makes it possible to measure neuronal activity indirectly. The fMRI recordings of a single subject are high-dimensional data that exhibit temporal dependencies in addition to spatial correlation.

The number of variables, the so-called voxels, typically far exceeds the sample size, which corresponds to the number of consecutive measurements. Standard software tools usually analyze these data voxel-wise, with multiplicity adjustment, using general linear models, where the temporal correlation of the measurements is accounted for by pre-whitening.
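Assuming the usual first-order autoregressive error model, the pre-whitening step can be sketched as follows (an illustration on simulated data; production fMRI packages use more refined estimators):

```python
import numpy as np

def prewhiten_ar1(y, X):
    """Estimate the lag-1 autocorrelation of the OLS residuals and apply the
    AR(1) whitening transform z_t -> z_t - rho * z_{t-1} to both response and
    design, so that the errors of the transformed model are roughly white."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    e = y - X @ beta
    rho = np.sum(e[1:] * e[:-1]) / np.sum(e[:-1] ** 2)
    return y[1:] - rho * y[:-1], X[1:] - rho * X[:-1], rho

# simulate one voxel's time series with AR(1) noise (true rho = 0.5)
rng = np.random.default_rng(3)
n = 2000
noise = np.zeros(n)
for t in range(1, n):
    noise[t] = 0.5 * noise[t - 1] + rng.normal()
x = rng.normal(size=n)                      # a hypothetical stimulus regressor
X = np.column_stack([np.ones(n), x])
y = 1.0 + 2.0 * x + noise
y_w, X_w, rho = prewhiten_ar1(y, X)
print(rho)  # close to the true value 0.5
```

The model is then re-fitted on `y_w` and `X_w`; misspecification of this assumed AR(1) structure is precisely what the nonparametric alternative discussed in the talk avoids.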

Usually, a first-order autoregressive process is assumed and its correlation coefficient is estimated. Whether the Type I error rate is actually controlled, however, has been much debated (Eklund et al.). The talk will, on the one hand, consider the multivariate analysis of these data using special stabilized multivariate procedures (Läuter et al.). These test procedures are based on left-spherical distributions and, for independent sample vectors, are exact even in the high-dimensional case. On the other hand, a nonparametric method for handling the temporal correlation will be presented, which makes a specific assumption about the correlation structure unnecessary and still approximately controls the Type I error rate.

Matthias Schmid, Institut für Statistik, Ludwig-Maximilians-Universität München. Venue: Seminarraum S 3b, Zentralklinikum. Time: 11:00 (c.t.).