This repository contains the code for the INDEED simulation work.


INDEED Simulation

About The Project

The main purpose of this project is to provide an intuition of the performance of INDEED version 2.3.1 compared to other competing methods. We select DNAPATH version 0.7.4 and JDINAC for comparison, since both are based on the idea of partial correlation, similar to INDEED. For INDEED, we include results both with and without FDR correction; in practice, users can choose either option. The simulation covers a wide range of $n, p$ combinations, so it shows the performance of each method under the $n < p$, $n = p$, and $n > p$ scenarios. We compute precision-recall AUC, precision, recall, and run time as the metrics. In our simulation, INDEED has the best performance when $n > p$ and $n = p$, while JDINAC works best when $n < p$.

More Details About The Simulations

In the simulation, we mainly focus on $4$ pairs of $n, p$ combinations: $n = 25, p = 100$; $n = 50, p = 100$; $n = 100, p = 100$; and $n = 100, p = 10$. We limit $p$ to at most $100$, since a higher value usually results in a much longer run time for INDEED. Ideally, we would like to optimize INDEED to run in a few minutes for $p$ around $1000$; this is future work on our task list. Right now, users can comfortably run INDEED with $p$ around $100$ in a few minutes or less. We also tested $n = 10, p = 100$ in the simulation; however, this combination seems to exceed the limits of most methods in our comparison, so we exclude it from the boxplots below. For each $n, p$ combination, we run $10$ loops and generate box plots of precision-recall AUC, precision, recall, and run time. Users interested in running more loops are welcome to take our simulation code and try it themselves.
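The overall evaluation loop can be sketched as follows. This is a minimal Python sketch, not the repository's actual code (the real simulations run the R packages INDEED, dnapath, and JDINAC); `simulate_data` and `fake_method` here are hypothetical placeholders standing in for the data generator and a competing method.

```python
import time
import random

def simulate_data(n, p, seed):
    """Placeholder data generator: p*(p-1)/2 candidate edges, ~5% truly differential."""
    random.seed(seed)
    n_edges = p * (p - 1) // 2
    return {"n": n, "p": p, "truth": [int(random.random() < 0.05) for _ in range(n_edges)]}

def fake_method(data):
    """Stand-in for a real method: returns one score per candidate edge."""
    return [random.random() for _ in range(len(data["truth"]))]

# the 4 (n, p) combinations kept in the boxplots
combinations = [(25, 100), (50, 100), (100, 100), (100, 10)]
results = []
for n, p in combinations:
    for loop in range(10):                       # 10 loops per combination
        data = simulate_data(n, p, seed=loop)
        start = time.perf_counter()
        scores = fake_method(data)
        run_time = time.perf_counter() - start   # one of the reported metrics
        results.append({"n": n, "p": p, "loop": loop, "run_time": run_time})

print(len(results))  # 4 combinations x 10 loops = 40 rows
```

Each row of `results` would then feed one point of the box plots below; swapping in more loops only changes the `range(10)` bound.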

Results

Precision Recall AUC

JDINAC returns a ranked list of edges. Precision-recall AUC is the better metric for comparing methods that produce a ranked output.
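To make the metric concrete, here is a small self-contained Python sketch that computes an average-precision-style PR AUC from a ranked edge list; it is illustrative only and not the repository's R implementation.

```python
def pr_auc(scores, labels):
    """Average precision (a common PR-AUC estimate) from per-edge scores.

    scores: per-edge scores, higher = more likely to be a differential edge
    labels: 1 if the edge is truly differential, else 0
    """
    ranked = sorted(zip(scores, labels), key=lambda t: -t[0])
    tp, ap = 0, 0.0
    n_pos = sum(labels)
    for rank, (_, y) in enumerate(ranked, start=1):
        if y == 1:
            tp += 1
            ap += tp / rank   # precision at each recall step
    return ap / n_pos if n_pos else 0.0

# toy ranking: 3 true edges among 6 candidates
print(pr_auc([0.9, 0.8, 0.4, 0.35, 0.2, 0.1], [1, 1, 0, 1, 0, 0]))  # 11/12 ≈ 0.917
```

Because the score ordering alone determines the result, this metric compares a ranking method like JDINAC fairly against thresholded outputs.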

Precision

JDINAC returns a ranked list of edges, with the numb value as the score. We take every edge with numb > 0 as a positive prediction. This might be unfair to JDINAC; precision-recall AUC is probably the better metric for comparison.

Recall

Similar to precision, JDINAC returns a ranked list of edges, with the numb value as the score. We take every edge with numb > 0 as a positive prediction. This might be unfair to JDINAC; precision-recall AUC is probably the better metric for comparison.
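The thresholding just described can be sketched in a few lines of Python. This is an illustrative sketch of the numb > 0 rule, not the repository's R code; the inputs are hypothetical toy values.

```python
def precision_recall(numb, truth):
    """Precision and recall when every edge with numb > 0 counts as predicted positive."""
    pred = [1 if s > 0 else 0 for s in numb]
    tp = sum(1 for p, t in zip(pred, truth) if p and t)
    fp = sum(1 for p, t in zip(pred, truth) if p and not t)
    fn = sum(1 for p, t in zip(pred, truth) if not p and t)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# toy numb scores for 5 candidate edges, 3 of which are truly differential
print(precision_recall([5, 3, 0, 2, 0], [1, 0, 0, 1, 1]))  # (2/3, 2/3)
```

Note that this fixed cutoff discards the ranking information in numb, which is why the PR-AUC comparison above is the fairer view.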

Run Time
