Error running LEVIATHAN (robin_hood::map overflow) #4
Hi @LucaBertoli, I'm sorry to hear that you have a problem running LEVIATHAN. This error already occurred in the past (see closed issue #3) and we corrected it: the problem was that a hash table was not purged and kept too much memory. That's why I'm surprised you are facing this issue. Which version of LEVIATHAN are you using? What is your command line? Best,
I downloaded it yesterday and am currently using version v1.0; maybe v1.0.1 fixes this problem. The command line is:
Hi, The bug was fixed in the latest commit, but it's not included in the v1.0.1 release, so you'll need to install LEVIATHAN from source (see the README) to get the fix. Please let me know if this solves your problem. Best,
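A minimal sketch of what installing from the latest commit could look like; the repository URL and build script are assumptions here, so follow the README for the authoritative steps:

```sh
# Sketch only: fetch the latest commit instead of the v1.0.1 release.
# Repository URL and build command are assumptions; use the steps from the README if they differ.
git clone --recursive https://github.com/morispi/LEVIATHAN.git
cd LEVIATHAN
./install.sh              # or whatever build procedure the README describes
./bin/LEVIATHAN --help    # quick sanity check (the binary path may differ)
```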
I tried installing it; let me know if there is another way of installing it (I'm new to GitHub). Thank you,
Hi Luca, First, I'd advise you to try out LEVIATHAN on the example dataset (see the "Getting started" section of the README), to make sure LEVIATHAN has been installed properly. Then, if you have a big dataset with good coverage, it's possible that you need a lot of RAM to run LEVIATHAN. How much RAM and how many CPUs did you use? You can try running LEVIATHAN with 200-300 GB of RAM (and leave the number of threads at the default of 8). I'd also suggest increasing the number of iterations (to avoid the memory overflow) by setting -B to 100. Please let me know if LEVIATHAN runs without error with these settings. Best,
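For illustration, a hypothetical command line with that setting; the file names are placeholders and the exact option spellings are assumptions, so check them against `LEVIATHAN --help`:

```sh
# Placeholder file names; -B 100 raises the number of iterations to reduce peak memory.
# No thread option is passed, so the default of 8 threads is used.
LEVIATHAN -b alignments.bam -i barcode_index.bci -g reference.fa -o variants.vcf -B 100
```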
Hi, Thank you,
Dear @anne-gcd, I have the same error as @LucaBertoli (without "Aborted (core dumped)" at the end). I have very large data (an approx. 325 GB BAM file, 10x Genomics format from LongRanger). I have enough RAM on the computational server (7 TB). Should I also try to use the -r 10000 parameter? But I'm looking for all structural variants, so can I also set -v 50? Is this also somehow influenced by the threads parameter? Or what else could help me solve this error? Thank you very much
Hi @sav0016, Sorry for the late reply. First, I would like to know which version of LEVIATHAN you are using. I recommend installing LEVIATHAN from source, using git clone (see the README). Then, as you have very large data, I recommend trying to run LEVIATHAN with a higher number of iterations (-B) and with -r 10000. Besides, if you're looking for all SVs, you should indeed set -v 50. I hope this solves your problem. If not, please let me know and we'll try to find a solution. Best,
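A hypothetical command line along those lines for a very large BAM; file names are placeholders and the flag spellings (in particular the thread option) are assumptions, so verify them with `LEVIATHAN --help`:

```sh
# Placeholders throughout; -B, -r and -v follow the values discussed in this thread,
# and the thread option name is an assumption.
LEVIATHAN -b large_sample.bam -i large_sample.bci -g reference.fa -o variants.vcf \
          -B 10 -r 10000 -v 50 -t 8
```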
Hi @anne-gcd, Thank you very much! I tried using -B 10 -r 10000 -v 50 and -B 20 -r 5000 -v 50, and it worked. I installed from source, although the version still shows v1.0... The average coverage is about 100X. If I set -v 50 and -r 10000, it finds about 3,000 SVs. If I set -v 50 and -r 5000, it finds about 9,000 SVs. Also, many SVs have SVLEN=1. I have about 720 CPUs and 7 TB of RAM available. Best,
Hi,
I was trying out LEVIATHAN for calling variants on a .bam file produced with LongRanger from linked-read FASTQ files (TELL-seq technology). I have successfully built the LRez barcode indexes, but when I run LEVIATHAN, it fails with the robin_hood::map overflow error from the title, ending with "Aborted (core dumped)".
Any idea why?
Thank you in advance,
Luca
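For context, a rough sketch of the kind of workflow described in this issue (LRez barcode indexing followed by the LEVIATHAN call); the subcommand and flag spellings are assumptions, so rely on the LRez and LEVIATHAN documentation for the exact syntax:

```sh
# Assumed syntax throughout; adjust to the actual LRez/LEVIATHAN interfaces.
LRez index bam -b longranger_output.bam -o longranger_output.bci   # build the barcode index
LEVIATHAN -b longranger_output.bam -i longranger_output.bci -g reference.fa -o calls.vcf
```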