Hi,
My issue is somewhat related to #662
I have tried multiple versions, most recently 3.4.3 with Python 3.8.5.
bamCoverage works for the majority of my pipeline, but some of the BAM files are "large" (admittedly subjective): they come from merged FASTQs/BAMs and, without filtering, can reach ~20 GB or more. The genome is not standard human or mouse, but the number of scaffolds is nothing unusual and the contigs seem fine.
I use the following commands depending on whether the data is RNA or DNA (these specific parameters have been in use for a while, and for consistency I would like to avoid modifying them if possible):
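(The original commands were not preserved in this thread. For illustration only, typical invocations might look like the sketch below; every flag value here is a placeholder assumption, not the poster's actual parameters.)

```bash
# Hypothetical examples only -- the poster's real commands were not preserved.

# DNA: 1x-normalized coverage (RPGC); the effective genome size is a placeholder.
bamCoverage -b sample_dna.bam -o sample_dna.bw \
    --binSize 10 --normalizeUsing RPGC \
    --effectiveGenomeSize 2000000000 -p 8

# RNA: strand-specific coverage with CPM normalization.
bamCoverage -b sample_rna.bam -o sample_rna_fwd.bw \
    --binSize 10 --normalizeUsing CPM \
    --filterRNAstrand forward -p 8
```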
The scaling factor appears to be calculated and processing begins, but 24+ hours can pass with no output. I work on an HPC and can allocate as many CPUs and as much RAM as necessary, but nothing changes. The problem is directly tied to file size: smaller files from the same experiment are processed fine. I have also found that downsampling with samtools indirectly fixes the problem, but I'd like to know the underlying cause. Thanks!
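(For reference, the samtools downsampling mentioned above is commonly done with the subsampling option of `samtools view`; the seed and fraction below are assumptions for illustration, not values from the poster.)

```bash
# Keep roughly 10% of reads (seed 42, fraction .10); values are placeholders.
samtools view -b -s 42.10 large_input.bam > subsampled.bam
samtools index subsampled.bam
```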