python exited with 135 in germlinecnvcaller

5 comments

    Genevieve Brandt (she/her)

    Hi shun inoue,

    Thank you for your patience; I had to look into this issue with my colleagues because "python exited with 135" is an uncommon error with this tool. We think this is potentially a memory issue and that your operating system is terminating the job.

    How much memory is available on these nodes? You might consider decreasing the Xmx value so that more memory is available for the Python script.

    There are examples in our gCNV WDLs of how to set up the environment variables; check out these lines: https://github.com/broadinstitute/gatk/blob/b6a28d1a8c03e2b90fc944e09aa153dd571b9398/scripts/cnv_wdl/germline/cnv_germline_cohort_workflow.wdl#L586
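
    For reference, here is a minimal sketch of what that setup could look like in a plain shell script outside of WDL. The thread counts, heap size, and file names below are illustrative placeholders rather than values taken from the linked WDL, so substitute the values and arguments from your own command:

        # Cap the threads that the gCNV Python/MKL code will spawn.
        export MKL_NUM_THREADS=8
        export OMP_NUM_THREADS=8

        # Use a smaller Java heap (-Xmx) than the total memory of the job so that
        # the remainder stays available for the Python process.
        gatk --java-options "-Xmx4g" GermlineCNVCaller \
            --run-mode COHORT \
            -L scatter.interval_list \
            --interval-merging-rule OVERLAPPING_ONLY \
            -I sample_1.counts.hdf5 \
            --contig-ploidy-calls ploidy-calls \
            --output cohort_output \
            --output-prefix cohort_run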

    Best,

    Genevieve

    shun inoue

    Hi Genevieve,

    Thank you for your suggestion.

    I am running the analysis on my institute's local HPC. Previously, I requested 16 cores with 16 GB of memory each. The HPC manager then sent me the following alert:

    """The following user's job was stopped because it was using more CPU than required.
    This job will be held and will not start running.
     * jobid, ******;  def_slot, 16;  CPU usage, 5826 %;  Wild Score, 0.99
    A higher Wild Score indicates that your program is more likely to be lacking in control."""

    I asked the manager how to resolve this, and they answered that I should request more than 60 slots.

    Next, I requested 60 cores with 4 GB each. After that, I encountered the error described in this post. That is the whole story.

    I suspect that the number of slots is now too large, so I wonder whether I can restrict the number of slots or cores that the script uses.

    Setting environment variables seems to be the solution, but the WDL you pointed to says "export MKL_NUM_THREADS=~{default=8 cpu}".

    Does this mean the maximum number of slots to use is 8?

    At the moment I do not set MKL_NUM_THREADS or OMP_NUM_THREADS at all.

    Requesting a larger amount of memory (such as 16 GB) might solve this problem, but in my environment such jobs take a long time to start. I am hoping for another solution.

    Best,

    Shun

    Genevieve Brandt (she/her)

    Hi Shun,

    So when you got the error, you were giving each core 4 GB? The Xmx value you used is -Xmx6g, which is higher than the memory available per core. Try decreasing the Xmx value or increasing the memory per core.
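
    As a rough illustration of that arithmetic (whether your scheduler enforces memory per slot or for the whole job is something to confirm with your HPC manager; the 4 GB and 6 GB figures are simply the numbers from this thread):

        #!/bin/bash
        # If memory is enforced per slot (4 GB here), a 6 GB Java heap cannot fit.
        # Keep -Xmx below the per-slot limit and leave headroom for the Python process.
        PER_SLOT_MEM_GB=4
        JAVA_HEAP_GB=$((PER_SLOT_MEM_GB - 1))   # 3 GB heap, about 1 GB left over

        # Pass your usual GermlineCNVCaller arguments to this wrapper unchanged.
        gatk --java-options "-Xmx${JAVA_HEAP_GB}g" GermlineCNVCaller "$@"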

    Best,

    Genevieve

    shun inoue

    Hi Genevieve,

    Exactly.

    Thanks to your advice, I no longer encounter any errors.

    As you pointed out, I had to request more memory, but requesting more memory and more cores makes jobs take too long to start.

    I added the environment variables you mentioned above (MKL_NUM_THREADS=16 and OMP_NUM_THREADS=16) to my shell scripts, and my HPC no longer sends the alert about the number of cores.

    My issue was simply the memory size and the number of cores; your advice about the environment variables in the WDL was very helpful. I should have limited the number of cores and requested more memory from the beginning.
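
    In case it helps someone else, a minimal sketch of the working setup described above (the scheduler directives are omitted because they depend on the local HPC; 16 is simply the thread count mentioned in this comment):

        # Cap MKL/OpenMP threads so CPU usage matches the slots requested from the scheduler.
        export MKL_NUM_THREADS=16
        export OMP_NUM_THREADS=16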

    Thank you.

    Shun

    Genevieve Brandt (she/her)

    Glad it is working now! Thanks for the update.
