(Internal) Examines aligned contigs from local assemblies and calls structural variants
Category Structural Variant Discovery
Overview
This tool is used in development and should not be of interest to most researchers. It packages structural variant calling as a separate tool, independent of the generation of local assemblies. Most researchers will run StructuralVariationDiscoveryPipelineSpark, which both generates local assemblies of interesting genomic regions, and then calls structural variants from these assemblies.
This tool takes a SAM/BAM/CRAM containing the alignments of assembled contigs from local assemblies and searches it for split alignments indicating the presence of structural variants. To do so, the tool parses primary and supplementary alignments; secondary alignments are ignored. To be considered valid evidence of an SV, two alignments from the same contig must have mapping quality 60, and both alignments must have length greater than or equal to --min-align-length. Imprecise variants with approximate locations are also called.
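The filtering criterion above can be sketched in a few lines. This is a hypothetical illustration with a made-up Alignment record type, not the tool's actual implementation:

```python
# Hypothetical sketch of the split-alignment evidence filter described above;
# the real logic lives inside GATK and is considerably more involved.
from typing import NamedTuple

class Alignment(NamedTuple):
    contig_name: str      # name of the assembled contig
    mapping_quality: int  # MQ of this alignment record
    length: int           # aligned length

MIN_MAPPING_QUALITY = 60   # both alignments must have MQ 60
MIN_ALIGNMENT_LENGTH = 50  # --min-align-length default

def is_valid_sv_evidence(a: Alignment, b: Alignment,
                         min_len: int = MIN_ALIGNMENT_LENGTH) -> bool:
    """Two alignments count as SV evidence only if they come from the
    same contig, both have mapping quality 60, and both meet the
    minimum alignment length."""
    return (a.contig_name == b.contig_name
            and a.mapping_quality == MIN_MAPPING_QUALITY
            and b.mapping_quality == MIN_MAPPING_QUALITY
            and a.length >= min_len
            and b.length >= min_len)
```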
The input file is typically the output file produced by FindBreakpointEvidenceSpark.
Inputs
- An input file of assembled contigs or long reads aligned to reference
- The reference to which the contigs have been aligned

Output
- A VCF file describing the discovered structural variants
gatk DiscoverVariantsFromContigAlignmentsSAMSpark \
    -I assemblies.sam \
    -R reference.2bit \
    -O structural_variants.vcf
This tool can be run without explicitly specifying Spark options. That is to say, the given example command without Spark options will run locally. See Tutorial#10060 for an example of how to set up and run a Spark tool on a cloud Spark cluster.
The reference is broadcast by Spark, and must therefore be a .2bit file due to current restrictions.
DiscoverVariantsFromContigAlignmentsSAMSpark specific arguments
This table summarizes the command-line arguments that are specific to this tool. For more details on each argument, see the list further down below the table or click on an argument name to jump directly to that entry in the list.
| Argument name(s) | Default value | Summary |
| --- | --- | --- |
| Required Arguments | | |
| --input / -I | | BAM/SAM/CRAM file containing reads |
| --output / -O | null | Output path for discovery (non-genotyped) VCF |
| --reference / -R | null | Reference sequence file |
| Optional Tool Arguments | | |
| --arguments_file | | Read one or more arguments files and add them to the command line |
| | 100 | Uncertainty in overlap of assembled breakpoints and evidence target links. |
| --bam-partition-size | 0 | Maximum number of bytes to read from a file into each partition of reads. Setting this higher will result in fewer partitions. Note that this will not be equal to the size of the partition in memory. Defaults to 0, which uses the default split size (determined by the Hadoop input format, typically the size of one HDFS block). |
| --cnv-calls | null | External CNV calls file. Should be single sample VCF, and contain only confident autosomal non-reference CNV calls (for now). |
| --conf | | Spark properties to set on the Spark context, in the format <property>=<value> |
| --disable-sequence-dictionary-validation | false | If specified, do not check the sequence dictionaries from our inputs for compatibility. Use at your own risk! |
| --gcs-max-retries / -gcs-retries | 20 | If the GCS bucket channel errors out, how many times it will attempt to re-initiate the connection |
| --help / -h | false | Display the help message |
| | 7 | Number of pieces of imprecise evidence necessary to call a variant in the absence of an assembled breakpoint. |
| --interval-merging-rule / -imr | ALL | Interval merging rule for abutting intervals |
| --intervals / -L | | One or more genomic intervals over which to operate |
| | 15000 | Maximum size deletion to call based on imprecise evidence without corroborating read depth evidence |
| --min-align-length | 50 | Minimum flanking alignment length |
| --num-reducers | 0 | For tools that shuffle data or write an output, sets the number of reducers. Defaults to 0, which gives one partition per 10MB of input. |
| --program-name | null | Name of the program running |
| --sharded-output | false | For tools that write an output, write the output in multiple pieces (shards) |
| --spark-master | local[*] | URL of the Spark Master to submit jobs to when using the Spark pipeline runner. |
| | 50 | Breakpoint padding for evaluation against truth data. |
| --version | false | Display the version number for this tool |
| Optional Common Arguments | | |
| --disable-read-filter / -DF | | Read filters to be disabled before analysis |
| --disable-tool-default-read-filters | false | Disable all tool default read filters |
| --exclude-intervals / -XL | | One or more genomic intervals to exclude from processing |
| --gatk-config-file | null | A configuration file to use with the GATK. |
| --interval-exclusion-padding / -ixp | 0 | Amount of padding (in bp) to add to each interval you are excluding. |
| --interval-padding / -ip | 0 | Amount of padding (in bp) to add to each interval you are including. |
| --interval-set-rule / -isr | UNION | Set merging approach to use for combining interval inputs |
| --QUIET | false | Whether to suppress job-summary info on System.err. |
| --read-filter / -RF | | Read filters to be applied before analysis |
| --read-index / -read-index | | Indices to use for the read inputs. If specified, an index must be provided for every read input and in the same order as the read inputs. If this argument is not specified, the path to the index for each input will be inferred automatically. |
| --read-validation-stringency / -VS | SILENT | Validation stringency for all SAM/BAM/CRAM/SRA files read by this program. The default stringency value SILENT can improve performance when processing a BAM file in which variable-length data (read, qualities, tags) do not otherwise need to be decoded. |
| --use-jdk-deflater / -jdk-deflater | false | Whether to use the JdkDeflater (as opposed to IntelDeflater) |
| --use-jdk-inflater / -jdk-inflater | false | Whether to use the JdkInflater (as opposed to IntelInflater) |
| --verbosity / -verbosity | INFO | Control verbosity of logging. |
| --showHidden / -showHidden | false | Display hidden arguments |
Arguments in this list are specific to this tool. Keep in mind that other arguments are available that are shared with other tools (e.g. command-line GATK arguments); see Inherited arguments above.
--arguments_file / NA
read one or more arguments files and add them to the command line
Uncertainty in overlap of assembled breakpoints and evidence target links.
int 100 [ [ -∞ ∞ ] ]
--bam-partition-size / NA
Maximum number of bytes to read from a file into each partition of reads. Setting this higher will result in fewer partitions. Note that this will not be equal to the size of the partition in memory. Defaults to 0, which uses the default split size (determined by the Hadoop input format, typically the size of one HDFS block).
long 0 [ [ -∞ ∞ ] ]
--cnv-calls / NA
External CNV calls file. Should be single sample VCF, and contain only confident autosomal non-reference CNV calls (for now).
--conf / -conf
Spark properties to set on the Spark context, in the format <property>=<value>
--disable-read-filter / -DF
Read filters to be disabled before analysis
--disable-sequence-dictionary-validation / -disable-sequence-dictionary-validation
If specified, do not check the sequence dictionaries from our inputs for compatibility. Use at your own risk!
--disable-tool-default-read-filters / -disable-tool-default-read-filters
Disable all tool default read filters
--exclude-intervals / -XL
One or more genomic intervals to exclude from processing
Use this argument to exclude certain parts of the genome from the analysis (like -L, but the opposite). This argument can be specified multiple times. You can use samtools-style intervals either explicitly on the command line (e.g. -XL 1 or -XL 1:100-200) or by loading in a file containing a list of intervals (e.g. -XL myFile.intervals).
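The samtools-style interval forms mentioned above (e.g. 1 or 1:100-200) can be parsed as follows. This is a simplified, hypothetical parser for illustration; GATK's own interval handling supports additional forms such as interval-list files:

```python
import re

def parse_interval(spec: str):
    """Parse a samtools-style interval such as '1', '1:100', or '1:100-200'
    into (contig, start, end). A bare contig name means the whole contig
    (returned as (contig, None, None)); a single position is treated as a
    one-base interval. Hypothetical helper, not GATK's parser."""
    m = re.fullmatch(r"([^:]+)(?::(\d+)(?:-(\d+))?)?", spec)
    if m is None:
        raise ValueError(f"malformed interval: {spec!r}")
    contig, start, end = m.group(1), m.group(2), m.group(3)
    if start is None:
        return (contig, None, None)   # whole contig
    if end is None:
        return (contig, int(start), int(start))  # single position
    return (contig, int(start), int(end))
```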
--gatk-config-file / NA
A configuration file to use with the GATK.
--gcs-max-retries / -gcs-retries
If the GCS bucket channel errors out, how many times it will attempt to re-initiate the connection
int 20 [ [ -∞ ∞ ] ]
--help / -h
display the help message
Number of pieces of imprecise evidence necessary to call a variant in the absence of an assembled breakpoint.
int 7 [ [ -∞ ∞ ] ]
--input / -I
BAM/SAM/CRAM file containing reads
R List[String] 
--interval-exclusion-padding / -ixp
Amount of padding (in bp) to add to each interval you are excluding.
Use this to add padding to the intervals specified using -XL. For example, '-XL 1:100' with a padding value of 20 would turn into '-XL 1:80-120'. This is typically used to add padding around targets when analyzing exomes.
int 0 [ [ -∞ ∞ ] ]
--interval-merging-rule / -imr
Interval merging rule for abutting intervals
By default, the program merges abutting intervals (i.e. intervals that are directly side-by-side but do not actually overlap) into a single continuous interval. However you can change this behavior if you want them to be treated as separate intervals instead.
The --interval-merging-rule argument is an enumerated type (IntervalMergingRule), which can have one of the following values:
- ALL
- OVERLAPPING_ONLY
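The difference between merging abutting intervals and merging only genuine overlaps can be sketched as follows (an illustration using inclusive 1-based coordinates on a single contig, not GATK's implementation):

```python
def merge_intervals(intervals, merge_abutting=True):
    """Merge a list of inclusive (start, end) intervals on one contig.
    merge_abutting=True merges intervals that are directly side-by-side
    (e.g. 1-100 and 101-200); merge_abutting=False merges only intervals
    that actually overlap. Illustration only."""
    merged = []
    for start, end in sorted(intervals):
        if merged:
            prev_start, prev_end = merged[-1]
            # abutting counts as mergeable only when merge_abutting is set
            limit = prev_end + (1 if merge_abutting else 0)
            if start <= limit:
                merged[-1] = (prev_start, max(prev_end, end))
                continue
        merged.append((start, end))
    return merged
```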
--interval-padding / -ip
Amount of padding (in bp) to add to each interval you are including.
Use this to add padding to the intervals specified using -L. For example, '-L 1:100' with a padding value of 20 would turn into '-L 1:80-120'. This is typically used to add padding around targets when analyzing exomes.
int 0 [ [ -∞ ∞ ] ]
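The padding behavior described above, where '-L 1:100' with a padding value of 20 becomes '-L 1:80-120', can be sketched as follows (a hypothetical helper that clamps at position 1, since genomic coordinates are 1-based):

```python
def pad_interval(contig, start, end, padding):
    """Pad an inclusive 1-based interval on both sides, clamping the
    lower bound at position 1. Mirrors the documented example:
    1:100 with padding 20 becomes 1:80-120. Illustration only."""
    return (contig, max(1, start - padding), end + padding)
```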
--interval-set-rule / -isr
Set merging approach to use for combining interval inputs
By default, the program will take the UNION of all intervals specified using -L and/or -XL. However, you can change this setting for -L, for example if you want to take the INTERSECTION of the sets instead. E.g. to perform the analysis only on chromosome 1 exomes, you could specify -L exomes.intervals -L 1 --interval-set-rule INTERSECTION. However, it is not possible to modify the merging approach for intervals passed using -XL (they will always be merged using UNION). Note that if you specify both -L and -XL, the -XL interval set will be subtracted from the -L interval set.
The --interval-set-rule argument is an enumerated type (IntervalSetRule), which can have one of the following values:
- UNION: Take the union of all intervals
- INTERSECTION: Take the intersection of intervals (the subset that overlaps all intervals specified)
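The two set rules can be illustrated as below. For simplicity each interval is expanded into a set of integer positions; GATK works on interval data structures instead, so this is only a sketch of the semantics:

```python
def combine_interval_sets(sets_of_intervals, rule="UNION"):
    """Combine several lists of inclusive (start, end) intervals under
    UNION or INTERSECTION, returning the covered positions as a set.
    Illustration of the semantics only, not GATK's implementation."""
    position_sets = []
    for intervals in sets_of_intervals:
        positions = set()
        for start, end in intervals:
            positions.update(range(start, end + 1))
        position_sets.append(positions)
    if rule == "UNION":
        return set().union(*position_sets)
    if rule == "INTERSECTION":
        return set.intersection(*position_sets)
    raise ValueError(f"unknown rule: {rule!r}")
```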
--intervals / -L
One or more genomic intervals over which to operate
Maximum size deletion to call based on imprecise evidence without corroborating read depth evidence
int 15000 [ [ -∞ ∞ ] ]
--min-align-length / NA
Minimum flanking alignment length
Integer 50 [ [ -∞ ∞ ] ]
--num-reducers / NA
int 0 [ [ -∞ ∞ ] ]
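The documented default for --num-reducers, one partition per 10MB of input, can be approximated as follows (a rough illustration assuming integer division, not GATK's exact computation):

```python
def default_num_partitions(input_size_bytes):
    """Approximate the documented default of one partition per 10MB of
    input when --num-reducers is left at 0. Illustration only."""
    ten_mb = 10 * 1024 * 1024
    return max(1, input_size_bytes // ten_mb)
```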
--output / -O
Output path for discovery (non-genotyped) VCF
R String null
--program-name / NA
Name of the program running
--QUIET / NA
Whether to suppress job-summary info on System.err.
--read-filter / -RF
Read filters to be applied before analysis
--read-index / -read-index
Indices to use for the read inputs. If specified, an index must be provided for every read input and in the same order as the read inputs. If this argument is not specified, the path to the index for each input will be inferred automatically.
--read-validation-stringency / -VS
Validation stringency for all SAM/BAM/CRAM/SRA files read by this program. The default stringency value SILENT can improve performance when processing a BAM file in which variable-length data (read, qualities, tags) do not otherwise need to be decoded.
The --read-validation-stringency argument is an enumerated type (ValidationStringency), which can have one of the following values:
- STRICT
- LENIENT
- SILENT
--reference / -R
Reference sequence file
R String null
--sharded-output / NA
For tools that write an output, write the output in multiple pieces (shards)
--showHidden / -showHidden
display hidden arguments
--spark-master / NA
URL of the Spark Master to submit jobs to when using the Spark pipeline runner.
--TMP_DIR / NA
Temp directory to use.

Breakpoint padding for evaluation against truth data.
int 50 [ [ -∞ ∞ ] ]
--use-jdk-deflater / -jdk-deflater
Whether to use the JdkDeflater (as opposed to IntelDeflater)
--use-jdk-inflater / -jdk-inflater
Whether to use the JdkInflater (as opposed to IntelInflater)
--verbosity / -verbosity
Control verbosity of logging.
The --verbosity argument is an enumerated type (LogLevel), which can have one of the following values:
- ERROR
- WARNING
- INFO
- DEBUG
--version / NA
display the version number for this tool