Runs the structural variation discovery workflow on a single sample
Category Structural Variant Discovery
Overview
This tool packages the algorithms described in FindBreakpointEvidenceSpark and org.broadinstitute.hellbender.tools.spark.sv.discovery.DiscoverVariantsFromContigAlignmentsSAMSpark as an integrated workflow. Please consult the descriptions of those tools for more details about the algorithms employed. In brief, input reads are examined for evidence of structural variation in a genomic region, the regions so identified are locally assembled, and the local assemblies are used to call structural variants.
Inputs
- An input file of aligned reads.
- The reference to which the reads have been aligned.
- A BWA index image for the reference. You can use BwaMemIndexImageCreator to create the index image file.
- A list of ubiquitous kmers to ignore. You can use FindBadGenomicKmersSpark to create the list of kmers to ignore. (A preparation sketch for both files follows this list.)
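A minimal preparation sketch, assuming the default input/output arguments of those two tools (check each tool's own documentation for the exact argument names; all file names here are placeholders):

# create the BWA index image for the reference
gatk BwaMemIndexImageCreator \
    -I reference.fasta \
    -O reference.img

# create the list of ubiquitous kmers to ignore
gatk FindBadGenomicKmersSpark \
    -R reference.2bit \
    -O kmers_to_ignore.txt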
Output
- A SAM file with all of the contigs generated by the local assemblies, aligned to the reference.
- A VCF file describing the discovered structural variants.
Optional Output
Extra, optional output is generated when the experimental interpretation unit is run (note that these outputs may change as features are still being developed); see the example command after this list.
- several VCF files containing variants discovered via different code paths (these will ultimately be merged into a single VCF)
- a queryname-sorted SAM file of local assembly contigs from whose alignments we cannot yet make unambiguous calls (intended for debugging and future development)
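For example, the experimental interpretation unit is requested by adding the --exp-interpret flag (listed under Advanced Arguments below) to the usual command line; this is only a sketch and the file names are placeholders:

gatk StructuralVariationDiscoveryPipelineSpark \
    -I input_reads.bam \
    -R reference.2bit \
    --aligner-index-image reference.img \
    --kmers-to-ignore kmers_to_ignore.txt \
    --contig-sam-file aligned_contigs.sam \
    --exp-interpret \
    -O structural_variants.vcf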
Usage example
gatk StructuralVariationDiscoveryPipelineSpark \
    -I input_reads.bam \
    -R reference.2bit \
    --aligner-index-image reference.img \
    --kmers-to-ignore kmers_to_ignore.txt \
    --contig-sam-file aligned_contigs.sam \
    -O structural_variants.vcf
This tool can be run without explicitly specifying Spark options. That is to say, the given example command without Spark options will run locally. See Tutorial#10060 for an example of how to set up and run a Spark tool on a cloud Spark cluster.
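As a sketch of the Spark submission syntax described in that tutorial (the bucket paths and cluster name are placeholders; confirm the exact runner options for your environment), a run on a Google Dataproc cluster might look like this:

gatk StructuralVariationDiscoveryPipelineSpark \
    -I gs://my-bucket/input_reads.bam \
    -R gs://my-bucket/reference.2bit \
    --aligner-index-image gs://my-bucket/reference.img \
    --kmers-to-ignore gs://my-bucket/kmers_to_ignore.txt \
    --contig-sam-file gs://my-bucket/aligned_contigs.sam \
    -O gs://my-bucket/structural_variants.vcf \
    -- \
    --spark-runner GCS \
    --cluster my-dataproc-cluster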
Caveats
Expected input is a paired-end, coordinate-sorted BAM with around 30x coverage. Coverage much lower than that probably won't work well.
The reference is broadcast by Spark, and must therefore be a .2bit file due to current restrictions.
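A .2bit reference can be generated from a FASTA file with the UCSC faToTwoBit utility (not part of GATK; shown here only as a sketch):

faToTwoBit reference.fasta reference.2bit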
StructuralVariationDiscoveryPipelineSpark specific arguments
This table summarizes the command-line arguments that are specific to this tool. For more details on each argument, see the list further down below the table or click on an argument name to jump directly to that entry in the list.
Argument name(s) | Default value | Summary
---|---|---
**Required Arguments** | |
--aligner-index-image | null | bwa-mem index image file
--contig-sam-file | null | sam file for aligned contigs
--input, -I | [] | BAM/SAM/CRAM file containing reads
--kmers-to-ignore | null | file containing ubiquitous kmer list. See FindBadGenomicKmersSpark to generate it.
--output, -O | null | directory for VCF output, including output from the experimental interpretation tool if so requested; will be created if not present; the sample name will be appended after the provided argument
--reference, -R | null | Reference sequence file
**Optional Tool Arguments** | |
--adapter-sequence | null | Adapter sequence.
--allowed-short-fragment-overhang | 10 | Proper pairs have the positive strand read upstream of the negative strand read, but we allow this much slop for short fragments.
--arguments_file | [] | read one or more arguments files and add them to the command line
--assembled-contigs-output-order, -sort | coordinate | sorting order to be used for the output assembly alignments SAM/BAM file (currently only coordinate or query name is supported)
--assembly-imprecise-evidence-overlap-uncertainty | 100 | Uncertainty in overlap of assembled breakpoints and evidence target links.
--assembly-to-mapped-size-ratio-guess | 7 | Guess at the ratio of reads in the final assembly to the number of reads mapped to the interval.
--bam-partition-size | 0 | maximum number of bytes to read from a file into each partition of reads. Setting this higher will result in fewer partitions. Note that this will not be equal to the size of the partition in memory. Defaults to 0, which uses the default split size (determined by the Hadoop input format, typically the size of one HDFS block).
--breakpoint-evidence-dir | null | directory for evidence output
--breakpoint-intervals | null | file for breakpoint intervals output
--cleaner-max-copy-number | 4 | KmerCleaner maximum copy number (not count, but copy number) for a kmer. Kmers observed too frequently are probably mismapped or ubiquitous.
--cleaner-max-intervals | 3 | KmerCleaner maximum number of intervals for a localizing kmer. If a kmer occurs in too many intervals, it isn't sufficiently local.
--cleaner-min-kmer-count | 4 | KmerCleaner minimum kmer count for a localizing kmer. If we see it less often than this many times, we're guessing it's erroneous.
--cnv-calls | null | External CNV calls file. Should be single sample VCF, and contain only confident autosomal non-reference CNV calls (for now).
--conf | [] | Spark properties to set on the Spark context in the format name=value
--cross-contigs-to-ignore | null | file containing alt contig names that will be ignored when looking for inter-contig pairs
--disable-sequence-dictionary-validation | false | If specified, do not check the sequence dictionaries from our inputs for compatibility. Use at your own risk!
--exclusion-interval-padding | 0 | Exclusion interval padding.
--exclusion-intervals | null | file of reference intervals to exclude
--external-evidence | null | external evidence input file
--external-evidence-uncertainty | 150 | Uncertainty in location of external evidence.
--external-evidence-weight | 10 | Weight to give external evidence.
--fastq-dir | null | output dir for assembled fastqs
--gcs-max-retries, -gcs-retries | 20 | If the GCS bucket channel errors out, how many times it will attempt to re-initiate the connection
--gcs-project-for-requester-pays | "" | Project to bill when accessing "requester pays" buckets. If unset, these buckets cannot be accessed.
--help, -h | false | display the help message
--high-coverage-intervals | null | file for high-coverage intervals output
--high-depth-coverage-factor | 3 | We filter out contiguous regions of the genome that have coverage of at least high-depth-coverage-factor * avg-coverage and a peak coverage of high-depth-coverage-peak-factor * avg-coverage, because the reads mapped to those regions tend to be non-local and high depth prevents accurate assembly.
--high-depth-coverage-peak-factor | 7 | We filter out contiguous regions of the genome that have coverage of at least high-depth-coverage-factor * avg-coverage and a peak coverage of high-depth-coverage-peak-factor * avg-coverage, because the reads mapped to those regions tend to be non-local and high depth prevents accurate assembly.
--imprecise-variant-evidence-threshold | 7 | Number of pieces of imprecise evidence necessary to call a variant in the absence of an assembled breakpoint.
--include-mapping-location | true | Include read mapping location in FASTQ files.
--interval-merging-rule, -imr | ALL | Interval merging rule for abutting intervals
--interval-only-assembly | false | Don't look for extra reads mapped outside the interval.
--intervals, -L | [] | One or more genomic intervals over which to operate
--k-size | 51 | Kmer size.
--kmer-intervals | null | file for kmer intervals output
--kmer-max-dust-score | 49 | Maximum kmer DUST score.
--max-callable-imprecise-deletion-size | 15000 | Maximum size deletion to call based on imprecise evidence without corroborating read depth evidence
--max-fastq-size | 3000000 | Maximum total bases in FASTQs that can be assembled.
--max-tracked-fragment-length | 2000 | Largest fragment size that will be explicitly counted in determining fragment size statistics.
--min-align-length | 50 | Minimum flanking alignment length
--min-coherent-evidence-coverage-ratio | 0.1633408753260167 | Minimum weight of the evidence that shares a distal target locus to validate the evidence, as a ratio of the mean coverage in the BAM. The default value is coherent-count / mean coverage ~ 7 / 42.9 ~ 0.163
--min-evidence-coverage-ratio | 0.35001616141289293 | Minimum weight of the corroborating read evidence to validate some single piece of evidence, as a ratio of the mean coverage in the BAM. The default value is overlap-count / mean coverage ~ 15 / 42.9 ~ 0.350
--min-evidence-mapq | 20 | The minimum mapping quality for reads used to gather evidence of breakpoints.
--min-evidence-match-length | 45 | The minimum length of the matched portion of an interesting alignment. Reads that don't match at least this many reference bases won't be used in gathering evidence.
--min-kmers-per-interval | 5 | Minimum number of localizing kmers in a valid interval.
--min-mq, -mq | 30 | Minimum mapping quality of evidence assembly contig
--num-reducers | 0 | For tools that shuffle data or write an output, sets the number of reducers. Defaults to 0, which gives one partition per 10MB of input.
--output-shard-tmp-dir | null | when writing a BAM in single-sharded mode, the directory in which to write the temporary intermediate output shards; if not specified, .parts/ will be used
--program-name | null | Name of the program running
--qname-intervals-for-assembly | null | file for mapped qname intervals output
--qname-intervals-mapped | null | file for mapped qname intervals output
--read-metadata | null | output file for read metadata
--run-without-gaps-annotation | false | Allow evidence filter to run without gaps annotation (assume no gaps).
--run-without-umap-s100-annotation | false | Allow evidence filter to run without annotation for single-read mappability of 100-mers (assume all mappable).
--sharded-output | false | For tools that write an output, write the output in multiple pieces (shards)
--spark-master | local[*] | URL of the Spark Master to submit jobs to when using the Spark pipeline runner.
--sv-evidence-filter-model-file | null | Path to xgboost classifier model file for evidence filtering
--sv-evidence-filter-threshold-probability | 0.92 | Minimum classified probability for a piece of evidence to pass xgboost evidence filter
--sv-evidence-filter-type | DENSITY | Filter method for selecting evidence to group into assembly intervals
--sv-genome-gaps-file | null | Path to file enumerating gaps in the reference genome, used by the classifier to score evidence for filtering. To use the classifier without specifying a gaps file, pass the flag --run-without-gaps-annotation
--sv-genome-umap-s100-file | null | Path to single-read 100-mer mappability file for the reference genome, used by the classifier to score evidence for filtering. To use the classifier without specifying a mappability file, pass the flag --run-without-umap-s100-annotation
--target-link-file | null | output file for non-assembled breakpoints in bedpe format
--truth-interval-padding | 50 | Breakpoint padding for evaluation against truth data.
--unfiltered-breakpoint-evidence-dir | null | directory for evidence output
--version | false | display the version number for this tool
--write-gfas | false | Write GFA representation of assemblies in fastq-dir.
**Optional Common Arguments** | |
--add-output-vcf-command-line | true | If true, adds a command line header line to created VCF files.
--disable-read-filter, -DF | [] | Read filters to be disabled before analysis
--disable-tool-default-read-filters | false | Disable all tool default read filters (WARNING: many tools will not function correctly without their default read filters on)
--exclude-intervals, -XL | [] | One or more genomic intervals to exclude from processing
--gatk-config-file | null | A configuration file to use with the GATK.
--interval-exclusion-padding, -ixp | 0 | Amount of padding (in bp) to add to each interval you are excluding.
--interval-padding, -ip | 0 | Amount of padding (in bp) to add to each interval you are including.
--interval-set-rule, -isr | UNION | Set merging approach to use for combining interval inputs
--QUIET | false | Whether to suppress job-summary info on System.err.
--read-filter, -RF | [] | Read filters to be applied before analysis
--read-index | [] | Indices to use for the read inputs. If specified, an index must be provided for every read input and in the same order as the read inputs. If this argument is not specified, the path to the index for each input will be inferred automatically.
--read-validation-stringency, -VS | SILENT | Validation stringency for all SAM/BAM/CRAM/SRA files read by this program. The default stringency value SILENT can improve performance when processing a BAM file in which variable-length data (read, qualities, tags) do not otherwise need to be decoded.
--tmp-dir | null | Temp directory to use.
--use-jdk-deflater, -jdk-deflater | false | Whether to use the JdkDeflater (as opposed to IntelDeflater)
--use-jdk-inflater, -jdk-inflater | false | Whether to use the JdkInflater (as opposed to IntelInflater)
--verbosity | INFO | Control verbosity of logging.
**Advanced Arguments** | |
--debug-mode | false | Run interpretation tool in debug mode (more information printed to screen)
--exp-interpret | false | flag to signal that the user wants to run the experimental interpretation tool as well
--expand-assembly-graph | true | Traverse assembly graph and produce contigs for all paths.
--pop-variant-bubbles | false | Aggressively simplify local assemblies, ignoring small variants.
--remove-shadowed-contigs | true | Simplify local assemblies by removing contigs shadowed by similar contigs.
--showHidden | false | display hidden arguments
--z-dropoff | 20 | ZDropoff (see Bwa mem manual) for contig alignment.
Argument details
Arguments in this list are specific to this tool. Keep in mind that other arguments are available that are shared with other tools (e.g. command-line GATK arguments); see Inherited arguments above.
--adapter-sequence / NA
Adapter sequence.
String null
--add-output-vcf-command-line / -add-output-vcf-command-line
If true, adds a command line header line to created VCF files.
boolean true
--aligner-index-image / NA
bwa-mem index image file
R String null
--allowed-short-fragment-overhang / NA
Proper pairs have the positive strand read upstream of the negative strand read, but we allow this much slop for short fragments.
int 10 [ [ -∞ ∞ ] ]
--arguments_file / NA
read one or more arguments files and add them to the command line
List[File] []
--assembled-contigs-output-order / -sort
sorting order to be used for the output assembly alignments SAM/BAM file (currently only coordinate or query name is supported)
The --assembled-contigs-output-order argument is an enumerated type (SortOrder), which can have one of the following values:
- unsorted
- queryname
- coordinate
- duplicate
- unknown
SortOrder coordinate
--assembly-imprecise-evidence-overlap-uncertainty / NA
Uncertainty in overlap of assembled breakpoints and evidence target links.
int 100 [ [ -∞ ∞ ] ]
--assembly-to-mapped-size-ratio-guess / NA
Guess at the ratio of reads in the final assembly to the number of reads mapped to the interval.
int 7 [ [ -∞ ∞ ] ]
--bam-partition-size / NA
maximum number of bytes to read from a file into each partition of reads. Setting this higher will result in fewer partitions. Note that this will not be equal to the size of the partition in memory. Defaults to 0, which uses the default split size (determined by the Hadoop input format, typically the size of one HDFS block).
long 0 [ [ -∞ ∞ ] ]
--breakpoint-evidence-dir / NA
directory for evidence output
String null
--breakpoint-intervals / NA
file for breakpoint intervals output
String null
--cleaner-max-copy-number / NA
KmerCleaner maximum copy number (not count, but copy number) for a kmer. Kmers observed too frequently are probably mismapped or ubiquitous.
int 4 [ [ -∞ ∞ ] ]
--cleaner-max-intervals / NA
KmerCleaner maximum number of intervals for a localizing kmer. If a kmer occurs in too many intervals, it isn't sufficiently local.
int 3 [ [ -∞ ∞ ] ]
--cleaner-min-kmer-count / NA
KmerCleaner minimum kmer count for a localizing kmer. If we see it less often than this many times, we're guessing it's erroneous.
int 4 [ [ -∞ ∞ ] ]
--cnv-calls / NA
External CNV calls file. Should be single sample VCF, and contain only confident autosomal non-reference CNV calls (for now).
String null
--conf / -conf
Spark properties to set on the Spark context, in the format name=value
List[String] []
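For example, to set a standard Spark property such as executor memory (the value shown is only illustrative):

gatk StructuralVariationDiscoveryPipelineSpark ... --conf spark.executor.memory=8g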
--contig-sam-file / NA
sam file for aligned contigs
R String null
--cross-contigs-to-ignore / NA
file containing alt contig names that will be ignored when looking for inter-contig pairs
This is a path to a text file of contig names (one per line) that will be ignored when looking for inter-contig pairs.
String null
--debug-mode / NA
Run interpretation tool in debug mode (more information printed to screen)
Boolean false
--disable-read-filter / -DF
Read filters to be disabled before analysis
List[String] []
--disable-sequence-dictionary-validation / -disable-sequence-dictionary-validation
If specified, do not check the sequence dictionaries from our inputs for compatibility. Use at your own risk!
boolean false
--disable-tool-default-read-filters / -disable-tool-default-read-filters
Disable all tool default read filters (WARNING: many tools will not function correctly without their default read filters on)
boolean false
--exclude-intervals / -XL
One or more genomic intervals to exclude from processing
Use this argument to exclude certain parts of the genome from the analysis (like -L, but the opposite).
This argument can be specified multiple times. You can use samtools-style intervals either explicitly on the
command line (e.g. -XL 1 or -XL 1:100-200) or by loading in a file containing a list of intervals
(e.g. -XL myFile.intervals).
List[String] []
--exclusion-interval-padding / NA
Exclusion interval padding.
int 0 [ [ -∞ ∞ ] ]
--exclusion-intervals / NA
file of reference intervals to exclude
This is a file that calls out the coordinates of intervals in the reference assembly to exclude from
consideration when calling putative breakpoints.
Each line is a tab-delimited interval with 1-based inclusive coordinates like this:
chr1 124535434 142535434
String null
--exp-interpret / NA
flag to signal that the user wants to run the experimental interpretation tool as well
Boolean false
--expand-assembly-graph / NA
Traverse assembly graph and produce contigs for all paths.
boolean true
--external-evidence / NA
external evidence input file
String null
--external-evidence-uncertainty / NA
Uncertainty in location of external evidence.
int 150 [ [ -∞ ∞ ] ]
--external-evidence-weight / NA
Weight to give external evidence.
int 10 [ [ -∞ ∞ ] ]
--fastq-dir / NA
output dir for assembled fastqs
String null
--gatk-config-file / NA
A configuration file to use with the GATK.
String null
--gcs-max-retries / -gcs-retries
If the GCS bucket channel errors out, how many times it will attempt to re-initiate the connection
int 20 [ [ -∞ ∞ ] ]
--gcs-project-for-requester-pays / NA
Project to bill when accessing "requester pays" buckets. If unset, these buckets cannot be accessed.
String ""
--help / -h
display the help message
boolean false
--high-coverage-intervals / NA
file for high-coverage intervals output
String null
--high-depth-coverage-factor / NA
We filter out contiguous regions of the genome that have coverage of at least high-depth-coverage-factor * avg-coverage and a peak coverage of high-depth-coverage-peak-factor * avg-coverage, because the reads mapped to those regions tend to be non-local and high depth prevents accurate assembly.
int 3 [ [ -∞ ∞ ] ]
--high-depth-coverage-peak-factor / NA
We filter out contiguous regions of the genome that have coverage of at least high-depth-coverage-factor * avg-coverage and a peak coverage of high-depth-coverage-peak-factor * avg-coverage, because the reads mapped to those regions tend to be non-local and high depth prevents accurate assembly.
int 7 [ [ -∞ ∞ ] ]
--imprecise-variant-evidence-threshold / NA
Number of pieces of imprecise evidence necessary to call a variant in the absence of an assembled breakpoint.
int 7 [ [ -∞ ∞ ] ]
--include-mapping-location / NA
Include read mapping location in FASTQ files.
boolean true
--input / -I
BAM/SAM/CRAM file containing reads
R List[String] []
--interval-exclusion-padding / -ixp
Amount of padding (in bp) to add to each interval you are excluding.
Use this to add padding to the intervals specified using -XL. For example, '-XL 1:100' with a
padding value of 20 would turn into '-XL 1:80-120'. This is typically used to add padding around targets when
analyzing exomes.
int 0 [ [ -∞ ∞ ] ]
--interval-merging-rule / -imr
Interval merging rule for abutting intervals
By default, the program merges abutting intervals (i.e. intervals that are directly side-by-side but do not
actually overlap) into a single continuous interval. However you can change this behavior if you want them to be
treated as separate intervals instead.
The --interval-merging-rule argument is an enumerated type (IntervalMergingRule), which can have one of the following values:
- ALL
- OVERLAPPING_ONLY
IntervalMergingRule ALL
--interval-only-assembly / NA
Don't look for extra reads mapped outside the interval.
boolean false
--interval-padding / -ip
Amount of padding (in bp) to add to each interval you are including.
Use this to add padding to the intervals specified using -L. For example, '-L 1:100' with a
padding value of 20 would turn into '-L 1:80-120'. This is typically used to add padding around targets when
analyzing exomes.
int 0 [ [ -∞ ∞ ] ]
--interval-set-rule / -isr
Set merging approach to use for combining interval inputs
By default, the program will take the UNION of all intervals specified using -L and/or -XL. However, you can
change this setting for -L, for example if you want to take the INTERSECTION of the sets instead. E.g. to
perform the analysis only on chromosome 1 exomes, you could specify -L exomes.intervals -L 1 --interval-set-rule
INTERSECTION. However, it is not possible to modify the merging approach for intervals passed using -XL (they will
always be merged using UNION).
Note that if you specify both -L and -XL, the -XL interval set will be subtracted from the -L interval set.
The --interval-set-rule argument is an enumerated type (IntervalSetRule), which can have one of the following values:
- UNION
- Take the union of all intervals
- INTERSECTION
- Take the intersection of intervals (the subset that overlaps all intervals specified)
IntervalSetRule UNION
--intervals / -L
One or more genomic intervals over which to operate
List[String] []
--k-size / NA
Kmer size.
int 51 [ [ -∞ ∞ ] ]
--kmer-intervals / NA
file for kmer intervals output
String null
--kmer-max-dust-score / NA
Maximum kmer DUST score.
int 49 [ [ -∞ ∞ ] ]
--kmers-to-ignore / NA
file containing ubiquitous kmer list. see FindBadGenomicKmersSpark to generate it.
This is a path to a file of kmers that appear too frequently in the reference to be usable as probes to localize
reads. We don't calculate it here, because it depends only on the reference.
The program FindBadGenomicKmersSpark can produce such a list for you.
R String null
--max-callable-imprecise-deletion-size / NA
Maximum size deletion to call based on imprecise evidence without corroborating read depth evidence
int 15000 [ [ -∞ ∞ ] ]
--max-fastq-size / NA
Maximum total bases in FASTQs that can be assembled.
int 3000000 [ [ -∞ ∞ ] ]
--max-tracked-fragment-length / NA
Largest fragment size that will be explicitly counted in determining fragment size statistics.
int 2000 [ [ -∞ ∞ ] ]
--min-align-length / NA
Minimum flanking alignment length
Integer 50 [ [ -∞ ∞ ] ]
--min-coherent-evidence-coverage-ratio / NA
Minimum weight of the evidence that shares a distal target locus to validate the evidence, as a ratio of the mean coverage in the BAM. The default value is coherent-count / mean coverage ~ 7 / 42.9 ~ 0.163
double 0.1633408753260167 [ [ -∞ ∞ ] ]
--min-evidence-coverage-ratio / NA
Minimum weight of the corroborating read evidence to validate some single piece of evidence, as a ratio of the mean coverage in the BAM. The default value is overlap-count / mean coverage ~ 15 / 42.9 ~ 0.350
double 0.35001616141289293 [ [ -∞ ∞ ] ]
--min-evidence-mapq / NA
The minimum mapping quality for reads used to gather evidence of breakpoints.
int 20 [ [ -∞ ∞ ] ]
--min-evidence-match-length / NA
The minimum length of the matched portion of an interesting alignment. Reads that don't match at least this many reference bases won't be used in gathering evidence.
int 45 [ [ -∞ ∞ ] ]
--min-kmers-per-interval / NA
Minimum number of localizing kmers in a valid interval.
int 5 [ [ -∞ ∞ ] ]
--min-mq / -mq
Minimum mapping quality of evidence assembly contig
Integer 30 [ [ -∞ ∞ ] ]
--num-reducers / NA
For tools that shuffle data or write an output, sets the number of reducers. Defaults to 0, which gives one partition per 10MB of input.
int 0 [ [ -∞ ∞ ] ]
--output / -O
directory for VCF output, including output from the experimental interpretation tool if so requested; the directory will be created if not present, and the sample name will be appended after the provided argument
R String null
--output-shard-tmp-dir / NA
when writing a BAM in single-sharded mode, the directory in which to write the temporary intermediate output shards; if not specified, .parts/ will be used
Exclusion: This argument cannot be used at the same time as sharded-output.
String null
--pop-variant-bubbles / NA
Aggressively simplify local assemblies, ignoring small variants.
boolean false
--program-name / NA
Name of the program running
String null
--qname-intervals-for-assembly / NA
file for mapped qname intervals output
String null
--qname-intervals-mapped / NA
file for mapped qname intervals output
String null
--QUIET / NA
Whether to suppress job-summary info on System.err.
Boolean false
--read-filter / -RF
Read filters to be applied before analysis
List[String] []
--read-index / -read-index
Indices to use for the read inputs. If specified, an index must be provided for every read input and in the same order as the read inputs. If this argument is not specified, the path to the index for each input will be inferred automatically.
List[String] []
--read-metadata / NA
output file for read metadata
String null
--read-validation-stringency / -VS
Validation stringency for all SAM/BAM/CRAM/SRA files read by this program. The default stringency value SILENT can improve performance when processing a BAM file in which variable-length data (read, qualities, tags) do not otherwise need to be decoded.
The --read-validation-stringency argument is an enumerated type (ValidationStringency), which can have one of the following values:
- STRICT
- LENIENT
- SILENT
ValidationStringency SILENT
--reference / -R
Reference sequence file
R String null
--remove-shadowed-contigs / NA
Simplify local assemblies by removing contigs shadowed by similar contigs.
boolean true
--run-without-gaps-annotation / NA
Allow evidence filter to run without gaps annotation (assume no gaps).
boolean false
--run-without-umap-s100-annotation / NA
Allow evidence filter to run without annotation for single-read mappability of 100-mers (assume all mappable).
boolean false
--sharded-output / NA
For tools that write an output, write the output in multiple pieces (shards)
Exclusion: This argument cannot be used at the same time as output-shard-tmp-dir.
boolean false
--showHidden / -showHidden
display hidden arguments
boolean false
--spark-master / NA
URL of the Spark Master to submit jobs to when using the Spark pipeline runner.
String local[*]
--sv-evidence-filter-model-file / NA
Path to xgboost classifier model file for evidence filtering
String null
--sv-evidence-filter-threshold-probability / NA
Minimum classified probability for a piece of evidence to pass xgboost evidence filter
double 0.92 [ [ -∞ ∞ ] ]
--sv-evidence-filter-type / NA
Filter method for selecting evidence to group into Assembly Intervals
The --sv-evidence-filter-type argument is an enumerated type (SvEvidenceFilterType), which can have one of the following values:
- DENSITY
- XGBOOST
SvEvidenceFilterType DENSITY
--sv-genome-gaps-file / NA
Path to file enumerating gaps in the reference genome, used by classifier to score evidence for filtering. To use classifier without specifying gaps file, pass the flag --run-without-gaps-annotation
String null
--sv-genome-umap-s100-file / NA
Path to single read 100-mer mappability file in the reference genome, used by classifier to score evidence for filtering. To use classifier without specifying mappability file, pass the flag --run-without-umap-s100-annotation
String null
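Putting these together, switching from the default DENSITY filter to the xgboost classifier might look like the following sketch (the model and annotation file names are placeholders; alternatively, pass --run-without-gaps-annotation and --run-without-umap-s100-annotation as described above):

gatk StructuralVariationDiscoveryPipelineSpark \
    -I input_reads.bam \
    -R reference.2bit \
    --aligner-index-image reference.img \
    --kmers-to-ignore kmers_to_ignore.txt \
    --contig-sam-file aligned_contigs.sam \
    --sv-evidence-filter-type XGBOOST \
    --sv-evidence-filter-model-file classifier_model.bin \
    --sv-genome-gaps-file genome_gaps.bed \
    --sv-genome-umap-s100-file umap_s100.bed \
    -O structural_variants.vcf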
--target-link-file / NA
output file for non-assembled breakpoints in bedpe format
String null
--tmp-dir / NA
Temp directory to use.
String null
--truth-interval-padding / NA
Breakpoint padding for evaluation against truth data.
int 50 [ [ -∞ ∞ ] ]
--unfiltered-breakpoint-evidence-dir / NA
directory for evidence output
String null
--use-jdk-deflater / -jdk-deflater
Whether to use the JdkDeflater (as opposed to IntelDeflater)
boolean false
--use-jdk-inflater / -jdk-inflater
Whether to use the JdkInflater (as opposed to IntelInflater)
boolean false
--verbosity / -verbosity
Control verbosity of logging.
The --verbosity argument is an enumerated type (LogLevel), which can have one of the following values:
- ERROR
- WARNING
- INFO
- DEBUG
LogLevel INFO
--version / NA
display the version number for this tool
boolean false
--write-gfas / NA
Write GFA representation of assemblies in fastq-dir.
boolean false
--z-dropoff / NA
ZDropoff (see Bwa mem manual) for contig alignment.
int 20 [ [ -∞ ∞ ] ]
GATK version 4.0.12.0.