Takes unaligned or aligned reads and runs BWA (if specified), MarkDuplicates, BQSR, and HaplotypeCaller to generate a VCF file of variants
Category Short Variant Discovery
Overview
ReadsPipelineSpark is our standard pipeline that takes unaligned or aligned reads and runs BWA (if specified), MarkDuplicates, BQSR, and HaplotypeCaller. The final result is analysis-ready variants.
Examples
gatk ReadsPipelineSpark \
  -I gs://my-gcs-bucket/aligned_reads.bam \
  -R gs://my-gcs-bucket/reference.fasta \
  --known-sites gs://my-gcs-bucket/sites_of_variation.vcf \
  -O gs://my-gcs-bucket/output.vcf \
  -- \
  --sparkRunner GCS \
  --cluster my-dataproc-cluster
To additionally align reads with BWA-MEM:
gatk ReadsPipelineSpark \
  -I gs://my-gcs-bucket/unaligned_reads.bam \
  -R gs://my-gcs-bucket/reference.fasta \
  --known-sites gs://my-gcs-bucket/sites_of_variation.vcf \
  --align \
  -O gs://my-gcs-bucket/output.vcf \
  -- \
  --sparkRunner GCS \
  --cluster my-dataproc-cluster
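For a quick test on a single machine, the pipeline can also be run on the local Spark master instead of a Dataproc cluster; this is a minimal sketch in which the file paths and thread count are placeholders:
gatk ReadsPipelineSpark \
  -I /data/aligned_reads.bam \
  -R /data/reference.fasta \
  --known-sites /data/sites_of_variation.vcf \
  -O /data/output.vcf \
  --spark-master local[4]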
ReadsPipelineSpark specific arguments
This table summarizes the command-line arguments that are specific to this tool. For more details on each argument, see the list further down below the table or click on an argument name to jump directly to that entry in the list.
Argument name(s) | Default value | Summary
---|---|---
Required Arguments | |
--input -I | [] | BAM/SAM/CRAM file containing reads
--known-sites | [] | the known variants
--output -O | null | the output vcf
--reference -R | null | Reference sequence file
Optional Tool Arguments | |
--align | false | whether to perform alignment using BWA-MEM
--alleles | null | The set of alleles at which to genotype when --genotyping_mode is GENOTYPE_GIVEN_ALLELES
--annotate-with-num-discovered-alleles | false | If provided, we will annotate records with the number of alternate alleles that were discovered (but not necessarily genotyped) at a given site
--annotation -A | [] | One or more specific annotations to add to variant calls
--annotation-group -G | [StandardAnnotation, StandardHCAnnotation] | One or more groups of annotations to apply to variant calls
--annotations-to-exclude -AX | [] | One or more specific annotations to exclude from variant calls
--arguments_file | [] | read one or more arguments files and add them to the command line
--assembly-region-padding | 100 | Number of additional bases of context to include around each assembly region
--bam-partition-size | 0 | maximum number of bytes to read from a file into each partition of reads. Setting this higher will result in fewer partitions. Note that this will not be equal to the size of the partition in memory. Defaults to 0, which uses the default split size (determined by the Hadoop input format, typically the size of one HDFS block).
--base-quality-score-threshold | 18 | Base qualities below this threshold will be reduced to the minimum (6)
--binary-tag-name | null | the binary tag covariate name if using it
--bqsr-baq-gap-open-penalty | 40.0 | BQSR BAQ gap open penalty (Phred Scaled). Default value is 40. 30 is perhaps better for whole genome call sets
--bwa-mem-index-image -image | null | The BWA-MEM index image file name that you've distributed to each executor
--conf | [] | spark properties to set on the spark context in the format <property>=<value>
--contamination-fraction-to-filter -contamination | 0.0 | Fraction of contamination in sequencing data (for all samples) to aggressively remove
--dbsnp -D | null | dbSNP file
--default-base-qualities | -1 | Assign a default base quality
--deletions-default-quality | 45 | default quality for the base deletions covariate
--disable-sequence-dictionary-validation | false | If specified, do not check the sequence dictionaries from our inputs for compatibility. Use at your own risk!
--duplicates-scoring-strategy -DS | SUM_OF_BASE_QUALITIES | The scoring strategy for choosing the non-duplicate among candidates.
--emit-original-quals | false | Emit original base qualities under the OQ tag
--gcs-max-retries -gcs-retries | 20 | If the GCS bucket channel errors out, how many times it will attempt to re-initiate the connection
--genotyping-mode | DISCOVERY | Specifies how to determine the alternate alleles to use for genotyping
--global-qscore-prior | -1.0 | Global Qscore Bayesian prior to use for BQSR
--graph-output -graph | null | Write debug assembly graph information to this file
--help -h | false | display the help message
--heterozygosity | 0.001 | Heterozygosity value used to compute prior likelihoods for any locus. See the GATKDocs for full details on the meaning of this population genetics concept
--heterozygosity-stdev | 0.01 | Standard deviation of heterozygosity for SNP and indel calling.
--indel-heterozygosity | 1.25E-4 | Heterozygosity for indel calling. See the GATKDocs for heterozygosity for full details on the meaning of this population genetics concept
--indels-context-size -ics | 3 | Size of the k-mer context to be used for base insertions and deletions
--insertions-default-quality | 45 | default quality for the base insertions covariate
--interval-merging-rule -imr | ALL | Interval merging rule for abutting intervals
--intervals -L | [] | One or more genomic intervals over which to operate
--join-strategy | BROADCAST | the join strategy for reference bases and known variants
--low-quality-tail | 2 | minimum quality for the bases in the tail of the reads to be considered
--max-assembly-region-size | 300 | Maximum size of an assembly region
--max-reads-per-alignment-start | 50 | Maximum number of reads to retain per alignment start position. Reads above this threshold will be downsampled. Set to 0 to disable.
--maximum-cycle-value -max-cycle | 500 | The maximum cycle value permitted for the Cycle covariate
--min-assembly-region-size | 50 | Minimum size of an assembly region
--min-base-quality-score -mbq | 10 | Minimum base quality required to consider a base for calling
--mismatches-context-size -mcs | 2 | Size of the k-mer context to be used for base mismatches
--mismatches-default-quality | -1 | default quality for the base mismatches covariate
--native-pair-hmm-threads | 4 | How many threads should a native pairHMM implementation use
--native-pair-hmm-use-double-precision | false | use double precision in the native pairHmm. This is slower but matches the java implementation better
--num-reducers | 0 | For tools that shuffle data or write an output, sets the number of reducers. Defaults to 0, which gives one partition per 10MB of input.
--output-bam | null | the output bam
--output-mode | EMIT_VARIANTS_ONLY | Specifies which type of calls we should output
--preserve-qscores-less-than | 6 | Don't recalibrate bases with quality scores less than this threshold (with -bqsr)
--program-name | null | Name of the program running
--quantize-quals | 0 | Quantize quality scores to a given number of levels
--quantizing-levels | 16 | number of distinct quality scores in the quantized output
--read-shard-padding | 100 | Each read shard has this many bases of extra context on each side. Read shards must have as much or more padding than assembly regions.
--read-shard-size | 5000 | Maximum size of each read shard, in bases. For good performance, this should be much larger than the maximum assembly region size.
--sample-name -ALIAS | null | Name of single sample to use from a multi-sample bam
--sample-ploidy -ploidy | 2 | Ploidy (number of chromosomes) per sample. For pooled data, set to (Number of samples in each pool * Sample Ploidy).
--sharded-output | false | For tools that write an output, write the output in multiple pieces (shards)
--single-end-alignment -se | false | Run single-end instead of paired-end alignment
--spark-master | local[*] | URL of the Spark Master to submit jobs to when using the Spark pipeline runner.
--standard-min-confidence-threshold-for-calling -stand-call-conf | 10.0 | The minimum phred-scaled confidence threshold at which variants should be called
--use-new-qual-calculator -new-qual | false | If provided, we will use the new AF model instead of the so-called exact model
--use-original-qualities -OQ | false | Use the base quality scores from the OQ tag
--version | false | display the version number for this tool
Optional Common Arguments | |
--disable-read-filter -DF | [] | Read filters to be disabled before analysis
--disable-tool-default-read-filters | false | Disable all tool default read filters
--exclude-intervals -XL | [] | One or more genomic intervals to exclude from processing
--gatk-config-file | null | A configuration file to use with the GATK.
--interval-exclusion-padding -ixp | 0 | Amount of padding (in bp) to add to each interval you are excluding.
--interval-padding -ip | 0 | Amount of padding (in bp) to add to each interval you are including.
--interval-set-rule -isr | UNION | Set merging approach to use for combining interval inputs
--QUIET | false | Whether to suppress job-summary info on System.err.
--read-filter -RF | [] | Read filters to be applied before analysis
--read-index | [] | Indices to use for the read inputs. If specified, an index must be provided for every read input and in the same order as the read inputs. If this argument is not specified, the path to the index for each input will be inferred automatically.
--read-validation-stringency -VS | SILENT | Validation stringency for all SAM/BAM/CRAM/SRA files read by this program. The default stringency value SILENT can improve performance when processing a BAM file in which variable-length data (read, qualities, tags) do not otherwise need to be decoded.
--TMP_DIR | [] | Undocumented option
--use-jdk-deflater -jdk-deflater | false | Whether to use the JdkDeflater (as opposed to IntelDeflater)
--use-jdk-inflater -jdk-inflater | false | Whether to use the JdkInflater (as opposed to IntelInflater)
--verbosity | INFO | Control verbosity of logging.
Advanced Arguments | |
--active-probability-threshold | 0.002 | Minimum probability for a locus to be considered active.
--all-site-pls | false | Annotate all sites with PLs
--allow-non-unique-kmers-in-ref | false | Allow graphs that have non-unique kmers in the reference
--bam-output -bamout | null | File to which assembled haplotypes should be written
--bam-writer-type | CALLED_HAPLOTYPES | Which haplotypes should be written to the BAM
--comp | [] | Comparison VCF file(s)
--consensus | false | 1000G consensus mode
--contamination-fraction-per-sample-file -contamination-file | null | Tab-separated file containing fraction of contamination in sequencing data (per sample) to aggressively remove. Format is one "<SampleID><TAB><Contamination>" (Contamination is a double) per line; no header.
--debug | false | Print out very verbose debug information about each triggering active region
--disable-optimizations | false | Don't skip calculations in ActiveRegions with no variants
--do-not-run-physical-phasing | false | Disable physical phasing
--dont-increase-kmer-sizes-for-cycles | false | Disable iterating over kmer sizes when graph cycles are detected
--dont-trim-active-regions | false | If specified, we will not trim down the active region from the full region (active + extension) to just the active interval for genotyping
--dont-use-soft-clipped-bases | false | Do not analyze soft clipped bases in the reads
--emit-ref-confidence -ERC | NONE | Mode for emitting reference confidence scores
--gvcf-gq-bands -GQB | [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 70, 80, 90, 99] | Exclusive upper bounds for reference confidence GQ bands (must be in [1, 100] and specified in increasing order)
--indel-size-to-eliminate-in-ref-model | 10 | The size of an indel to check for in the reference model
--input-prior | [] | Input prior for calls
--kmer-size | [10, 25] | Kmer size to use in the read threading assembler
--max-alternate-alleles | 6 | Maximum number of alternate alleles to genotype
--max-genotype-count | 1024 | Maximum number of genotypes to consider at any site
--max-num-haplotypes-in-population | 128 | Maximum number of haplotypes to consider for your population
--max-prob-propagation-distance | 50 | Upper limit on how many bases away probability mass can be moved around when calculating the boundaries between active and inactive assembly regions
--min-dangling-branch-length | 4 | Minimum length of a dangling branch to attempt recovery
--min-pruning | 2 | Minimum support to not prune paths in the graph
--num-pruning-samples | 1 | Number of samples that must pass the minPruning threshold
--pair-hmm-gap-continuation-penalty | 10 | Flat gap continuation penalty for use in the Pair HMM
--pcr-indel-model | CONSERVATIVE | The PCR indel model to use
--phred-scaled-global-read-mismapping-rate | 45 | The global assumed mismapping rate for reads
--round-down-quantized | false | Round quals down to nearest quantized qual
--showHidden | false | display hidden arguments
--smith-waterman | JAVA | Which Smith-Waterman implementation to use, generally FASTEST_AVAILABLE is the right choice
--static-quantized-quals | [] | Use static quantized quality scores to a given number of levels (with -bqsr)
--use-alleles-trigger | false | Use additional trigger on variants found in an external alleles file
--use-filtered-reads-for-annotations | false | Use the contamination-filtered read maps for the purposes of annotating variants
Deprecated Arguments | |
--recover-dangling-heads | false | This argument is deprecated since version 3.3
Argument details
Arguments in this list are specific to this tool. Keep in mind that other arguments are available that are shared with other tools (e.g. command-line GATK arguments); see Inherited arguments above.
--active-probability-threshold / NA
Minimum probability for a locus to be considered active.
double 0.002 [ [ -∞ ∞ ] ]
--align / NA
whether to perform alignment using BWA-MEM
boolean false
--all-site-pls / NA
Annotate all sites with PLs
Advanced, experimental argument: if the SNP likelihood model is specified and the EMIT_ALL_SITES output mode is set, setting this argument will also emit PLs at all sites.
This will give a measure of reference confidence and a measure of which alt alleles are more plausible (if any).
WARNINGS:
- This feature will inflate VCF file size considerably.
- All SNP ALT alleles will be emitted with corresponding 10 PL values.
- An error will be emitted if EMIT_ALL_SITES is not set, or if anything other than diploid SNP model is used
boolean false
--alleles / NA
The set of alleles at which to genotype when --genotyping_mode is GENOTYPE_GIVEN_ALLELES
When the caller is put into GENOTYPE_GIVEN_ALLELES mode it will genotype the samples using only the alleles provided by this argument
FeatureInput[VariantContext] null
--allow-non-unique-kmers-in-ref / NA
Allow graphs that have non-unique kmers in the reference
By default, the program does not allow processing of reference sections that contain non-unique kmers. Disabling
this check may cause problems in the assembly graph.
boolean false
--annotate-with-num-discovered-alleles / NA
If provided, we will annotate records with the number of alternate alleles that were discovered (but not necessarily genotyped) at a given site
Depending on the value of the --max_alternate_alleles argument, we may genotype only a fraction of the alleles being sent on for genotyping.
Using this argument instructs the genotyper to annotate (in the INFO field) the number of alternate alleles that were originally discovered at the site.
boolean false
--annotation / -A
One or more specific annotations to add to variant calls
Which annotations to include in variant calls in the output. These supplement annotations provided by annotation groups.
List[String] []
--annotation-group / -G
One or more groups of annotations to apply to variant calls
Which groups of annotations to add to the output variant calls.
Any requirements that are not met (e.g. failing to provide a pedigree file for a pedigree-based annotation) may cause the run to fail.
List[String] [StandardAnnotation, StandardHCAnnotation]
--annotations-to-exclude / -AX
One or more specific annotations to exclude from variant calls
Which annotations to exclude from output in the variant calls. Note that this argument has higher priority than the
-A or -G arguments, so these annotations will be excluded even if they are explicitly included with the other
options.
List[String] []
--arguments_file / NA
read one or more arguments files and add them to the command line
List[File] []
--assembly-region-padding / NA
Number of additional bases of context to include around each assembly region
int 100 [ [ -∞ ∞ ] ]
--bam-output / -bamout
File to which assembled haplotypes should be written
The assembled haplotypes and locally realigned reads will be written as BAM to this file if requested. Really
for debugging purposes only. Note that the output here does not include uninformative reads so that not every
input read is emitted to the bam.
Turning on this mode may result in serious performance cost for the caller. It's really only appropriate to
use in specific areas where you want to better understand why the caller is making specific calls.
The reads are written out containing an "HC" tag (integer) that encodes which haplotype each read best matches
according to the haplotype caller's likelihood calculation. The use of this tag is primarily intended
to allow good coloring of reads in IGV. Simply go to "Color Alignments By > Tag" and enter "HC" to more
easily see which reads go with these haplotypes.
Note that the haplotypes (called or all, depending on mode) are emitted as single reads covering the entire
active region, coming from sample "HC" and a special read group called "ArtificialHaplotype". This will increase the
pileup depth compared to what would be expected from the reads only, especially in complex regions.
Note also that only reads that are actually informative about the haplotypes are emitted. By informative we mean
that there's a meaningful difference in the likelihood of the read coming from one haplotype compared to
its next best haplotype.
If multiple BAMs are passed as input to the tool (as is common for M2), then they will be combined in the bamout
output and tagged with the appropriate sample names.
The best way to visualize the output of this mode is with IGV. Tell IGV to color the alignments by tag,
and give it the "HC" tag, so you can see which reads support each haplotype. Finally, you can tell IGV
to group by sample, which will separate the potential haplotypes from the reads. All of this can be seen in
this screenshot
String null
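As a rough sketch (file names and the interval are placeholders), the debugging BAM can be requested alongside the normal output, ideally restricted to a small region with -L so the file stays manageable:
gatk ReadsPipelineSpark \
  -I input.bam \
  -R reference.fasta \
  --known-sites sites_of_variation.vcf \
  -L 20:10000000-10200000 \
  -bamout assembled_haplotypes.bam \
  -O output.vcf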
--bam-partition-size / NA
maximum number of bytes to read from a file into each partition of reads. Setting this higher will result in fewer partitions. Note that this will not be equal to the size of the partition in memory. Defaults to 0, which uses the default split size (determined by the Hadoop input format, typically the size of one HDFS block).
long 0 [ [ -∞ ∞ ] ]
--bam-writer-type / NA
Which haplotypes should be written to the BAM
The type of BAM output we want to see. This determines whether HC will write out all of the haplotypes it
considered (top 128 max) or just the ones that were selected as alleles and assigned to samples.
The --bam-writer-type argument is an enumerated type (WriterType), which can have one of the following values:
- ALL_POSSIBLE_HAPLOTYPES
- A mode that's for method developers. Writes out all of the possible haplotypes considered, as well as reads aligned to each
- CALLED_HAPLOTYPES
- A mode for users. Writes out the reads aligned only to the called haplotypes. Useful to understand why the caller is calling what it is
WriterType CALLED_HAPLOTYPES
--base-quality-score-threshold / NA
Base qualities below this threshold will be reduced to the minimum (6)
Bases with a quality below this threshold will be reduced to the minimum usable quality score (6).
byte 18 [ [ -∞ ∞ ] ]
--binary-tag-name / NA
the binary tag covariate name if using it
The tag name for the binary tag covariate (if using it)
String null
--bqsr-baq-gap-open-penalty / NA
BQSR BAQ gap open penalty (Phred Scaled). Default value is 40. 30 is perhaps better for whole genome call sets
double 40.0 [ [ -∞ ∞ ] ]
--bwa-mem-index-image / -image
The BWA-MEM index image file name that you've distributed to each executor
The BWA-MEM index image file name that you've distributed to each executor. The image file can be generated using
the BwaMemIndexImageCreator tool. If this argument is not specified, the default behavior is to look for a
file whose name is the FASTA reference file with a .img suffix.
String null
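The index image itself is created once from the reference with the BwaMemIndexImageCreator tool; a minimal sketch (file names are placeholders):
gatk BwaMemIndexImageCreator \
  -I reference.fasta \
  -O reference.fasta.img
The resulting image is then passed to ReadsPipelineSpark together with --align, e.g. --align -image reference.fasta.img.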
--comp / -comp
Comparison VCF file(s)
If a call overlaps with a record from the provided comp track, the INFO field will be annotated
as such in the output with the track name (e.g. -comp:FOO will have 'FOO' in the INFO field). Records that are
filtered in the comp track will be ignored. Note that 'dbSNP' has been special-cased (see the --dbsnp argument).
List[FeatureInput[VariantContext]] []
--conf / -conf
spark properties to set on the spark context in the format <property>=<value>
List[String] []
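For example, Spark properties can be passed as repeated --conf arguments; the memory settings below are illustrative placeholders, not tuning recommendations:
gatk ReadsPipelineSpark \
  -I input.bam \
  -R reference.fasta \
  --known-sites sites_of_variation.vcf \
  -O output.vcf \
  --conf spark.executor.memory=4g \
  --conf spark.driver.memory=2g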
--consensus / NA
1000G consensus mode
This argument is specifically intended for 1000G consensus analysis mode. Setting this flag will inject all
provided alleles to the assembly graph but will not forcibly genotype all of them.
boolean false
--contamination-fraction-per-sample-file / -contamination-file
Tab-separated file containing fraction of contamination in sequencing data (per sample) to aggressively remove. Format is one "<SampleID><TAB><Contamination>" (Contamination is a double) per line; no header.
This argument specifies a file with two columns "sample" and "contamination" specifying the contamination level for those samples.
Samples that do not appear in this file will be processed with CONTAMINATION_FRACTION.
File null
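A hypothetical contamination file (the sample names and fractions below are invented for illustration) would contain one tab-separated sample/fraction pair per line, with no header:
NA12878	0.05
NA12891	0.02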
--contamination-fraction-to-filter / -contamination
Fraction of contamination in sequencing data (for all samples) to aggressively remove
If this fraction is greater than zero, the caller will aggressively attempt to remove contamination through biased down-sampling of reads.
Basically, it will ignore the contamination fraction of reads for each alternate allele. So if the pileup contains N total bases, then we
will try to remove (N * contamination fraction) bases for each alternate allele.
double 0.0 [ [ -∞ ∞ ] ]
--dbsnp / -D
dbSNP file
A dbSNP VCF file.
FeatureInput[VariantContext] null
--debug / -debug
Print out very verbose debug information about each triggering active region
boolean false
--default-base-qualities / NA
Assign a default base quality
If reads are missing some or all base quality scores, this value will be used for all base quality scores.
By default this is set to -1 to disable default base quality assignment.
byte -1 [ [ -∞ ∞ ] ]
--deletions-default-quality / NA
default quality for the base deletions covariate
A default base quality to use as a prior (reported quality) in the base deletions covariate model. This parameter is used for all reads without base deletion quality scores. A negative value turns it off. [default is on]
byte 45 [ [ -∞ ∞ ] ]
--disable-optimizations / NA
Don't skip calculations in ActiveRegions with no variants
If set, certain "early exit" optimizations in HaplotypeCaller, which aim to save compute and time by skipping
calculations if an ActiveRegion is determined to contain no variants, will be disabled. This is most likely to be useful if
you're using the -bamout argument to examine the placement of reads following reassembly and are interested in seeing the mapping of
reads in regions with no variations. Setting the -forceActive and -dontTrimActiveRegions flags may also be necessary.
boolean false
--disable-read-filter / -DF
Read filters to be disabled before analysis
List[String] []
--disable-sequence-dictionary-validation / -disable-sequence-dictionary-validation
If specified, do not check the sequence dictionaries from our inputs for compatibility. Use at your own risk!
boolean false
--disable-tool-default-read-filters / -disable-tool-default-read-filters
Disable all tool default read filters
boolean false
--do-not-run-physical-phasing / NA
Disable physical phasing
As of GATK 3.3, HaplotypeCaller outputs physical (read-based) information (see version 3.3 release notes and documentation for details). This argument disables that behavior.
boolean false
--dont-increase-kmer-sizes-for-cycles / NA
Disable iterating over kmer sizes when graph cycles are detected
When graph cycles are detected, the normal behavior is to increase kmer sizes iteratively until the cycles are
resolved. Disabling this behavior may cause the program to give up on assembling the ActiveRegion.
boolean false
--dont-trim-active-regions / NA
If specified, we will not trim down the active region from the full region (active + extension) to just the active interval for genotyping
boolean false
--dont-use-soft-clipped-bases / NA
Do not analyze soft clipped bases in the reads
boolean false
--duplicates-scoring-strategy / -DS
The scoring strategy for choosing the non-duplicate among candidates.
The --duplicates-scoring-strategy argument is an enumerated type (MarkDuplicatesScoringStrategy), which can have one of the following values:
- SUM_OF_BASE_QUALITIES
- TOTAL_MAPPED_REFERENCE_LENGTH
MarkDuplicatesScoringStrategy SUM_OF_BASE_QUALITIES
--emit-original-quals / NA
Emit original base qualities under the OQ tag
The tool is capable of writing out the original quality scores of each read in the recalibrated output file
under the "OQ" tag. By default, this behavior is disabled because emitting original qualities results in a
significant increase of the file size. Use this flag to turn on emission of original qualities.
boolean false
--emit-ref-confidence / -ERC
Mode for emitting reference confidence scores
The reference confidence mode makes it possible to emit a per-bp or summarized confidence estimate for a site being strictly homozygous-reference.
See http://www.broadinstitute.org/gatk/guide/article?id=2940 for more details of how this works.
Note that if you set -ERC GVCF, you also need to set -variant_index_type LINEAR and -variant_index_parameter 128000 (with those exact values!).
This requirement is a temporary workaround for an issue with index compression.
The --emit-ref-confidence argument is an enumerated type (ReferenceConfidenceMode), which can have one of the following values:
- NONE
- Regular calling without emitting reference confidence calls.
- BP_RESOLUTION
- Reference model emitted site by site.
- GVCF
- Reference model emitted with condensed non-variant blocks, i.e. the GVCF format.
ReferenceConfidenceMode NONE
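A sketch of enabling the GVCF reference-confidence mode (file names are placeholders; the .g.vcf suffix is just a naming convention):
gatk ReadsPipelineSpark \
  -I input.bam \
  -R reference.fasta \
  --known-sites sites_of_variation.vcf \
  -ERC GVCF \
  -O output.g.vcf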
--exclude-intervals / -XL
One or more genomic intervals to exclude from processing
Use this argument to exclude certain parts of the genome from the analysis (like -L, but the opposite).
This argument can be specified multiple times. You can use samtools-style intervals either explicitly on the
command line (e.g. -XL 1 or -XL 1:100-200) or by loading in a file containing a list of intervals
(e.g. -XL myFile.intervals).
List[String] []
--gatk-config-file / NA
A configuration file to use with the GATK.
String null
--gcs-max-retries / -gcs-retries
If the GCS bucket channel errors out, how many times it will attempt to re-initiate the connection
int 20 [ [ -∞ ∞ ] ]
--genotyping-mode / NA
Specifies how to determine the alternate alleles to use for genotyping
The --genotyping-mode argument is an enumerated type (GenotypingOutputMode), which can have one of the following values:
- DISCOVERY
- The genotyper will choose the most likely alternate allele
- GENOTYPE_GIVEN_ALLELES
- Only the alleles passed by the user should be considered.
GenotypingOutputMode DISCOVERY
--global-qscore-prior / NA
Global Qscore Bayesian prior to use for BQSR
If specified, the value of this argument will be used as a flat prior for all mismatching quality scores instead
of the reported quality score (assigned by the sequencer).
double -1.0 [ [ -∞ ∞ ] ]
--graph-output / -graph
Write debug assembly graph information to this file
This argument is meant for debugging and is not immediately useful for normal analysis use.
String null
--gvcf-gq-bands / -GQB
Exclusive upper bounds for reference confidence GQ bands (must be in [1, 100] and specified in increasing order)
When HC is run in reference confidence mode with banding compression enabled (-ERC GVCF), homozygous-reference
sites are compressed into bands of similar genotype quality (GQ) that are emitted as a single VCF record. See
the FAQ documentation for more details about the GVCF format.
This argument allows you to set the GQ bands. HC expects a list of strictly increasing GQ values
that will act as exclusive upper bounds for the GQ bands. To pass multiple values,
you provide them one by one with the argument, as in `-GQB 10 -GQB 20 -GQB 30` and so on
(this would set the GQ bands to be `[0, 10), [10, 20), [20, 30)` and so on, for example).
Note that GQ values are capped at 99 in the GATK, so values must be integers in [1, 100].
If the last value is strictly less than 100, the last GQ band will start at that value (inclusive)
and end at 100 (exclusive).
List[Integer] [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 70, 80, 90, 99]
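For instance, coarser bands could be requested by repeating the argument; the band values below are illustrative:
gatk ReadsPipelineSpark \
  -I input.bam \
  -R reference.fasta \
  --known-sites sites_of_variation.vcf \
  -ERC GVCF \
  -GQB 20 -GQB 40 -GQB 60 -GQB 99 \
  -O output.g.vcf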
--help / -h
display the help message
boolean false
--heterozygosity / NA
Heterozygosity value used to compute prior likelihoods for any locus. See the GATKDocs for full details on the meaning of this population genetics concept
The expected heterozygosity value used to compute prior probability that a locus is non-reference.
The default priors provided are for humans:
het = 1e-3
which means that the probability of N samples all being hom-ref at a site is:
1 - sum_{i=1}^{2N} (het / i)
Note that heterozygosity as used here is the population genetics concept:
http://en.wikipedia.org/wiki/Zygosity#Heterozygosity_in_population_genetics
That is, a hets value of 0.01 implies that two randomly chosen chromosomes from the population of organisms
would differ from each other (one being A and the other B) at a rate of 1 in 100 bp.
Note that this quantity has nothing to do with the likelihood of any given sample having a heterozygous genotype,
which in the GATK is purely determined by the probability of the observed data P(D | AB) under the model that there
may be an AB het genotype. The posterior probability of this AB genotype would use the het prior, but the GATK
only uses this posterior probability in determining the prob. that a site is polymorphic. So changing the
het parameters only increases the chance that a site will be called non-reference across all samples, but
doesn't actually change the output genotype likelihoods at all, as these aren't posterior probabilities at all.
The quantity that changes whether the GATK considers the possibility of a het genotype at all is the ploidy,
which determines how many chromosomes each individual in the species carries.
Double 0.001 [ [ -∞ ∞ ] ]
--heterozygosity-stdev / NA
Standard deviation of heterozygosity for SNP and indel calling.
The standard deviation of the distribution of alt allele fractions. The above heterozygosity parameters give the
*mean* of this distribution; this parameter gives its spread.
double 0.01 [ [ -∞ ∞ ] ]
--indel-heterozygosity / NA
Heterozygosity for indel calling. See the GATKDocs for heterozygosity for full details on the meaning of this population genetics concept
This argument informs the prior probability of having an indel at a site.
double 1.25E-4 [ [ -∞ ∞ ] ]
--indel-size-to-eliminate-in-ref-model / NA
The size of an indel to check for in the reference model
This parameter determines the maximum size of an indel considered as potentially segregating in the
reference model. It is used to eliminate reads from being indel informative at a site, and determines
by that mechanism the certainty in the reference base. Conceptually, setting this parameter to
X means that each informative read is consistent with any indel of size < X being present at a specific
position in the genome, given its alignment to the reference.
int 10 [ [ -∞ ∞ ] ]
--indels-context-size / -ics
Size of the k-mer context to be used for base insertions and deletions
The context covariate will use a context of this size to calculate its covariate value for base insertions and deletions. Must be between 1 and 13 (inclusive). Note that higher values will increase runtime and required java heap size.
int 3 [ [ -∞ ∞ ] ]
--input / -I
BAM/SAM/CRAM file containing reads
R List[String] []
--input-prior / NA
Input prior for calls
By default, the prior specified with the argument --heterozygosity/-hets is used for variant discovery at a particular locus, using an infinite sites model,
see e.g. Watterson (1975) or Tajima (1996).
This model asserts that the probability of having a population of k variant sites in N chromosomes is proportional to theta/k, for k = 1:N.
There are instances where using this prior might not be desirable, e.g. for population studies where the prior might not be appropriate,
as for example when the ancestral status of the reference allele is not known.
By using this argument, the user can manually specify priors to be used for calling, as a vector of doubles, with the following restrictions:
a) The user must specify 2N values, where N is the number of samples.
b) Only diploid calls are supported.
c) Probability values are specified in double format, in linear space.
d) No negative values are allowed.
e) Values will be added and Pr(AC=0) will be 1 - sum, so that they sum up to one.
f) If user-defined values add to more than one, an error will be produced.
If the user wants completely flat priors, the same value (= 1/(2*N+1)) should be specified 2*N times, e.g.
--input-prior 0.33 --input-prior 0.33
for the single-sample diploid case.
List[Double] []
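Writing out the flat-prior example above as a command sketch for a single diploid sample (2N = 2, so each prior is 1/(2N+1) ≈ 0.33; file names are placeholders):
gatk ReadsPipelineSpark \
  -I input.bam \
  -R reference.fasta \
  --known-sites sites_of_variation.vcf \
  --input-prior 0.33 --input-prior 0.33 \
  -O output.vcf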
--insertions-default-quality / NA
default quality for the base insertions covariate
A default base quality to use as a prior (reported quality) in the insertion covariate model. This parameter is used for all reads without insertion quality scores for each base. [default is on]
byte 45 [ [ -∞ ∞ ] ]
--interval-exclusion-padding / -ixp
Amount of padding (in bp) to add to each interval you are excluding.
Use this to add padding to the intervals specified using -XL. For example, '-XL 1:100' with a
padding value of 20 would turn into '-XL 1:80-120'. This is typically used to add padding around targets when
analyzing exomes.
int 0 [ [ -∞ ∞ ] ]
--interval-merging-rule / -imr
Interval merging rule for abutting intervals
By default, the program merges abutting intervals (i.e. intervals that are directly side-by-side but do not
actually overlap) into a single continuous interval. However you can change this behavior if you want them to be
treated as separate intervals instead.
The --interval-merging-rule argument is an enumerated type (IntervalMergingRule), which can have one of the following values:
- ALL
- OVERLAPPING_ONLY
IntervalMergingRule ALL
--interval-padding / -ip
Amount of padding (in bp) to add to each interval you are including.
Use this to add padding to the intervals specified using -L. For example, '-L 1:100' with a
padding value of 20 would turn into '-L 1:80-120'. This is typically used to add padding around targets when
analyzing exomes.
int 0 [ [ -∞ ∞ ] ]
--interval-set-rule / -isr
Set merging approach to use for combining interval inputs
By default, the program will take the UNION of all intervals specified using -L and/or -XL. However, you can
change this setting for -L, for example if you want to take the INTERSECTION of the sets instead. E.g. to
perform the analysis only on chromosome 1 exomes, you could specify -L exomes.intervals -L 1 --interval-set-rule
INTERSECTION. However, it is not possible to modify the merging approach for intervals passed using -XL (they will
always be merged using UNION).
Note that if you specify both -L and -XL, the -XL interval set will be subtracted from the -L interval set.
The --interval-set-rule argument is an enumerated type (IntervalSetRule), which can have one of the following values:
- UNION
- Take the union of all intervals
- INTERSECTION
- Take the intersection of intervals (the subset that overlaps all intervals specified)
IntervalSetRule UNION
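The INTERSECTION example from the description, written out as a command sketch (the interval file name is a placeholder):
gatk ReadsPipelineSpark \
  -I input.bam \
  -R reference.fasta \
  --known-sites sites_of_variation.vcf \
  -L exomes.intervals \
  -L 1 \
  --interval-set-rule INTERSECTION \
  -O output.vcf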
--intervals / -L
One or more genomic intervals over which to operate
List[String] []
--join-strategy / NA
the join strategy for reference bases and known variants
The --join-strategy argument is an enumerated type (JoinStrategy), which can have one of the following values:
- BROADCAST
- Use a broadcast join strategy, where one side of the join is collected into memory and broadcast to all workers.
- OVERLAPS_PARTITIONER
- Use an overlaps partitioner strategy, where one side of the join is sharded in partitions and the other side is broadcast.
- SHUFFLE
- Use a shuffle join strategy, where both sides of join are shuffled across the workers.
JoinStrategy BROADCAST
--kmer-size / NA
Kmer size to use in the read threading assembler
Multiple kmer sizes can be specified, using e.g. `--kmer-size 10 --kmer-size 25`.
List[Integer] [10, 25]
--known-sites / NA
the known variants
R List[String] []
--low-quality-tail / NA
minimum quality for the bases in the tail of the reads to be considered
Reads with low quality bases on either tail (beginning or end) will not be considered in the context. This parameter defines the quality below which (inclusive) a tail is considered low quality
byte 2 [ [ -∞ ∞ ] ]
--max-alternate-alleles / NA
Maximum number of alternate alleles to genotype
If there are more than this number of alternate alleles presented to the genotyper (either through discovery or GENOTYPE_GIVEN_ALLELES),
then only this many alleles will be used. Note that genotyping sites with many alternate alleles is both CPU and memory intensive and it
scales exponentially based on the number of alternate alleles. Unless there is a good reason to change the default value, we highly recommend
that you not play around with this parameter.
See also --max-genotype-count.
int 6 [ [ -∞ ∞ ] ]
--max-assembly-region-size / NA
Maximum size of an assembly region
int 300 [ [ -∞ ∞ ] ]
--max-genotype-count / NA
Maximum number of genotypes to consider at any site
If there are more than this number of genotypes at a locus presented to the genotyper, then only this many genotypes will be used.
The possible genotypes are simply different ways of partitioning alleles given a specific ploidy assumption.
Therefore, we remove genotypes from consideration by removing alternate alleles that are the least well supported.
The estimate of allele support is based on the ranking of the candidate haplotypes coming out of the graph building step.
Note that the reference allele is always kept.
Note that genotyping sites with large genotype counts is both CPU and memory intensive.
Unless there is a good reason to change the default value, we highly recommend that you not play around with this parameter.
The maximum number of alternative alleles used in the genotyping step will be the lesser of the two:
1. the largest number of alt alleles, given ploidy, that yields a genotype count no higher than --max-genotype-count
2. the value of --max-alternate-alleles
See also --max-alternate-alleles.
int 1024 [ [ -∞ ∞ ] ]
--max-num-haplotypes-in-population / NA
Maximum number of haplotypes to consider for your population
The assembly graph can be quite complex, and could imply a very large number of possible haplotypes. Each haplotype
considered requires N PairHMM evaluations if there are N reads across all samples. In order to control the
run of the haplotype caller we only take maxNumHaplotypesInPopulation paths from the graph, in order of their
weights, no matter how many paths are possible to generate from the graph. Putting this number too low
will result in dropping true variation because paths that include the real variant are not even considered.
You can consider increasing this number when calling organisms with high heterozygosity.
int 128 [ [ -∞ ∞ ] ]
--max-prob-propagation-distance / NA
Upper limit on how many bases away probability mass can be moved around when calculating the boundaries between active and inactive assembly regions
int 50 [ [ -∞ ∞ ] ]
--max-reads-per-alignment-start / NA
Maximum number of reads to retain per alignment start position. Reads above this threshold will be downsampled. Set to 0 to disable.
int 50 [ [ -∞ ∞ ] ]
--maximum-cycle-value / -max-cycle
The maximum cycle value permitted for the Cycle covariate
The cycle covariate will generate an error if it encounters a cycle greater than this value.
This argument is ignored if the Cycle covariate is not used.
int 500 [ [ -∞ ∞ ] ]
--min-assembly-region-size / NA
Minimum size of an assembly region
int 50 [ [ -∞ ∞ ] ]
--min-base-quality-score / -mbq
Minimum base quality required to consider a base for calling
Bases with a quality below this threshold will not be used for calling.
byte 10 [ [ -∞ ∞ ] ]
--min-dangling-branch-length / NA
Minimum length of a dangling branch to attempt recovery
When constructing the assembly graph we are often left with "dangling" branches. The assembly engine attempts to rescue these branches
by merging them back into the main graph. This argument describes the minimum length of a dangling branch needed for the engine to
try to rescue it. A smaller number here will lead to higher sensitivity to real variation but also to a higher number of false positives.
int 4 [ [ -∞ ∞ ] ]
--min-pruning / NA
Minimum support to not prune paths in the graph
Paths with fewer supporting kmers than the specified threshold will be pruned from the graph.
Be aware that this argument can dramatically affect the results of variant calling and should only be used with great caution.
Using a prune factor of 1 (or below) will prevent any pruning from the graph, which is generally not ideal; it can make the
calling much slower and even less accurate (because it can prevent effective merging of "tails" in the graph). Higher values
tend to make the calling much faster, but also lowers the sensitivity of the results (because it ultimately requires higher
depth to produce calls).
int 2 [ [ -∞ ∞ ] ]
--mismatches-context-size / -mcs
Size of the k-mer context to be used for base mismatches
The context covariate will use a context of this size to calculate its covariate value for base mismatches. Must be between 1 and 13 (inclusive). Note that higher values will increase runtime and required java heap size.
int 2 [ [ -∞ ∞ ] ]
--mismatches-default-quality / NA
default quality for the base mismatches covariate
A default base quality to use as a prior (reported quality) in the mismatch covariate model. If set to a non-negative value, it will replace all base qualities in the read with this default value. A negative value turns it off. [default is off]
byte -1 [ [ -∞ ∞ ] ]
--native-pair-hmm-threads / NA
How many threads should a native pairHMM implementation use
int 4 [ [ -∞ ∞ ] ]
--native-pair-hmm-use-double-precision / NA
use double precision in the native pairHmm. This is slower but matches the java implementation better
boolean false
--num-pruning-samples / NA
Number of samples that must pass the minPruning threshold
If fewer samples than the specified number pass the minPruning threshold for a given path, that path will be eliminated from the graph.
int 1 [ [ -∞ ∞ ] ]
--num-reducers / NA
For tools that shuffle data or write an output, sets the number of reducers. Defaults to 0, which gives one partition per 10MB of input.
int 0 [ [ -∞ ∞ ] ]
--output / -O
the output vcf
R String null
--output-bam / NA
the output bam
String null
--output-mode / NA
Specifies which type of calls we should output
The --output-mode argument is an enumerated type (OutputMode), which can have one of the following values:
- EMIT_VARIANTS_ONLY
- produces calls only at variant sites
- EMIT_ALL_CONFIDENT_SITES
- produces calls at variant sites and confident reference sites
- EMIT_ALL_SITES
- produces calls at any callable site regardless of confidence; this argument is intended only for point mutations (SNPs) in DISCOVERY mode or generally when running in GENOTYPE_GIVEN_ALLELES mode; it will by no means produce a comprehensive set of indels in DISCOVERY mode
OutputMode EMIT_VARIANTS_ONLY
--pair-hmm-gap-continuation-penalty / NA
Flat gap continuation penalty for use in the Pair HMM
int 10 [ [ -∞ ∞ ] ]
--pcr-indel-model / NA
The PCR indel model to use
When calculating the likelihood of variants, we can try to correct for PCR errors that cause indel artifacts.
The correction is based on the reference context, and acts specifically around repetitive sequences that tend
to cause PCR errors. The variant likelihoods are penalized on an increasing scale as the context around a
putative indel becomes more repetitive (e.g. a long homopolymer). The correction can be disabled by specifying
'--pcr-indel-model NONE'; in that case the default base insertion/deletion qualities will be used (or taken from the
read if generated through the BaseRecalibrator). VERY IMPORTANT: when using PCR-free sequencing data we
definitely recommend setting this argument to NONE.
The --pcr-indel-model argument is an enumerated type (PCRErrorModel), which can have one of the following values:
- NONE
- no specialized PCR error model will be applied; if base insertion/deletion qualities are present they will be used
- HOSTILE
- the most aggressive model will be applied, sacrificing true positives in order to remove more false positives
- AGGRESSIVE
- a more aggressive model will be applied that sacrifices true positives in order to remove more false positives
- CONSERVATIVE
- a less aggressive model will be applied that tries to maintain a high true positive rate at the expense of allowing more false positives
PCRErrorModel CONSERVATIVE
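For PCR-free libraries the model can be switched off as recommended above; a sketch (file names are placeholders):
gatk ReadsPipelineSpark \
  -I pcr_free_input.bam \
  -R reference.fasta \
  --known-sites sites_of_variation.vcf \
  --pcr-indel-model NONE \
  -O output.vcf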
--phred-scaled-global-read-mismapping-rate / NA
The global assumed mismapping rate for reads
The phredScaledGlobalReadMismappingRate reflects the average global mismapping rate of all reads, regardless of their
mapping quality. This term affects the probability that a read originated from the reference haplotype, regardless of
its edit distance from the reference, in that the read could have originated from the reference haplotype but
from another location in the genome. Suppose a read has many mismatches from the reference, say 5, but
has a very high mapping quality of 60. Without this parameter, the read would contribute 5 * Q30 evidence
in favor of its 5 mismatch haplotype compared to reference, potentially enough to make a call off that single
read for all of these events. With this parameter set to Q30, though, the maximum evidence against any haplotype
that this (and any) read could contribute is Q30.
Set this term to any negative number to turn off the global mapping rate.
int 45 [ [ -∞ ∞ ] ]
--preserve-qscores-less-than / NA
Don't recalibrate bases with quality scores less than this threshold (with -bqsr)
This flag tells GATK not to modify quality scores less than this value. Instead they will be written out unmodified in the recalibrated BAM file.
In general it's unsafe to change quality scores below 6, since base callers use these values to indicate random or bad bases.
For example, Illumina writes Q2 bases when the machine has really gone wrong. This would be fine in and of itself,
but when you select a subset of these reads based on their ability to align to the reference and their dinucleotide effect,
your Q2 bin can be elevated to Q8 or Q10, leading to issues downstream.
int 6 [ [ -∞ ∞ ] ]
--program-name / NA
Name of the program running
String null
--quantize-quals / NA
Quantize quality scores to a given number of levels
Turns on the base quantization module. It requires a recalibration report.
A value of 0 here means "do not quantize".
Any value greater than zero will be used to recalculate the quantization using that many levels.
Negative values mean that we should quantize using the recalibration report's quantization level.
Exclusion: This argument cannot be used at the same time as static-quantized-quals, round-down-quantized.
int 0 [ [ -∞ ∞ ] ]
--quantizing-levels / NA
number of distinct quality scores in the quantized output
BQSR generates a quantization table for quick quantization later by subsequent tools. BQSR does not quantize the base qualities itself; this is done by the engine with the -qq or -bqsr options.
This parameter tells BQSR the number of levels of quantization to use to build the quantization table.
int 16 [ [ -∞ ∞ ] ]
--QUIET / NA
Whether to suppress job-summary info on System.err.
Boolean false
--read-filter / -RF
Read filters to be applied before analysis
List[String] []
--read-index / -read-index
Indices to use for the read inputs. If specified, an index must be provided for every read input and in the same order as the read inputs. If this argument is not specified, the path to the index for each input will be inferred automatically.
List[String] []
--read-shard-padding / NA
Each read shard has this many bases of extra context on each side. Read shards must have as much or more padding than assembly regions.
int 100 [ [ -∞ ∞ ] ]
--read-shard-size / NA
Maximum size of each read shard, in bases. For good performance, this should be much larger than the maximum assembly region size.
int 5000 [ [ -∞ ∞ ] ]
--read-validation-stringency / -VS
Validation stringency for all SAM/BAM/CRAM/SRA files read by this program. The default stringency value SILENT can improve performance when processing a BAM file in which variable-length data (read, qualities, tags) do not otherwise need to be decoded.
The --read-validation-stringency argument is an enumerated type (ValidationStringency), which can have one of the following values:
- STRICT
- LENIENT
- SILENT
ValidationStringency SILENT
--recover-dangling-heads / NA
This argument is deprecated since version 3.3
As of version 3.3, this argument is no longer needed because dangling end recovery is now the default behavior. See GATK 3.3 release notes for more details.
boolean false
--reference / -R
Reference sequence file
R String null
--round-down-quantized / NA
Round quals down to nearest quantized qual
Rounding down quantized quals only works with the --static-quantized-quals option, and should not be used with
the dynamic binning option provided by --quantize-quals. When roundDown = false, rounding is done in
probability space to the nearest bin. When roundDown = true, the value is rounded to the nearest bin
that is smaller than the current bin.
Exclusion: This argument cannot be used at the same time as quantize-quals.
boolean false
--sample-name / -ALIAS
Name of single sample to use from a multi-sample bam
You can use this argument to specify that HC should process a single sample out of a multisample BAM file. This
is especially useful if your samples are all in the same file but you need to run them individually through HC
in -ERC GVCF mode (which is the recommended usage). Note that the name is case-sensitive.
String null
--sample-ploidy / -ploidy
Ploidy (number of chromosomes) per sample. For pooled data, set to (Number of samples in each pool * Sample Ploidy).
Sample ploidy - equivalent to the number of chromosomes per pool. In pooled experiments this should be set to (number of samples in the pool) * (individual sample ploidy).
int 2 [ [ -∞ ∞ ] ]
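For example, a hypothetical pool of 10 diploid samples would be called with ploidy 10 * 2 = 20 (file names are placeholders):
gatk ReadsPipelineSpark \
  -I pooled_input.bam \
  -R reference.fasta \
  --known-sites sites_of_variation.vcf \
  -ploidy 20 \
  -O output.vcf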
--sharded-output / NA
For tools that write an output, write the output in multiple pieces (shards)
boolean false
--showHidden / -showHidden
display hidden arguments
boolean false
--single-end-alignment / -se
Run single-end instead of paired-end alignment
Run single-end instead of paired-end alignment.
boolean false
--smith-waterman / NA
Which Smith-Waterman implementation to use, generally FASTEST_AVAILABLE is the right choice
The --smith-waterman argument is an enumerated type (Implementation), which can have one of the following values:
- FASTEST_AVAILABLE
- use the fastest available Smith-Waterman aligner that runs on your hardware
- AVX_ENABLED
- use the AVX enabled Smith-Waterman aligner
- JAVA
- use the pure java implementation of Smith-Waterman, works on all hardware
Implementation JAVA
--spark-master / NA
URL of the Spark Master to submit jobs to when using the Spark pipeline runner.
String local[*]
--standard-min-confidence-threshold-for-calling / -stand-call-conf
The minimum phred-scaled confidence threshold at which variants should be called
The minimum phred-scaled confidence threshold at which variants should be called. Only variant sites with QUAL equal
or greater than this threshold will be called. Note that since version 3.7, we no longer differentiate high confidence
from low confidence calls at the calling step. The default call confidence threshold is set low intentionally to achieve
high sensitivity, which will allow false positive calls as a side effect. Be sure to perform some kind of filtering after
calling to reduce the amount of false positives in your final callset. Note that when HaplotypeCaller is used in GVCF mode
(using either -ERC GVCF or -ERC BP_RESOLUTION) the call threshold is automatically set to zero. Call confidence thresholding
will then be performed in the subsequent GenotypeGVCFs command.
double 10.0 [ [ -∞ ∞ ] ]
--static-quantized-quals / NA
Use static quantized quality scores to a given number of levels (with -bqsr)
Static quantized quals are entirely separate from the --quantize-quals option, which uses dynamic binning.
The two types of binning should not be used together.
Exclusion: This argument cannot be used at the same time as quantize-quals.
List[Integer] []
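A sketch of static binning into a few fixed levels, passed as repeated arguments (the bin values 10/20/30 are illustrative):
gatk ReadsPipelineSpark \
  -I input.bam \
  -R reference.fasta \
  --known-sites sites_of_variation.vcf \
  --static-quantized-quals 10 \
  --static-quantized-quals 20 \
  --static-quantized-quals 30 \
  -O output.vcf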
--TMP_DIR / NA
Undocumented option
List[File] []
--use-alleles-trigger / NA
Use additional trigger on variants found in an external alleles file
boolean false
--use-filtered-reads-for-annotations / NA
Use the contamination-filtered read maps for the purposes of annotating variants
boolean false
--use-jdk-deflater / -jdk-deflater
Whether to use the JdkDeflater (as opposed to IntelDeflater)
boolean false
--use-jdk-inflater / -jdk-inflater
Whether to use the JdkInflater (as opposed to IntelInflater)
boolean false
--use-new-qual-calculator / -new-qual
If provided, we will use the new AF model instead of the so-called exact model
Use the new allele frequency / QUAL score model
boolean false
--use-original-qualities / -OQ
Use the base quality scores from the OQ tag
This flag tells GATK to use the original base qualities (that were in the data before BQSR/recalibration) which
are stored in the OQ tag, if they are present, rather than use the post-recalibration quality scores. If no OQ
tag is present for a read, the standard qual score will be used.
Boolean false
--verbosity / -verbosity
Control verbosity of logging.
The --verbosity argument is an enumerated type (LogLevel), which can have one of the following values:
- ERROR
- WARNING
- INFO
- DEBUG
LogLevel INFO
--version / NA
display the version number for this tool
boolean false
GATK version 4.0.0.0.