Determine if two potentially identical BAMs have the same duplicate reads
Category: Diagnostics and Quality Control
Overview
Determine if two potentially identical BAMs have the same duplicate reads. This tool is useful for checking whether two BAMs that seem identical have the same reads marked as duplicates. It fails if at least one of the following conditions is true of the two BAM files:
- Different number of primary mapped reads
- Different number of duplicate reads (as indicated by the SAM record flag)
- Different reads mapped to some position in the reference
The tool gathers the mapped reads together into groups that belong to the same library and map to the same position and strand in the reference. If the tool does not fail, then it reports the number of these groups with the following properties:
- SIZE_UNEQUAL: different number of reads
- EQUAL: same reads and same duplicates
- READ_MISMATCH: reads with different names
- DIFFERENT_REPRESENTATIVE_READ: same reads and the same number of duplicates, but one or more of the reads marked as duplicates differ
- DIFF_NUM_DUPES: same reads but different number of duplicates
Input
- Two BAM files
Output
Results are printed to the screen
Usage example
gatk CompareDuplicatesSpark \
    -I input_reads_1.bam \
    -I2 input_reads_2.bam
This tool can be run without explicitly specifying Spark options; in that case, the example command above runs locally. See Tutorial#10060 for an example of how to set up and run a Spark tool on a cloud Spark cluster.
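When running locally, the documented --spark-master and --conf arguments can be used to control how much parallelism the tool uses and which Spark properties it sets. The sketch below is only illustrative: it assumes four local cores are available, and the spark.local.dir path is a placeholder for whatever scratch directory you prefer.

gatk CompareDuplicatesSpark \
    -I input_reads_1.bam \
    -I2 input_reads_2.bam \
    --spark-master local[4] \
    --conf spark.local.dir=/path/to/spark_scratch

On a cluster, --spark-master would instead point at the cluster's Spark Master URL, as covered in the tutorial referenced above.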
CompareDuplicatesSpark specific arguments
This table summarizes the command-line arguments that are specific to this tool. For more details on each argument, see the list below the table, or click on an argument name to jump directly to that entry in the list.
Argument name(s) | Default value | Summary
---|---|---
Required Arguments | |
--input, -I | [] | BAM/SAM/CRAM file containing reads
--input2, -I2 | null | The second BAM
Optional Tool Arguments | |
--arguments_file | [] | read one or more arguments files and add them to the command line
--bam-partition-size | 0 | maximum number of bytes to read from a file into each partition of reads. Setting this higher will result in fewer partitions. Note that this will not be equal to the size of the partition in memory. Defaults to 0, which uses the default split size (determined by the Hadoop input format, typically the size of one HDFS block).
--conf | [] | spark properties to set on the spark context in the format <property>=<value>
--disable-sequence-dictionary-validation | false | If specified, do not check the sequence dictionaries from our inputs for compatibility. Use at your own risk!
--gcs-max-retries, -gcs-retries | 20 | If the GCS bucket channel errors out, how many times it will attempt to re-initiate the connection
--gcs-project-for-requester-pays | "" | Project to bill when accessing "requester pays" buckets. If unset, these buckets cannot be accessed.
--help, -h | false | display the help message
--interval-merging-rule, -imr | ALL | Interval merging rule for abutting intervals
--intervals, -L | [] | One or more genomic intervals over which to operate
--num-reducers | 0 | For tools that shuffle data or write an output, sets the number of reducers. Defaults to 0, which gives one partition per 10MB of input.
--output, -O | null | If an output is given, the tool will write a BAM containing all the mismatching duplicate groups from the first specified input file
--output-shard-tmp-dir | null | When writing a BAM in single-shard mode, the directory in which to write the temporary intermediate output shards; if not specified, .parts/ will be used
--output2, -O2 | null | If an output is given, the tool will write a BAM containing all the mismatching duplicate groups from the second specified input file
--print-summary | true | Print a summary
--program-name | null | Name of the program running
--reference, -R | null | Reference sequence
--sharded-output | false | For tools that write an output, write the output in multiple pieces (shards)
--spark-master | local[*] | URL of the Spark Master to submit jobs to when using the Spark pipeline runner.
--throw-on-diff | false | Throw an error if any differences are found
--version | false | display the version number for this tool
Optional Common Arguments | |
--add-output-vcf-command-line | true | If true, adds a command line header line to created VCF files.
--disable-read-filter, -DF | [] | Read filters to be disabled before analysis
--disable-tool-default-read-filters | false | Disable all tool default read filters (WARNING: many tools will not function correctly without their default read filters on)
--exclude-intervals, -XL | [] | One or more genomic intervals to exclude from processing
--gatk-config-file | null | A configuration file to use with the GATK.
--interval-exclusion-padding, -ixp | 0 | Amount of padding (in bp) to add to each interval you are excluding.
--interval-padding, -ip | 0 | Amount of padding (in bp) to add to each interval you are including.
--interval-set-rule, -isr | UNION | Set merging approach to use for combining interval inputs
--QUIET | false | Whether to suppress job-summary info on System.err.
--read-filter, -RF | [] | Read filters to be applied before analysis
--read-index | [] | Indices to use for the read inputs. If specified, an index must be provided for every read input and in the same order as the read inputs. If this argument is not specified, the path to the index for each input will be inferred automatically.
--read-validation-stringency, -VS | SILENT | Validation stringency for all SAM/BAM/CRAM/SRA files read by this program. The default stringency value SILENT can improve performance when processing a BAM file in which variable-length data (read, qualities, tags) do not otherwise need to be decoded.
--tmp-dir | null | Temp directory to use.
--use-jdk-deflater, -jdk-deflater | false | Whether to use the JdkDeflater (as opposed to IntelDeflater)
--use-jdk-inflater, -jdk-inflater | false | Whether to use the JdkInflater (as opposed to IntelInflater)
--verbosity | INFO | Control verbosity of logging.
Advanced Arguments | |
--showHidden | false | display hidden arguments
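As a rough illustration of how several of the arguments above combine, the sketch below writes the mismatching duplicate groups from each input to its own BAM and raises an error if any difference is found; all file names are placeholders.

gatk CompareDuplicatesSpark \
    -I pipeline_run_a.bam \
    -I2 pipeline_run_b.bam \
    -O mismatching_groups_a.bam \
    -O2 mismatching_groups_b.bam \
    --throw-on-diff true

Because --print-summary defaults to true, the per-category group counts are still reported alongside the optional BAM outputs.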
Argument details
Arguments in this list are specific to this tool. Keep in mind that other arguments are available that are shared with other tools (e.g. command-line GATK arguments); see Inherited arguments above.
--add-output-vcf-command-line / -add-output-vcf-command-line
If true, adds a command line header line to created VCF files.
boolean true
--arguments_file / NA
read one or more arguments files and add them to the command line
List[File] []
--bam-partition-size / NA
maximum number of bytes to read from a file into each partition of reads. Setting this higher will result in fewer partitions. Note that this will not be equal to the size of the partition in memory. Defaults to 0, which uses the default split size (determined by the Hadoop input format, typically the size of one HDFS block).
long 0 [ -∞, ∞ ]
--conf / -conf
spark properties to set on the spark context in the format <property>=<value>
List[String] []
--disable-read-filter / -DF
Read filters to be disabled before analysis
List[String] []
--disable-sequence-dictionary-validation / -disable-sequence-dictionary-validation
If specified, do not check the sequence dictionaries from our inputs for compatibility. Use at your own risk!
boolean false
--disable-tool-default-read-filters / -disable-tool-default-read-filters
Disable all tool default read filters (WARNING: many tools will not function correctly without their default read filters on)
boolean false
--exclude-intervals / -XL
One or more genomic intervals to exclude from processing
Use this argument to exclude certain parts of the genome from the analysis (like -L, but the opposite).
This argument can be specified multiple times. You can use samtools-style intervals either explicitly on the
command line (e.g. -XL 1 or -XL 1:100-200) or by loading in a file containing a list of intervals
(e.g. -XL myFile.intervals).
List[String] []
--gatk-config-file / NA
A configuration file to use with the GATK.
String null
--gcs-max-retries / -gcs-retries
If the GCS bucket channel errors out, how many times it will attempt to re-initiate the connection
int 20 [ -∞, ∞ ]
--gcs-project-for-requester-pays / NA
Project to bill when accessing "requester pays" buckets. If unset, these buckets cannot be accessed.
String ""
--help / -h
display the help message
boolean false
--input / -I
BAM/SAM/CRAM file containing reads
R List[String] []
--input2 / -I2
The second BAM
R String null
--interval-exclusion-padding / -ixp
Amount of padding (in bp) to add to each interval you are excluding.
Use this to add padding to the intervals specified using -XL. For example, '-XL 1:100' with a
padding value of 20 would turn into '-XL 1:80-120'. This is typically used to add padding around targets when
analyzing exomes.
int 0 [ -∞, ∞ ]
--interval-merging-rule / -imr
Interval merging rule for abutting intervals
By default, the program merges abutting intervals (i.e. intervals that are directly side-by-side but do not
actually overlap) into a single continuous interval. However you can change this behavior if you want them to be
treated as separate intervals instead.
The --interval-merging-rule argument is an enumerated type (IntervalMergingRule), which can have one of the following values:
- ALL
- OVERLAPPING_ONLY
IntervalMergingRule ALL
--interval-padding / -ip
Amount of padding (in bp) to add to each interval you are including.
Use this to add padding to the intervals specified using -L. For example, '-L 1:100' with a
padding value of 20 would turn into '-L 1:80-120'. This is typically used to add padding around targets when
analyzing exomes.
int 0 [ -∞, ∞ ]
--interval-set-rule / -isr
Set merging approach to use for combining interval inputs
By default, the program will take the UNION of all intervals specified using -L and/or -XL. However, you can
change this setting for -L, for example if you want to take the INTERSECTION of the sets instead. E.g. to
perform the analysis only on chromosome 1 exomes, you could specify -L exomes.intervals -L 1 --interval-set-rule
INTERSECTION. However, it is not possible to modify the merging approach for intervals passed using -XL (they will
always be merged using UNION).
Note that if you specify both -L and -XL, the -XL interval set will be subtracted from the -L interval set.
The --interval-set-rule argument is an enumerated type (IntervalSetRule), which can have one of the following values:
- UNION
- Take the union of all intervals
- INTERSECTION
- Take the intersection of intervals (the subset that overlaps all intervals specified)
IntervalSetRule UNION
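For example, restricting the comparison to a single contig while excluding one region of it might look like the following sketch; the interval values are illustrative only.

gatk CompareDuplicatesSpark \
    -I input_reads_1.bam \
    -I2 input_reads_2.bam \
    -L 20 \
    -XL 20:1000000-2000000

Here the default UNION set rule applies, and the -XL interval is subtracted from the -L interval as described above.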
--intervals / -L
One or more genomic intervals over which to operate
List[String] []
--num-reducers / NA
For tools that shuffle data or write an output, sets the number of reducers. Defaults to 0, which gives one partition per 10MB of input.
int 0 [ -∞, ∞ ]
--output / -O
If an output is given, the tool will write a BAM containing all the mismatching duplicate groups from the first specified input file
String null
--output-shard-tmp-dir / NA
When writing a BAM in single-shard mode, the directory in which to write the temporary intermediate output shards; if not specified, .parts/ will be used.
Exclusion: This argument cannot be used at the same time as --sharded-output.
String null
--output2 / -O2
If an output is given, the tool will write a BAM containing all the mismatching duplicate groups from the second specified input file
String null
--print-summary / NA
Print a summary
boolean true
--program-name / NA
Name of the program running
String null
--QUIET / NA
Whether to suppress job-summary info on System.err.
Boolean false
--read-filter / -RF
Read filters to be applied before analysis
List[String] []
--read-index / -read-index
Indices to use for the read inputs. If specified, an index must be provided for every read input and in the same order as the read inputs. If this argument is not specified, the path to the index for each input will be inferred automatically.
List[String] []
--read-validation-stringency / -VS
Validation stringency for all SAM/BAM/CRAM/SRA files read by this program. The default stringency value SILENT can improve performance when processing a BAM file in which variable-length data (read, qualities, tags) do not otherwise need to be decoded.
The --read-validation-stringency argument is an enumerated type (ValidationStringency), which can have one of the following values:
- STRICT
- LENIENT
- SILENT
ValidationStringency SILENT
--reference / -R
Reference sequence
String null
--sharded-output / NA
For tools that write an output, write the output in multiple pieces (shards)
Exclusion: This argument cannot be used at the same time as --output-shard-tmp-dir.
boolean false
--showHidden / -showHidden
display hidden arguments
boolean false
--spark-master / NA
URL of the Spark Master to submit jobs to when using the Spark pipeline runner.
String local[*]
--throw-on-diff / NA
Throw an error if any differences are found
boolean false
--tmp-dir / NA
Temp directory to use.
String null
--use-jdk-deflater / -jdk-deflater
Whether to use the JdkDeflater (as opposed to IntelDeflater)
boolean false
--use-jdk-inflater / -jdk-inflater
Whether to use the JdkInflater (as opposed to IntelInflater)
boolean false
--verbosity / -verbosity
Control verbosity of logging.
The --verbosity argument is an enumerated type (LogLevel), which can have one of the following values:
- ERROR
- WARNING
- INFO
- DEBUG
LogLevel INFO
--version / NA
display the version number for this tool
boolean false
GATK version 4.0.12.0.