Identifies duplicate reads.
This tool locates and tags duplicate reads in a BAM or SAM file, where duplicate reads are defined as originating from a single fragment of DNA. Duplicates can arise during sample preparation, e.g. library construction using PCR. See also EstimateLibraryComplexity for additional notes on PCR duplication artifacts. Duplicate reads can also result from a single amplification cluster being incorrectly detected as multiple clusters by the optical sensor of the sequencing instrument. These duplication artifacts are referred to as optical duplicates.
The MarkDuplicates tool works by comparing sequences at the 5' positions of both reads and read pairs in a SAM/BAM file. A BARCODE_TAG option is available to facilitate duplicate marking using molecular barcodes. After duplicate reads are collected, the tool differentiates the primary and duplicate reads using an algorithm that ranks reads by the sums of their base-quality scores (default method).
The tool's main output is a new SAM or BAM file, in which duplicates have been identified in the SAM flags field for each read. Duplicates are marked with the hexadecimal value of 0x0400, which corresponds to a decimal value of 1024. If you are not familiar with this type of annotation, please see the following blog post for additional information.
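If you want to verify or make use of this flag downstream, a quick check is possible with samtools (assumed to be installed; it is not part of Picard). For example:

```
# Count reads carrying the duplicate flag (decimal 1024 / hex 0x400).
samtools view -c -f 1024 marked_duplicates.bam

# Stream reads with duplicates excluded, e.g. for downstream tools.
samtools view -h -F 1024 marked_duplicates.bam | head
```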
Although the bitwise flag annotation indicates whether a read was marked as a duplicate, it does not identify the type of duplicate. To do this, a new tag called the duplicate type (DT) tag was recently added as an optional output in the 'optional field' section of a SAM/BAM file. By invoking the TAGGING_POLICY option, you can instruct the program to mark all duplicates (All), only the optical duplicates (OpticalOnly), or no duplicates (DontTag). The records in the output SAM/BAM file will then have values for the 'DT' tag (depending on the invoked TAGGING_POLICY) of either library/PCR-generated duplicates (LB) or sequencing-platform artifact duplicates (SQ). This tool uses the READ_NAME_REGEX and OPTICAL_DUPLICATE_PIXEL_DISTANCE options as the primary methods to identify and differentiate duplicate types. Set READ_NAME_REGEX to null to skip optical duplicate detection, e.g. for RNA-seq or other data where duplicate sets are extremely large and estimating library complexity is not an aim. Note that without optical duplicate counts, library size estimation will be inaccurate.
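As a sketch of how these options combine (file names here are placeholders), a run that tags only optical duplicates in the DT field might look like:

```
java -jar picard.jar MarkDuplicates \
      I=input.bam \
      O=marked_duplicates.bam \
      M=marked_dup_metrics.txt \
      TAGGING_POLICY=OpticalOnly

# To skip optical-duplicate detection entirely (e.g. for RNA-seq), set:
#      READ_NAME_REGEX=null
```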
MarkDuplicates also produces a metrics file indicating the numbers of duplicates for both single- and paired-end reads.
The program can take either coordinate-sorted or query-sorted inputs; however, the behavior differs slightly. When the input is coordinate-sorted, unmapped mates of mapped records and supplementary/secondary alignments are not marked as duplicates. When the input is query-sorted (actually query-grouped), unmapped mates and secondary/supplementary reads are not excluded from the duplication test and can be marked as duplicate reads.
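If the query-grouped behavior is wanted, one approach (a sketch, using Picard's own SortSam so the sort order matches what MarkDuplicates expects) is:

```
java -jar picard.jar SortSam \
      I=input.bam \
      O=input.queryname.bam \
      SORT_ORDER=queryname

java -jar picard.jar MarkDuplicates \
      I=input.queryname.bam \
      O=marked_duplicates.bam \
      M=marked_dup_metrics.txt
```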
If desired, duplicates can be removed using the REMOVE_DUPLICATE and REMOVE_SEQUENCING_DUPLICATES options.
Usage example:
```
java -jar picard.jar MarkDuplicates \
      I=input.bam \
      O=marked_duplicates.bam \
      M=marked_dup_metrics.txt
```

Please see the MarkDuplicates metrics documentation for detailed explanations of the output metrics.
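If duplicates should be dropped from the output rather than only flagged, the same command can be run with the removal options mentioned above (a sketch; output names are placeholders):

```
java -jar picard.jar MarkDuplicates \
      I=input.bam \
      O=deduplicated.bam \
      M=dedup_metrics.txt \
      REMOVE_DUPLICATES=true

# Or remove only sequencing/optical duplicates while keeping PCR duplicates flagged:
#      REMOVE_SEQUENCING_DUPLICATES=true
```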
Category: Read Data Manipulation
Overview
A better duplication marking algorithm that handles all cases including clipped and gapped alignments.

MarkDuplicates (Picard) specific arguments
This table summarizes the command-line arguments that are specific to this tool. For more details on each argument, see the list further down below the table or click on an argument name to jump directly to that entry in the list.
Argument name(s) | Default value | Summary
---|---|---
**Required Arguments** | |
--INPUT, -I | [] | One or more input SAM or BAM files to analyze. Must be coordinate sorted.
--METRICS_FILE, -M | null | File to write duplication metrics to
--OUTPUT, -O | null | The output file to write marked records to
**Optional Tool Arguments** | |
--arguments_file | [] | read one or more arguments files and add them to the command line
--ASSUME_SORT_ORDER, -ASO | null | If not null, assume that the input file has this order even if the header says otherwise.
--BARCODE_TAG | null | Barcode SAM tag (ex. BC for 10X Genomics)
--CLEAR_DT | true | Clear DT tag from input SAM records. Should be set to false if input SAM doesn't have this tag. Default true
--COMMENT, -CO | [] | Comment(s) to include in the output file's header.
--DUPLICATE_SCORING_STRATEGY, -DS | SUM_OF_BASE_QUALITIES | The scoring strategy for choosing the non-duplicate among candidates.
--help, -h | false | display the help message
--MAX_FILE_HANDLES_FOR_READ_ENDS_MAP, -MAX_FILE_HANDLES | 8000 | Maximum number of file handles to keep open when spilling read ends to disk. Set this number a little lower than the per-process maximum number of files that may be open. This number can be found by executing the 'ulimit -n' command on a Unix system.
--MAX_OPTICAL_DUPLICATE_SET_SIZE | 300000 | This number is the maximum size of a set of duplicate reads for which we will attempt to determine which are optical duplicates. Please be aware that if you raise this value too high and do encounter a very large set of duplicate reads, it will severely affect the runtime of this tool. To completely disable this check, set the value to -1.
--MAX_SEQUENCES_FOR_DISK_READ_ENDS_MAP, -MAX_SEQS | 50000 | This option is obsolete. ReadEnds will always be spilled to disk.
--OPTICAL_DUPLICATE_PIXEL_DISTANCE | 100 | The maximum offset between two duplicate clusters in order to consider them optical duplicates. The default is appropriate for unpatterned versions of the Illumina platform. For the patterned flowcell models, 2500 is more appropriate. For other platforms and models, users should experiment to find what works best.
--PROGRAM_GROUP_COMMAND_LINE, -PG_COMMAND | null | Value of CL tag of PG record to be created. If not supplied the command line will be detected automatically.
--PROGRAM_GROUP_NAME, -PG_NAME | MarkDuplicates | Value of PN tag of PG record to be created.
--PROGRAM_GROUP_VERSION, -PG_VERSION | null | Value of VN tag of PG record to be created. If not specified, the version will be detected automatically.
--PROGRAM_RECORD_ID, -PG | MarkDuplicates | The program record ID for the @PG record(s) created by this program. Set to null to disable PG record creation. This string may have a suffix appended to avoid collision with other program record IDs.
--READ_NAME_REGEX | | Regular expression that can be used to parse read names in the incoming SAM file. Read names are parsed to extract three variables: tile/region, x coordinate and y coordinate. These values are used to estimate the rate of optical duplication in order to give a more accurate estimated library size. Set this option to null to disable optical duplicate detection, e.g. for RNA-seq or other data where duplicate sets are extremely large and estimating library complexity is not an aim. Note that without optical duplicate counts, library size estimation will be inaccurate. The regular expression should contain three capture groups for the three variables, in order. It must match the entire read name. Note that if the default regex is specified, a regex match is not actually done, but instead the read name is split on the colon character. For 5 element names, the 3rd, 4th and 5th elements are assumed to be tile, x and y values. For 7 element names (CASAVA 1.8), the 5th, 6th, and 7th elements are assumed to be tile, x and y values.
--READ_ONE_BARCODE_TAG | null | Read one barcode SAM tag (ex. BX for 10X Genomics)
--READ_TWO_BARCODE_TAG | null | Read two barcode SAM tag (ex. BX for 10X Genomics)
--REMOVE_DUPLICATES | false | If true, do not write duplicates to the output file; otherwise, write them with the appropriate flags set.
--REMOVE_SEQUENCING_DUPLICATES | false | If true, remove 'optical' duplicates and other duplicates that appear to have arisen from the sequencing process instead of the library preparation process, even if REMOVE_DUPLICATES is false. If REMOVE_DUPLICATES is true, all duplicates are removed and this option is ignored.
--SORTING_COLLECTION_SIZE_RATIO | 0.25 | This number, plus the maximum RAM available to the JVM, determines the memory footprint used by some of the sorting collections. If you are running out of memory, try reducing this number.
--TAG_DUPLICATE_SET_MEMBERS | false | If a read appears in a duplicate set, add two tags. The first tag, DUPLICATE_SET_SIZE_TAG (DS), indicates the size of the duplicate set. The smallest possible DS value is 2, which occurs when two reads map to the same portion of the reference and only one of them is marked as a duplicate. The second tag, DUPLICATE_SET_INDEX_TAG (DI), represents a unique identifier for the duplicate set to which the record belongs. This identifier is the index-in-file of the representative read that was selected out of the duplicate set.
--TAGGING_POLICY | DontTag | Determines how duplicate types are recorded in the DT optional attribute.
--version | false | display the version number for this tool
**Optional Common Arguments** | |
--ADD_PG_TAG_TO_READS | true | Add PG tag to each read in a SAM or BAM
--COMPRESSION_LEVEL | 5 | Compression level for all compressed files created (e.g. BAM and VCF).
--CREATE_INDEX | false | Whether to create a BAM index when writing a coordinate-sorted BAM file.
--CREATE_MD5_FILE | false | Whether to create an MD5 digest for any BAM or FASTQ files created.
--GA4GH_CLIENT_SECRETS | client_secrets.json | Google Genomics API client_secrets.json file path.
--MAX_RECORDS_IN_RAM | 500000 | When writing files that need to be sorted, this will specify the number of records stored in RAM before spilling to disk. Increasing this number reduces the number of file handles needed to sort the file, and increases the amount of RAM needed.
--QUIET | false | Whether to suppress job-summary info on System.err.
--REFERENCE_SEQUENCE, -R | null | Reference sequence file.
--TMP_DIR | [] | One or more directories with space available to be used by this program for temporary storage of working files
--USE_JDK_DEFLATER, -use_jdk_deflater | false | Use the JDK Deflater instead of the Intel Deflater for writing compressed output
--USE_JDK_INFLATER, -use_jdk_inflater | false | Use the JDK Inflater instead of the Intel Inflater for reading compressed input
--VALIDATION_STRINGENCY | STRICT | Validation stringency for all SAM files read by this program. Setting stringency to SILENT can improve performance when processing a BAM file in which variable-length data (read, qualities, tags) do not otherwise need to be decoded.
--VERBOSITY | INFO | Control verbosity of logging.
**Advanced Arguments** | |
--showHidden | false | display hidden arguments
**Deprecated Arguments** | |
--ASSUME_SORTED, -AS | false | If true, assume that the input file is coordinate sorted even if the header says otherwise. Deprecated; use ASSUME_SORT_ORDER=coordinate instead.
Argument details
Arguments in this list are specific to this tool. Keep in mind that other arguments are available that are shared with other tools (e.g. command-line GATK arguments); see Inherited arguments above.
--ADD_PG_TAG_TO_READS / NA
Add PG tag to each read in a SAM or BAM
Boolean true
--arguments_file / NA
read one or more arguments files and add them to the command line
List[File] []
--ASSUME_SORT_ORDER / -ASO
If not null, assume that the input file has this order even if the header says otherwise.
Exclusion: This argument cannot be used at the same time as ASSUME_SORTED.
The --ASSUME_SORT_ORDER argument is an enumerated type (SortOrder), which can have one of the following values:
- unsorted
- queryname
- coordinate
- duplicate
- unknown
SortOrder null
--ASSUME_SORTED / -AS
If true, assume that the input file is coordinate sorted even if the header says otherwise. Deprecated; use ASSUME_SORT_ORDER=coordinate instead.
Exclusion: This argument cannot be used at the same time as ASSUME_SORT_ORDER, ASO.
boolean false
--BARCODE_TAG / NA
Barcode SAM tag (ex. BC for 10X Genomics)
String null
--CLEAR_DT / NA
Clear DT tag from input SAM records. Should be set to false if input SAM doesn't have this tag. Default true
boolean true
--COMMENT / -CO
Comment(s) to include in the output file's header.
List[String] []
--COMPRESSION_LEVEL / NA
Compression level for all compressed files created (e.g. BAM and VCF).
int 5 [ [ -∞ ∞ ] ]
--CREATE_INDEX / NA
Whether to create a BAM index when writing a coordinate-sorted BAM file.
Boolean false
--CREATE_MD5_FILE / NA
Whether to create an MD5 digest for any BAM or FASTQ files created.
boolean false
--DUPLICATE_SCORING_STRATEGY / -DS
The scoring strategy for choosing the non-duplicate among candidates.
The --DUPLICATE_SCORING_STRATEGY argument is an enumerated type (ScoringStrategy), which can have one of the following values:
- SUM_OF_BASE_QUALITIES
- TOTAL_MAPPED_REFERENCE_LENGTH
- RANDOM
ScoringStrategy SUM_OF_BASE_QUALITIES
--GA4GH_CLIENT_SECRETS / NA
Google Genomics API client_secrets.json file path.
String client_secrets.json
--help / -h
display the help message
boolean false
--INPUT / -I
One or more input SAM or BAM files to analyze. Must be coordinate sorted.
R List[String] []
--MAX_FILE_HANDLES_FOR_READ_ENDS_MAP / -MAX_FILE_HANDLES
Maximum number of file handles to keep open when spilling read ends to disk. Set this number a little lower than the per-process maximum number of files that may be open. This number can be found by executing the 'ulimit -n' command on a Unix system.
int 8000 [ [ -∞ ∞ ] ]
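For example (values illustrative), one might check the limit and set this option slightly below it:

```
# Per-process open-file limit on this machine
ulimit -n          # e.g. prints 4096

java -jar picard.jar MarkDuplicates \
      I=input.bam \
      O=marked_duplicates.bam \
      M=marked_dup_metrics.txt \
      MAX_FILE_HANDLES_FOR_READ_ENDS_MAP=3000
```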
--MAX_OPTICAL_DUPLICATE_SET_SIZE / NA
This number is the maximum size of a set of duplicate reads for which we will attempt to determine which are optical duplicates. Please be aware that if you raise this value too high and do encounter a very large set of duplicate reads, it will severely affect the runtime of this tool. To completely disable this check, set the value to -1.
long 300000 [ [ -∞ ∞ ] ]
--MAX_RECORDS_IN_RAM / NA
When writing files that need to be sorted, this will specify the number of records stored in RAM before spilling to disk. Increasing this number reduces the number of file handles needed to sort the file, and increases the amount of RAM needed.
Integer 500000 [ [ -∞ ∞ ] ]
--MAX_SEQUENCES_FOR_DISK_READ_ENDS_MAP / -MAX_SEQS
This option is obsolete. ReadEnds will always be spilled to disk.
If there are more than this many sequences in the SAM file, don't spill to disk because there will not be enough file handles.
int 50000 [ [ -∞ ∞ ] ]
--METRICS_FILE / -M
File to write duplication metrics to
R File null
--OPTICAL_DUPLICATE_PIXEL_DISTANCE / NA
The maximum offset between two duplicate clusters in order to consider them optical duplicates. The default is appropriate for unpatterned versions of the Illumina platform. For the patterned flowcell models, 2500 is more appropriate. For other platforms and models, users should experiment to find what works best.
int 100 [ [ -∞ ∞ ] ]
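For example, for data from a patterned flowcell instrument, a run might use (sketch only; the value 2500 comes from the guidance above):

```
java -jar picard.jar MarkDuplicates \
      I=input.bam \
      O=marked_duplicates.bam \
      M=marked_dup_metrics.txt \
      OPTICAL_DUPLICATE_PIXEL_DISTANCE=2500
```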
--OUTPUT / -O
The output file to write marked records to
R File null
--PROGRAM_GROUP_COMMAND_LINE / -PG_COMMAND
Value of CL tag of PG record to be created. If not supplied the command line will be detected automatically.
String null
--PROGRAM_GROUP_NAME / -PG_NAME
Value of PN tag of PG record to be created.
String MarkDuplicates
--PROGRAM_GROUP_VERSION / -PG_VERSION
Value of VN tag of PG record to be created. If not specified, the version will be detected automatically.
String null
--PROGRAM_RECORD_ID / -PG
The program record ID for the @PG record(s) created by this program. Set to null to disable PG record creation. This string may have a suffix appended to avoid collision with other program record IDs.
String MarkDuplicates
--QUIET / NA
Whether to suppress job-summary info on System.err.
Boolean false
--READ_NAME_REGEX / NA
Regular expression that can be used to parse read names in the incoming SAM file. Read names are parsed to extract three variables: tile/region, x coordinate and y coordinate. These values are used to estimate the rate of optical duplication in order to give a more accurate estimated library size. Set this option to null to disable optical duplicate detection, e.g. for RNA-seq or other data where duplicate sets are extremely large and estimating library complexity is not an aim. Note that without optical duplicate counts, library size estimation will be inaccurate. The regular expression should contain three capture groups for the three variables, in order. It must match the entire read name. Note that if the default regex is specified, a regex match is not actually done, but instead the read name is split on the colon character. For 5 element names, the 3rd, 4th and 5th elements are assumed to be tile, x and y values. For 7 element names (CASAVA 1.8), the 5th, 6th, and 7th elements are assumed to be tile, x and y values.
String
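As an illustration only (this is not the built-in default regex), a custom pattern for 7-field read names such as M00123:45:000000000-ABCDE:1:1101:15932:1339, with three capture groups for tile, x and y, could look like:

```
java -jar picard.jar MarkDuplicates \
      I=input.bam \
      O=marked_duplicates.bam \
      M=marked_dup_metrics.txt \
      READ_NAME_REGEX='(?:[^:]+:){4}([0-9]+):([0-9]+):([0-9]+)'

# Or disable optical-duplicate detection entirely:
#      READ_NAME_REGEX=null
```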
--READ_ONE_BARCODE_TAG / NA
Read one barcode SAM tag (ex. BX for 10X Genomics)
String null
--READ_TWO_BARCODE_TAG / NA
Read two barcode SAM tag (ex. BX for 10X Genomics)
String null
--REFERENCE_SEQUENCE / -R
Reference sequence file.
File null
--REMOVE_DUPLICATES / NA
If true, do not write duplicates to the output file; otherwise, write them with the appropriate flags set.
boolean false
--REMOVE_SEQUENCING_DUPLICATES / NA
If true, remove 'optical' duplicates and other duplicates that appear to have arisen from the sequencing process instead of the library preparation process, even if REMOVE_DUPLICATES is false. If REMOVE_DUPLICATES is true, all duplicates are removed and this option is ignored.
boolean false
--showHidden / -showHidden
display hidden arguments
boolean false
--SORTING_COLLECTION_SIZE_RATIO / NA
This number, plus the maximum RAM available to the JVM, determines the memory footprint used by some of the sorting collections. If you are running out of memory, try reducing this number.
double 0.25 [ [ -∞ ∞ ] ]
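For example, if the tool runs out of memory, one possible adjustment (values illustrative) is to raise the JVM heap and/or lower this ratio:

```
java -Xmx16g -jar picard.jar MarkDuplicates \
      I=input.bam \
      O=marked_duplicates.bam \
      M=marked_dup_metrics.txt \
      SORTING_COLLECTION_SIZE_RATIO=0.15
```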
--TAG_DUPLICATE_SET_MEMBERS / NA
If a read appears in a duplicate set, add two tags. The first tag, DUPLICATE_SET_SIZE_TAG (DS), indicates the size of the duplicate set. The smallest possible DS value is 2, which occurs when two reads map to the same portion of the reference and only one of them is marked as a duplicate. The second tag, DUPLICATE_SET_INDEX_TAG (DI), represents a unique identifier for the duplicate set to which the record belongs. This identifier is the index-in-file of the representative read that was selected out of the duplicate set.
boolean false
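For example, to add the DS/DI tags and spot-check them (assumes samtools is available; file names are placeholders):

```
java -jar picard.jar MarkDuplicates \
      I=input.bam \
      O=marked_duplicates.bam \
      M=marked_dup_metrics.txt \
      TAG_DUPLICATE_SET_MEMBERS=true

# Show a few records that carry the duplicate-set index tag
samtools view marked_duplicates.bam | grep -m 5 'DI:i:'
```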
--TAGGING_POLICY / NA
Determines how duplicate types are recorded in the DT optional attribute.
The --TAGGING_POLICY argument is an enumerated type (DuplicateTaggingPolicy), which can have one of the following values:
- DontTag
- OpticalOnly
- All
DuplicateTaggingPolicy DontTag
--TMP_DIR / NA
One or more directories with space available to be used by this program for temporary storage of working files
List[File] []
--USE_JDK_DEFLATER / -use_jdk_deflater
Use the JDK Deflater instead of the Intel Deflater for writing compressed output
Boolean false
--USE_JDK_INFLATER / -use_jdk_inflater
Use the JDK Inflater instead of the Intel Inflater for reading compressed input
Boolean false
--VALIDATION_STRINGENCY / NA
Validation stringency for all SAM files read by this program. Setting stringency to SILENT can improve performance when processing a BAM file in which variable-length data (read, qualities, tags) do not otherwise need to be decoded.
The --VALIDATION_STRINGENCY argument is an enumerated type (ValidationStringency), which can have one of the following values:
- STRICT
- LENIENT
- SILENT
ValidationStringency STRICT
--VERBOSITY / NA
Control verbosity of logging.
The --VERBOSITY argument is an enumerated type (LogLevel), which can have one of the following values:
- ERROR
- WARNING
- INFO
- DEBUG
LogLevel INFO
--version / NA
display the version number for this tool
boolean false
GATK version 4.0.1.1 built at 02-49-2019 01:49:44.
4 comments
What should the "--ASSUME_SORTED" option be if the BAM file is sorted by query name?
Are you sure you've set the default COMPRESSION_LEVEL to 5?
Cannot find output files after applying MarkDuplicates with Picard tools
I have some sorted BAM files and I wanted to mark the duplicate reads using MarkDuplicates from Picard.
All files are present in a directory named `AlignmentOfTrimmed_Sam_Files`; the full path for these files is defined below, and this is my current working directory.
After running this command several times (it takes about an hour each run, with minor changes each time), I was never able to find the output files.
Any suggestions to help?
Thanks in advance.
```
### Jupyter notebook cell; picard_path is assumed to be defined earlier in the notebook
import os

### Path of the directory where sorted bam files are located:
samfiles_dir = '/media/phmagdy/TOSHIBA_EXT/PhD_Data_Analysis/group3/AlignmentOfTrimmed_Sam_Files/'

### Loop over sorted bam files and mark duplicates using picard tools
for file in os.listdir(samfiles_dir):
    if file.endswith('sorted.bam'):
        inputfile = os.path.join(samfiles_dir, file)
        fileBasename = '_'.join(os.path.basename(file).rsplit('_', 4)[0:3])
        !java -Xmx20g -jar {picard_path}/picard.jar MarkDuplicates --INPUT {inputfile} \
            --OUTPUT {fileBasename}.markdup.bam \
            --METRICS_FILE {fileBasename}.metrics.txt
```
Here is a part of the output:
```
MarkDuplicates starts at 2022-09-18 16:07:52.296874
16:07:53.413 INFO NativeLibraryLoader - Loading libgkl_compression.so from jar:file:/home/phmagdy/miniconda3/envs/Jhm/share/picard-2.27.4-0/picard.jar!/com/intel/gkl/native/libgkl_compression.so
[Sun Sep 18 16:07:53 EET 2022] MarkDuplicates --INPUT /media/phmagdy/TOSHIBA_EXT/PhD_Data_Analysis/group3/AlignmentOfTrimmed_Sam_Files/S000021_S5424Nr_7_sorted.bam --OUTPUT S000021_S5424Nr_7.markdup.bam --METRICS_FILE S000021_S5424Nr_7.metrics.txt --MAX_SEQUENCES_FOR_DISK_READ_ENDS_MAP 50000 --MAX_FILE_HANDLES_FOR_READ_ENDS_MAP 8000 --SORTING_COLLECTION_SIZE_RATIO 0.25 --TAG_DUPLICATE_SET_MEMBERS false --REMOVE_SEQUENCING_DUPLICATES false --TAGGING_POLICY DontTag --CLEAR_DT true --DUPLEX_UMI false --FLOW_MODE false --FLOW_QUALITY_SUM_STRATEGY false --USE_END_IN_UNPAIRED_READS false --USE_UNPAIRED_CLIPPED_END false --UNPAIRED_END_UNCERTAINTY 0 --FLOW_SKIP_FIRST_N_FLOWS 0 --FLOW_Q_IS_KNOWN_END false --FLOW_EFFECTIVE_QUALITY_THRESHOLD 15 --ADD_PG_TAG_TO_READS true --REMOVE_DUPLICATES false --ASSUME_SORTED false --DUPLICATE_SCORING_STRATEGY SUM_OF_BASE_QUALITIES --PROGRAM_RECORD_ID MarkDuplicates --PROGRAM_GROUP_NAME MarkDuplicates --READ_NAME_REGEX <optimized capture of last three ':' separated fields as numeric values> --OPTICAL_DUPLICATE_PIXEL_DISTANCE 100 --MAX_OPTICAL_DUPLICATE_SET_SIZE 300000 --VERBOSITY INFO --QUIET false --VALIDATION_STRINGENCY STRICT --COMPRESSION_LEVEL 5 --MAX_RECORDS_IN_RAM 500000 --CREATE_INDEX false --CREATE_MD5_FILE false --GA4GH_CLIENT_SECRETS client_secrets.json --help false --version false --showHidden false --USE_JDK_DEFLATER false --USE_JDK_INFLATER false
[Sun Sep 18 16:07:53 EET 2022] Executing as phmagdy@ubuntu on Linux 5.15.0-46-generic amd64; OpenJDK 64-Bit Server VM 1.8.0_112-b16; Deflater: Intel; Inflater: Intel; Provider GCS is not available; Picard version: Version:2.27.4-SNAPSHOT
INFO 2022-09-18 16:07:53 MarkDuplicates Start of doWork freeMemory: 208248760; totalMemory: 221249536; maxMemory: 19088801792
INFO 2022-09-18 16:07:53 MarkDuplicates Reading input file and constructing read end information.
INFO 2022-09-18 16:07:53 MarkDuplicates Will retain up to 69162325 data points before spilling to disk.
INFO 2022-09-18 16:08:00 MarkDuplicates Read 1,000,000 records. Elapsed time: 00:00:06s. Time for last 1,000,000: 6s. Last read position: chr1:16,264,133
INFO 2022-09-18 16:08:00 MarkDuplicates Tracking 3899 as yet unmatched pairs. 422 records in RAM.
INFO 2022-09-18 16:08:05 MarkDuplicates Read 2,000,000 records. Elapsed time: 00:00:11s. Time for last
```
N.B. there was no error at the end of the execution after almost one hour ... and here are the last few lines:
```
INFO 2022-09-18 14:58:24 MarkDuplicates Read 41,000,000 records. Elapsed time: 00:03:19s. Time for last 1,000,000: 3s. Last read position: chr8:107,782,217
INFO 2022-09-18 14:58:24 MarkDuplicates Tracking 114840 as yet unmatched pairs. 2544 records in RAM.
INFO 2022-09-18 14:59:01 MarkDuplicates Read 42,000,000 records. Elapsed time: 00:03:57s. Time for last 1,000,000: 37s. Last read position: chr9:2,718,932
INFO 2022-09-18 14:59:01 MarkDuplicates Tracking 114824 as yet unmatched pairs. 9314 records in RAM.
INFO 2022-09-18 14:59:57 MarkDuplicates Read 43,000,000 records. Elapsed time: 00:04:52s. Time for last 1,000,000: 55s. Last read position: chr9:66,499,605
INFO 2022-09-18 14:59:57 MarkDuplicates Tracking 114507 as yet unmatched pairs. 6658 records in RAM.
INFO 2022-09-18 15:00:02 MarkDuplicates Read 44,000,000 records. Elapsed time: 00:04:57s. Time for last 1,000,000: 4s. Last read position: chr9:107,578,518
INFO 2022-09-18 15:00:02 MarkDuplicates Tracking 113906 as yet unmatched pairs. 3393 records in RAM.
Time elapsed = 0:57:49.228557
```
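One possible explanation (a guess based only on the command shown): --OUTPUT and --METRICS_FILE are given as bare basenames, so the files would be written to whatever directory the notebook was started from, not to AlignmentOfTrimmed_Sam_Files. Something like this could confirm it:

```
# Look for the outputs in the notebook's current working directory
ls -lh ./*.markdup.bam ./*.metrics.txt
```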
Hi team,
Do you have any suggestions about --OPTICAL_DUPLICATE_PIXEL_DISTANCE when analyzing NovaSeq X and NovaSeq 6000 data?
Thank you