Step 3: Classifies pathogen-aligned reads and generates abundance scores
Category Metagenomics
Overview
Classify reads and estimate abundances of each taxon in the reference. This is the third and final step of the PathSeq pipeline. See PathSeqPipelineSpark for an overview of the PathSeq pipeline.
This tool performs taxonomic classification of reads that have been aligned to a microbe reference. Using alignments that are sufficiently identical to the reference, it scores each taxon in the reference based on aligned reads. These scores can be used to detect and quantify microbe abundance.
Methods
Alignments with sufficient identity score (e.g. 90% of read length) are used to estimate read counts and the relative abundance of microorganisms present in the sample at each level of the taxonomic tree (e.g. strain, species, genus, family, etc.). If a read maps to more than one organism, only the best alignment and any others with identity scores within a margin of error of the best (e.g. 2%) are retained. For paired-end reads, alignments to organisms present in one read but not the other are discarded. Reads with a single valid alignment add a score of 1 to the corresponding species or strain. For reads with N hits, a score of 1/N is distributed to each organism. Scores are totaled for each taxon by summing the scores across all reads and the scores of any descendant taxa. A minimal sketch of this scheme follows below.
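To make the scoring arithmetic concrete, here is a minimal Python sketch of the scheme just described. The tiny parent map, taxon names, and hit lists are hypothetical illustrations; PathSeq's actual tree comes from the taxonomy file and this is not the tool's internal implementation.

```python
from collections import defaultdict

# Hypothetical child -> parent map for a tiny taxonomy (illustration only).
PARENT = {
    "E_coli_K12": "E_coli",
    "E_coli_O157": "E_coli",
    "E_coli": "Escherichia",
    "Escherichia": "root",
}

def score_reads(read_hits):
    """read_hits: one hit list per read, containing the lowest-level taxa
    the read aligned to after identity filtering. A read with N retained
    hits contributes 1/N to each hit taxon."""
    scores = defaultdict(float)
    for hits in read_hits:
        for taxon in hits:
            scores[taxon] += 1.0 / len(hits)
    # Total each taxon by propagating every leaf-level score up through
    # all of its ancestors (descendant scores sum into each taxon).
    totals = defaultdict(float)
    for taxon, s in scores.items():
        node = taxon
        while node is not None:
            totals[node] += s
            node = PARENT.get(node)
    return totals

# One read hits only K12 (score 1); another hits both strains (1/2 each).
print(score_reads([["E_coli_K12"], ["E_coli_K12", "E_coli_O157"]]))
# E_coli_K12: 1.5, E_coli_O157: 0.5, E_coli: 2.0, Escherichia: 2.0, root: 2.0
```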
Input
- Queryname-sorted BAM file containing only paired reads aligned to the microbe reference
- BAM file containing only unpaired reads aligned to the microbe reference
- *Taxonomy file generated using PathSeqBuildReferenceTaxonomy
*A standard microbe taxonomy file is available in the GATK Resource Bundle.
Output
Tab-delimited scores table with the following columns:
- NCBI taxonomic ID
- phylogenetic classification
- phylogenetic rank (order, family, genus, etc.)
- taxon name
- abundance score (described above)
- abundance score normalized as a percentage of total microbe-mapped reads
- total number of reads aligned to this taxon
- number of reads assigned unambiguously to the taxon (i.e. mapped only to the node and/or its children)
- total taxon reference sequence length
The tool may also optionally produce:
- BAM file of all reads annotated with the NCBI taxonomy IDs of mapped organisms
- Metrics file with the number of mapped and unmapped reads
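For downstream analysis, the tab-delimited scores table can be loaded with standard tooling. The sketch below assumes the columns appear in the order listed above and that the first line is a header; the dictionary keys and the "species" rank value are illustrative, so check your output's header line for the exact names, which may vary by GATK version.

```python
import csv

def read_scores(path):
    """Parse a PathSeqScoreSpark scores table, assuming the column order
    documented above (tax ID, classification, rank, name, score,
    normalized score, total reads, unambiguous reads, reference length)."""
    records = []
    with open(path) as f:
        reader = csv.reader(f, delimiter="\t")
        next(reader)  # skip the header line
        for row in reader:
            records.append({
                "tax_id": row[0],
                "taxonomy": row[1],
                "rank": row[2],
                "name": row[3],
                "score": float(row[4]),
                "score_normalized": float(row[5]),
                "reads": int(row[6]),
                "unambiguous": int(row[7]),
                "reference_length": int(row[8]),
            })
    return records

# e.g. report species-level taxa sorted by normalized abundance:
for rec in sorted(read_scores("scores.txt"),
                  key=lambda r: -r["score_normalized"]):
    if rec["rank"] == "species":
        print(rec["name"], rec["score_normalized"])
```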
Usage example
This tool can be run without explicitly specifying Spark options; the example command below, which includes no Spark options, will run locally. See Tutorial#10060 for an example of how to set up and run a Spark tool on a cloud Spark cluster.
```
gatk PathSeqScoreSpark \
  --paired-input input_reads_paired.bam \
  --unpaired-input input_reads_unpaired.bam \
  --taxonomy-file taxonomy.db \
  --scores-output scores.txt \
  --output output_reads.bam \
  --min-score-identity 0.90 \
  --identity-margin 0.02
```
PathSeqScoreSpark specific arguments
This table summarizes the command-line arguments that are specific to this tool. For more details on each argument, see the list further down below the table or click on an argument name to jump directly to that entry in the list.
| Argument name(s) | Default value | Summary |
|---|---|---|
| Required Arguments | | |
| --scores-output / -SO | null | URI for the taxonomic scores output |
| --taxonomy-file / -T | null | URI to the microbe reference taxonomy database built using PathSeqBuildReferenceTaxonomy |
| Optional Tool Arguments | | |
| --arguments_file | [] | read one or more arguments files and add them to the command line |
| --bam-partition-size | 0 | maximum number of bytes to read from a file into each partition of reads. Setting this higher will result in fewer partitions. Note that this will not be equal to the size of the partition in memory. Defaults to 0, which uses the default split size (determined by the Hadoop input format, typically the size of one HDFS block). |
| --conf | [] | spark properties to set on the spark context in the format <property>=<value> |
| --disable-sequence-dictionary-validation | false | If specified, do not check the sequence dictionaries from our inputs for compatibility. Use at your own risk! |
| --divide-by-genome-length | false | Divide abundance scores by each taxon's reference genome length (in millions) |
| --gcs-max-retries / -gcs-retries | 20 | If the GCS bucket channel errors out, how many times it will attempt to re-initiate the connection |
| --help / -h | false | display the help message |
| --identity-margin | 0.02 | Identity margin, as a fraction of the best hit (between 0 and 1) |
| --interval-merging-rule / -imr | ALL | Interval merging rule for abutting intervals |
| --intervals / -L | [] | One or more genomic intervals over which to operate |
| --min-score-identity | 0.9 | Alignment identity score threshold, as a fraction of the read length (between 0 and 1) |
| --not-normalized-by-kingdom | false | If true, normalized abundance scores are reported as a percentage of all microbe-mapped reads rather than within their kingdom |
| --num-reducers | 0 | For tools that shuffle data or write an output, sets the number of reducers. Defaults to 0, which gives one partition per 10MB of input. |
| --output / -O | null | Output BAM |
| --output-shard-tmp-dir | null | When writing a BAM in single-shard mode, the directory for temporary intermediate output shards; defaults to .parts/ if not specified |
| --paired-input / -PI | null | Input queryname-sorted BAM containing only paired reads |
| --program-name | null | Name of the program running |
| --reference / -R | null | Reference sequence |
| --score-metrics / -SM | null | Log counts of mapped and unmapped reads to this file |
| --score-warnings / -SW | null | Write accessions found in the reads header but not the taxonomy database to this file |
| --sharded-output | false | For tools that write an output, write the output in multiple pieces (shards) |
| --spark-master | local[*] | URL of the Spark Master to submit jobs to when using the Spark pipeline runner |
| --unpaired-input / -UI | null | Input BAM containing only unpaired reads |
| --version | false | display the version number for this tool |
| Optional Common Arguments | | |
| --disable-read-filter / -DF | [] | Read filters to be disabled before analysis |
| --disable-tool-default-read-filters | false | Disable all tool default read filters (WARNING: many tools will not function correctly without their default read filters on) |
| --exclude-intervals / -XL | [] | One or more genomic intervals to exclude from processing |
| --gatk-config-file | null | A configuration file to use with the GATK |
| --input / -I | [] | BAM/SAM/CRAM file containing reads |
| --interval-exclusion-padding / -ixp | 0 | Amount of padding (in bp) to add to each interval you are excluding |
| --interval-padding / -ip | 0 | Amount of padding (in bp) to add to each interval you are including |
| --interval-set-rule / -isr | UNION | Set merging approach to use for combining interval inputs |
| --QUIET | false | Whether to suppress job-summary info on System.err |
| --read-filter / -RF | [] | Read filters to be applied before analysis |
| --read-index | [] | Indices to use for the read inputs. If specified, an index must be provided for every read input and in the same order as the read inputs. If this argument is not specified, the path to the index for each input will be inferred automatically. |
| --read-validation-stringency / -VS | SILENT | Validation stringency for all SAM/BAM/CRAM/SRA files read by this program. The default stringency value SILENT can improve performance when processing a BAM file in which variable-length data (read, qualities, tags) do not otherwise need to be decoded. |
| --TMP_DIR | [] | Undocumented option |
| --use-jdk-deflater / -jdk-deflater | false | Whether to use the JdkDeflater (as opposed to IntelDeflater) |
| --use-jdk-inflater / -jdk-inflater | false | Whether to use the JdkInflater (as opposed to IntelInflater) |
| --verbosity | INFO | Control verbosity of logging |
| Advanced Arguments | | |
| --score-reads-per-partition-estimate | 200000 | Estimated reads per Spark partition for scoring |
| --showHidden | false | display hidden arguments |
Argument details
Arguments in this list are specific to this tool. Keep in mind that other arguments are available that are shared with other tools (e.g. command-line GATK arguments); see Inherited arguments above.
--arguments_file / NA
read one or more arguments files and add them to the command line
List[File] []
--bam-partition-size / NA
maximum number of bytes to read from a file into each partition of reads. Setting this higher will result in fewer partitions. Note that this will not be equal to the size of the partition in memory. Defaults to 0, which uses the default split size (determined by the Hadoop input format, typically the size of one HDFS block).
long 0 [ [ -∞ ∞ ] ]
--conf / -conf
spark properties to set on the spark context in the format <property>=<value> (e.g. --conf spark.executor.cores=4)
List[String] []
--disable-read-filter / -DF
Read filters to be disabled before analysis
List[String] []
--disable-sequence-dictionary-validation / -disable-sequence-dictionary-validation
If specified, do not check the sequence dictionaries from our inputs for compatibility. Use at your own risk!
boolean false
--disable-tool-default-read-filters / -disable-tool-default-read-filters
Disable all tool default read filters (WARNING: many tools will not function correctly without their default read filters on)
boolean false
--divide-by-genome-length / -divide-by-genome-length
Divide abundance scores by each taxon's reference genome length (in millions)
If true, the score contributed by each read is divided by the mapped organism's genome length in the reference.
boolean false
--exclude-intervals / -XL
One or more genomic intervals to exclude from processing
Use this argument to exclude certain parts of the genome from the analysis (like -L, but the opposite).
This argument can be specified multiple times. You can use samtools-style intervals either explicitly on the
command line (e.g. -XL 1 or -XL 1:100-200) or by loading in a file containing a list of intervals
(e.g. -XL myFile.intervals).
List[String] []
--gatk-config-file / NA
A configuration file to use with the GATK.
String null
--gcs-max-retries / -gcs-retries
If the GCS bucket channel errors out, how many times it will attempt to re-initiate the connection
int 20 [ [ -∞ ∞ ] ]
--help / -h
display the help message
boolean false
--identity-margin / -identity-margin
Identity margin, as a fraction of the best hit (between 0 and 1).
For reads with multiple alignments, the best hit is always counted as long as it is above the identity score
threshold. An additional hit is counted if its identity score is within this fraction of the best hit's score.
For example, consider a read that aligns to two different sequences, one with identity score 0.90 and the other with 0.85. If the minimum identity score is 0.7, the best hit (score 0.90) is counted. If the identity margin is 10%, any additional alignment scoring at or above 0.90 * (1 - 0.10) = 0.81 is also counted; the second alignment (score 0.85) therefore counts as well.
double 0.02 [ [ 0 1 ] ]
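A sketch of this hit-retention rule, with threshold values matching the worked example above; it mirrors the rule as described here and is not PathSeq's code:

```python
def retained_hits(identity_scores, min_score_identity=0.7, identity_margin=0.10):
    """Return the alignment identity scores that would be counted: the best
    hit above the threshold, plus any other hit within the margin of it."""
    passing = [s for s in identity_scores if s >= min_score_identity]
    if not passing:
        return []
    best = max(passing)
    return [s for s in passing if s >= best * (1.0 - identity_margin)]

print(retained_hits([0.90, 0.85]))  # [0.9, 0.85]: 0.85 >= 0.90 * 0.90 = 0.81
```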
--input / -I
BAM/SAM/CRAM file containing reads
List[String] []
--interval-exclusion-padding / -ixp
Amount of padding (in bp) to add to each interval you are excluding.
Use this to add padding to the intervals specified using -XL. For example, '-XL 1:100' with a
padding value of 20 would turn into '-XL 1:80-120'. This is typically used to add padding around targets when
analyzing exomes.
int 0 [ [ -∞ ∞ ] ]
--interval-merging-rule / -imr
Interval merging rule for abutting intervals
By default, the program merges abutting intervals (i.e. intervals that are directly side-by-side but do not
actually overlap) into a single continuous interval. However you can change this behavior if you want them to be
treated as separate intervals instead.
The --interval-merging-rule argument is an enumerated type (IntervalMergingRule), which can have one of the following values:
- ALL
- OVERLAPPING_ONLY
IntervalMergingRule ALL
--interval-padding / -ip
Amount of padding (in bp) to add to each interval you are including.
Use this to add padding to the intervals specified using -L. For example, '-L 1:100' with a
padding value of 20 would turn into '-L 1:80-120'. This is typically used to add padding around targets when
analyzing exomes.
int 0 [ [ -∞ ∞ ] ]
--interval-set-rule / -isr
Set merging approach to use for combining interval inputs
By default, the program will take the UNION of all intervals specified using -L and/or -XL. However, you can
change this setting for -L, for example if you want to take the INTERSECTION of the sets instead. E.g. to
perform the analysis only on chromosome 1 exomes, you could specify -L exomes.intervals -L 1 --interval-set-rule
INTERSECTION. However, it is not possible to modify the merging approach for intervals passed using -XL (they will
always be merged using UNION).
Note that if you specify both -L and -XL, the -XL interval set will be subtracted from the -L interval set.
The --interval-set-rule argument is an enumerated type (IntervalSetRule), which can have one of the following values:
- UNION
- Take the union of all intervals
- INTERSECTION
- Take the intersection of intervals (the subset that overlaps all intervals specified)
IntervalSetRule UNION
--intervals / -L
One or more genomic intervals over which to operate
List[String] []
--min-score-identity / -min-score-identity
Alignment identity score threshold, as a fraction of the read length (between 0 and 1).
This parameter controls the stringency of the microbe alignment. The identity score is defined as the
number of matching bases minus the number of deletions, expressed as a fraction of the read length. Alignments scoring below this threshold are ignored.
double 0.9 [ [ 0 1 ] ]
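As an illustration of the threshold test described here (the match and deletion counts would come from the alignment record; this is not the tool's implementation):

```python
def passes_identity_threshold(n_matches, n_deletions, read_length,
                              min_score_identity=0.9):
    """Identity score = (matching bases - deletions) / read length, per the
    definition above; alignments scoring below the threshold are ignored."""
    identity = (n_matches - n_deletions) / read_length
    return identity >= min_score_identity

# A 100 bp read with 95 matching bases and 2 deletions scores 0.93 -> kept.
print(passes_identity_threshold(95, 2, 100))  # True
```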
--not-normalized-by-kingdom / -not-normalized-by-kingdom
If true, normalized abundance scores are reported as a percentage of all microbe-mapped reads rather than within their kingdom.
By default, normalized scores are compartmentalized by kingdom, i.e. each taxon's score is reported as a percentage of the reads in its kingdom; this flag disables that normalization.
boolean false
--num-reducers / NA
For tools that shuffle data or write an output, sets the number of reducers. Defaults to 0, which gives one partition per 10MB of input.
int 0 [ [ -∞ ∞ ] ]
--output / -O
Output BAM
Records have a "YP" tag that lists the NCBI taxonomy IDs of any mapped organisms. This tag is omitted if the
read is unmapped.
String null
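To inspect the annotated BAM, a reader such as pysam can extract the YP tag. The sketch below assumes the tag value is a comma-separated string of NCBI taxonomy IDs; verify the encoding in your own output.

```python
import pysam  # third-party: pip install pysam

# Print the NCBI taxonomy IDs attached to each mapped read in the
# annotated output BAM; reads without a YP tag are unmapped and skipped.
with pysam.AlignmentFile("output_reads.bam", "rb") as bam:
    for read in bam.fetch(until_eof=True):  # no index needed with until_eof
        if read.has_tag("YP"):
            tax_ids = str(read.get_tag("YP")).split(",")
            print(read.query_name, tax_ids)
```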
--output-shard-tmp-dir / NA
When writing a BAM in single-shard mode, the directory in which to write the temporary intermediate output shards; if not specified, .parts/ will be used.
Exclusion: This argument cannot be used at the same time as sharded-output.
String null
--paired-input / -PI
Input queryname-sorted BAM containing only paired reads
String null
--program-name / NA
Name of the program running
String null
--QUIET / NA
Whether to suppress job-summary info on System.err.
Boolean false
--read-filter / -RF
Read filters to be applied before analysis
List[String] []
--read-index / -read-index
Indices to use for the read inputs. If specified, an index must be provided for every read input and in the same order as the read inputs. If this argument is not specified, the path to the index for each input will be inferred automatically.
List[String] []
--read-validation-stringency / -VS
Validation stringency for all SAM/BAM/CRAM/SRA files read by this program. The default stringency value SILENT can improve performance when processing a BAM file in which variable-length data (read, qualities, tags) do not otherwise need to be decoded.
The --read-validation-stringency argument is an enumerated type (ValidationStringency), which can have one of the following values:
- STRICT
- LENIENT
- SILENT
ValidationStringency SILENT
--reference / -R
Reference sequence
String null
--score-metrics / -SM
Log counts of mapped and unmapped reads to this file
If specified, records the following metrics:
- Number of reads mapped to the microbial reference
- Number of unmapped reads
Note that using this option may increase runtime.
String null
--score-reads-per-partition-estimate / -score-reads-per-partition-estimate
Estimated reads per Spark partition for scoring
This parameter is for fine-tuning memory performance. Lower values may result in less memory usage but possibly
at the expense of greater computation time.
int 200000 [ [ 1 ∞ ] ]
--score-warnings / -SW
Write accessions found in the reads header but not the taxonomy database to this file
String null
--scores-output / -SO
URI for the taxonomic scores output
R String null
--sharded-output / NA
For tools that write an output, write the output in multiple pieces (shards)
Exclusion: This argument cannot be used at the same time as output-shard-tmp-dir.
boolean false
--showHidden / -showHidden
display hidden arguments
boolean false
--spark-master / NA
URL of the Spark Master to submit jobs to when using the Spark pipeline runner.
String local[*]
--taxonomy-file / -T
URI to the microbe reference taxonomy database built using PathSeqBuildReferenceTaxonomy
R String null
--TMP_DIR / NA
Undocumented option
List[File] []
--unpaired-input / -UI
Input BAM containing only unpaired reads
String null
--use-jdk-deflater / -jdk-deflater
Whether to use the JdkDeflater (as opposed to IntelDeflater)
boolean false
--use-jdk-inflater / -jdk-inflater
Whether to use the JdkInflater (as opposed to IntelInflater)
boolean false
--verbosity / -verbosity
Control verbosity of logging.
The --verbosity argument is an enumerated type (LogLevel), which can have one of the following values:
- ERROR
- WARNING
- INFO
- DEBUG
LogLevel INFO
--version / NA
display the version number for this tool
boolean false
GATK version 4.0.6.0.