Prints read alignments in samtools pileup format
Category: Coverage Analysis
Overview
This tool emulates the functionality of samtools pileup, and leverages the Spark framework for faster operation. It prints read alignments in a format very similar to the samtools pileup format; see the Samtools Pileup format documentation for more details about the original format. The output comprises one line per genomic position, listing the chromosome name, coordinate, reference base, bases from reads, and corresponding base qualities from reads. In addition to these default fields, additional information can be added to the output as extra columns.
Usage example
gatk PileupSpark \
  -R reference.fasta \
  -I input.bam \
  -O output.txt
Emulated command:
samtools pileup -f reference.fasta input.bam
Typical output format
chr1 257 A CAA '&=
chr1 258 C TCC A:=
chr1 259 C CCC )A=
chr1 260 C ACC (=<
chr1 261 T TCT '44
chr1 262 A AAA '?:
chr1 263 A AGA 1'6
chr1 264 C TCC 987
chr1 265 C CCC (@(
chr1 266 C GCC ''=
chr1 267 T AAT 7%>
This tool can be run without explicitly specifying Spark options; the example command above, which omits them, will run locally. See Tutorial#10060 for an example of how to set up and run a Spark tool on a cloud Spark cluster.
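For reference, a minimal sketch of submitting the same command to a standalone Spark cluster. It assumes the standard GATK Spark launcher options --spark-runner and --spark-master, passed after the -- separator; spark://host:7077 is a placeholder cluster URL.
# spark://host:7077 below is a placeholder; substitute your cluster's Spark Master URL
gatk PileupSpark \
  -R reference.fasta \
  -I input.bam \
  -O output.txt \
  -- \
  --spark-runner SPARK \
  --spark-master spark://host:7077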
PileupSpark specific arguments
This table summarizes the command-line arguments that are specific to this tool. For more details on each argument, see the list below the table or click on an argument name to jump directly to that entry in the list.
Argument name(s) | Default value | Summary
---|---|---
Required Arguments | |
--input / -I | [] | BAM/SAM/CRAM file containing reads
--output / -O | null | The output directory to which the scattered output will be written.
Optional Tool Arguments | |
--arguments_file | [] | Read one or more arguments files and add them to the command line.
--bam-partition-size | 0 | Maximum number of bytes to read from a file into each partition of reads. Setting this higher will result in fewer partitions. Note that this will not be equal to the size of the partition in memory. Defaults to 0, which uses the default split size (determined by the Hadoop input format, typically the size of one HDFS block).
--conf | [] | Spark properties to set on the Spark context, in the format property=value.
--disable-sequence-dictionary-validation | false | If specified, do not check the sequence dictionaries from our inputs for compatibility. Use at your own risk!
--gcs-max-retries / -gcs-retries | 20 | If the GCS bucket channel errors out, how many times it will attempt to re-initiate the connection.
--help / -h | false | Display the help message.
--interval-merging-rule / -imr | ALL | Interval merging rule for abutting intervals.
--intervals / -L | [] | One or more genomic intervals over which to operate.
--maxDepthPerSample | 0 | Maximum number of reads to retain per sample per locus. Reads above this threshold will be downsampled. Set to 0 to disable.
--metadata | [] | Features file(s) containing metadata. The overlapping sites will be annotated with the corresponding source Feature identifier.
--num-reducers | 0 | For tools that shuffle data or write an output, sets the number of reducers. Defaults to 0, which gives one partition per 10MB of input.
--output-insert-length | false | If enabled, insert lengths will be added to the output pileup.
--output-shard-tmp-dir | null | When writing a BAM in single-shard mode, the directory to which temporary intermediate output shards are written; if not specified, .parts/ will be used.
--program-name | null | Name of the program running.
--readShardPadding | 1000 | Each read shard has this many bases of extra context on each side.
--readShardSize | 10000 | Maximum size of each read shard, in bases.
--reference / -R | null | Reference sequence.
--sharded-output | false | For tools that write an output, write the output in multiple pieces (shards).
--show-verbose / -verbose | false | Add extra informative columns to the pileup output. The verbose output contains the number of spanning deletions and, for each read in the pileup, the read name, offset in the base string, read length, and read mapping quality, delimited with an '@' character.
--shuffle | false | Whether to use the shuffle implementation instead of overlaps partitioning (the default).
--spark-master | local[*] | URL of the Spark Master to submit jobs to when using the Spark pipeline runner.
--version | false | Display the version number for this tool.
Optional Common Arguments | |
--disable-read-filter / -DF | [] | Read filters to be disabled before analysis.
--disable-tool-default-read-filters | false | Disable all tool default read filters (WARNING: many tools will not function correctly without their default read filters on).
--exclude-intervals / -XL | [] | One or more genomic intervals to exclude from processing.
--gatk-config-file | null | A configuration file to use with the GATK.
--interval-exclusion-padding / -ixp | 0 | Amount of padding (in bp) to add to each interval you are excluding.
--interval-padding / -ip | 0 | Amount of padding (in bp) to add to each interval you are including.
--interval-set-rule / -isr | UNION | Set merging approach to use for combining interval inputs.
--QUIET | false | Whether to suppress job-summary info on System.err.
--read-filter / -RF | [] | Read filters to be applied before analysis.
--read-index | [] | Indices to use for the read inputs. If specified, an index must be provided for every read input and in the same order as the read inputs. If this argument is not specified, the path to the index for each input will be inferred automatically.
--read-validation-stringency / -VS | SILENT | Validation stringency for all SAM/BAM/CRAM/SRA files read by this program. The default stringency value SILENT can improve performance when processing a BAM file in which variable-length data (read, qualities, tags) do not otherwise need to be decoded.
--TMP_DIR | [] | Undocumented option.
--use-jdk-deflater / -jdk-deflater | false | Whether to use the JdkDeflater (as opposed to IntelDeflater).
--use-jdk-inflater / -jdk-inflater | false | Whether to use the JdkInflater (as opposed to IntelInflater).
--verbosity | INFO | Control verbosity of logging.
Advanced Arguments | |
--showHidden | false | Display hidden arguments.
Argument details
Arguments in this list are specific to this tool. Keep in mind that other arguments are available that are shared with other tools (e.g. command-line GATK arguments); see Inherited arguments above.
--arguments_file / NA
read one or more arguments files and add them to the command line
List[File] []
--bam-partition-size / NA
maximum number of bytes to read from a file into each partition of reads. Setting this higher will result in fewer partitions. Note that this will not be equal to the size of the partition in memory. Defaults to 0, which uses the default split size (determined by the Hadoop input format, typically the size of one HDFS block).
long 0 [ [ -∞ ∞ ] ]
--conf / -conf
Spark properties to set on the Spark context, in the format property=value
List[String] []
--disable-read-filter / -DF
Read filters to be disabled before analysis
List[String] []
--disable-sequence-dictionary-validation / -disable-sequence-dictionary-validation
If specified, do not check the sequence dictionaries from our inputs for compatibility. Use at your own risk!
boolean false
--disable-tool-default-read-filters / -disable-tool-default-read-filters
Disable all tool default read filters (WARNING: many tools will not function correctly without their default read filters on)
boolean false
--exclude-intervals / -XL
One or more genomic intervals to exclude from processing
Use this argument to exclude certain parts of the genome from the analysis (like -L, but the opposite).
This argument can be specified multiple times. You can use samtools-style intervals either explicitly on the
command line (e.g. -XL 1 or -XL 1:100-200) or by loading in a file containing a list of intervals
(e.g. -XL myFile.intervals).
List[String] []
--gatk-config-file / NA
A configuration file to use with the GATK.
String null
--gcs-max-retries / -gcs-retries
If the GCS bucket channel errors out, how many times it will attempt to re-initiate the connection
int 20 [ [ -∞ ∞ ] ]
--help / -h
display the help message
boolean false
--input / -I
BAM/SAM/CRAM file containing reads
R List[String] []
--interval-exclusion-padding / -ixp
Amount of padding (in bp) to add to each interval you are excluding.
Use this to add padding to the intervals specified using -XL. For example, '-XL 1:100' with a
padding value of 20 would turn into '-XL 1:80-120'. This is typically used to add padding around targets when
analyzing exomes.
int 0 [ [ -∞ ∞ ] ]
--interval-merging-rule / -imr
Interval merging rule for abutting intervals
By default, the program merges abutting intervals (i.e. intervals that are directly side-by-side but do not
actually overlap) into a single continuous interval. However, you can change this behavior if you want them to be
treated as separate intervals instead.
The --interval-merging-rule argument is an enumerated type (IntervalMergingRule), which can have one of the following values:
- ALL
- OVERLAPPING_ONLY
IntervalMergingRule ALL
--interval-padding / -ip
Amount of padding (in bp) to add to each interval you are including.
Use this to add padding to the intervals specified using -L. For example, '-L 1:100' with a
padding value of 20 would turn into '-L 1:80-120'. This is typically used to add padding around targets when
analyzing exomes.
int 0 [ [ -∞ ∞ ] ]
--interval-set-rule / -isr
Set merging approach to use for combining interval inputs
By default, the program will take the UNION of all intervals specified using -L and/or -XL. However, you can
change this setting for -L, for example if you want to take the INTERSECTION of the sets instead. E.g. to
perform the analysis only on chromosome 1 exomes, you could specify -L exomes.intervals -L 1 --interval-set-rule
INTERSECTION. However, it is not possible to modify the merging approach for intervals passed using -XL (they will
always be merged using UNION).
Note that if you specify both -L and -XL, the -XL interval set will be subtracted from the -L interval set.
The --interval-set-rule argument is an enumerated type (IntervalSetRule), which can have one of the following values:
- UNION
- Take the union of all intervals
- INTERSECTION
- Take the intersection of intervals (the subset that overlaps all intervals specified)
IntervalSetRule UNION
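As an illustrative sketch of the INTERSECTION behavior described above, restricting the analysis to exome intervals on chromosome 1 (exomes.intervals is a placeholder interval list):
# exomes.intervals is a placeholder interval list
gatk PileupSpark \
  -R reference.fasta \
  -I input.bam \
  -L exomes.intervals \
  -L 1 \
  --interval-set-rule INTERSECTION \
  -O output.txt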
--intervals / -L
One or more genomic intervals over which to operate
List[String] []
--maxDepthPerSample / -maxDepthPerSample
Maximum number of reads to retain per sample per locus. Reads above this threshold will be downsampled. Set to 0 to disable.
int 0 [ [ -∞ ∞ ] ]
--metadata / -metadata
Features file(s) containing metadata. The overlapping sites will be annotated with the corresponding source Feature identifier.
This enables annotating the pileup to show overlaps with metadata from one or more Feature files. For example, if the
user provides a VCF and there is a SNP at a given location covered by the pileup, the pileup output at that
position will be annotated with the corresponding source Feature identifier.
List[FeatureInput[Feature]] []
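For example, a sketch of annotating the pileup with overlapping variant sites (dbsnp.vcf is a placeholder Feature file):
# dbsnp.vcf is a placeholder VCF of known sites
gatk PileupSpark \
  -R reference.fasta \
  -I input.bam \
  --metadata dbsnp.vcf \
  -O output.txt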
--num-reducers / NA
For tools that shuffle data or write an output, sets the number of reducers. Defaults to 0, which gives one partition per 10MB of input.
int 0 [ [ -∞ ∞ ] ]
--output / -O
The output directory to which the scattered output will be written.
R String null
--output-insert-length / -output-insert-length
If enabled, insert lengths will be added to the output pileup.
Adds the length of the insert each base comes from to the output pileup. Here, "insert" refers to the DNA insert
produced during library generation before sequencing.
boolean false
--output-shard-tmp-dir / NA
When writing a BAM in single-shard mode, the directory to which temporary intermediate output shards are written; if not specified, .parts/ will be used.
Exclusion: This argument cannot be used at the same time as --sharded-output.
String null
--program-name / NA
Name of the program running
String null
--QUIET / NA
Whether to suppress job-summary info on System.err.
Boolean false
--read-filter / -RF
Read filters to be applied before analysis
List[String] []
--read-index / -read-index
Indices to use for the read inputs. If specified, an index must be provided for every read input and in the same order as the read inputs. If this argument is not specified, the path to the index for each input will be inferred automatically.
List[String] []
--read-validation-stringency / -VS
Validation stringency for all SAM/BAM/CRAM/SRA files read by this program. The default stringency value SILENT can improve performance when processing a BAM file in which variable-length data (read, qualities, tags) do not otherwise need to be decoded.
The --read-validation-stringency argument is an enumerated type (ValidationStringency), which can have one of the following values:
- STRICT
- LENIENT
- SILENT
ValidationStringency SILENT
--readShardPadding / -readShardPadding
Each read shard has this many bases of extra context on each side.
int 1000 [ [ -∞ ∞ ] ]
--readShardSize / -readShardSize
Maximum size of each read shard, in bases.
int 10000 [ [ -∞ ∞ ] ]
--reference / -R
Reference sequence
String null
--sharded-output / NA
For tools that write an output, write the output in multiple pieces (shards)
Exclusion: This argument cannot be used at the same time as --output-shard-tmp-dir.
boolean false
--show-verbose / -verbose
Add extra informative columns to the pileup output. The verbose output contains the number of spanning deletions and, for each read in the pileup, the read name, offset in the base string, read length, and read mapping quality. These per-read items are delimited with an '@' character.
In addition to the standard pileup output, this adds the 'verbose' columns described above.
boolean false
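For example, a sketch of producing the verbose pileup described above:
# adds the per-read verbose columns to the standard pileup output
gatk PileupSpark \
  -R reference.fasta \
  -I input.bam \
  --show-verbose \
  -O output.txt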
--showHidden / -showHidden
display hidden arguments
boolean false
--shuffle / -shuffle
Whether to use the shuffle implementation instead of overlaps partitioning (the default)
boolean false
--spark-master / NA
URL of the Spark Master to submit jobs to when using the Spark pipeline runner.
String local[*]
--TMP_DIR / NA
Undocumented option
List[File] []
--use-jdk-deflater / -jdk-deflater
Whether to use the JdkDeflater (as opposed to IntelDeflater)
boolean false
--use-jdk-inflater / -jdk-inflater
Whether to use the JdkInflater (as opposed to IntelInflater)
boolean false
--verbosity / -verbosity
Control verbosity of logging.
The --verbosity argument is an enumerated type (LogLevel), which can have one of the following values:
- ERROR
- WARNING
- INFO
- DEBUG
LogLevel INFO
--version / NA
display the version number for this tool
boolean false
GATK version 4.0.5.2.