Extracts site-level variant annotations, labels, and other metadata from a VCF file to HDF5 files
Category: Variant Filtering
Overview
Extracts site-level variant annotations, labels, and other metadata from a VCF file to HDF5 files. This tool is primarily intended to be used as the first step in a variant-filtering workflow that supersedes the {@link VariantRecalibrator} workflow.
This tool extracts site-level annotations, labels, and other relevant metadata from variant sites (or alleles, in allele-specific mode) that are or are not present in specified labeled resource VCFs (e.g., training or calibration VCFs). Input sites that are present in the resources are considered labeled; each site can have multiple labels if it is present in multiple resources. Other input sites that are not present in any resource are considered unlabeled and can be randomly sampled using reservoir sampling; extraction of these is optional. The outputs of the tool are HDF5 files containing the extracted data for the labeled and (optional) unlabeled variant sets, as well as a sites-only indexed VCF containing the labeled variants.
The extracted sets can be provided as input to the {@link TrainVariantAnnotationsModel} tool to produce an annotation-based model for scoring variant calls. This model can in turn be provided along with a VCF file to the {@link ScoreVariantAnnotations} tool, which assigns a score to each call (with a lower score indicating that a call is more likely to be an artifact and should perhaps be filtered). Each score can also be converted to a corresponding sensitivity with respect to a calibration set, if the latter is available.
Note that annotations and metadata are collected in memory during traversal until they are written to HDF5 files upon completion of the traversal. Memory requirements thus roughly scale linearly with both the number of sites extracted and the number of annotations.
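For extractions involving many sites or annotations, it can therefore help to increase the Java heap available to the tool via the standard --java-options wrapper argument. A minimal sketch (the heap size and placeholder arguments are illustrative only and should be adjusted to the data at hand):

gatk --java-options "-Xmx16G" ExtractVariantAnnotations \
     -V input.vcf \
     -A annotation_1 \
     ...
     -O extract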
Note that HDF5 files may be viewed using hdfview or loaded in Python using PyTables or h5py.
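If the HDF5 command-line utilities are also available (an assumption; they ship with the HDF5 library rather than with GATK), the output files can be inspected directly from the shell. A brief sketch, assuming the output prefix "extract" used in the usage examples below:

h5ls -r extract.annot.hdf5                          # recursively list all groups and datasets
h5dump -d /annotations/names extract.annot.hdf5     # print the names of the extracted annotations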
Inputs
- Input VCF file. Site-level annotations will be extracted from the contained variants (or alleles, if at least one allele-specific annotation with "Number=A" is specified).
- Annotations to extract.
- Variant types (i.e., SNP and/or INDEL) to extract. Logic for determining variant type was retained from {@link VariantRecalibrator}; see {@link VariantType}. Extracting SNPs and INDELs separately in two runs of this tool can be useful if one wishes to extract different sets of annotations for each variant type, for example; a sketch of such a two-run setup is shown after this list.
- (Optional) Resource VCF file(s). Each resource should be tagged with a label, which will be assigned to extracted sites that are present in the resource. In typical use, the "training" and "calibration" labels should be used to tag at least one resource apiece. The resulting sets of sites will be used for model training and conversion of scores to calibration-set sensitivity, respectively; the trustworthiness of the respective resources should be taken into account accordingly. The "snp" label is reserved by the tool, as it is used to label sites determined to be SNPs, and thus it cannot be used to tag provided resources.
- (Optional) Maximum number of unlabeled variants (or alleles) to randomly sample with reservoir sampling. If nonzero, annotations will also be extracted from unlabeled sites (i.e., those that are not present in the labeled resources).
- Output prefix. This is used as the basename for output files.
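As referenced above, SNPs and INDELs can be extracted in two separate runs when different annotation sets are desired for each variant type. A minimal sketch (the annotation names, resource files, and output prefixes extract.snp/extract.indel are placeholders):

gatk ExtractVariantAnnotations \
     -V input.vcf \
     -A snp_annotation_1 \
     ...
     --mode SNP \
     --resource:snp-training,training=true snp-training.vcf \
     --resource:snp-calibration,calibration=true snp-calibration.vcf \
     -O extract.snp

gatk ExtractVariantAnnotations \
     -V input.vcf \
     -A indel_annotation_1 \
     ...
     --mode INDEL \
     --resource:indel-training,training=true indel-training.vcf \
     --resource:indel-calibration,calibration=true indel-calibration.vcf \
     -O extract.indel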
Outputs
- (Optional) Labeled-annotations HDF5 file (.annot.hdf5). Annotation data and metadata for those sites that are present in labeled resources are stored in the following HDF5 directory structure:
|--- alleles
| |--- alt
| |--- ref
|--- annotations
| |--- chunk_0
| |--- ...
| |--- chunk_{num_chunks - 1}
| |--- names
| |--- num_chunks
| |--- num_columns
| |--- num_rows
|--- intervals
| |--- indexed_contig_names
| |--- transposed_index_start_end
|--- labels
| |--- snp
| |--- ... (e.g., training, calibration, etc.)
| |--- ...
Here, each chunk is a double matrix, with dimensions given by (number of sites in the chunk) x (number of annotations). See the methods {@link HDF5Utils#writeChunkedDoubleMatrix} and {@link HDF5Utils#writeIntervals} for additional details. In allele-specific mode (i.e., when allele-specific annotations are requested), each record corresponds to an individual allele; otherwise, each record corresponds to a variant site, which may contain multiple alleles. Storage of alleles can be omitted using the "--omit-alleles-in-hdf5" argument, which will reduce the size of the file. This file will only be produced if resources are provided and the number of extracted labeled sites is nonzero.
- Labeled sites-only VCF file and index. The VCF will not be gzipped if the "--do-not-gzip-vcf-output" argument is set to true. The VCF can be provided as a resource in subsequent runs of {@link ScoreVariantAnnotations} and used to indicate labeled sites that were extracted. This can be useful if the "--intervals/-L" argument was used to subset sites in training or calibration resources for extraction; this may occur when setting up training/validation/test splits, for example. Note that records for the random sample of unlabeled sites are currently not included in the VCF.
- (Optional) Unlabeled-annotations HDF5 file. This will have the same directory structure as in the labeled-annotations HDF5 file. However, note that records are currently written in the order they appear in the downsampling reservoir after random sampling, and hence, are not in genomic order. This file will only be produced if a nonzero value of the "--maximum-number-of-unlabeled-variants" argument is provided.
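As a concrete illustration of the layout described above, the count datasets, a chunk of annotation values, and a label dataset can be dumped as text with the HDF5 command-line utilities. This is only a sketch: it assumes the output prefix "extract" from the usage examples below and that a resource tagged with the "training" label was provided (the same commands apply to the unlabeled-annotations file):

h5dump -d /annotations/num_rows -d /annotations/num_columns -d /annotations/num_chunks extract.annot.hdf5
h5dump -d /annotations/chunk_0 -d /labels/training extract.annot.hdf5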
Usage examples
Extract annotations from training/calibration SNP/INDEL sites, producing the outputs 1) extract.annot.hdf5, 2) extract.vcf.gz, and 3) extract.vcf.gz.tbi. The HDF5 file can then be provided to {@link TrainVariantAnnotationsModel} to train a model using a positive-only approach. Note that the "--mode" arguments are made explicit here, although both SNP and INDEL modes are selected by default.
gatk ExtractVariantAnnotations \
     -V input.vcf \
     -A annotation_1 \
     ...
     -A annotation_N \
     --mode SNP \
     --resource:snp-training,training=true snp-training.vcf \
     --resource:snp-calibration,calibration=true snp-calibration.vcf \
     --mode INDEL \
     --resource:indel-training,training=true indel-training.vcf \
     --resource:indel-calibration,calibration=true indel-calibration.vcf \
     -O extract
Extract annotations from both training/calibration SNP/INDEL sites and a random sample of 1000000 unlabeled (i.e., non-training/calibration) sites, producing the outputs 1) extract.annot.hdf5, 2) extract.unlabeled.annot.hdf5, 3) extract.vcf.gz, and 4) extract.vcf.gz.tbi. The HDF5 files can then be provided to {@link TrainVariantAnnotationsModel} to train a model using a positive-unlabeled approach. Note that the "--mode" arguments are made explicit here, although both SNP and INDEL modes are selected by default.
gatk ExtractVariantAnnotations \
     -V input.vcf \
     -A annotation_1 \
     ...
     -A annotation_N \
     --mode SNP \
     --resource:snp-training,training=true snp-training.vcf \
     --resource:snp-calibration,calibration=true snp-calibration.vcf \
     --mode INDEL \
     --resource:indel-training,training=true indel-training.vcf \
     --resource:indel-calibration,calibration=true indel-calibration.vcf \
     --maximum-number-of-unlabeled-variants 1000000 \
     -O extract
Note that separate SNP and INDEL resources are shown in the above examples purely for demonstration purposes, as are separate training and calibration resources. However, it may be desirable to specify combined resource(s); e.g., "--resource:snp-and-indel-resource,training=true,calibration=true snp-and-indel-resource.vcf".
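A full command using such a combined resource might look like the following sketch (annotation names are placeholders; the resource file and tags are taken from the snippet above):

gatk ExtractVariantAnnotations \
     -V input.vcf \
     -A annotation_1 \
     ...
     -A annotation_N \
     --resource:snp-and-indel-resource,training=true,calibration=true snp-and-indel-resource.vcf \
     -O extract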
In the (atypical) event that resource VCFs are unavailable, one can still extract annotations from a random sample of unlabeled sites, producing the outputs 1) extract.unlabeled.annot.hdf5, 2) extract.vcf.gz (which will contain no records), and 3) extract.vcf.gz.tbi. This random sample cannot be used by {@link TrainVariantAnnotationsModel}, but may still be useful for exploratory analyses. Note that the "--mode" arguments are made explicit here, although both SNP and INDEL modes are selected by default.
gatk ExtractVariantAnnotations \
     -V input.vcf \
     -A annotation_1 \
     ...
     -A annotation_N \
     --mode SNP \
     --mode INDEL \
     --maximum-number-of-unlabeled-variants 1000000 \
     -O extract
Alternatively, if resource VCFs are unavailable, one might want to specify the input VCF itself as a resource and extract annotations for the input variants (or a subset thereof). Again, this may be useful for exploratory analyses.
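One possible invocation for this pattern (a sketch, not an officially prescribed command) tags the input VCF itself with the "training" label so that its sites are extracted as labeled records:

gatk ExtractVariantAnnotations \
     -V input.vcf \
     -A annotation_1 \
     ...
     -A annotation_N \
     --mode SNP \
     --mode INDEL \
     --resource:input-itself,training=true input.vcf \
     -O extract

Whether the "training" tag is appropriate here depends on how the extracted annotations will be used; since the resulting sets are intended only for exploratory analyses in this scenario, the choice of (non-reserved) label is largely a matter of convenience.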
DEVELOPER NOTE: See documentation in {@link LabeledVariantAnnotationsWalker}.
@author Samuel Lee <slee@broadinstitute.org>
Additional Information
Read filters
This Read Filter is automatically applied to the data by the Engine before processing by ExtractVariantAnnotations.
ExtractVariantAnnotations specific arguments
This table summarizes the command-line arguments that are specific to this tool. For more details on each argument, see the list further down below the table or click on an argument name to jump directly to that entry in the list.
Argument name(s) | Default value | Summary
---|---|---
Required Arguments | |
--annotation / -A | | Names of the annotations to extract. Note that a requested annotation may in fact not be present at any extraction site; NaN missing values will be generated for such annotations.
--output / -O | | Prefix for output filenames.
--variant / -V | | A VCF file containing variants
Optional Tool Arguments | |
--arguments_file | | read one or more arguments files and add them to the command line
--cloud-index-prefetch-buffer / -CIPB | -1 | Size of the cloud-only prefetch buffer (in MB; 0 to disable). Defaults to cloudPrefetchBuffer if unset.
--cloud-prefetch-buffer / -CPB | 40 | Size of the cloud-only prefetch buffer (in MB; 0 to disable).
--disable-bam-index-caching / -DBIC | false | If true, don't cache bam indexes, this will reduce memory requirements but may harm performance if many intervals are specified. Caching is automatically disabled if there are no intervals specified.
--disable-sequence-dictionary-validation | false | If specified, do not check the sequence dictionaries from our inputs for compatibility. Use at your own risk!
--do-not-gzip-vcf-output | false | If true, VCF output will not be compressed.
--do-not-trust-all-polymorphic | false | If true, do not trust that unfiltered records in the resources contain only polymorphic sites. This may increase runtime if the resources are not sites-only VCFs.
--gcs-max-retries / -gcs-retries | 20 | If the GCS bucket channel errors out, how many times it will attempt to re-initiate the connection
--gcs-project-for-requester-pays | | Project to bill when accessing "requester pays" buckets. If unset, these buckets cannot be accessed. User must have storage.buckets.get permission on the bucket being accessed.
--help / -h | false | display the help message
--ignore-all-filters | false | If true, ignore all filters in the input VCF.
--ignore-filter | | Ignore the specified filter(s) in the input VCF.
--interval-merging-rule / -imr | ALL | Interval merging rule for abutting intervals
--intervals / -L | | One or more genomic intervals over which to operate
--maximum-number-of-unlabeled-variants | 0 | Maximum number of unlabeled variants to extract. If greater than zero, reservoir sampling will be used to randomly sample this number of sites from input sites that are not present in the specified resources. Choice of this number should be guided by considerations for training the model in TrainVariantAnnotationsModel; users may wish to choose a number that is comparable to the expected size of the labeled training set or that is compatible with available memory resources. Note that in allele-specific mode, this argument limits the number of variant records, rather than the number of alleles.
--mode | [SNP, INDEL] | Variant types to extract.
--omit-alleles-in-hdf5 | false | If true, omit alleles in output HDF5 files in order to decrease file sizes.
--reference / -R | | Reference sequence
--reservoir-sampling-random-seed | 0 | Random seed to use for reservoir sampling of unlabeled variants.
--resource | | Resource VCFs used to label extracted variants.
--resource-matching-strategy | START_POSITION | The strategy to use for determining whether an input variant is present in a resource in non-allele-specific mode. START_POSITION: Start positions of input and resource variants must match. START_POSITION_AND_GIVEN_REPRESENTATION: The intersection of the sets of input and resource alleles (in their given representations) must also be non-empty. START_POSITION_AND_MINIMAL_REPRESENTATION: The intersection of the sets of input and resource alleles (after converting alleles to their minimal representations) must also be non-empty. This argument has no effect in allele-specific mode, in which the minimal representations of the input and resource alleles must match.
--sites-only-vcf-output | false | If true, don't emit genotype fields when writing vcf file output.
--version | false | display the version number for this tool
Optional Common Arguments | |
--add-output-sam-program-record | true | If true, adds a PG tag to created SAM/BAM/CRAM files.
--add-output-vcf-command-line | true | If true, adds a command line header line to created VCF files.
--create-output-bam-index / -OBI | true | If true, create a BAM/CRAM index when writing a coordinate-sorted BAM/CRAM file.
--create-output-bam-md5 / -OBM | false | If true, create a MD5 digest for any BAM/SAM/CRAM file created
--create-output-variant-index / -OVI | true | If true, create a VCF index when writing a coordinate-sorted VCF file.
--create-output-variant-md5 / -OVM | false | If true, create an MD5 digest for any VCF file created.
--disable-read-filter / -DF | | Read filters to be disabled before analysis
--disable-tool-default-read-filters | false | Disable all tool default read filters (WARNING: many tools will not function correctly without their default read filters on)
--exclude-intervals / -XL | | One or more genomic intervals to exclude from processing
--gatk-config-file | | A configuration file to use with the GATK.
--input / -I | | BAM/SAM/CRAM file containing reads
--interval-exclusion-padding / -ixp | 0 | Amount of padding (in bp) to add to each interval you are excluding.
--interval-padding / -ip | 0 | Amount of padding (in bp) to add to each interval you are including.
--interval-set-rule / -isr | UNION | Set merging approach to use for combining interval inputs
--lenient / -LE | false | Lenient processing of VCF files
--max-variants-per-shard | 0 | If non-zero, partitions VCF output into shards, each containing up to the given number of records.
--QUIET | false | Whether to suppress job-summary info on System.err.
--read-filter / -RF | | Read filters to be applied before analysis
--read-index | | Indices to use for the read inputs. If specified, an index must be provided for every read input and in the same order as the read inputs. If this argument is not specified, the path to the index for each input will be inferred automatically.
--read-validation-stringency / -VS | SILENT | Validation stringency for all SAM/BAM/CRAM/SRA files read by this program. The default stringency value SILENT can improve performance when processing a BAM file in which variable-length data (read, qualities, tags) do not otherwise need to be decoded.
--seconds-between-progress-updates | 10.0 | Output traversal statistics every time this many seconds elapse
--sequence-dictionary | | Use the given sequence dictionary as the master/canonical sequence dictionary. Must be a .dict file.
--tmp-dir | | Temp directory to use.
--use-jdk-deflater / -jdk-deflater | false | Whether to use the JdkDeflater (as opposed to IntelDeflater)
--use-jdk-inflater / -jdk-inflater | false | Whether to use the JdkInflater (as opposed to IntelInflater)
--verbosity | INFO | Control verbosity of logging.
Advanced Arguments | |
--showHidden | false | display hidden arguments
Argument details
Arguments in this list are specific to this tool. Keep in mind that other arguments are available that are shared with other tools (e.g. command-line GATK arguments); see Inherited arguments above.
--add-output-sam-program-record / -add-output-sam-program-record
If true, adds a PG tag to created SAM/BAM/CRAM files.
boolean true
--add-output-vcf-command-line / -add-output-vcf-command-line
If true, adds a command line header line to created VCF files.
boolean true
--annotation / -A
Names of the annotations to extract. Note that a requested annotation may in fact not be present at any extraction site; NaN missing values will be generated for such annotations.
Required. List[String] []
--arguments_file
read one or more arguments files and add them to the command line
List[File] []
--cloud-index-prefetch-buffer / -CIPB
Size of the cloud-only prefetch buffer (in MB; 0 to disable). Defaults to cloudPrefetchBuffer if unset.
int -1 [ -∞, ∞ ]
--cloud-prefetch-buffer / -CPB
Size of the cloud-only prefetch buffer (in MB; 0 to disable).
int 40 [ -∞, ∞ ]
--create-output-bam-index / -OBI
If true, create a BAM/CRAM index when writing a coordinate-sorted BAM/CRAM file.
boolean true
--create-output-bam-md5 / -OBM
If true, create a MD5 digest for any BAM/SAM/CRAM file created
boolean false
--create-output-variant-index / -OVI
If true, create a VCF index when writing a coordinate-sorted VCF file.
boolean true
--create-output-variant-md5 / -OVM
If true, create an MD5 digest for any VCF file created.
boolean false
--disable-bam-index-caching / -DBIC
If true, don't cache bam indexes, this will reduce memory requirements but may harm performance if many intervals are specified. Caching is automatically disabled if there are no intervals specified.
boolean false
--disable-read-filter / -DF
Read filters to be disabled before analysis
List[String] []
--disable-sequence-dictionary-validation / -disable-sequence-dictionary-validation
If specified, do not check the sequence dictionaries from our inputs for compatibility. Use at your own risk!
boolean false
--disable-tool-default-read-filters / -disable-tool-default-read-filters
Disable all tool default read filters (WARNING: many tools will not function correctly without their default read filters on)
boolean false
--do-not-gzip-vcf-output
If true, VCF output will not be compressed.
boolean false
--do-not-trust-all-polymorphic
If true, do not trust that unfiltered records in the resources contain only polymorphic sites. This may increase runtime if the resources are not sites-only VCFs.
boolean false
--exclude-intervals / -XL
One or more genomic intervals to exclude from processing
Use this argument to exclude certain parts of the genome from the analysis (like -L, but the opposite). This argument can be specified multiple times. You can use samtools-style intervals either explicitly on the command line (e.g. -XL 1 or -XL 1:100-200) or by loading in a file containing a list of intervals (e.g. -XL myFile.intervals).
List[String] []
--gatk-config-file
A configuration file to use with the GATK.
String null
--gcs-max-retries / -gcs-retries
If the GCS bucket channel errors out, how many times it will attempt to re-initiate the connection
int 20 [ -∞, ∞ ]
--gcs-project-for-requester-pays
Project to bill when accessing "requester pays" buckets. If unset, these buckets cannot be accessed. User must have storage.buckets.get permission on the bucket being accessed.
String ""
--help / -h
display the help message
boolean false
--ignore-all-filters
If true, ignore all filters in the input VCF.
boolean false
--ignore-filter
Ignore the specified filter(s) in the input VCF.
List[String] []
--input / -I
BAM/SAM/CRAM file containing reads
List[GATKPath] []
--interval-exclusion-padding / -ixp
Amount of padding (in bp) to add to each interval you are excluding.
Use this to add padding to the intervals specified using -XL. For example, '-XL 1:100' with a
padding value of 20 would turn into '-XL 1:80-120'. This is typically used to add padding around targets when
analyzing exomes.
int 0 [ -∞, ∞ ]
--interval-merging-rule / -imr
Interval merging rule for abutting intervals
By default, the program merges abutting intervals (i.e. intervals that are directly side-by-side but do not
actually overlap) into a single continuous interval. However you can change this behavior if you want them to be
treated as separate intervals instead.
The --interval-merging-rule argument is an enumerated type (IntervalMergingRule), which can have one of the following values:
- ALL
- OVERLAPPING_ONLY
IntervalMergingRule ALL
--interval-padding / -ip
Amount of padding (in bp) to add to each interval you are including.
Use this to add padding to the intervals specified using -L. For example, '-L 1:100' with a
padding value of 20 would turn into '-L 1:80-120'. This is typically used to add padding around targets when
analyzing exomes.
int 0 [ -∞, ∞ ]
--interval-set-rule / -isr
Set merging approach to use for combining interval inputs
By default, the program will take the UNION of all intervals specified using -L and/or -XL. However, you can
change this setting for -L, for example if you want to take the INTERSECTION of the sets instead. E.g. to
perform the analysis only on chromosome 1 exomes, you could specify -L exomes.intervals -L 1 --interval-set-rule
INTERSECTION. However, it is not possible to modify the merging approach for intervals passed using -XL (they will
always be merged using UNION).
Note that if you specify both -L and -XL, the -XL interval set will be subtracted from the -L interval set.
The --interval-set-rule argument is an enumerated type (IntervalSetRule), which can have one of the following values:
- UNION
- Take the union of all intervals
- INTERSECTION
- Take the intersection of intervals (the subset that overlaps all intervals specified)
IntervalSetRule UNION
--intervals / -L
One or more genomic intervals over which to operate
List[String] []
--lenient / -LE
Lenient processing of VCF files
boolean false
--max-variants-per-shard
If non-zero, partitions VCF output into shards, each containing up to the given number of records.
int 0 [ 0, ∞ ]
--maximum-number-of-unlabeled-variants
Maximum number of unlabeled variants to extract. If greater than zero, reservoir sampling will be used to randomly sample this number of sites from input sites that are not present in the specified resources. Choice of this number should be guided by considerations for training the model in TrainVariantAnnotationsModel; users may wish to choose a number that is comparable to the expected size of the labeled training set or that is compatible with available memory resources. Note that in allele-specific mode, this argument limits the number of variant records, rather than the number of alleles.
int 0 [ 0, ∞ ]
--mode
Variant types to extract.
The --mode argument is an enumerated type (List[VariantType]), which can have one of the following values:
- SNP
- INDEL
List[VariantType] [SNP, INDEL]
--omit-alleles-in-hdf5
If true, omit alleles in output HDF5 files in order to decrease file sizes.
boolean false
--output / -O
Prefix for output filenames.
Required. String null
--QUIET
Whether to suppress job-summary info on System.err.
Boolean false
--read-filter / -RF
Read filters to be applied before analysis
List[String] []
--read-index / -read-index
Indices to use for the read inputs. If specified, an index must be provided for every read input and in the same order as the read inputs. If this argument is not specified, the path to the index for each input will be inferred automatically.
List[GATKPath] []
--read-validation-stringency / -VS
Validation stringency for all SAM/BAM/CRAM/SRA files read by this program. The default stringency value SILENT can improve performance when processing a BAM file in which variable-length data (read, qualities, tags) do not otherwise need to be decoded.
The --read-validation-stringency argument is an enumerated type (ValidationStringency), which can have one of the following values:
- STRICT
- LENIENT
- SILENT
ValidationStringency SILENT
--reference / -R
Reference sequence
GATKPath null
--reservoir-sampling-random-seed
Random seed to use for reservoir sampling of unlabeled variants.
int 0 [ -∞, ∞ ]
--resource
Resource VCFs used to label extracted variants.
List[FeatureInput[VariantContext]] []
--resource-matching-strategy
The strategy to use for determining whether an input variant is present in a resource in non-allele-specific mode. START_POSITION: Start positions of input and resource variants must match. START_POSITION_AND_GIVEN_REPRESENTATION: The intersection of the sets of input and resource alleles (in their given representations) must also be non-empty. START_POSITION_AND_MINIMAL_REPRESENTATION: The intersection of the sets of input and resource alleles (after converting alleles to their minimal representations) must also be non-empty. This argument has no effect in allele-specific mode, in which the minimal representations of the input and resource alleles must match.
The --resource-matching-strategy argument is an enumerated type (ResourceMatchingStrategy), which can have one of the following values:
- START_POSITION
- START_POSITION_AND_GIVEN_REPRESENTATION
- START_POSITION_AND_MINIMAL_REPRESENTATION
ResourceMatchingStrategy START_POSITION
--seconds-between-progress-updates / -seconds-between-progress-updates
Output traversal statistics every time this many seconds elapse
double 10.0 [ -∞, ∞ ]
--sequence-dictionary / -sequence-dictionary
Use the given sequence dictionary as the master/canonical sequence dictionary. Must be a .dict file.
GATKPath null
--showHidden / -showHidden
display hidden arguments
boolean false
--sites-only-vcf-output
If true, don't emit genotype fields when writing vcf file output.
boolean false
--tmp-dir
Temp directory to use.
GATKPath null
--use-jdk-deflater / -jdk-deflater
Whether to use the JdkDeflater (as opposed to IntelDeflater)
boolean false
--use-jdk-inflater / -jdk-inflater
Whether to use the JdkInflater (as opposed to IntelInflater)
boolean false
--variant / -V
A VCF file containing variants
Required. GATKPath null
--verbosity / -verbosity
Control verbosity of logging.
The --verbosity argument is an enumerated type (LogLevel), which can have one of the following values:
- ERROR
- WARNING
- INFO
- DEBUG
LogLevel INFO
--version
display the version number for this tool
boolean false
GATK version 4.5.0.0 built at Tue, 9 Jan 2024 14:37:17 -0500.