Creates a panel of normals for read-count denoising
Category Copy Number Variant Discovery
Overview
Creates a panel of normals (PoN) for read-count denoising given the read counts for samples in the panel. The resulting PoN can be used with DenoiseReadCounts to denoise other samples. The input read counts are first transformed to log2 fractional coverages and preprocessed according to specified filtering and imputation parameters. Singular value decomposition (SVD) is then performed to find the first number-of-eigensamples principal components, which are stored in the PoN. Some or all of these principal components can then be used for denoising case samples with DenoiseReadCounts; it is assumed that the principal components used represent systematic sequencing biases (rather than statistical noise). Examining the singular values, which are also stored in the PoN, may be useful in determining the appropriate number of principal components to use for denoising.
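The core of the procedure can be illustrated with a short numpy sketch. This is a simplified illustration under assumed conventions (the function and variable names are hypothetical, and the filtering, imputation, and GC-correction steps are omitted), not the tool's actual implementation:

import numpy as np

def build_pon(counts, num_eigensamples=20):
    # counts: (num_samples, num_intervals) integer read-count matrix.
    # Transform to log2 fractional coverages (epsilon guards against log2(0)).
    fractional = counts / counts.sum(axis=1, keepdims=True)
    log2_cov = np.log2(fractional + 1e-10)
    # Center each interval on its panel median.
    medians = np.median(log2_cov, axis=0)
    centered = log2_cov - medians
    # Truncated SVD; rows of vt are the principal components ("eigensamples").
    u, s, vt = np.linalg.svd(centered, full_matrices=False)
    k = min(num_eigensamples, centered.shape[0])
    return {"medians": medians, "singular_values": s[:k], "components": vt[:k]}

def denoise(case_counts, pon):
    # Subtract the case sample's projection onto the panel's principal components.
    fractional = case_counts / case_counts.sum()
    standardized = np.log2(fractional + 1e-10) - pon["medians"]
    projection = pon["components"].T @ (pon["components"] @ standardized)
    return standardized - projection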
If annotated intervals are provided, explicit GC-bias correction will be performed by GCBiasCorrector before filtering and SVD. GC-content information for the intervals will be stored in the PoN and used to perform explicit GC-bias correction identically in DenoiseReadCounts. Note that if annotated intervals are not provided, it is still likely that GC-bias correction is implicitly performed by the SVD denoising process (i.e., some of the principal components arise from GC bias).
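As a rough illustration of what explicit GC-bias correction can look like, here is a hypothetical sketch that bins intervals by GC content and rescales each bin to its median coverage; GCBiasCorrector's actual model may differ:

import numpy as np

def gc_correct(coverage, gc_content, num_bins=100):
    # coverage: (num_intervals,) fractional coverages for one sample;
    # gc_content: (num_intervals,) GC fraction in [0, 1] per interval.
    bins = np.minimum((gc_content * num_bins).astype(int), num_bins - 1)
    corrected = coverage.astype(float).copy()
    for b in range(num_bins):
        mask = bins == b
        if mask.any():
            med = np.median(coverage[mask])
            if med > 0:
                # Remove the bin-level bias by rescaling to the bin median.
                corrected[mask] = coverage[mask] / med
    return corrected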
Note that such SVD denoising cannot distinguish between variance due to systematic sequencing biases and that due to true common germline CNVs present in the panel; signal from the latter may thus be inadvertently denoised away. Furthermore, variance arising from coverage on the sex chromosomes may also significantly contribute to the principal components if the panel contains samples of mixed sex. Therefore, if sex chromosomes are not excluded from coverage collection, it is strongly recommended that users avoid creating panels of mixed sex and take care to denoise case samples only with panels containing only individuals of the same sex as the case samples. (See GermlineCNVCaller, which avoids these issues by simultaneously learning a probabilistic model for systematic bias and calling rare and common germline CNVs for samples in the panel.)
Inputs
- Counts files (TSV or HDF5 output of CollectReadCounts).
- (Optional) GC-content annotated-intervals file from AnnotateIntervals. Explicit GC-bias correction will be performed on the panel samples and identically for subsequent case samples.
Output
- Panel-of-normals file. This is an HDF5 file containing the panel data in the paths defined in HDF5SVDReadCountPanelOfNormals. HDF5 files may be viewed using hdfview or loaded in python using PyTables or h5py.
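For example, a quick way to inspect the PoN with h5py, listing dataset paths and shapes rather than hard-coding them (the layout is defined by HDF5SVDReadCountPanelOfNormals):

import h5py

with h5py.File("cnv.pon.hdf5", "r") as f:
    # Print every group/dataset path; datasets also report their shape.
    f.visititems(lambda name, obj: print(name, getattr(obj, "shape", "")))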
Usage examples
gatk CreateReadCountPanelOfNormals \
    -I sample_1.counts.hdf5 \
    -I sample_2.counts.hdf5 \
    ... \
    -O cnv.pon.hdf5

gatk CreateReadCountPanelOfNormals \
    -I sample_1.counts.hdf5 \
    -I sample_2.counts.tsv \
    ... \
    --annotated-intervals annotated_intervals.tsv \
    -O cnv.pon.hdf5
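For large panels, the -I arguments can also be supplied via an arguments file. A hypothetical layout, assuming the standard one-argument-per-line convention (panel_inputs.args and the sample file names are placeholders):

panel_inputs.args:
-I sample_1.counts.hdf5
-I sample_2.counts.hdf5
-I sample_3.counts.hdf5

gatk CreateReadCountPanelOfNormals \
    --arguments_file panel_inputs.args \
    -O cnv.pon.hdf5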
CreateReadCountPanelOfNormals specific arguments
This table summarizes the command-line arguments that are specific to this tool. For more details on each argument, see the Argument details list below the table.
| Argument name(s) | Default value | Summary |
| --- | --- | --- |
| Required Arguments | | |
| --input / -I | [] | Input TSV or HDF5 files containing integer read counts in genomic intervals for all samples in the panel of normals (output of CollectReadCounts). Intervals must be identical and in the same order for all samples. |
| --output / -O | null | Output file for the panel of normals. |
| Optional Tool Arguments | | |
| --annotated-intervals | null | Input file containing annotations for GC content in genomic intervals (output of AnnotateIntervals). If provided, explicit GC correction will be performed before performing SVD. Intervals must be identical to and in the same order as those in the input read-counts files. |
| --arguments_file | [] | read one or more arguments files and add them to the command line |
| --conf | [] | Spark properties to set on the Spark context in the format <property>=<value> |
| --do-impute-zeros | true | If true, impute zero-coverage values as the median of the non-zero values in the corresponding interval. (This is applied after all filters.) |
| --extreme-outlier-truncation-percentile | 0.1 | Fractional coverages normalized by genomic-interval medians that are below this percentile or above the complementary percentile are set to the corresponding percentile value. (This is applied after all filters and imputation.) |
| --extreme-sample-median-percentile | 2.5 | Samples with a median (across genomic intervals) of fractional coverage normalized by genomic-interval medians below this percentile or above the complementary percentile are filtered out. (This is the fourth filter applied.) |
| --gcs-max-retries / -gcs-retries | 20 | If the GCS bucket channel errors out, how many times it will attempt to re-initiate the connection |
| --gcs-project-for-requester-pays | "" | Project to bill when accessing "requester pays" buckets. If unset, these buckets cannot be accessed. |
| --help / -h | false | display the help message |
| --maximum-zeros-in-interval-percentage | 5.0 | Genomic intervals with a fraction of zero-coverage samples above this percentage are filtered out. (This is the third filter applied.) |
| --maximum-zeros-in-sample-percentage | 5.0 | Samples with a fraction of zero-coverage genomic intervals above this percentage are filtered out. (This is the second filter applied.) |
| --minimum-interval-median-percentile | 10.0 | Genomic intervals with a median (across samples) of fractional coverage (optionally corrected for GC bias) less than or equal to this percentile are filtered out. (This is the first filter applied.) |
| --number-of-eigensamples | 20 | Number of eigensamples to use for truncated SVD and to store in the panel of normals. The number of samples retained after filtering will be used instead if it is smaller than this. |
| --program-name | null | Name of the program running |
| --spark-master | local[*] | URL of the Spark Master to submit jobs to when using the Spark pipeline runner. |
| --spark-verbosity | null | Spark verbosity. Overrides --verbosity for Spark-generated logs only. Possible values: {ALL, DEBUG, INFO, WARN, ERROR, FATAL, OFF, TRACE} |
| --version | false | display the version number for this tool |
| Optional Common Arguments | | |
| --gatk-config-file | null | A configuration file to use with the GATK. |
| --QUIET | false | Whether to suppress job-summary info on System.err. |
| --tmp-dir | null | Temp directory to use. |
| --use-jdk-deflater / -jdk-deflater | false | Whether to use the JdkDeflater (as opposed to IntelDeflater) |
| --use-jdk-inflater / -jdk-inflater | false | Whether to use the JdkInflater (as opposed to IntelInflater) |
| --verbosity | INFO | Control verbosity of logging. |
| Advanced Arguments | | |
| --maximum-chunk-size | 16777215 | Maximum HDF5 matrix chunk size. Large matrices written to HDF5 are chunked into equally sized subsets of rows (plus a subset containing the remainder, if necessary) to avoid a hard limit in Java HDF5 on the number of elements in a matrix. However, since a single row is not allowed to be split across multiple chunks, the number of columns must be less than the maximum number of values in each chunk. Decreasing this number will reduce heap usage when writing chunks. |
| --showHidden | false | display hidden arguments |
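The filter, imputation, and truncation arguments above are applied in a fixed order, noted parenthetically in each summary. Below is a hedged numpy sketch of that order, using the default values; it is a simplification, not GATK's implementation (the actual filters operate on GC-corrected, median-normalized fractional coverages):

import numpy as np

def preprocess(cov,
               min_interval_median_pct=10.0,   # --minimum-interval-median-percentile
               max_zeros_sample_pct=5.0,       # --maximum-zeros-in-sample-percentage
               max_zeros_interval_pct=5.0,     # --maximum-zeros-in-interval-percentage
               extreme_sample_median_pct=2.5,  # --extreme-sample-median-percentile
               truncation_pct=0.1):            # --extreme-outlier-truncation-percentile
    # cov: (num_samples, num_intervals) fractional-coverage matrix.
    # Filter 1: drop intervals with low median coverage across samples.
    interval_medians = np.median(cov, axis=0)
    cov = cov[:, interval_medians > np.percentile(interval_medians, min_interval_median_pct)]
    # Filter 2: drop samples with too many zero-coverage intervals.
    cov = cov[(cov == 0).mean(axis=1) * 100 <= max_zeros_sample_pct]
    # Filter 3: drop intervals with too many zero-coverage samples.
    cov = cov[:, (cov == 0).mean(axis=0) * 100 <= max_zeros_interval_pct]
    # Filter 4: drop samples whose medians are extreme in either tail.
    sample_medians = np.median(cov, axis=1)
    lo, hi = np.percentile(sample_medians, [extreme_sample_median_pct, 100 - extreme_sample_median_pct])
    cov = cov[(sample_medians >= lo) & (sample_medians <= hi)]
    # Imputation (--do-impute-zeros): replace zeros with the per-interval
    # median of the non-zero values.
    for j in range(cov.shape[1]):
        col = cov[:, j]
        nonzero = col[col != 0]
        if nonzero.size and nonzero.size < col.size:
            col[col == 0] = np.median(nonzero)
    # Truncation: clamp extreme outliers to the percentile boundaries.
    lo, hi = np.percentile(cov, [truncation_pct, 100 - truncation_pct])
    return np.clip(cov, lo, hi)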
Argument details
Arguments in this list are specific to this tool. Keep in mind that other arguments are available that are shared with other tools (e.g. command-line GATK arguments); see Inherited arguments above.
--annotated-intervals / NA
Input file containing annotations for GC content in genomic intervals (output of AnnotateIntervals). If provided, explicit GC correction will be performed before performing SVD. Intervals must be identical to and in the same order as those in the input read-counts files.
File null
--arguments_file / NA
read one or more arguments files and add them to the command line
List[File] []
--conf / NA
Spark properties to set on the Spark context in the format <property>=<value>
List[String] []
--do-impute-zeros / NA
If true, impute zero-coverage values as the median of the non-zero values in the corresponding interval. (This is applied after all filters.)
boolean true
--extreme-outlier-truncation-percentile / NA
Fractional coverages normalized by genomic-interval medians that are below this percentile or above the complementary percentile are set to the corresponding percentile value. (This is applied after all filters and imputation.)
double  0.1  [0, 50]
--extreme-sample-median-percentile / NA
Samples with a median (across genomic intervals) of fractional coverage normalized by genomic-interval medians below this percentile or above the complementary percentile are filtered out. (This is the fourth filter applied.)
double  2.5  [0, 50]
--gatk-config-file / NA
A configuration file to use with the GATK.
String null
--gcs-max-retries / -gcs-retries
If the GCS bucket channel errors out, how many times it will attempt to re-initiate the connection
int  20  [-∞, ∞]
--gcs-project-for-requester-pays / NA
Project to bill when accessing "requester pays" buckets. If unset, these buckets cannot be accessed.
String ""
--help / -h
display the help message
boolean false
--input / -I
Input TSV or HDF5 files containing integer read counts in genomic intervals for all samples in the panel of normals (output of CollectReadCounts). Intervals must be identical and in the same order for all samples.
Required. List[File]  []
--maximum-chunk-size / NA
Maximum HDF5 matrix chunk size. Large matrices written to HDF5 are chunked into equally sized subsets of rows (plus a subset containing the remainder, if necessary) to avoid a hard limit in Java HDF5 on the number of elements in a matrix. However, since a single row is not allowed to be split across multiple chunks, the number of columns must be less than the maximum number of values in each chunk. Decreasing this number will reduce heap usage when writing chunks.
int  16777215  [1, 268435455]
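For illustration: with the default of 16777215 (2^24 - 1) values per chunk, a hypothetical read-count matrix with 1,000,000 interval columns would be written in chunks of floor(16777215 / 1,000,000) = 16 rows each.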
--maximum-zeros-in-interval-percentage / NA
Genomic intervals with a fraction of zero-coverage samples above this percentage are filtered out. (This is the third filter applied.)
double  5.0  [0, 100]
--maximum-zeros-in-sample-percentage / NA
Samples with a fraction of zero-coverage genomic intervals above this percentage are filtered out. (This is the second filter applied.)
double  5.0  [0, 100]
--minimum-interval-median-percentile / NA
Genomic intervals with a median (across samples) of fractional coverage (optionally corrected for GC bias) less than or equal to this percentile are filtered out. (This is the first filter applied.)
double  10.0  [0, 100]
--number-of-eigensamples / NA
Number of eigensamples to use for truncated SVD and to store in the panel of normals. The number of samples retained after filtering will be used instead if it is smaller than this.
int  20  [0, ∞]
--output / -O
Output file for the panel of normals.
Required. File  null
--program-name / NA
Name of the program running
String null
--QUIET / NA
Whether to suppress job-summary info on System.err.
Boolean false
--showHidden / -showHidden
display hidden arguments
boolean false
--spark-master / NA
URL of the Spark Master to submit jobs to when using the Spark pipeline runner.
String local[*]
--spark-verbosity / NA
Spark verbosity. Overrides --verbosity for Spark-generated logs only. Possible values: {ALL, DEBUG, INFO, WARN, ERROR, FATAL, OFF, TRACE}
String null
--tmp-dir / NA
Temp directory to use.
GATKPathSpecifier null
--use-jdk-deflater / -jdk-deflater
Whether to use the JdkDeflater (as opposed to IntelDeflater)
boolean false
--use-jdk-inflater / -jdk-inflater
Whether to use the JdkInflater (as opposed to IntelInflater)
boolean false
--verbosity / -verbosity
Control verbosity of logging.
The --verbosity argument is an enumerated type (LogLevel), which can have one of the following values:
- ERROR
- WARNING
- INFO
- DEBUG
LogLevel INFO
--version / NA
display the version number for this tool
boolean false
GATK version 4.1.6.0-SNAPSHOT built at Thu, 2 Apr 2020 14:54:17 -0400.