This workflow is broken into two tutorials. You are currently on the first part.
The tutorial outlines steps in detecting copy ratio alterations, more familiarly known as copy number variants (CNVs), as well as allelic segments in a single sample using GATK4. The tutorial (i) denoises case sample alignment data against a panel of normals (PoN) to obtain copy ratios (Tutorial#11682, this page) and (ii) models segments from the copy ratios and allelic counts (Tutorial#11683). The latter modeling incorporates data from a matched control. The same workflow steps apply to targeted exome and whole genome sequencing data.
Tutorial#11682 covers sections 1–4. Section 1 prepares a genomic intervals list with PreprocessIntervals and collects read coverage counts across the intervals. Section 2 creates a CNV PoN with CreateReadCountPanelOfNormals using read coverage counts. Section 3 denoises read coverage data against the PoN with DenoiseReadCounts using principal component analysis. Section 4 plots the results of standardizing and denoising copy ratios against the PoN.
Tutorial#11683 covers sections 5–8. Section 5 collects counts of reference versus alternate alleles with CollectAllelicCounts. Section 6 incorporates copy ratio and allelic counts data to group contiguous copy ratio and allelic counts segments with ModelSegments using kernel segmentation and Markov-chain Monte Carlo. The tool can also segment either copy ratio data or allelic counts data alone. Both types of data together refine segmentation results in that segments are based on the same copy ratio and the same minor allele fraction. Section 7 calls amplification, deletion and neutral events for the segmented copy ratios. Finally, Section 8 plots the results of segmentation and estimated allele-specific copy ratios.
Plotting is across genomic loci on the x-axis and copy or allelic ratios on the y-axis. The first part of the workflow focuses on removing systematic noise from coverage counts and adjusts the data points vertically. The second part focuses on segmentation and groups the data points horizontally. The extent of grouping, or smoothing, is adjustable with ModelSegments parameters. These adjustments do not change the copy ratios; the denoising in the first part of the workflow remains invariant in the second part of the workflow. See Figure 3 of this poster for a summary of tutorial results.
► The official GATK4 workflow runs efficiently on WGS data and provides much greater resolution, up to ~50-fold more for tested data. In these ways, GATK4 CNV improves upon its predecessor workflows in GATK4.alpha and GATK4.beta. Validations are still in progress, so the workflow itself has BETA status, even though most of its tools are production ready. The exception, ModelSegments, is still in BETA status and may change in small but significant ways going forward. Use at your own risk.
► The tutorial skips explicit GC-correction, an option in CNV analysis. For instructions on how to correct for GC bias, see AnnotateIntervals and DenoiseReadCounts tool documentation.
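As an aside, here is a minimal sketch of the GC-correction route, assuming the intervals and reference prepared in section 1; the annotated-intervals output name is illustrative.

gatk AnnotateIntervals \
    -L targets_C.preprocessed.interval_list \
    -R /gatk/ref/Homo_sapiens_assembly38.fasta \
    --interval-merging-rule OVERLAPPING_ONLY \
    -O sandbox/targets_C.annotated.tsv

The resulting annotations are then passed to DenoiseReadCounts with --annotated-intervals, as noted in section 3.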
The GATK4 CNV workflow offers a multitude of levers for fine-tuning analyses and for setting up controls. Researchers are expected to tune workflow parameters on samples with copy number profiles similar to that of the case sample under scrutiny. Refer to each tool's documentation for descriptions of parameters.
Jump to a section
- 1. Collect raw counts data with PreprocessIntervals and CollectFragmentCounts
  ☞ 1.1 How do I view HDF5 format data?
- 2. Generate a CNV panel of normals with CreateReadCountPanelOfNormals
- 3. Standardize and denoise case read counts against the PoN with DenoiseReadCounts
- 4. Plot standardized and denoised copy ratios with PlotDenoisedCopyRatios
  ☞ 4.1 Compare two PoNs: considerations in panel of normals creation
  ☞ 4.2 Compare PoN denoising versus matched-normal denoising
- 5. Count ref and alt alleles at common germline variant sites using CollectAllelicCounts
  ☞ 5.1 What is the difference between CollectAllelicCounts and GetPileupSummaries?
- 6. Group contiguous copy ratios into segments with ModelSegments
- 7. Call copy-neutral, amplified and deleted segments with CallCopyRatioSegments
- 8. Plot modeled segments and allelic copy ratios with PlotModeledSegments
  ☞ 8.1 Some considerations in interpreting allelic copy ratios
  ☞ 8.2 Some results of fine-tuning smoothing parameters
Tools involved
- GATK 4.0.1.1 or later releases.
- The plotting tools require particular R components. Options are to install these or to use the broadinstitute/gatk Docker. In particular, to match versions, use the broadinstitute/gatk:4.0.1.1 version.
- Install R v3.2.5 or above from https://www.r-project.org/, then install the components using the install_R_packages.R script with Rscript install_R_packages.R.
- Alternatively, run the plotting tools from a GATK4 Docker container following instructions in Article#11090.
Download example data
Download tutorial11682.tar.gz and tutorial11683.tar.gz, either from the GoogleDrive or from the FTP site. To access the FTP site, leave the password field blank. If the GoogleDrive link is broken, please let us know. The tutorial also requires the GRCh38 reference FASTA, dictionary and index. These are available from the GATK Resource Bundle. For details on the example data, see Tutorial#11136's third footnote and [1].
Alternatively, download the spacecade7/tutorial_11682_11683 Docker image from DockerHub:

docker pull spacecade7/tutorial_11682_11683:gatk4.0.1.1

The image contains GATK4.0.1.1 and the data necessary to run the tutorial commands, including the GRCh38 reference. Allocating at least 4GB of memory to Docker is recommended before launching the container.
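Assuming a default Docker setup, a command along the following lines launches an interactive container from the image; Article#11090 describes additional options such as mounting local directories.

docker run -it spacecade7/tutorial_11682_11683:gatk4.0.1.1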
1. Collect raw counts data with PreprocessIntervals and CollectFragmentCounts
Before collecting the coverage counts that form the basis of copy number variant detection, we define the resolution of the analysis with a genomic intervals list. The extent of genomic coverage and the size of the genomic intervals in the list factor into the resolution.
Preparing a genomic intervals list is necessary whether an analysis is on targeted exome data or whole genome data. In the case of exome data, we pad the target regions of the capture kit. In the case of whole genome data, we divide the reference genome into equally sized intervals or bins. In either case, we use PreprocessIntervals to prepare the intervals list.
For the tutorial exome data, we provide the capture kit target regions in 1-based intervals and set --bin-length to zero.

gatk PreprocessIntervals \
    -L targets_C.interval_list \
    -R /gatk/ref/Homo_sapiens_assembly38.fasta \
    --bin-length 0 \
    --interval-merging-rule OVERLAPPING_ONLY \
    -O sandbox/targets_C.preprocessed.interval_list
This produces a Picard-style intervals list targets_C.preprocessed.interval_list for use in the coverage collection step. Each interval is expanded by 250 bases on either side.
Comments on select parameters
- The -L argument is optional. If provided, the tool expects the intervals list to be in Picard-style as described in this article. The tool errs for other formats. If this argument is omitted, then the tool assumes each contig is a single interval. See [2] for additional discussion.
- Set the --bin-length argument to be appropriate for the type of data, e.g. default 1000 for whole genome or 0 for exomes (a WGS sketch follows this list). In binning, an interval is divided into equal-sized regions of the specified length. The tool does not bin regions that contain Ns. [3]
- Set --interval-merging-rule to OVERLAPPING_ONLY, to prevent the tool from merging abutting intervals. [4]
- The --reference or -R is required and implies the presence of a corresponding reference index and a reference dictionary in the same directory.
- To change the padding interval, specify the new value with --padding. The default value of 250 bases was determined to work well empirically for TCGA targeted exome data. This argument is relevant for exome data, as binning without an intervals list does not allow for intervals expansion. [5]
Take a look at the intervals before and after padding.
For consecutive intervals less than 250 bases apart, how does the tool pad the intervals?
Now collect raw integer counts data. The tutorial uses GATK4.0.1.1's CollectFragmentCounts, which counts coverage of paired-end fragments. The tool counts each fragment once, assigning it to the interval that overlaps the fragment's center. In GATK4.0.3.0, CollectReadCounts replaces CollectFragmentCounts; CollectReadCounts counts reads that overlap the interval.
The tutorial has already collected coverage on the tumor case sample, on the normal matched-control and on each of the normal samples that constitute the PoN. To demonstrate coverage collection, the following command uses the small BAM from Tutorial#11136’s data bundle [6]. The tutorial does not use the resulting file in subsequent steps. The CollectReadCounts command swaps out the tool name but otherwise uses identical parameters.
gatk CollectFragmentCounts \
    -I tumor.bam \
    -L targets_C.preprocessed.interval_list \
    --interval-merging-rule OVERLAPPING_ONLY \
    -O sandbox/tumor.counts.hdf5
In the tutorial data bundle, the equivalent full-length result is hcc1143_T_clean.counts.hdf5. The data tabulates CONTIG, START, END and raw COUNT values for each genomic interval.
Comments on select parameters
- The -L argument interval list is a Picard-style interval list prepared with PreprocessIntervals.
- The -I input is alignment data.
- By default, data is in HDF5 format. To generate text-based TSV (tab-separated values) format data, specify --format TSV (a TSV sketch follows this list). The HDF5 format allows for quicker panel of normals creation.
- Set --interval-merging-rule to OVERLAPPING_ONLY, to prevent the tool from merging abutting intervals. [4]
- The tool employs a number of engine-level read filters. Of note are NotDuplicateReadFilter, FirstOfPairReadFilter, ProperlyPairedReadFilter and MappingQualityReadFilter. [7]
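For reference, the TSV variant of the collection command differs only in the format flag and the output name (illustrative):

gatk CollectFragmentCounts \
    -I tumor.bam \
    -L targets_C.preprocessed.interval_list \
    --interval-merging-rule OVERLAPPING_ONLY \
    --format TSV \
    -O sandbox/tumor.counts.tsv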
☞ 1.1 How do I view HDF5 format data?
See Article#11508 for an overview of the format and instructions on how to navigate the data with external application HDFView. The article illustrates features of the format using data generated in this tutorial.
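Alternatively, if the standalone HDF5 command-line tools happen to be installed, a quick terminal inspection is possible; this is an aside, not a tutorial step.

h5dump -n hcc1143_T_clean.counts.hdf5    # -n lists the datasets the file contains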
2. Generate a CNV panel of normals with CreateReadCountPanelOfNormals
In creating a PoN, CreateReadCountPanelOfNormals abstracts the counts data for the samples and the intervals using Singular Value Decomposition (SVD), a type of Principal Component Analysis (PCA). The normal samples in the PoN should match the sequencing approach of the case sample under scrutiny. This applies especially to targeted exome data because the capture step introduces target-specific noise.
The tutorial has already created a CNV panel of normals using forty 1000 Genomes Project samples. The command below illustrates PoN creation using just three samples.
gatk --java-options "-Xmx6500m" CreateReadCountPanelOfNormals \
    -I HG00133.alt_bwamem_GRCh38DH.20150826.GBR.exome.counts.hdf5 \
    -I HG00733.alt_bwamem_GRCh38DH.20150826.PUR.exome.counts.hdf5 \
    -I NA19654.alt_bwamem_GRCh38DH.20150826.MXL.exome.counts.hdf5 \
    --minimum-interval-median-percentile 5.0 \
    -O sandbox/cnvponC.pon.hdf5
This generates a PoN in HDF5 format. The PoN stores information that, when applied, will (i) standardize case sample counts to PoN median counts and (ii) remove systematic noise in the case sample.
Comments on select parameters
- Provide integer read coverage counts for each sample using -I. Coverage data may be in either TSV or HDF5 format. The tool will accept a single sample, e.g. the matched-normal.
- The default --number-of-eigensamples or principal components is twenty. The tool will adjust this number to the smaller of twenty or the number of samples the tool retains after filtering. In general, denoising against a PoN with more components improves segmentation, but at the expense of sensitivity. Ideally, researchers should perform a sensitivity analysis to choose an appropriate value for this parameter.
- To run the tool using Spark, specify the Spark Master with --spark-master, as sketched below. See Article#11245 for details.
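For example, a local Spark run might look like the following; local[4], which requests four worker threads, is an illustrative value.

gatk --java-options "-Xmx6500m" CreateReadCountPanelOfNormals \
    -I HG00133.alt_bwamem_GRCh38DH.20150826.GBR.exome.counts.hdf5 \
    -I HG00733.alt_bwamem_GRCh38DH.20150826.PUR.exome.counts.hdf5 \
    -I NA19654.alt_bwamem_GRCh38DH.20150826.MXL.exome.counts.hdf5 \
    --minimum-interval-median-percentile 5.0 \
    --spark-master local[4] \
    -O sandbox/cnvponC.pon.hdf5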
Comments on filtering and imputation parameters, in the order of application
- The tutorial changes the --minimum-interval-median-percentile argument from the default of 10.0 to a smaller value of 5.0. The tool filters out targets or bins with a median proportional coverage below this percentile. The median is across the samples. The proportional coverage is the target coverage divided by the sum of the coverage of all targets for a sample. The effect of setting this parameter to a smaller value is that we retain more information.
- The --maximum-zeros-in-sample-percentage default is 5.0. Any sample with more than 5% zero coverage targets is filtered.
- The --maximum-zeros-in-interval-percentage default is 5.0. Any target interval with more than 5% zero coverage across samples is filtered.
- The --extreme-sample-median-percentile default is 2.5. Any sample whose normalized median proportional coverage falls below the 2.5 percentile or above the 97.5 percentile is filtered.
- The --do-impute-zeros default is set to true. The tool takes zero coverage regions and changes these values to the median of the non-zero values. The tool additionally normalizes zero values below the 0.10 percentile or above the 99.90 percentile to the corresponding percentile value.
- The --extreme-outlier-truncation-percentile default is 0.1. The tool takes any proportional coverage below the 0.1 percentile or above the 99.9 percentile and sets it to the corresponding percentile value.
The current filtering and imputation parameters are identical to those in the BETA release of the CNV workflow and may change in later versions based on evaluations. The implementation has been made more memory efficient, so the tool runs faster than the BETA release.
If the data are not uniform, e.g. have many intervals with zero or low counts, the tool warns that filtering parameters should be adjusted and stops the run. This may happen, for example, if one attempts to construct a panel of mixed-sex samples and includes the allosomal contigs [8]. In this case, first be sure to either exclude allosomal contigs via a subset intervals list or subset the panel samples to those expected to have similar coverage across the given contigs, e.g. panels of the same sex. If the warning still occurs, then adjust --minimum-interval-median-percentile to a larger value. See this thread for the original discussion.
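One way to exclude allosomal contigs, assuming GRCh38 contig names, is at the interval-preparation step using GATK's standard -XL exclusion argument, after which counts would be re-collected on the subset intervals; the output name is illustrative.

gatk PreprocessIntervals \
    -L targets_C.interval_list \
    -XL chrX \
    -XL chrY \
    -R /gatk/ref/Homo_sapiens_assembly38.fasta \
    --bin-length 0 \
    --interval-merging-rule OVERLAPPING_ONLY \
    -O sandbox/targets_C.autosomal.preprocessed.interval_list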
Based on what you know about PCA, what do you think are the effects of using more normal samples? A panel with some profiles that are outliers? Could PCA account for GC-bias? What do you know about the 1000 Genome Project? Specifically, the exome data? How could we tell a good PoN from a bad PoN? What control could we use?
In a somatic analysis, what is better for a PoN: tissue-matched normals or blood normals? Should we include our particular tumor’s matched normal in the PoN?
3. Standardize and denoise case read counts against the PoN with DenoiseReadCounts
Provide DenoiseReadCounts with counts collected by CollectFragmentCounts and the CNV PoN generated with CreateReadCountPanelOfNormals.
gatk --java-options "-Xmx12g" DenoiseReadCounts \
    -I hcc1143_T_clean.counts.hdf5 \
    --count-panel-of-normals cnvponC.pon.hdf5 \
    --standardized-copy-ratios sandbox/hcc1143_T_clean.standardizedCR.tsv \
    --denoised-copy-ratios sandbox/hcc1143_T_clean.denoisedCR.tsv
This produces two files, the standardized copy ratios hcc1143_T_clean.standardizedCR.tsv and the denoised copy ratios hcc1143_T_clean.denoisedCR.tsv, each of which represents a data transformation. In the first transformation, the tool standardizes counts by the PoN median counts. The standardization includes log2 transformation and normalizing the counts data to center around one. In the second transformation, the tool denoises the standardized copy ratios using the principal components of the PoN.
Comments on select parameters
- Because the default --number-of-eigensamples is null, the tool uses the maximum number of eigensamples available in the PoN. In section 2, by using default CreateReadCountPanelOfNormals parameters, we capped the number of eigensamples in the PoN to twenty. Changing --number-of-eigensamples in DenoiseReadCounts to lower values can change the resolution of results, i.e. how smooth segments are (a sketch follows this list).
- Additionally provide the optional --annotated-intervals generated by AnnotateIntervals to concurrently perform GC-bias correction.
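For instance, to probe the effect of fewer eigensamples, one might rerun denoising as follows; the value of 5 and the output names are illustrative.

gatk --java-options "-Xmx12g" DenoiseReadCounts \
    -I hcc1143_T_clean.counts.hdf5 \
    --count-panel-of-normals cnvponC.pon.hdf5 \
    --number-of-eigensamples 5 \
    --standardized-copy-ratios sandbox/hcc1143_T_clean.eigen5.standardizedCR.tsv \
    --denoised-copy-ratios sandbox/hcc1143_T_clean.eigen5.denoisedCR.tsv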
4. Plot standardized and denoised copy ratios with PlotDenoisedCopyRatios
We plot the standardized and denoised read counts with PlotDenoisedCopyRatios. The plots allow visually assessing the efficacy of denoising. Provide the tool with both the standardized and denoised copy ratios from the previous step as well as a reference sequence dictionary.
gatk PlotDenoisedCopyRatios \
    --standardized-copy-ratios hcc1143_T_clean.standardizedCR.tsv \
    --denoised-copy-ratios hcc1143_T_clean.denoisedCR.tsv \
    --sequence-dictionary Homo_sapiens_assembly38.dict \
    --minimum-contig-length 46709983 \
    --output sandbox/plots \
    --output-prefix hcc1143_T_clean
This produces six files in the plots directory: two PNG images and four text files, as follows.

- hcc1143_T_clean.denoised.png plots the standardized and denoised read counts across the contigs and scales the y-axis to accommodate all copy ratio data.
- hcc1143_T_clean.denoisedLimit4.png plots the same but limits the y-axis range from 0 to 4 for comparability across samples.
Each of the text files contains a single quality control value. The value is the median of absolute differences (MAD) in copy ratios of adjacent targets. Its calculation is robust to actual copy-number events, and the value should decrease after denoising.
- hcc1143_T_clean.standardizedMAD.txt gives the MAD for standardized copy ratios.
- hcc1143_T_clean.denoisedMAD.txt gives the MAD for denoised copy ratios.
- hcc1143_T_clean.deltaMAD.txt gives the difference between standardized MAD and denoised MAD.
- hcc1143_T_clean.scaledDeltaMAD.txt gives the fractional difference (standardized MAD - denoised MAD)/(standardized MAD).
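Since each file holds a single value, a quick shell loop suffices to review all four at once; the paths assume the plotting command above.

for f in sandbox/plots/hcc1143_T_clean.*MAD.txt; do
    echo "${f}:"; cat "${f}"
done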
Comments on select parameters
- The tutorial provides the --sequence-dictionary that matches the GRCh38 reference used in mapping.
- To omit alternate and decoy contigs from the plots, the tutorial adjusts the --minimum-contig-length from the default value of 1,000,000 to 46,709,983, the length of the smallest of GRCh38's primary assembly contigs.
Here are the results for the HCC1143 tumor cell line and its matched normal cell line. The normal cell line serves as a control. For each sample, two plots show the effects of PCA denoising: the upper plot shows standardized copy ratios in blue and the lower plot shows denoised copy ratios in green.
4A. Tumor standardized and denoised copy ratio plots
4B. Normal standardized and denoised copy ratio plots
Would you guess there are CNV events in the normal? Should we be surprised?
The next step is to perform segmentation, either using copy ratios alone or in combination with allelic copy ratios. In part II, section 6 outlines considerations in modeling segments with allelic copy ratios, section 7 generates a callset and section 8 shows how to plot segmented copy and allelic ratios. Again, the tutorial presents these steps using the full features of the workflow. However, researchers may wish to perform copy ratio segmentation independently of allelic counts data, e.g. for a case without a matched-control. For the case-only approach, segmentation gives the following plots. To reproduce this approach, omit the allelic-counts parameters from the example commands in sections 6 and 8.
4C. Tumor case-only copy ratios segmentation gives 235 segments.
4D. Normal case-only copy ratios segmentation gives 41 segments.
While the normal sample shows trisomy of chr2 and a subpopulation with deletion of chr6, the tumor sample is highly aberrant. The extent of aneuploidy is unsurprising and consistent with these HCC1143 tumor dSKY results by Wenhan Chen. Remember that cell lines, with increasing culture time and selective bottlenecks, can give rise to new somatic events, undergo clonal selection and develop population heterogeneity much like in cancer.
☞ 4.1 Compare two PoNs: considerations in panel of normals creation
Denoising with a PoN is critical for calling copy-number variants from targeted exome coverage profiles. It can also improve calls from WGS profiles that are typically more evenly distributed and subject to less noise. Furthermore, denoising with a PoN can greatly impact results for (i) samples that have more noise, e.g. those with lower coverage, lower purity or higher activity, (ii) samples lacking a matched normal and (iii) detection of smaller events that span only a few targets.
To understand the impact a PoN's constituents can have on an analysis, compare the results of denoising the normal sample against two different PoNs. Each PoN consists of forty 1000 Genomes Project exome samples. PoN-M consists of the same cohort used in the Mutect2 tutorial's PoN. We selected PoN-C's constituents with more care and this is the PoN the CNV tutorial uses.
4E. Compare standardization and denoising with PoN-C versus PoN-M.
What is the difference in the targets for the two cohorts--cohort-M and cohort-C? Is this a sufficient reason for the difference in noise profiles we observe above?
GATK4 denoises exome coverage profiles robustly with either panel of normals. However, a good panel allows maximal denoising, as is the case for PoN-C over PoN-M.
We use publicly available 1000 Genomes Project data so as to be able to share the data and to illustrate considerations in CNV analyses. In an actual somatic analysis, we would construct the PoNs using the blood normals of the tumor cohort(s). We would construct a PoN for each sex, so as to be able to call events on allosomal chromosomes. Such a PoN should give better results than either of the tutorial PoNs.
Somatic analyses, due to the confounding factors of tumor purity and heterogeneity, require high sensitivity in calling. However, a sensitive caller can only do so much. Use of a carefully constructed PoN augments the sensitivity and helps illuminate copy number events.
This section is adapted from a hands-on tutorial developed and written by Soo Hee Lee in July of 2017 for the GATK workshops in Cambridge and Edinburgh, UK. The original tutorial uses the GATK4.beta workflow and can be found in the 1707 through 1711 GATK workshops folders. Although the Somatic CNV workflow has changed from GATK4.beta and the official GATK4 release, the PCA denoising remains the same. The hands-on tutorial focuses on differences in PCA denoising based on two different panels of normals (PoNs). Researchers may find working through the worksheet to the very end with either release version beneficial, as considerations in selecting PoN constituents remain identical.
Examining the read group information for the samples in the two PoNs shows a difference in mixtures of sequencing centers--four different sequencing centers for PoN-M versus a single sequencing center for PoN-C. The single sequencing center corresponds to that of the HCC1143 samples. Furthermore, tracing sample information will show different targeted exome capture kits for the sequencing centers. Comparing the denoising results of the two PoNs stresses the importance of selective PoN creation.
☞ 4.2 Compare PoN denoising versus matched-normal denoising
A feature of the GATK4 CNV workflow is the ability to normalize a case against a single control sample, e.g. a tumor case against its matched normal. This involves running the control sample through CreateReadCountPanelOfNormals, then denoising the case against this single-sample projection with DenoiseReadCounts. To illustrate this approach, here is the result of denoising the HCC1143 tumor sample against its matched normal. For single-sample matched-control denoising, DenoiseReadCounts produces identical data for standardizedCR.tsv and denoisedCR.tsv.
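A minimal sketch of the two commands involved; the matched-normal counts file name here is an assumption, so adjust it to the bundle's actual file.

gatk CreateReadCountPanelOfNormals \
    -I hcc1143_N_clean.counts.hdf5 \
    -O sandbox/matched_normal.pon.hdf5

gatk DenoiseReadCounts \
    -I hcc1143_T_clean.counts.hdf5 \
    --count-panel-of-normals sandbox/matched_normal.pon.hdf5 \
    --standardized-copy-ratios sandbox/hcc1143_T_clean.matched.standardizedCR.tsv \
    --denoised-copy-ratios sandbox/hcc1143_T_clean.matched.denoisedCR.tsv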
4F. Tumor case standardized against the normal matched-control
Compare these results to those of section 4.1. Notice the depression in chr2 copy ratios that occurs due to the PoN normal sample's chr2 trisomy. Here, the median absolute deviation (MAD) of 0.149 is an incremental improvement over section 4.1's PoN-M denoising (MAD=0.15). In contrast, PoN-C denoising (MAD=0.125) and even PoN-C standardization alone (MAD=0.134) are seemingly better normalization approaches than the matched-normal standardization. Again, the results stress the importance of selective PoN creation.
The PoN accounts for germline CNVs common to its constituents such that the workflow discounts the same variation in the case. It is possible for the workflow to detect germline CNVs not represented in the PoN, in particular, rare germline CNVs. In the case of matched-normal standardization, the workflow should discount germline CNVs and reveal only somatic events.
The workflow does not support iteratively denoising two samples each against a PoN and then against each other.
The tutorial continues in a second document at #11683.
Footnotes
[1] The constituents of the forty-sample CNV panel of normals differ from those of the Mutect2 panel of normals. Preliminary CNV data was generated with v4.0.1.1 somatic CNV WDL scripts run locally on a Gcloud Compute Engine VM with Cromwell v30.2. Additional refinements were performed on a 16GB MacBook Pro laptop. Additional plots were generated using a broadinstitute/gatk:4.0.1.1 Docker container. Note the v4.0.1.1 WDL script does not allow custom sequence dictionaries for the plotting steps.
- Case (HCC1143) and matched control (HCC1143BL) sample data are based on a breast cancer cell line and its matched normal cell line derived from blood, respectively. The Broad Cancer Genome Analysis (CGA) group has graciously provided 2x76 paired-end whole exome sequence data from the two cell lines (C835.HCC11432 and C835.HCC1143_BL.4), and Soo Hee Lee reverted these, then realigned them to GRCh38 and preprocessed them according to GATK guidelines.
- We express our gratitude to the 1000 Genomes Project for their publicly available project data, from which Soo Hee Lee constructed the tutorial panel of normals. Read about the project at:
A global reference for human genetic variation, The 1000 Genomes Project Consortium, Nature 526, 68-74 (01 October 2015) doi:10.1038/nature15393.
[2] Considerations in genomic intervals are as follows.
- For targeted exomes, the intervals should represent the bait capture or target capture regions.
- For whole genomes, either supply regions where coverage is expected across samples, e.g. regions that exclude alternate haplotypes and decoy regions in GRCh38, or omit the option entirely for references where coverage is expected across the entirety of the reference.
- For either type of data, expect to modify the intervals depending on (i) extent of masking in the reference used in read mapping and (ii) expectations in coverage on allosomal contigs. For example, for mammalian data, expect to remove Y chromosome intervals for female samples.
[3] See original discussion on bin size here. The bin size determines the resolution of CNV breakpoints. The theoretical limit depends on coverage depth and the insert-size distribution. Typically bin sizes on the order of the read length will give reasonable results. The GATK developers have tested WGS runs where the bin size is as small as 250 bases.
[4] Set --interval-merging-rule to OVERLAPPING_ONLY, to prevent the tool from merging abutting intervals. The default is set to ALL for GATK4.0.1.1. For future versions, the default will be set to OVERLAPPING_ONLY.
[5] The tool allows specifying both the padding and the binning arguments simultaneously. If exome targets are very long, it may be preferable to both pad and break up the intervals with binning. This may provide some additional resolution.
[6] The data bundle from Tutorial#11136 contains tumor.bam and normal.bam. These tumor and normal samples are identical to those in the current tutorial and represent a subset of the full data for the following regions:
chr6                   29941013   29946495   +
chr11                  915890     1133890    +
chr17                  1          83257441   +
chr11_KI270927v1_alt   1          218612     +
HLA-A*24:03:01         1          3502       +
[7] The following comments regarding read filters may be of interest and apply to the workflow illustrated in the tutorial, which uses CollectFragmentCounts.
- In contrast to prior versions of the workflow, the GATK4 CNV workflow excludes duplicate fragments from consideration with the NotDuplicateReadFilter. To instead include duplicate fragments, specify -DF NotDuplicateReadFilter.
- The tool only considers paired-end reads (0x1 SAM flag) and the first of pair (0x40 flag) with the FirstOfPairReadFilter. The tool uses the first-of-pair read's mapping information for the fragment center.
- The tool only considers properly paired reads (0x2 SAM flag) using the ProperlyPairedReadFilter. Depending on whether and how data was preprocessed with MergeBamAlignment, proper pair assignments can differ from those given by the aligner. This filter also removes single-ended reads.
- The MappingQualityReadFilter sets a threshold for alignment MAPQ. The tool sets --minimum-mapping-quality to 30. Thus, the tool uses reads with a MAPQ of 30 or higher.
[8] The current tool version requires strategizing denoising of allosomal chromosomes, e.g. X and Y in humans, against the panel of normals. This is because coverage will vary for these regions depending on the sex of the sample. To determine the sex of samples, analyze them with DetermineGermlineContigPloidy. Aneuploidy in allosomal chromosomes, much like trisomy, can still make for viable organisms and so phenotypic sex designations are insufficient. GermlineCNVCaller can account for differential sex in data.
Many thanks to Samuel Lee for his patient explanations of the workflow parameters and mathematics behind the workflow.
3 comments
Thank you very much for the tutorial!
Is there some setting for maximum length? When I am doing the PlotDenoisedCopyRatios step I keep getting this error: "18:43:37.919 INFO PlotDenoisedCopyRatios - Contigs above length threshold:". I can't find any information on what this means or how I can change it. The lengths match the .dict file.
Can someone apply the pipeline on shallow WGS? Thank you!