# bettercallsal

`bettercallsal` is an automated workflow to assign Salmonella serotypes based on the [NCBI Pathogens Database](https://www.ncbi.nlm.nih.gov/pathogens). It uses `MASH` to reduce the search space, followed by additional genome filtering with `sourmash`. It then performs genome-based alignment with `kma`, followed by count generation using `salmon`. This workflow is especially useful when a sample contains a multi-serovar mixture.

\
&nbsp;

<!-- TOC -->

- [Minimum Requirements](#minimum-requirements)
- [CFSAN GalaxyTrakr](#cfsan-galaxytrakr)
- [Usage and Examples](#usage-and-examples)
  - [Database](#database)
  - [Input](#input)
  - [Output](#output)
  - [Computational resources](#computational-resources)
  - [Runtime profiles](#runtime-profiles)
  - [your_institution.config](#your_institutionconfig)
  - [Cloud computing](#cloud-computing)
  - [Example data](#example-data)
- [Using sourmash](#using-sourmash)
- [bettercallsal CLI Help](#bettercallsal-cli-help)

<!-- /TOC -->

\
&nbsp;

## Minimum Requirements

1. [Nextflow version 23.04.3](https://github.com/nextflow-io/nextflow/releases/download/v23.04.3/nextflow).
    - Make the `nextflow` binary executable (`chmod 755 nextflow`) and make sure it is available in your `$PATH` (see the sketch after this list).
    - If your existing `JAVA` installation does not support the newest **Nextflow** version, you can try **Amazon**'s `JAVA` (OpenJDK): [Corretto](https://corretto.aws/downloads/latest/amazon-corretto-17-x64-linux-jdk.tar.gz).
2. Either `micromamba` (version `1.0.0`), `docker`, or `singularity` installed and available in your `$PATH`.
    - Running the workflow via `micromamba` software provisioning is **preferred**, as it requires neither `sudo`/`admin` privileges nor any configuration of the various container providers.
    - To install `micromamba` for your system type, please follow these [installation steps](https://mamba.readthedocs.io/en/latest/micromamba-installation.html#manual-installation) and make sure that the `micromamba` binary is available in your `$PATH`.
    - The `curl` step alone is sufficient to download the binary as far as running the workflows is concerned.
    - Once you have finished the installation, **it is important that you downgrade `micromamba` to version `1.0.0`**.

    ```bash
    micromamba self-update --version 1.0.0
    ```

3. A minimum of 10 CPU cores and about 16 GB of memory for the main workflow steps. More memory may be required if your **FASTQ** files are big.
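
A minimal setup sketch for step 1 above (not part of the original instructions; the install location `$HOME/bin` is only an example, so adjust it to your environment):

```bash
# Make the downloaded nextflow binary executable and place it on $PATH.
chmod 755 nextflow
mkdir -p "$HOME/bin"
mv nextflow "$HOME/bin/"
export PATH="$HOME/bin:$PATH"

# Verify that the tools resolve from $PATH (micromamba only if you chose that route).
nextflow -version
micromamba --version
```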

\
&nbsp;

## CFSAN GalaxyTrakr

The `bettercallsal` pipeline is also available for use on the [Galaxy instance supported by CFSAN, FDA](https://galaxytrakr.org/). If you wish to run the analysis using **Galaxy**, please register for an account, after which you can run the workflow on some test data by following the instructions [from this PDF](https://research.foodsafetyrisk.org/bettercallsal/galaxytrakr/bettercallsal_on_cfsan_galaxytrakr.pdf).

Please note that the pipeline on [CFSAN GalaxyTrakr](https://galaxytrakr.org) may, in most cases, be a version behind the one on **GitHub** due to testing prioritization.

\
&nbsp;

## Usage and Examples

Clone or download this repository and then call `cpipes`.

```bash
cpipes --pipeline bettercallsal [options]
```

Alternatively, you can use `nextflow` to directly pull and run the pipeline.

```bash
nextflow pull CFSAN-Biostatistics/bettercallsal
nextflow list
nextflow info CFSAN-Biostatistics/bettercallsal
nextflow run CFSAN-Biostatistics/bettercallsal --pipeline bettercallsal_db --help
nextflow run CFSAN-Biostatistics/bettercallsal --pipeline bettercallsal --help
```

\
&nbsp;

**Example**: Run the default `bettercallsal` pipeline in single-end mode.

```bash
cd /data/scratch/$USER
mkdir nf-cpipes
cd nf-cpipes
cpipes \
--pipeline bettercallsal \
--input /path/to/illumina/fastq/dir \
--output /path/to/output \
--bcs_root_dbdir /data/Kranti_Konganti/bettercallsal_db/PDG000000002.2876
```

\
&nbsp;

**Example**: Run the `bettercallsal` pipeline in paired-end mode. In this mode, the `R1` and `R2` files are concatenated. We have found that concatenating reads yields better calling rates. Please refer to the **Methods** and **Results** sections of our [paper](https://www.frontiersin.org/articles/10.3389/fmicb.2023.1200983/full) for more information. Users can still choose to use `bbmerge.sh` by adding the following options on the command line: `--bbmerge_run true --bcs_concat_pe false`.

```bash
cd /data/scratch/$USER
mkdir nf-cpipes
cd nf-cpipes
cpipes \
--pipeline bettercallsal \
--input /path/to/illumina/fastq/dir \
--output /path/to/output \
--bcs_root_dbdir /data/Kranti_Konganti/bettercallsal_db/PDG000000002.2876 \
--fq_single_end false \
--fq_suffix '_R1_001.fastq.gz'
```

\
&nbsp;

### Database

---

A successful run of the workflow requires certain database flat files specific to the workflow.

Please refer to the `bettercallsal_db` [README](./bettercallsal_db.md) if you would like to run the workflow on the latest version of the **PDG** release.

&nbsp;

### Input

---

The input to the workflow is a folder containing compressed (`.gz`) FASTQ files. Please note that samples are grouped automatically by FASTQ file name. If, for example, a single sample is sequenced across multiple sequencing lanes, you can group those FASTQ files into one sample by using the `--fq_filename_delim` and `--fq_filename_delim_idx` options. By default, `--fq_filename_delim` is set to `_` (underscore) and `--fq_filename_delim_idx` is set to `1`.

For example, if the directory contains FASTQ files as shown below:

- KB-01_apple_L001_R1.fastq.gz
- KB-01_apple_L001_R2.fastq.gz
- KB-01_apple_L002_R1.fastq.gz
- KB-01_apple_L002_R2.fastq.gz
- KB-02_mango_L001_R1.fastq.gz
- KB-02_mango_L001_R2.fastq.gz
- KB-02_mango_L002_R1.fastq.gz
- KB-02_mango_L002_R2.fastq.gz

Then, to create 2 sample groups, `apple` and `mango`, we split the file name by the delimiter (underscore in this case, which is the default) and group by the first 2 fields (`--fq_filename_delim_idx 2`).
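
A minimal illustration of that grouping logic (this is not part of the pipeline itself; it just mimics how the delimiter and field index derive sample names):

```bash
# With --fq_filename_delim '_' and --fq_filename_delim_idx 2, the first two
# underscore-separated fields of each FASTQ file name become its sample name.
for f in KB-01_apple_L001_R1.fastq.gz KB-02_mango_L002_R2.fastq.gz; do
    echo "$f -> $(echo "$f" | cut -d '_' -f 1-2)"
done
# KB-01_apple_L001_R1.fastq.gz -> KB-01_apple
# KB-02_mango_L002_R2.fastq.gz -> KB-02_mango
```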

It goes without saying that all the FASTQ files should follow a uniform naming pattern so that the `--fq_filename_delim` and `--fq_filename_delim_idx` options do not have any adverse effect on collecting and creating the sample metadata sheet.

\
&nbsp;

### Output

---

All the outputs for each step are stored inside the folder specified with the `--output` option. The `multiqc_report.html` file inside the `bettercallsal-multiqc` folder contains a brief consolidated report and can be opened in any browser on your local workstation.
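
For example, one way to open the report from a Linux workstation (a sketch only; the exact path depends on your `--output` location, and on macOS you would use `open` instead of `xdg-open`):

```bash
# Assumes the pipeline was run with --output /path/to/output
xdg-open /path/to/output/bettercallsal-multiqc/multiqc_report.html
```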

\
&nbsp;

### Computational resources

---

The `bettercallsal` workflow requires a minimum of 16 GB of memory to finish successfully. By default, `bettercallsal` uses 10 CPU cores where possible. You can change this behavior and adjust the number of CPU cores with the `--max_cpus` option.

\
&nbsp;

Example:

```bash
cpipes \
--pipeline bettercallsal \
--input /path/to/bettercallsal_sim_reads \
--output /path/to/bettercallsal_sim_reads_output \
--bcs_root_dbdir /path/to/PDG000000002.2876 \
--kmaalign_ignorequals \
--max_cpus 5 \
-profile stdkondagac \
-resume
```

\
&nbsp;

### Runtime profiles

---

You can use different runtime profiles that suit your specific compute environment, i.e., you can run the workflow locally on your machine or on a grid computing infrastructure.

\
&nbsp;

Example:

```bash
cd /data/scratch/$USER
mkdir nf-cpipes
cd nf-cpipes
cpipes \
--pipeline bettercallsal \
--input /path/to/fastq_pass_dir \
--output /path/to/where/output/should/go \
-profile your_institution
```

The above command runs the pipeline and stores the output at the location given by the `--output` flag, while the **NEXTFLOW** reports are always stored in the current working directory from which `cpipes` is run. For example, for the above command, a directory called `CPIPES-bettercallsal` would hold all the **NEXTFLOW**-related logs, reports, and trace files.

\
&nbsp;

### `your_institution.config`

---

In the above example, the runtime profile is given as `your_institution`. For this to work, add the following lines at the end of the [`computeinfra.config`](../conf/computeinfra.config) file, which is located inside the `conf` folder. For example, if your institution uses **SGE** or **UNIVA** for grid computing instead of **SLURM** and has a job queue named `normal.q`, then add these lines:

\
&nbsp;

```groovy
your_institution {
    process.executor = 'sge'
    process.queue = 'normal.q'
    singularity.enabled = false
    singularity.autoMounts = true
    docker.enabled = false
    params.enable_conda = true
    conda.enabled = true
    conda.useMicromamba = true
    params.enable_module = false
}
```

In the above example, all the software provisioning choices are disabled except `conda`. You can also choose to remove the `process.queue` line altogether, and the `bettercallsal` workflow will request the appropriate memory and number of CPU cores automatically, ranging from 1 CPU core, 1 GB of memory, and 1 hour of walltime up to 10 CPU cores, 1 TB of memory, and 120 hours of walltime.

\
&nbsp;

### Cloud computing

---

You can run the workflow in the cloud (this works only with a proper setup of AWS resources). Add new runtime profiles with the required parameters per the [Nextflow docs](https://www.nextflow.io/docs/latest/executor.html):

\
&nbsp;

Example:

```groovy
my_aws_batch {
    executor = 'awsbatch'
    queue = 'my-batch-queue'
    aws.batch.cliPath = '/home/ec2-user/miniconda/bin/aws'
    aws.batch.region = 'us-east-1'
    singularity.enabled = false
    singularity.autoMounts = true
    docker.enabled = true
    params.conda_enabled = false
    params.enable_module = false
}
```

\
&nbsp;

### Example data

---

After you make sure that you have all the [minimum requirements](#minimum-requirements) to run the workflow, you can try the `bettercallsal` pipeline on some simulated reads. The following input dataset contains simulated reads for `Montevideo` and `I 4,[5],12:i:-` in roughly equal proportions.

- Download simulated reads: [S3](https://cfsan-pub-xfer.s3.amazonaws.com/Kranti.Konganti/bettercallsal/bettercallsal_sim_reads.tar.bz2) (~ 3 GB).
- Download the pre-formatted test database: [S3](https://cfsan-pub-xfer.s3.amazonaws.com/Kranti.Konganti/bettercallsal/PDG000000002.2491.test-db.tar.bz2) (~ 75 MB). This test database works only with the simulated reads.
- Download the pre-formatted full database (**Optional**): If you would like to do a complete run with your own **FASTQ** datasets, you can either create your own [database](./bettercallsal_db.md) or use the [PDG000000002.2727](https://cfsan-pub-xfer.s3.amazonaws.com/Kranti.Konganti/bettercallsal/PDG000000002.2727.tar.bz2) version of the database (~ 42 GB).
- After a successful run of the workflow, your **MultiQC** report should look something like [this](https://cfsan-pub-xfer.s3.amazonaws.com/Kranti.Konganti/bettercallsal/bettercallsal_sim_reads_mqc.html).
- It is always a best practice to use absolute UNIX paths and the real destinations of symbolic links during pipeline execution. For example, find out the real path(s) of your absolute UNIX path(s) and use those for the `--input` and `--output` options of the pipeline.

```bash
realpath /hpc/scratch/user/input
```
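
One way to wire those real paths into the pipeline call (a sketch only, assuming a `bash` shell; the directory names are placeholders):

```bash
# Resolve symbolic links up front and pass the real paths to cpipes.
mkdir -p /hpc/scratch/user/output
input_dir="$(realpath /hpc/scratch/user/input)"
output_dir="$(realpath /hpc/scratch/user/output)"

cpipes \
--pipeline bettercallsal \
--input "$input_dir" \
--output "$output_dir" \
--bcs_root_dbdir /path/to/PDG000000002.2876
```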

Now run the workflow, ignoring quality values since these are simulated base qualities:

\
&nbsp;

```bash
cpipes \
--pipeline bettercallsal \
--input /path/to/bettercallsal_sim_reads \
--output /path/to/bettercallsal_sim_reads_output \
--bcs_root_dbdir /path/to/PDG000000002.2876 \
--kmaalign_ignorequals \
-profile stdkondagac \
-resume
```

Please note that the runtime profile `stdkondagac` runs jobs locally, using `micromamba` for software provisioning. The first time you run the command, a new folder called `kondagac_cache` will be created, and subsequent runs will reuse this `conda` cache.

\
&nbsp;

## Using `sourmash`

Beginning with `v0.3.0` of the `bettercallsal` workflow, `sourmash` sketching is used to further narrow down possible serotype hits. It is **ON** by default. This enables the generation of an **ANI Containment** matrix for **Samples** vs **Genomes**. There may be multiple hits for the same serotype in the final **MultiQC** report, as multiple genome accessions can belong to a single serotype.

You can turn this feature **OFF** with the `--sourmashsketch_run false` option.
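
For example, a minimal sketch that reuses the simulated-reads command from earlier in this README with `sourmash` sketching disabled:

```bash
cpipes \
--pipeline bettercallsal \
--input /path/to/bettercallsal_sim_reads \
--output /path/to/bettercallsal_sim_reads_output \
--bcs_root_dbdir /path/to/PDG000000002.2876 \
--kmaalign_ignorequals \
--sourmashsketch_run false \
-profile stdkondagac \
-resume
```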

\
&nbsp;

## `bettercallsal` CLI Help

```text
[Kranti_Konganti@my-unix-box ]$ cpipes --pipeline bettercallsal --help
N E X T F L O W  ~  version 23.04.3
Launching `./bettercallsal/cpipes` [awesome_chandrasekhar] DSL2 - revision: 8da4e11078
================================================================================
             (o)
  ___  _ __   _  _ __    ___  ___
 / __|| '_ \ | || '_ \  / _ \/ __|
| (__ | |_) || || |_) ||  __/\__ \
 \___|| .__/ |_|| .__/  \___||___/
      | |       | |
      |_|       |_|
--------------------------------------------------------------------------------
A collection of modular pipelines at CFSAN, FDA.
--------------------------------------------------------------------------------
Name    : bettercallsal
Author  : Kranti Konganti
Version : 0.7.0
Center  : CFSAN, FDA.
================================================================================


--------------------------------------------------------------------------------
Show configurable CLI options for each tool within bettercallsal
--------------------------------------------------------------------------------
Ex: cpipes --pipeline bettercallsal --help
Ex: cpipes --pipeline bettercallsal --help fastp
Ex: cpipes --pipeline bettercallsal --help fastp,mash
--------------------------------------------------------------------------------
--help bbmerge        : Show bbmerge.sh CLI options
--help fastp          : Show fastp CLI options
--help mash           : Show mash `screen` CLI options
--help tuspy          : Show get_top_unique_mash_hit_genomes.py CLI
                        options
--help sourmashsketch : Show sourmash `sketch` CLI options
--help sourmashgather : Show sourmash `gather` CLI options
--help sourmashsearch : Show sourmash `search` CLI options
--help sfhpy          : Show sourmash_filter_hits.py CLI options
--help kmaindex       : Show kma `index` CLI options
--help kmaalign       : Show kma CLI options
--help megahit        : Show megahit CLI options
--help mlst           : Show mlst CLI options
--help abricate       : Show abricate CLI options
--help salmon         : Show salmon `index` CLI options
--help gsrpy          : Show gen_salmon_res_table.py CLI options

```