diff --git a/docs/nf_customize/01_orientation.md b/docs/nf_customize/01_orientation.md index 55de1506..7625d3ce 100644 --- a/docs/nf_customize/01_orientation.md +++ b/docs/nf_customize/01_orientation.md @@ -25,11 +25,11 @@ In this folder you will find three pairs of zipped fastq files (`*.fastq.gz`) in └── samplesheet.csv ``` -Each file will be used in this training module. +These files will be used in this training module. !!! question "Exercise" - Open the [Gitpod training environment](https://gitpod.io/#https://github.com/nextflow-io/training) and use the following command to switch to the `nf-customize` folder. View the files in this folder using the `tree` command: + Open the [Gitpod training environment](https://gitpod.io/#https://github.com/nextflow-io/training) and switch to the `nf-customize` folder. View the files in this folder using the `tree` command: ```bash cd /workspace/gitpod/nf-customize diff --git a/docs/nf_customize/02_nf-core.md b/docs/nf_customize/02_nf-core.md index 86710c2a..bc9802b8 100644 --- a/docs/nf_customize/02_nf-core.md +++ b/docs/nf_customize/02_nf-core.md @@ -27,9 +27,9 @@ nf-core is published in Nature Biotechnology: [Nat Biotechnol 38, 276–278 (202 ## nf-core pipelines -There are currently >100 nf-core pipelines. These pipelines are at various stages of development with 60 released, 34 under development, and 11 archived (April 2024). +There are currently 113 nf-core pipelines. These pipelines are at various stages of development, with 68 released, 32 under development, and 13 archived (October 2024). -The [nf-core website](https://nf-co.re/) contains a full list of pipelines, as well as their documentation, which can be explored. +The [nf-core website](https://nf-co.re/) hosts a full list of pipelines, as well as their documentation, which can be explored. 
![nf-core logo](img/pipelines.png) @@ -40,13 +40,19 @@ Each released pipeline has a dedicated page that includes 6 documentation sectio - **Parameters:** Grouped pipeline parameters with descriptions - **Output:** Descriptions and examples of the expected output files - **Results:** Example output files generated from the full test dataset -- **Releases & Statistics:** pipeline version history and statistics +- **Releases & Statistics:** Pipeline version history and statistics Each section should be explored by a user to understand what the pipeline does and how it can be configured. +!!! question "Exercise" + + Explore the nf-core website to see the range of resources available. + ## Pulling an nf-core pipeline -Unless you intend to develop an nf-core pipeline independently, you do not need to clone a copy of a pipeline. Instead, you can use Nextflow’s `pull` command: +Unless you intend to develop an nf-core pipeline independently, you do not need to clone a copy of a pipeline. + +Instead, use Nextflow’s `pull` command: ```bash nextflow pull nf-core/demo @@ -54,35 +60,37 @@ nextflow pull nf-core/demo !!! note "The `nextflow run` command" - The `nextflow run` command will also automatically `pull` the pipeline if it had not been pulled. + The `nextflow run` command will also automatically `pull` the pipeline if it has not already been pulled. -Nextflow will `pull` the pipelines default GitHub branch if a pipeline version is not specified. This will be the master branch for nf-core pipelines with a stable release. +Nextflow will `pull` the pipeline's default GitHub branch if a pipeline version is not specified. The master branch is the default branch for nf-core pipelines with a stable release and the dev branch for pipelines that are still being developed. -nf-core pipelines use GitHub releases to tag stable versions of the code and software. You will always be able to execute different versions of a pipeline using the `-revision` or `-r` option.
+Pipelines pulled from GitHub using Nextflow are automatically stored in a Nextflow assets folder (default: `$HOME/.nextflow/assets/`). + +nf-core pipelines use GitHub releases to tag stable versions of the code and software. You can execute different versions of a pipeline using the `-revision` or `-r` option. Similarly, you can use the `-r` option to target a specific GitHub branch. For example, the `dev` branch of the `nf-core/demo` pipeline could be pulled with the command: -``` +```bash nextflow pull nf-core/demo -r dev ``` -If updates to a remote pipeline have been made, the pull command can be used to update or revery your local copy. +If updates to a remote pipeline have been made, run the pull command to update or revert your local copy. !!! question "Exercise" - Use nextflow to pull the `nf-core/demo` pipeline: + Pull the `nf-core/demo` pipeline: ```bash nextflow pull nf-core/demo ``` - Use the list command to view your cached pipelines: + Use the `list` command to view your cached pipelines: ```bash nextflow list ``` - Pulled pipelines are stored in a hidden assets folder: + View your pulled pipelines in the Nextflow assets folder: ```bash ls $HOME/.nextflow/assets/ diff --git a/docs/nf_customize/03_execution.md b/docs/nf_customize/03_execution.md index 2e43b76b..fef005d2 100644 --- a/docs/nf_customize/03_execution.md +++ b/docs/nf_customize/03_execution.md @@ -2,7 +2,7 @@ [`nf-core/demo`](https://nf-co.re/demo/) is a simple nf-core style pipeline for workshops and demonstrations. -It was created using the nf-core template and is designed to run and configure quickly. +It was created using the full nf-core template and is designed to run and configure quickly.
--8<-- "docs/nf_customize/img/subway.excalidraw.svg" @@ -10,11 +10,11 @@ It was created using the nf-core template and is designed to run and configure q The [`nf-core/demo`](https://nf-co.re/demo/) pipeline consists of three processes: -- ([`FASTQC`](https://www.bioinformatics.babraham.ac.uk/projects/fastqc/)): Read QC -- ([`SEQTK_TRIM`](https://github.com/lh3/seqtk)): Trim low quality bases from FastQ files -- ([`MULTIQC`](http://multiqc.info/)): Present QC for raw reads +- ([`FASTQC`](https://www.bioinformatics.babraham.ac.uk/projects/fastqc/)): Read quality control +- ([`SEQTK_TRIM`](https://github.com/lh3/seqtk)): Trim low-quality bases from FASTQ files +- ([`MULTIQC`](http://multiqc.info/)): Present quality control reports for raw reads -[`nf-core/demo`](https://nf-co.re/demo/) takes a samplesheet that contains paths to fastq files as an input and will produce four output folders with a variety of logs and reports: +[`nf-core/demo`](https://nf-co.re/demo/) takes a samplesheet that contains paths to FASTQ files as an input and will produce four output folders with logs and reports: - `fastqc/` - `*_fastqc.html`: FastQC report containing quality metrics. @@ -38,19 +38,19 @@ The documentation for the `nf-core/demo` pipeline can be found [on the nf-core/d ## Required inputs -Before running any nf-core pipeline you will need to check if there are any parameters that are required. +Before running any nf-core pipeline, you will need to check whether any parameters are required. -You can view these on the pipelines parameters page. +You can find required parameters on the pipeline's parameters page. The [parameters page of the `nf-core/demo` pipeline](https://nf-co.re/demo/dev/parameters) shows that this pipeline requires two parameters (`--input` and `--outdir`) to run. ![nf-core/demo parameters](img/demo-parameters.png) -Without these, the pipeline will not launch and nextflow will throw an error. +Without these, the pipeline will not launch and Nextflow will throw an error.
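Together, a minimal launch command supplying both required parameters might look like this (the samplesheet path below is a placeholder, not a file shipped with the pipeline):

```bash
nextflow run nf-core/demo --input ./samplesheet.csv --outdir results
```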
### `--input` -The `--input` parameter requires a path to comma-separated file containing information about the samples in the experiment. +The `--input` parameter requires a path to a comma-separated file (CSV) containing information about the samples in the experiment: ```bash --input 'path/to/samplesheet.csv' @@ -58,7 +58,7 @@ The `--input` parameter requires a path to comma-separated file containing infor The [nf-core/demo usage documentation](https://nf-co.re/demo/dev/docs/usage/) describes the required `--input` as a comma-separated file (`.csv`). The `.csv` file must contain 3 columns with the headers `sample`, `fastq_1`, and `fastq_2`. -The samplesheet file may consist of both single- and paired-end data and may look something like the one below. +The samplesheet file may consist of both single- and paired-end data and may look something like the one below: ```csv title="samplesheet.csv" linenums="1" sample,fastq_1,fastq_2 @@ -69,7 +69,7 @@ SAMPLE3_SE,path/to/sample3_R1.fastq.gz, ### `--outdir` -The `--output` parameter is used to name the output directory where the results will be saved. It takes a string as its input. +The `--outdir` parameter is used to name the output directory where the results will be saved. It takes a string as its input: ```bash --outdir results @@ -81,13 +81,13 @@ The `--output` parameter is used to name the output directory where the results ## Testing `nf-core/demo` with profiles -A profile is a set of configuration attributes that can be added to your execution command by using the `-profile` option. +A profile is a set of configuration attributes that can be added to your execution command by using the `-profile` option: ```bash -profile ``` -Configuration profiles are defined using the special scope `profile` within configuration files. Profiles group the attributes that belong to the same profile using a common prefix. +Configuration profiles are defined using the special scope `profiles` within configuration files.
Profiles group the attributes that belong to the same profile using a common prefix: ```console title="example.config" linenums="1" profiles { @@ -105,7 +105,7 @@ profiles { Every nf-core pipeline comes with a `test` profile. This is a minimal set of configuration settings for the pipeline to run using a small test dataset that is hosted on the [nf-core/test-datasets](https://github.com/nf-core/test-datasets) repository. -As the `test` profile is expected to run it can be used to help diagnose local issues before you scale up your analysis. +The `test` profile is expected to run and can be used to help diagnose local issues before you scale up your analysis. The `test` profile for `nf-core/demo` is shown below: @@ -122,22 +122,24 @@ The `test` profile for `nf-core/demo` is shown below: ---------------------------------------------------------------------------------------- */ +process { + resourceLimits = [ + cpus: 4, + memory: '15.GB', + time: '1.h' + ] +} + params { config_profile_name = 'Test profile' config_profile_description = 'Minimal test dataset to check pipeline function' - // Limit resources so that this can run on GitHub Actions - max_cpus = 2 - max_memory = '6.GB' - max_time = '6.h' - // Input data input = 'https://raw.githubusercontent.com/nf-core/test-datasets/viralrecon/samplesheet/samplesheet_test_illumina_amplicon.csv' - } ``` -The `nf-core/demo` `test` profile already contains the input parameter (this will be explained in more detail shortly). This means that the `--input` parameter does not need to be added to the execution command. However, as the `outdir` parameter is not included in the `test` profile it must be added to the execution command using the `--outdir` flag. +The `nf-core/demo` `test` profile already contains the input parameter (explained in more detail below). This means that the `--input` parameter does not need to be added to the execution command. 
However, the `outdir` parameter is not included in the `test` profile and must be added to the execution command using the `--outdir` flag. ```bash nextflow run nf-core/demo -profile test --outdir results @@ -151,9 +153,9 @@ nextflow run nf-core/demo -profile test --outdir results nextflow run nf-core/demo -profile test --outdir results ``` - **This execution is expected to fail!** + !!! warning "This execution is expected to fail!" -As the software required to run each process (e.g., seqtk) is not available in the Gitpod environment the exercise above is expected to fail. +As the software required to run each process (e.g., seqtk) is not available in the Gitpod environment the exercise above is expected to fail: ```console Caused by: @@ -161,15 +163,17 @@ Caused by: ``` -Fortunately, nf-core pipelines come packed with directives for containers and environments that can be flexibly enabled using profiles for different software (e.g., `docker`, `singularity`, and `conda`). +Fortunately, nf-core pipelines come packed with directives for containers and environments that can be flexibly enabled using profiles for different software (e.g., `docker`, `singularity`, and `conda`): -`-profile singularity` +```bash +-profile singularity +``` In Gitpod, you can add the `singularity` profile to your execution command and Nextflow will download and enable Singularity software images to run each process. The singularity profile is defined in the nextflow.config file in the main pipeline repository. -```groovy title="nextflow.config" linenums="120" +```groovy title="nextflow.config" linenums="100" singularity { singularity.enabled = true singularity.autoMounts = true @@ -179,44 +183,46 @@ singularity { shifter.enabled = false charliecloud.enabled = false apptainer.enabled = false - } +} ``` !!! note "Multiple config files" - Multiple profiles can be included at execution by separating them with a comma (`,`), e.g., `-profile test,singularity`. 
+ Multiple profiles can be included by separating them with a comma (e.g., `-profile test,singularity`). !!! question "Exercise" - Execute the command again, but this time with the singularity profile: + Amend your run command by adding the singularity profile: ```bash nextflow run nf-core/demo -profile test,singularity --outdir results ``` - The `nf-core/demo` pipeline should now run successfully! +The `nf-core/demo` pipeline should now run successfully! + +!!! note -If you were running this tutorial you will need to have Singularity installed for this command to run. + Singularity must be installed for this command to run. ## Using your own data -Instead of using the `test` profile you can use the `--input` parameter to choose your own samplesheet as an input. +Instead of using the `test` profile, you can use the `--input` parameter to choose your own sample sheet as an input. As described above, the input is a CSV file with 3 columns and the headers `sample`, `fastq_1`, and `fastq_2`. -The pipeline will auto-detect whether a sample is single- or paired-end and if a sample has been sequenced more than once using the information provided in the samplesheet. +By default, the nf-core/demo pipeline will use the information provided in the sample sheet to auto-detect whether a sample is single- or paired-end and whether a sample has been sequenced more than once. !!! question "Exercise" - Within the `data` folder there are three sets of paired-end reads for gut, liver, and lung samples. Create a samplesheet for this data. + Create a sample sheet for the paired-end reads for gut, liver, and lung samples in the `data` folder: - First, create a `.csv` file named `samplesheet.csv`: + 1. Create a CSV file named `samplesheet.csv`: ```bash code samplesheet.csv ``` - Next, add the header line, and, for each sample, an id and the complete paths to the paired-end reads: + 2.
Add the header line, and, for each sample, an ID and the complete paths to the paired-end reads: ```csv title="samplesheet.csv" linenums="1" sample,fastq_1,fastq_2 @@ -225,21 +231,21 @@ The pipeline will auto-detect whether a sample is single- or paired-end and if a lung,/workspace/gitpod/nf-customize/data/lung_1.fastq.gz,/workspace/gitpod/nf-customize/data/lung_2.fastq.gz ``` - **Make sure you save this file in your working directory (`/workspace/gitpod/nf-customize/`)** + !!! warning "Make sure you save this file in your working directory (`/workspace/gitpod/nf-customize/`)" You can use your new samplesheet with the `--input` parameter in your execution command. -In this case, the other parameters in the test profile (e.g., `config_profile_name` and `max_cpus`) can be ignored as they are not explicitly required by the pipeline or in this Gitpod environment. +In this case, the other parameters in the test profile (e.g., `config_profile_name`) can be ignored as they are not explicitly required by the pipeline or in this Gitpod environment. !!! question "Exercise" - Execute the `nf-core/demo` pipeline with the `singularity` profile and your newly created samplesheet as your input. + Run the `nf-core/demo` pipeline with the `singularity` profile and your newly created samplesheet as your input. - ``` + ```bash nextflow run nf-core/demo -profile singularity --input samplesheet.csv --outdir results ``` - The pipeline should run successfully! +The pipeline should run successfully! --- diff --git a/docs/nf_customize/04_config.md b/docs/nf_customize/04_config.md index ad8a0fd8..6a0d156f 100644 --- a/docs/nf_customize/04_config.md +++ b/docs/nf_customize/04_config.md @@ -4,7 +4,7 @@ Each nf-core pipeline comes with a set of “sensible defaults”. While the def **You do not need to edit the pipeline code to configure nf-core pipelines.** -When a pipeline is launched, Nextflow will look for configuration files in several locations.
As each source can contain conflicting settings, the sources are ranked to decide which settings to apply. +Nextflow will look for configuration files in several locations when a pipeline is launched. As each source can contain conflicting settings, the sources are ranked to decide which settings to apply. Configuration sources are reported below and listed in order of priority: @@ -18,7 +18,7 @@ Configuration sources are reported below and listed in order of priority: While some of these files are already included in the nf-core pipeline repository (e.g., the `nextflow.config` file in the nf-core pipeline repository), some are automatically identified on your local system (e.g., the `nextflow.config` in the launch directory), and others are only included if they are specified using `run` options (e.g., `-params-file` and `-c`). -Understanding how and when these files are interpreted by Nextflow is critical for the accurate configuration of a pipelines execution. +Understanding how and when these files are interpreted by Nextflow is critical for the accurate configuration of a pipeline execution. ## Parameters @@ -48,11 +48,11 @@ Parameters and their descriptions can also be viewed in the command line using t nextflow run nf-core/demo --help ``` -You can also view these on the [nf-core/demo parameters page](https://nf-co.re/demo/1.0.0/parameters/). +You can also view these on the [nf-core/demo parameters page](https://nf-co.re/demo/1.1.0/parameters/). ## Default configuration files -All parameters have a default configuration that is defined using the `nextflow.config` file in the pipeline project directory. Most parameters are set to `null` or `false` by default and are only activated by a profile or configuration file. +All parameters have a default configuration that is defined using the `nextflow.config` file in the pipeline project directory. Most parameters are set to `null` or `false` by default.
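As a sketch, the defaults in a pipeline-level `nextflow.config` look something like the excerpt below (the parameter names match those used in this training, but exact names and values vary between pipelines and releases):

```groovy
// Illustrative excerpt of a pipeline-level nextflow.config
params {
    // Required inputs: no defaults, so they must be supplied at runtime
    input         = null
    outdir        = null

    // Optional parameters, inactive until set by a profile or the user
    multiqc_title = null
    help          = false
}
```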
There are also several `includeConfig` statements in the `nextflow.config` file that are used to include additional `.config` files from the `conf/` folder. Each additional `.config` file contains categorized configuration information for your pipeline execution, some of which can be optionally included: @@ -63,6 +63,8 @@ There are also several `includeConfig` statements in the `nextflow.config` file - `igenomes.config` - Included by the pipeline by default. - Default configuration to access reference files stored on [AWS iGenomes](https://ewels.github.io/AWS-iGenomes/). +- `igenomes_ignored.config` + - Empty genomes dictionary to use when igenomes is ignored. - `modules.config` - Included by the pipeline by default. - Module-specific configuration options (both mandatory and optional). @@ -73,7 +75,9 @@ There are also several `includeConfig` statements in the `nextflow.config` file - Only included if specified as a profile. - A configuration profile to test the pipeline with a full-size test dataset. -Notably, some configuration files contain the definition of profiles. For example, the `docker`, `singularity`, and `conda` profiles are defined in the `nextflow.config` file in the pipeline project directory. +!!! note + + Some configuration files contain the definition of profiles. For example, the `docker`, `singularity`, and `conda` profiles are defined in the `nextflow.config` file in the pipeline project directory. 
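The layering described above can be sketched as follows (an illustrative excerpt, not the pipeline's exact file; file names follow the `conf/` convention listed above):

```groovy
// Always included when the pipeline runs
includeConfig 'conf/base.config'
includeConfig 'conf/modules.config'

// Only included when the matching profile is selected with -profile
profiles {
    test      { includeConfig 'conf/test.config' }
    test_full { includeConfig 'conf/test_full.config' }
}
```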
Profiles used by nf-core pipelines can be broadly categorized into two groups: @@ -102,7 +106,7 @@ Nextflow will also look for files that are external to the pipeline project dire - A parameter file that is provided using the `-params-file` option - A config file that is provided using the `-c` option -**You do not need to use all of these files to execute your pipeline.** +**You do not need to use all of these files to run your pipeline.** **Parameter files** @@ -116,7 +120,7 @@ Parameter files are `.json` files that can contain an unlimited number of parame } ``` -You can override default parameters by creating a `.json` file and passing it as a command-line argument using the `-param-file` option. +You can override default parameters by creating a `.json` file and passing it as a command-line argument using the `-params-file` option: ```bash nextflow run nf-core/demo -profile singularity -params-file @@ -124,15 +128,15 @@ nextflow run nf-core/demo -profile singularity -param-file !!! question "Exercise" - Add the `input` and `outdir` parameters to a params file. Give your `input` the complete path to your sample sheet and give your `outdir` the name `results_mycustomparams`. + Add the `input` and `outdir` parameters to a params file. Give `input` the complete path to your sample sheet and give `outdir` the name `results_mycustomparams`: - Start by creating `mycustomparams.json` and adding your parameters using the format described above: + 1. Create `mycustomparams.json`: ```bash code mycustomparams.json ``` - Then, add your input and output parameters. + 2. Add your `input` and `outdir` parameters: ```json title="mycustomparams.json" linenums="1" { @@ -141,13 +145,13 @@ nextflow run nf-core/demo -profile singularity -param-file } ``` - Finally, include the custom `mycustomparams.json` file in your execution command with the `-params-file` option: + 3.
Run `nf-core/demo` with your custom `mycustomparams.json` file: ```bash nextflow run nf-core/demo -profile singularity -params-file mycustomparams.json ``` - The pipeline should run successfully. You should be able to see a new results folder `results_mycustomparams` in your current directory. +The pipeline should run successfully. You should be able to see a new results folder `results_mycustomparams` in your current directory. **Configuration files** @@ -159,7 +163,7 @@ nextflow run nf-core/demo -profile singularity -params-file mycustomparams.json Custom configuration files are the same format as the configuration file included in the pipeline directory. -Configuration properties are organized into [scopes](https://www.nextflow.io/docs/latest/config.html#config-scopes) by dot prefixing the property names with a scope identifier or grouping the properties in the same scope using the curly brackets notation. For example: +Configuration properties are organized into [scopes](https://www.nextflow.io/docs/latest/config.html#config-scopes) by dot prefixing the property names with a scope identifier or grouping the properties in the same scope using the curly brackets notation: ```console title="custom.config" linenums="1" alpha.x = 1 @@ -177,7 +181,7 @@ alpha { Scopes allow you to quickly configure settings required to deploy a pipeline on different infrastructure using different software management. -For example, the `executor` scope can be used to provide settings for the deployment of a pipeline on a HPC cluster. Similarly, the `singularity` scope controls how Singularity containers are executed by Nextflow. +For example, the `executor` scope can be used to provide settings for the deployment of a pipeline on an HPC cluster. Similarly, the `singularity` scope controls how Singularity containers are executed by Nextflow. A common scenario is for users to write a custom configuration file specific to running a pipeline on their infrastructure. 
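Such a site-specific file might combine several scopes in one place. In the sketch below, the scheduler name, queue name, and resource values are assumptions for a hypothetical cluster, not settings from this training environment:

```groovy
// mysite.config: hypothetical institutional configuration (all values are assumptions)
executor {
    name      = 'slurm'   // assumed scheduler
    queueSize = 50        // cap on concurrently queued tasks
}

process {
    queue  = 'short'      // assumed queue name
    cpus   = 2
    memory = '8.GB'
}

singularity {
    enabled    = true
    autoMounts = true
}
```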
@@ -185,7 +189,7 @@ A common scenario is for users to write a custom configuration file specific to Do not use `-c ` to specify parameters as this will result in errors. Custom config files specified with `-c` must only be used for tuning process resource specifications, other infrastructural tweaks (such as output directories), or module arguments (args). -Multiple scopes can be included in the same `.config` file using a mix of dot prefixes and curly brackets. +Multiple scopes can be included in the same `.config` file using a mix of dot prefixes and curly brackets: ```console title="example.config" linenums="1" executor.name = "sge" @@ -196,19 +200,19 @@ singularity { } ``` -A full list of scopes is described in detail in the [Nextflow documentation](https://www.nextflow.io/docs/latest/config.html#config-scopes). +See the [Nextflow documentation](https://www.nextflow.io/docs/latest/config.html#config-scopes) for a full list of scopes. !!! question "Exercise" Instead of using the `singularity` profile a custom configuration file can be used to enable singularity. Create a custom configuration file and enable singularity and singularity auto mounts using the singularity scope. - Start by creating `mycustomconfig.config`: + 1. Create `mycustomconfig.config`: ```bash code mycustomconfig.config ``` - Next, add your configuration to the singularity scope: + 2. Add your configuration to the singularity scope: ```console title="mycustomconfig.config" linenums="1" singularity { @@ -217,13 +221,13 @@ A full list of scopes is described in detail in the [Nextflow documentation](htt } ``` - Finally, include `mycustomconfig.config` file in your execution command with the `-c` option: + 3. Run `nf-core/demo` with your `mycustomconfig.config` in your execution command: ```bash nextflow run nf-core/demo -profile test --outdir results_config -c mycustomconfig.config ``` - The pipeline will run successfully. +The pipeline will run successfully. !!! 
note "Multiple config files" @@ -255,7 +259,7 @@ process { Some tool arguments are included as part of a module. To make nf-core modules shareable across pipelines, most tool arguments are defined in the `conf/modules.config` file in the pipeline code under the `ext.args` entry. -Importantly, having these arguments outside of the module also allows them to be customized at runtime. +Importantly, having these arguments outside the module also allows them to be customized at runtime. For example, if you wanted to add arguments to the `MULTIQC` process in the `nf-core/demo` pipeline, you could use the process scope and the `withName` selector: @@ -280,17 +284,22 @@ The extended execution path is built from the pipelines, subworkflows, and modul !!! question "Exercise" - Modify your existing `mycustomconfig.config` by adding a process scope with the `withName` selector. Add a custom title to your MultiQC report: + Modify your existing `mycustomconfig.config` by adding a process scope, with the `withName` selector, to add a custom title to your MultiQC report: - Start by opening `mycustomconfig.config` that contains your singularity scope: + 1. Open `mycustomconfig.config`: ```bash code mycustomconfig.config ``` - Next, add a `process` scope and using the `withName` selector for `MULTIQC`, add `--title` flag with a custom report name. + 2. Add a `process` scope and, using the `withName` selector for `MULTIQC`, add the `--title` flag with a custom report name: ```console title="mycustomconfig.config" linenums="1" + singularity { + enabled = true + autoMounts = true + } + process { withName: 'MULTIQC' { ext.args = { "--title \"my_custom_title\"" } @@ -298,10 +307,10 @@ The extended execution path is built from the pipelines, subworkflows, and modul } ``` - Finally, execute your run command again: + 3.
Run `nf-core/demo` with your `mycustomconfig.config` in your execution command: ```bash - nextflow run nf-core/demo -profile test,singularity --outdir results_process -c mycustomconfig.config + nextflow run nf-core/demo -profile test --outdir results_process -c mycustomconfig.config ``` View the `multiqc` folder inside your results directory: @@ -322,7 +331,11 @@ It is important to consider how the different configuration options interact dur nextflow run nf-core/demo -profile singularity -params-file mycustomparams.json -c mycustomconfig.config --outdir results_mixed ``` - You now have a new output directory named `results_mixed` despite the directory being named `results_customparams` in your custom parameters file. +You now have a new output directory named `results_mixed` despite the directory being named `results_mycustomparams` in your custom parameters file. + +!!! question "Exercise" + + Consider how the different levels of configuration interacted. Mix and match configuration levels to rename your outputs. --- diff --git a/docs/nf_customize/05_tools.md b/docs/nf_customize/05_tools.md index 45f9838b..d8d0c6dd 100644 --- a/docs/nf_customize/05_tools.md +++ b/docs/nf_customize/05_tools.md @@ -12,12 +12,12 @@ For developers, the tools make it easier to develop pipelines using best practic View the available nf-core commands and options using the `nf-core --help` option. -## `nf-core list` +## `nf-core pipelines list` The nf-core `list` command can be used to print a list of remote nf-core pipelines and local pipeline information. ```bash -nf-core list +nf-core pipelines list ``` The output shows the latest pipeline version number and when it was released. You will also be shown if and when a pipeline was pulled locally and whether you have the latest version. @@ -25,7 +25,7 @@ The output shows the latest pipeline version number and when it was released.
Yo Keywords can also be supplied to help filter the pipelines based on matches in titles, descriptions, or topics: ```bash -nf-core list dna +nf-core pipelines list dna ``` Options can also be used to sort the pipelines by latest release (`-s release`, default), when you last pulled a pipeline locally (`-s pulled`), alphabetically (`-s name`), or by the number of GitHub stars (`-s stars`). @@ -35,17 +35,17 @@ Options can also be used to sort the pipelines by latest release (`-s release`, Filter the list of nf-core pipelines for those that are for `dna` and sort them by stars. Which `dna` pipeline has the most stars? ```bash - nf-core list dna -s stars + nf-core pipelines list dna -s stars ``` -## `nf-core launch` +## `nf-core pipelines launch` -A pipeline can have a large number of optional parameters. To help with this, the `nf-core launch` command is designed to help you write parameter files for when you launch your pipeline. +A pipeline can have a large number of optional parameters. To help manage these, the `nf-core pipelines launch` command is designed to help you write parameter files when you launch your pipeline. The nf-core `launch` command takes one argument - either the name of an nf-core pipeline which will be pulled automatically **or** the path to a directory containing a Nextflow pipeline: ```bash -nf-core launch +nf-core pipelines launch ``` You will first be asked which version of the pipeline you would like to execute. Next, you will be given the choice between a web-based graphical interface or an interactive command-line wizard tool to enter the pipeline parameters. Both interfaces show documentation alongside each parameter, will generate a run ID, and will validate your inputs. @@ -86,17 +86,17 @@ The command line wizard will conclude by asking if you want to launch the pipeli You can also use the launch command directly from the [nf-core launch website](https://nf-co.re/launch).
In this case, you can configure your pipeline using the wizard and then copy the outputs to your terminal or use the run ID generated by the wizard. You will need to be connected to the internet to use the run ID. ```bash -nf-core launch --id +nf-core pipelines launch --id ``` -## `nf-core download` +## `nf-core pipelines download` You may need to execute an nf-core pipeline on a server or HPC system that has no internet connection. In this case, you will need to fetch the pipeline files and manually transfer them to your offline system. The nf-core tooling has the `download` command to make this process easier and ensure accurate retrieval of correctly versioned code and software containers. The nf-core `download` command will download both the pipeline code and the institutional nf-core/configs files. It can also optionally download Singularity image files. ```bash -nf-core download +nf-core pipelines download ``` The download tool will interactively prompt you for the required information if no arguments are supplied. Each prompt option has a flag, and if all flags are supplied then it will run without a request for any additional user input: @@ -131,12 +131,12 @@ Alternatively, you could build your own execution command with the command line Include default institutional configuration: 'False' ``` - If the command has been executed successfully you will see outputs indicating the singularity images and pipeline are being downloaded. +If the command has been executed successfully, you will see outputs indicating the singularity images and pipeline are being downloaded. If you normally work on an offline system you would now move these files to your offline system and specify paths to these files using environment variables, for example, `NXF_SINGULARITY_CACHEDIR`. -Follow [this link](https://nf-co.re/docs/usage/getting_started/offline) to find out more about running a pipeline offline.
+See [Running offline](https://nf-co.re/docs/usage/getting_started/offline) to find out more about running a pipeline offline. --- -Congratulations! You have now utilived nf-core tools for finding, launching and downloading pipeline! +Congratulations! You have now utilized nf-core tools for finding, launching, and downloading pipelines!