Workflows
This notebook pre-processes the Auditory Brainstem Response (ABR) raw data files provided by Ingham et al. to create a data set for Deep Learning models.
The unprocessed ABR data files are available at Dryad.
Since the ABR raw data are available as zip archives, these have to be unzipped and the extracted raw data files parsed so that the time ...
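As a rough illustration of this unzip-and-parse step, the following Python sketch extracts every zip archive in a download directory and collects the extracted raw data files for later parsing. The directory names and file layout are assumptions for illustration, not part of the original notebook.

```python
import zipfile
from pathlib import Path

# Hypothetical locations; the actual Dryad download layout may differ.
DOWNLOAD_DIR = Path("abr_zip_archives")
EXTRACT_DIR = Path("abr_raw_data")

def extract_archives(download_dir: Path, extract_dir: Path) -> list[Path]:
    """Unzip every archive and return the paths of the extracted raw data files."""
    extract_dir.mkdir(parents=True, exist_ok=True)
    extracted = []
    for archive in sorted(download_dir.glob("*.zip")):
        target = extract_dir / archive.stem
        with zipfile.ZipFile(archive) as zf:
            zf.extractall(target)
            extracted.extend(target / name for name in zf.namelist()
                             if not name.endswith("/"))
    return extracted

if __name__ == "__main__":
    raw_files = extract_archives(DOWNLOAD_DIR, EXTRACT_DIR)
    print(f"Extracted {len(raw_files)} raw ABR files")
```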
Workflow for quality assessment of paired reads and classification using NGTax 2.0, with functional annotation using picrust2. In addition, files are exported to their respective subfolders for easier data management at a later stage (a small sketch of this export step follows the step list below). Steps:
- FastQC (read quality control)
- NGTax 2.0
- Picrust 2
- Export module for ngtax
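The folder layout of the export module is not specified in the description above; the Python sketch below shows one way such a step could group tool outputs into per-tool subfolders. The paths and the pattern-to-subfolder mapping are assumptions for illustration only.

```python
import shutil
from pathlib import Path

# Hypothetical mapping of output file patterns to destination subfolders.
EXPORT_RULES = {
    "*_fastqc.html": "fastqc",
    "*_fastqc.zip": "fastqc",
    "*.biom": "ngtax",
    "*.tsv": "picrust2",
}

def export_outputs(work_dir: Path, export_dir: Path) -> None:
    """Copy workflow outputs into per-tool subfolders."""
    for pattern, subfolder in EXPORT_RULES.items():
        target = export_dir / subfolder
        target.mkdir(parents=True, exist_ok=True)
        for output in work_dir.glob(pattern):
            shutil.copy2(output, target / output.name)

export_outputs(Path("results"), Path("exported"))
```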
Analysis of RNA-seq data starting from BAM files and focusing on mRNA, lncRNA and miRNA
This workflow is based on the idea of comparing different gene sets through their semantic interpretation. In many cases, the user studies a specific phenotype (e.g. a disease) by analyzing lists of genes resulting from different samples or patients. Their pathway analysis could result in different semantic networks, revealing mechanistic and phenotypic divergence between these gene sets. The workflow of BioTranslator Comparative Analysis quantitatively compares the outputs of pathway analysis, ...
BioTranslator sequentially performs pathway analysis and gene prioritization: a specific operator is executed for each task to translate the input gene set into semantic terms and to pinpoint the pivotal-role genes on the derived semantic network. The output consists of the set of statistically significant semantic terms and the associated hub genes (the gene signature), prioritized according to their involvement in the underlying semantic topology.
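BioTranslator's actual scoring is not given here; purely as an illustration of prioritizing genes by their position in a semantic network, the sketch below builds a tiny term–gene graph with networkx and ranks genes by degree centrality. The terms, the gene assignments, and the choice of centrality measure are placeholders, not BioTranslator's method.

```python
import networkx as nx

# Toy semantic network: significant terms linked to their associated genes.
# Both the terms and the gene assignments are made up for illustration.
edges = [
    ("apoptosis", "TP53"), ("apoptosis", "CASP3"),
    ("cell cycle", "TP53"), ("cell cycle", "CDK1"),
    ("DNA repair", "TP53"), ("DNA repair", "BRCA1"),
]

graph = nx.Graph()
graph.add_edges_from(edges)

genes = {gene for _, gene in edges}
# Rank genes by degree centrality as a stand-in for
# "involvement in the underlying semantic topology".
centrality = nx.degree_centrality(graph)
signature = sorted(genes, key=lambda g: centrality[g], reverse=True)
print(signature)  # TP53 ranks first: it connects all three terms
```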
Continuous flexibility analysis of SARS-CoV-2 Spike prefusion structures
SAMBA is a FAIR, scalable Nextflow workflow that integrates state-of-the-art bioinformatics and statistical methods into a single tool for reproducible eDNA analyses. SAMBA starts by verifying the integrity of the raw reads and metadata. All bioinformatics processing then follows commonly used procedures (QIIME 2 and DADA2) but adds new steps relying on dbOTU3 and microDecon to build high-quality ASV count tables. Extended statistical analyses are also performed. Finally, SAMBA ...
Type: Nextflow
Creators: Cyril Noel, Alexandre Cormier, Laura Leroi, Patrick Durand, Laure Quintric
Submitter: Cyril Noel
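How SAMBA performs its initial integrity check is not detailed in the summary above; as a rough illustration of that kind of step, the sketch below compares MD5 checksums of raw FASTQ files against a manifest. The manifest format, column names, and file paths are assumptions for illustration.

```python
import csv
import hashlib
from pathlib import Path

def md5sum(path: Path, chunk_size: int = 1 << 20) -> str:
    """Return the MD5 checksum of a file, read in chunks."""
    digest = hashlib.md5()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_manifest(manifest: Path, read_dir: Path) -> bool:
    """Check each FASTQ listed in a 'filename,md5' manifest (hypothetical format)."""
    ok = True
    with manifest.open() as handle:
        for row in csv.DictReader(handle):
            fastq = read_dir / row["filename"]
            if not fastq.exists() or md5sum(fastq) != row["md5"]:
                print(f"integrity check failed: {fastq}")
                ok = False
    return ok

print(verify_manifest(Path("manifest.csv"), Path("raw_reads")))
```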
This repository contains the workflow used to find and characterize the HI sources in the data cube of the SKA Data Challenge 2. It was developed to process a simulated SKA data cube, but can be adapted for clean HI data cubes from other radio observatories.
The workflow is managed and executed using the Snakemake workflow management system. It uses the spectral-cube package (https://spectral-cube.readthedocs.io/en/latest/) based on ...
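The summary above is truncated, so how the workflow uses spectral-cube is not specified; purely as an illustration of the library, the sketch below loads an HI cube from FITS, converts its spectral axis to radio velocity, and writes a moment-0 map, a common first step before source finding. The file names are placeholders.

```python
from astropy import units as u
from spectral_cube import SpectralCube

# Placeholder file name; a real SKA Data Challenge 2 cube is far larger.
cube = SpectralCube.read("hi_cube.fits")

# Express the spectral axis as radio velocity relative to the 21 cm HI line.
cube = cube.with_spectral_unit(u.km / u.s, velocity_convention="radio",
                               rest_value=1.42040575e9 * u.Hz)

# Integrated-intensity (moment-0) map over the spectral axis.
moment0 = cube.moment(order=0)
moment0.write("hi_moment0.fits", overwrite=True)
```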