BIOINFORMATICS II lab microbiome tutorial

Materials below are intended for the Jagiellonian University Bioinformatics 2 course. They include all information covered during the lab session.

For more information on Qiita, including the Qiita philosophy and documentation, please visit the Qiita website.

A description of many of the terms used in this tutorial can be found in this glossary.

This tutorial is adapted from the University of California San Diego Center for Microbiome Innovation (CMI) Qiita/GNPS workshop. You can find more information on the CMI here.

For a more comprehensive tutorial on Qiita, please visit the CMI-workshop website. More advanced tutorials for QIIME 2 are available on the QIIME 2 website.

If you have questions about this material, please contact Tomasz Kosciolek.

Qiita tutorials:

This tutorial will walk you through creation of your account and a test study in Qiita.

Getting example data

There are two separate example datasets available to you: a processing dataset containing raw sequencing files (n=14), which we will process to generate information about the identity and relative abundance of microbes in our samples, and an analysis dataset containing a distinct set of pre-processed samples (n=30), which we will use for statistical and community analyses.

NOTE

During this lab we are only going to perform the analysis step. Processing information is included only for background and context.

Processing dataset

You can download the processing dataset directly from GitHub. These files contain 16S rRNA microbiome data for 14 human skin samples. It is a subset of data that we will use later for analysis. Real sequencing data can be tens of gigabytes in size!

The files are:

  • CMI_workshop_lane1_S1_L001_R1_001.fastq.gz # 16S sequences - forward reads
  • CMI_workshop_lane1_S1_L001_R2_001.fastq.gz # 16S sequences - reverse reads
  • CMI_workshop_lane1_S1_L001_I1_001.fastq.gz # 16S sequences - barcodes
  • sample_info.txt # The sample information file
  • prep_info_16S.txt # The preparation information file
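If you are curious what the raw sequence files contain, here is a minimal sketch (assuming Python 3 and that the three .fastq.gz files above sit in your working directory) that prints the first record of each file. Every FASTQ record is exactly four lines: a header, the sequence, a “+” separator, and per-base quality scores.

    import gzip
    from itertools import islice

    files = [
        "CMI_workshop_lane1_S1_L001_R1_001.fastq.gz",  # forward reads
        "CMI_workshop_lane1_S1_L001_R2_001.fastq.gz",  # reverse reads
        "CMI_workshop_lane1_S1_L001_I1_001.fastq.gz",  # barcode reads
    ]
    for path in files:
        with gzip.open(path, "rt") as handle:
            # Take the first four lines, i.e., the first FASTQ record.
            header, seq, plus, qual = islice(handle, 4)
            print(path)
            print("  header:  ", header.strip())
            print("  sequence:", seq.strip())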

Analysis dataset

Example data that you can use for analysis are available directly on Qiita; you don’t need to download anything to your hard drive. Instructions on how to access these data are provided in the analysis tutorial.

Setting up Qiita

Signing up for a Qiita account

Open your browser (it must be Chrome or Firefox) and go to Qiita (https://qiita.ucsd.edu).

Click on “Sign Up” on the upper-right-hand corner.

_images/sign_up.png

The “New User” link brings you to a page on which you can create a new account. Optional fields are indicated explicitly, while all other fields are required.

_images/user_information.png

Once the form is submitted, an email will be sent to you containing instructions on how to verify your email address.

Logging into your account and resetting a forgotten password

Once you have created your account, you can log into the system by entering your email and password.

_images/top_screen.png

If you forget your password, you will need to reset it. Click on “Forgot Password”.

This will take you to a page on which to enter your email address; once you click the “Reset Password” button, the system will send you further instructions on how to reset your lost password.

_images/forgot_password.png

Updating your settings and changing your password

If you need to reset your password or change any general information in your account, click on your email at the top right corner of the menu bar to access the page on which you can perform these tasks.

_images/forgot_password.png

Studies in Qiita

Studies are the source of data for Qiita. Studies can contain only one set of samples but can contain multiple sets of raw data, each of which can have a different preparation – for example, 16S, shotgun metagenomics, and metabolomics, or even multiple preparations of the same type (e.g., a plate rerun, biological and technical replicates, etc).

In the analysis tutorial, our study contains 30 samples, each with two types of data: 16S and metabolomic. To represent this project in Qiita, we created a single study with a single sample information file that contains all 30 samples. Then, we linked separate preparation files for each data type.

NOTE

You may skip the remainder of this section and proceed to Analysis of Closed Reference Process.

Creating an example study

To create a study, click on the “Study” menu and then on “Create Study”. This will take you to a new page that will gather some basic information to create your study.

_images/create_study.png

The “Study Title” has to be unique system-wide. Qiita will check this when you try to create the study, and may ask you to alter the study name if the one you provide is already in use.

_images/create_new_study3.png

A principal investigator is required, and a list of known PIs is provided. If you cannot find the name you are looking for in this list, you can choose to add a new one.

Select the environmental package appropriate to your study. Different packages will request different specific information about your samples. For more details, see the publication. For this test study for the processing tutorial, choose human-skin.

There is also an option to specify time series type (“Event-Based Data”) if you have such data. In our case, the samples come from a time series study design, so you should select “multiple intervention, real”. For more information on time series types, you can check out the in-depth tutorial on the Qiita website.

Once your study has been created, you will be informed by a green message; click on the study name to begin adding your data.

_images/green_message2.png

Adding sample information

Sample information is the set of metadata that pertains to your biological samples: these are the measured variables that are motivating you to look for response variables in the microbiome. IMPORTANT: your metadata are your study; it is imperative that those data are consistent, correct, and sufficiently detailed. (To learn more, including how to format your own sample info file, check out the in-depth documentation on the Qiita website.)
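To make the format concrete, here is a toy sketch of a sample information file loaded with pandas. Qiita requires a sample_name column; the other columns shown (host_subject_id, side) are illustrative stand-ins modeled on this tutorial’s metadata, not the exact contents of sample_info.txt.

    import io
    import pandas as pd

    # A tab-separated sample information file, inlined for illustration.
    sample_info = io.StringIO(
        "sample_name\thost_subject_id\tside\n"
        "skin.sample.1\tVolunteer 1\tleft\n"
        "skin.sample.2\tVolunteer 1\tright\n"
    )
    metadata = pd.read_csv(sample_info, sep="\t", index_col="sample_name")
    print(metadata)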

The first point of entrance to a study is the study description page. Here you will be able to edit the study info, upload files, and manage all other aspects of your study.

_images/new_study_link4.png

Since we are using a practice set of data, under “Study Tags” write “Tutorial” and select “Save Tags”. As part of our routine clean up efforts, this tag will allow us to find and remove studies and analyses generated using the template data and information.

_images/study_tag.png

The first step after study creation is uploading files. Click on the “Upload Files” button: as shown in the figure below, you can now drag-and-drop files into the grey area or simply click on “select from your computer” to select the fastq, fastq.gz or txt files you want to upload.

Note: Per our Terms and Conditions of use, by uploading files to Qiita you are certifying that they do not contain:

  1. Protected health information within the meaning of 45 Code of Federal Regulations part 160 and part 164, subparts A and E (see checklist);
  2. Whole genome sequencing data for any human subject (HMP human sequence removal protocol);
  3. Any data that is copyrighted, protected by trade secret, or otherwise subject to third-party proprietary rights, including privacy and publicity rights, unless you are the owner of such rights or have permission from the rightful owner(s) to transfer the data and grant to Qiita, on behalf of the Regents of the University of California, all of the license rights granted in our Terms.

Uploads can be paused at any time and restarted again, as long as you do not refresh, navigate away from the page, or log out of the system from another browser window.

To proceed, drag the file named “sample_info.txt” into the upload box. It should upload quickly and appear under “Files” with a checkbox next to it.

_images/upload_box3.png

Once your file has uploaded, click on “Go to study description” and, once there, click on the “Sample Information” tab. Select your sample information file from the dropdown menu next to “Upload information” and click “Create”.

_images/sample_information_upload4.png

If something is wrong with the sample information file, Qiita will let you know with a red banner at the top of the screen.

_images/sample-information-failure.png

If the file processes successfully, you should be able to click on the “Sample Information” tab and see a list of the imported metadata fields.

_images/sample_information_works4.png

To check out the different metadata values, select the “Sample-Prep Summary” tab. On this page, select a metadata column to visualize in the “Add sample column information to table” dropdown menu and click “Add column”.

_images/sample_summary5.png

Next, we’ll add 16S raw data and process it.


Next: Adding a preparation template and linking it to raw data

NOTE

Do not follow this section during the Bioinformatics 2 lab. Go directly to Analysis of Closed Reference Process. This information is included here for context and everyone is encouraged to familiarize themselves with it after class.

Now, we’ll upload some actual microbiome data to explore. To do this, we need to add the data themselves, along with some information telling Qiita about how those data were generated.

Adding a preparation template and linking it to raw data

Where the sample info file has the biological metadata associated with your samples, the preparation info file contains information about the specific technical steps taken to go from sample to data. Just as you might use multiple data-generation methods to get data from a single sample – for example, target gene sequencing and shotgun metagenomics – you can have multiple prep info files in a single study, associating your samples with each of these data types. You can learn more about prep info files at the Qiita documentation.

Go back to the “Upload Files” interface. In the example data, find and upload the three “.fastq.gz” files and the “prep_info_16S.txt” file.

_images/upload_box4.png

These files will appear under “Files” when they finish uploading.

Then, go to the study description. Now you can click the “Add New Preparation” button. This will bring up the following dialogue:

_images/add_prep_ID4.png

Select “prep_info_16S.txt” from the “Select file” dropdown, and “16S” as the data type. Optionally, you can also select one of a number of investigation types that can be used to associate your data with other like studies in the database. Click “Create New Preparation”.

You should now be brought to a “Processing” tab of your preparation info:

_images/prep_processing.png

By clicking on the “Summary” tab on this page you can see the preparation info that you uploaded.

_images/prep_summary.png

In addition, you should see a “16S” button appear under “Data Types” on the menu to the left:

_images/data_type_16S.png

You can click this to reveal the individual prep info files of that data type that have been associated with this study:

_images/data_type5.png

If you have multiple 16S preparations (for example, if you sequenced using several different primer sets), these would each show up as a separate entry here.

Now, you can associate the sequence data from your study with this preparation.

_images/prep_processing.png

In the prep info dialogue, there is a dropdown menu labeled “Select type” below the words “No files attached to this preparation”. Click “Choose a type” to see a list of available file types. In our case, we’ve uploaded FASTQ-formatted files for all samples in our study, so we will choose “FASTQ - None”. In some cases outside of this tutorial, you may have per-sample FASTQ files, so take care in considering which data type you are handling.

Magically, this will prompt Qiita to associate your uploaded files with the corresponding samples in your preparation info. (Our prep info file has a column named run_prefix, which associates each sample_name with the file name prefix for that particular sample.)
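Conceptually, the matching works like the sketch below: each row’s run_prefix is compared against the uploaded file names, and files sharing that prefix are attached to that sample. The file names are the real tutorial files; the run_prefix value is illustrative.

    uploaded = [
        "CMI_workshop_lane1_S1_L001_R1_001.fastq.gz",
        "CMI_workshop_lane1_S1_L001_R2_001.fastq.gz",
        "CMI_workshop_lane1_S1_L001_I1_001.fastq.gz",
    ]
    run_prefix = "CMI_workshop_lane1"  # value taken from a prep info row

    # All files whose names start with the prefix belong to this sample/prep.
    matches = [f for f in uploaded if f.startswith(run_prefix)]
    print(matches)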

You should see the filenames show up in green in the raw barcodes (file with I1 in its name), raw forward seqs (R1 in name) and raw reverse seqs (R2 in name) columns below the import dropdown. You’ll want to give this set of FASTQ files a name (in the “Add a name for the file” field below “Select type: FASTQ - None”), and then click “Add files” below.

_images/prep_info_sequences4.png

That’s it! Your data are ready for processing.

Exploring the raw data

Click on the 16S menu on the left. Now that you’ve associated sequence files with this prep, you’ll have a “Processing network” displayed:

_images/file_network5.png

If you see this message:

_images/wait_message.png

It means that your files need time to load. Refresh your screen after about 1 minute.

Your collection of FASTQ files for this prep is represented by a single object in this network, currently called “CMI tutorial - 14 skin samples”. Click on the object.

Now, you’ll have a series of choices for interacting with this object. You can click “Edit” to rename the object, “Process” to perform analyses, or “Delete” to delete it. In addition, you’ll see a list of the actual files associated with this object.

_images/available_files4.png

Scroll to the bottom, and you’ll also see an option to generate a summary of the object.

_images/generate-summary3.png

If you click this button, it will be replaced with a notification that the summary generation has been added to the processing queue.

To check on the status of the processing job, you can click the rightmost icon at the top of the screen:

_images/processing-icon2.png

This will open a dialogue that gives you information about currently running jobs, as well as jobs that failed with some sort of error. Please note, this dialogue keeps the entire history of errors that Qiita encountered for your jobs, so take notice of dates and times in the Heartbeat column.

_images/processing-summary3.png

The summary generation shouldn’t take too long. You may need to refresh your screen. When it completes, you can click back on the FASTQ object and scroll to the bottom of the page to see a short peek at the data in each of the FASTQ files in the object. These summaries can be useful for troubleshooting.

_images/summary3.png

Now, we’ll process the raw data into something more interesting.

Processing 16S data

Scroll back up and click on the “CMI tutorial - 14 skin samples (FASTQ)” artifact, and select “Process”. Below the files network, you will now see a “Choose command” dropdown menu. Based on the type of object, this dropdown menu will give you a list of available processing steps.

For 16S “FASTQ” objects, the only available command is “Split libraries FASTQ”. This converts the raw FASTQ data into the file format used by Qiita for further analysis (you can read more extensively about this file type here).
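To build intuition for what this demultiplexing step does, here is a toy sketch: each barcode read (from the I1 file) is looked up in a per-sample barcode map, and the corresponding forward/reverse reads are assigned to that sample. The barcodes and sample names are made up, and real tools also correct sequencing errors in barcodes (e.g., Golay error correction), which this sketch omits.

    # Toy mapping from 12-bp Golay-style barcodes to sample names.
    barcode_to_sample = {
        "ACGTACGTACGT": "skin.sample.1",
        "TGCATGCATGCA": "skin.sample.2",
    }

    def assign(barcode_read):
        # Exact matching only; unrecognized barcodes are left unassigned.
        return barcode_to_sample.get(barcode_read)

    print(assign("ACGTACGTACGT"))  # -> skin.sample.1
    print(assign("AAAAAAAAAAAA"))  # -> None (read is discarded)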

Select the “Split libraries FASTQ” step. Now, you will be able to select the specific combination of parameters to use for this step in the “Choose parameter set” dropdown menu.

_images/split_libraries4.png

For our files, choose “Multiplexed FASTQ; Golay 12 base pair reverse complement mapping file barcodes with reverse complement barcodes”. The specific parameter values used will be displayed below. For most raw data coming out of the Knight Lab, you will use these same settings.

Click “Add Command”.

You’ll see the files network update. In addition to the original white object, you should now see the processing command (represented in yellow) and the object that will be produced from that command (represented in grey).

_images/demultiplexed_workflow4.png

You can click on the command to see the parameters used, or on an object to perform additional steps.

Next we want to trim the sequences to a particular length, to ensure our samples will be comparable to other samples already in the database. Click back on the “demultiplexed (Demultiplexed)” object. This time, select the Trimming operation. Currently, there are seven trimming length options. For this run, let’s choose “100 basepairs”, which trims each sequence to its first 100 bp, and click “Add Command”.

_images/trimming_command4.png

Once the command is added, you will see the network update:

_images/trimming_workflow.png

Note that the commands haven’t actually been run yet! (We’ll still need to click “Run” at the top.) This allows us to add multiple processing steps to our study and then run them all together.

We’re going to process our sequence files using two different workflows. In the first, we’ll use a conventional reference-based OTU picking strategy to cluster our 16S sequences into OTUs. This approach matches each sequence to a reference database, ignoring sequences that don’t match the reference. In the second, we will use deblur, which uses an algorithm to remove sequence error, allowing us to work with unique sequences instead of clustering into OTUs. Both of these approaches work great with Qiita, because we can compare the observations between studies without having to do any sort of re-clustering!
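The closed-reference idea can be sketched in a few lines. In practice, OTU pickers use optimized aligners and a 97% identity threshold against a full reference database (hence the “97_otus” files referenced later); the toy version below uses exact positional identity on made-up 10-bp sequences with a 90% threshold, purely for illustration.

    reference = {"OTU_1": "ACGTACGTAC", "OTU_2": "TTTTGGGGCC"}  # toy reference

    def identity(a, b):
        # Fraction of matching positions (a stand-in for a real aligner).
        return sum(x == y for x, y in zip(a, b)) / max(len(a), len(b))

    def pick_closed_reference(seq, threshold=0.9):
        best = max(reference, key=lambda otu: identity(seq, reference[otu]))
        return best if identity(seq, reference[best]) >= threshold else None

    print(pick_closed_reference("ACGTACGTAG"))  # -> OTU_1 (9/10 positions match)
    print(pick_closed_reference("GGGGGGGGGG"))  # -> None (no match; discarded)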

The closed-reference workflow

To do closed reference OTU picking, click on the “Trimmed Demultiplexed 100 (Demultiplexed)” object and select the “Pick closed-reference OTUs” command. We will use the “Defaults” parameter set for our data, which are relatively small. For a larger data set, we might want to use the “Defaults - parallel” implementation.

_images/closed_reference_OTU4.png

By default, Qiita uses the Greengenes 16S reference database. You can also choose to use the Silva 119 18S database, or the UNITE 7 fungal ITS database.

Click “Add Command”, and you will see the network update:

_images/OTU_workflow4.png

Here you can see the blue “Pick closed-reference OTUs” command added, and that the product of the command is a BIOM-formatted OTU table.

That’s it!

The deblur workflow

The deblur workflow is only marginally more complex. Although you can deblur the demultiplexed sequences directly, “deblur” works best when all the sequences are the same length. By trimming to a particular length, we can also ensure our samples will be comparable to other samples already in the database.

Click back on the “Trimmed Demultiplexed 100 (Demultiplexed)” object. This time, select the Deblur operation. Choose “Deblur” from the “Choose command” dropdown, and “Defaults” for the parameter set.

_images/trimmed_deblur_command4.png

Add this command to create this workflow:

_images/full_workflow5.png

Now you can see that we have the same “Trimmed Demultiplexed (Demultiplexed)” object being used for two separate processing steps – closed-reference OTU picking, and deblur.

As you can see, “deblur” produces two BIOM-formatted OTU tables as output. The “deblur reference hit table (BIOM)” contains deblurred sequences that have been filtered to try to exclude things like organellar (e.g., mitochondrial) reads, while “deblur final table (BIOM)” contains all the sequences.

Running the workflow

Now, we can see the whole set of commands and their output files:

_images/full_workflow5.png

Click “Run” at the top of the screen, and Qiita will start executing all of these jobs. You’ll see a “Workflow submitted” banner at the top of your window.

The full workflow can take some time to run, depending on the number of samples and the Qiita workload. You can keep track of what is running by looking at the colors of the command artifacts: yellow means the command is currently running, green means it has completed successfully, and red means it has failed.

_images/full_workflow6.png

Once objects have been generated, you can generate summaries for them just as you did for the original “FASTQ” object.

The summary for the “demultiplexed (Demultiplexed)” object gives you information about the length of sequences in the object:

_images/sequences.png

The summary for a BIOM-format OTU table gives you a table summary, details regarding the frequency per sample, and a histogram of the number of features per sample:

_images/demultiplex_histogram2.png

Next: Analysis of Closed Reference Process

Analysis of Closed Reference Process

To create an analysis, select “Create new analysis” from the top menu.

This will take you to a list of studies with samples available to you for analysis, divided between your studies and publicly available studies (“Public Studies”).

_images/analysis_studies_page3.png

Find the “CMI workshop analysis” study in Public Studies. You can use the search window at the top right, or filter by tags (“CMIWorkshop” tag). Click the green plus sign at the left of the row. This will expand the study to expose all the objects from that study that are available to you for analysis.

_images/study_expanded3.png

To look more closely at the details of the artifact, select “Per Artifact (1).” Here you can add each of these objects to the analysis by selecting the “Add” button. We will just add the Closed Reference OTU table object by clicking “Add” in that row.

_images/your_study3.png

Now, the second-right-most icon at the top bar should turn green, indicating that there are samples selected for analysis.

_images/clipboard.png

Clicking on the icon will take you to a page where you can refine the samples you want to include in your analysis. Here, all 30 of our samples are currently included:

_images/selected_samples2.png

You could optionally exclude particular samples from this set by clicking on “Show/Hide samples”, which will show each individual sample name along with a “remove” option. (Removing them here will mask them from the analysis, but will not affect the underlying files in any way.)

This should be good for now. Click the “Create Analysis” button, enter a name and description, then click “Create Analysis”.

_images/create_analysis_button2.png

This brings you to the processing network page. Pull down the “Processing Network” tab; this may take 2 to 5 minutes to load. Here you can analyze data that have already been processed.

_images/processing_network_photo4.png

Before we process the data, let’s have a look at the summary of the contents of the BIOM file. Select the “dflt_name (BIOM)” artifact to see a summary of this file, displaying a table summary, details regarding the frequency per sample, and a histogram of the number of features per sample:

_images/summaryinfo.png

As you can see, this file contains 30 samples with roughly 36,000 features. The features in our case are OTUs (Operational Taxonomic Units), because they were generated using closed-reference OTU picking.

Question

Are OTUs equivalent to bacterial species? Please provide a justification for your answer. You may find the Qiita glossary useful.

Now we can begin analyzing these samples. Go ahead and select “dflt_name (BIOM)”, then select “Process”. This will take us to the command selection page. Once there, open the command pull-down tab, which displays twenty-five actions.

_images/command_options4.png

The text in brackets shows the actual underlying QIIME 2 commands. We will now go through some of the most-used commands, which will enable you to generate summaries, plot your data, and calculate statistics to help you get the most out of your data.

Rarefying Data

For certain analyses, such as those we are about to conduct, the data should be rarefied. This means that every sample in the analysis will have its observations (in this case, OTU counts) randomly subsampled down to the same desired total, reducing potential alpha and beta diversity biases. Samples with fewer observations than this cutoff will be excluded, which can also be useful for excluding things like blanks. To choose a good cutoff for your data, view the histogram that was made when we generated the summary of the data.

_images/histogram2.png

An appropriate cutoff would exclude clear outliers but retain most of the samples. Here we have already removed blanks from our data and eliminated the outliers prior to analysis, so we will just use the minimum per-sample count observed in our samples (11030) as the cutoff.
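Under the hood, rarefaction is just random subsampling without replacement. A minimal sketch with NumPy (the counts are made up; 11030 is the depth chosen above):

    import numpy as np

    rng = np.random.default_rng(42)

    def rarefy(counts, depth):
        counts = np.asarray(counts)
        if counts.sum() < depth:
            return None  # sample would be excluded, as Qiita does
        # Expand counts to individual observations, subsample, and re-count.
        pool = np.repeat(np.arange(counts.size), counts)
        keep = rng.choice(pool, size=depth, replace=False)
        return np.bincount(keep, minlength=counts.size)

    print(rarefy([8000, 3000, 500], depth=11030))  # sums to exactly 11030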

To rarefy the data, select “Rarefy table” from the drop-down menu. The parameters will appear below the workflow diagram:

_images/rarefy_parameter.png

Several parameters will have only one option, which will be automatically selected for you. In the field “The total frequency that each sample should be rarefied to…(sampling depth)”, we will specify the depth to which each sample will be subsampled. Enter “11030” in this box, and click “Add Command”.

_images/rarify_parameter_with_sampling_depth3.png

Click the “Run” button above the workflow network to start the rarefaction. Then, click on the “dflt_name (BIOM)” artifact to see the blue “Jobs using this data” button. Once you click on it, you can see the current status of your job. You can also view it by clicking on the server button in the top-right corner of the screen:

_images/server.png

The view will return to the original screen while the rarefied feature-table generation job runs. Your browser will automatically refresh every 15 seconds until the “rarefied table (BIOM)” artifact appears:

_images/rarify_workflow4.png

Select the newly generated “rarefied table (BIOM)” artifact. This time, instead of a histogram of the rarefied samples, you will see a brief summary confirming that your samples have all been rarefied to the same depth. Now that the data are rarefied, we can begin the analysis.

Taxa Bar Plots

NOTE

Taxonomy is outside the scope of this lab session. However, if you are interested in this topic, you are encouraged to follow the CMI Qiita/GNPS tutorial afterwards.

Alpha Diversity Analysis

Now, let’s analyze the alpha diversity of your samples. Alpha diversity metrics describe the diversity of features within a sample or a group of samples; they are used to analyze diversity within, rather than between, samples or groups of samples.

Observed Operational Taxonomic Units

One type of alpha diversity analysis, and the simplest, is counting the number of observed unique features (OTUs in this example), also known as feature richness. This type of analysis provides the number of unique OTUs found in a sample or group of samples.
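In other words, observed-features richness is just the number of non-zero entries in a sample’s (rarefied) count vector, as in this toy sketch:

    import numpy as np

    counts = np.array([12, 0, 3, 0, 7, 1])  # made-up OTU counts for one sample
    observed_features = np.count_nonzero(counts)
    print(observed_features)  # -> 4 distinct OTUs observed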

To perform an alpha diversity analysis of feature richness, select the “rarefied table (BIOM)” artifact in the processing network and select “Process”. Select “Alpha diversity” from the drop-down menu. The parameters will appear below the workflow diagram:

_images/observed_OTU_parameter4.png

Several parameters have been automatically selected for you since these options cannot be changed. In the field, “The alpha diversity metric… (metric)”, we will specify the alpha diversity metric to run in our analysis. Select “Number of distinct features” from the drop-down menu in this box, and click “Add Command”.

Once the command is added the workflow should appear as follows:

_images/observed_OTU_workflow4.png

Click the run button to start the process of the alpha diversity analysis. The view will return to the original screen, while the alpha diversity analysis job runs.

Faith’s Phylogenetic Diversity Index

Another alpha diversity analysis in this tutorial uses Faith’s phylogenetic diversity index. This index also reflects feature richness, but additionally considers the phylogenetic distance spanning all features in a sample. The results can also be displayed as a phylogeny, rather than as a plot.
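Concretely, Faith’s PD is the sum of the branch lengths of the tree spanning all features observed in a sample. A minimal sketch with scikit-bio (assuming it is installed; the tree and counts are toy values, not this study’s data):

    from io import StringIO
    from skbio import TreeNode
    from skbio.diversity.alpha import faith_pd

    tree = TreeNode.read(StringIO("((OTU1:0.5,OTU2:0.5):0.5,OTU3:1.0):0.0;"))
    otu_ids = ["OTU1", "OTU2", "OTU3"]
    counts = [3, 0, 1]  # OTU2 is absent from this sample

    # Branches leading to the observed OTU1 and OTU3: 0.5 + 0.5 + 1.0 = 2.0
    print(faith_pd(counts, otu_ids, tree))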

To perform an alpha diversity analysis using Faith’s phylogenetic diversity index, select the “rarefied table (BIOM)” artifact in the processing network and select “Process”. Select “Alpha diversity (phylogenetic)” from the drop-down menu. The parameters will appear below the workflow diagram:

_images/faith_pd_parameter4.png

Several parameters have been automatically selected for you. For example, in the field, “The alpha diversity metric… (metric)”, “Faith’s Phylogenetic Diversity” has already been chosen from the drop-down menu in this box. In the “Phylogenetic tree” field select “/databases/gg/13_8/trees/97_otus_no_none.tree” then click “Add Command”.

Once the command is added the workflow should appear as follows:

_images/faith_pd_workflow4.png

Click the run button to start the process of the alpha diversity analysis. The view will return to the original screen, while the alpha diversity analysis job runs.

Alpha Diversity Outputs

Each alpha diversity analysis will output an interactive boxplot that shows how that alpha diversity metric correlates with different metadata categories:

_images/alpha_diversity_boxplot.png

To change the category, open the “Category” pull-down menu and choose the metadata category you would like to analyze:

_images/alpha_diversity_categories.png

You will also be given the outcomes of Kruskal-Wallis tests:

_images/Kruskal_Wallis.png
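For intuition, a Kruskal-Wallis test on alpha diversity values can be reproduced with SciPy. The two groups of richness values below are made up, standing in for a metadata category with two levels:

    from scipy.stats import kruskal

    left = [120, 135, 128, 140]   # observed features, group "left" (made up)
    right = [95, 102, 110, 99]    # observed features, group "right" (made up)
    statistic, pvalue = kruskal(left, right)
    print(statistic, pvalue)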

Question

Which alpha diversity metric produces a higher between-subject effect size?

Beta Diversity Analysis

One can also measure beta diversity in Qiita. Beta diversity measures feature turnover among samples (i.e., the diversity between samples rather than within each sample). This is used to compare samples to one another.

Bray-Curtis Dissimilarity

One commonly used beta diversity metric is Bray-Curtis dissimilarity. This metric quantifies how dissimilar samples are to one another.
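For two count vectors x and y, Bray-Curtis dissimilarity is sum(|x_i - y_i|) / sum(x_i + y_i), ranging from 0 (identical composition) to 1 (no shared features). A toy example with SciPy:

    from scipy.spatial.distance import braycurtis

    x = [10, 0, 5, 3]  # made-up OTU counts, sample 1
    y = [8, 2, 0, 6]   # made-up OTU counts, sample 2
    print(braycurtis(x, y))  # (2+2+5+3) / (18+16) ≈ 0.35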

To perform an analysis of beta diversity using the Bray-Curtis dissimilarity metric, select the “rarefied table (BIOM)” artifact in the processing network and select “Process”. Then select “Beta diversity” from the drop-down menu. The parameters will appear below the workflow diagram:

_images/bray_curtis_beta_diversity5.png

Several parameters have been automatically selected for you. In the field, “The beta diversity metric… (metric)”, we will specify the beta diversity metric to use in our analysis. Select “Bray-Curtis dissimilarity” from the drop-down menu in this box, and click “Add Command”.

To create a principal coordinates plot of the Bray-Curtis dissimilarity distance matrix, select the “distance matrix (distance matrix)” artifact and select “Process”. Select “Perform Principal Coordinate Analysis (PCoA)” from the drop-down menu. The parameters will appear below the workflow diagram:

_images/bray_curtis_pcoa5.png

All of the parameters have been automatically selected for you; just click “Add Command”.

Once the command is added the workflow should appear as follows:

_images/bray_curtis_workflow4.png

Click the run button to start the process of the beta diversity analysis. The view will return to the original screen, while the beta diversity analysis job runs.

Unweighted UniFrac Analysis

Another commonly used distance metric for measuring beta diversity is unweighted UniFrac distance. “Unweighted” means that the metric considers only feature richness (presence or absence), not abundance, when comparing samples to one another. This differs from the weighted UniFrac distance metric, which takes into account both feature richness and abundance for each sample.
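Unweighted UniFrac is the fraction of the tree’s branch length leading to features present in only one of the two samples. A minimal sketch with scikit-bio (assuming it is installed; the tree and counts are toy values):

    from io import StringIO
    from skbio import TreeNode
    from skbio.diversity.beta import unweighted_unifrac

    tree = TreeNode.read(StringIO("((OTU1:0.5,OTU2:0.5):0.5,OTU3:1.0):0.0;"))
    otu_ids = ["OTU1", "OTU2", "OTU3"]
    sample_a = [3, 0, 1]  # OTU1 and OTU3 present
    sample_b = [0, 5, 1]  # OTU2 and OTU3 present

    # Only presence/absence matters; the counts' magnitudes are ignored.
    print(unweighted_unifrac(sample_a, sample_b, otu_ids, tree))  # -> 0.4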

To perform unweighted UniFrac analysis, select the “rarefied table (BIOM)” artifact in the processing network and select “Process”. Then select “Beta diversity (phylogenetic)” from the drop-down menu. The parameters will appear below the workflow diagram:

_images/unweighted_beta_diversity6.png

All of the parameters have been automatically selected for you; just click “Add Command”.

To create a principal coordinates plot of the unweighted Unifrac distance matrix, select the “distance_matrix (distance_matrix)” artifact that will be generated using Unweighted UniFrac distance. Note that, unless you rename each distance matrix (see below: Altering Workflow Analysis Names), they will appear identical until you select them to view their provenance information. Once you have selected the distance matrix artifact, select “Perform Principal Coordinate Analysis (PCoA)” from the drop-down menu. The parameters will appear below the workflow diagram:

_images/unweighted_pcoa4.png

All of the parameters have been automatically selected for you; just click “Add Command”. Once the command is added, the workflow should appear as follows:

_images/unweighted_workflow4.png

Click the run button to start the process of the beta diversity analysis. The view will return to the original screen, while the beta diversity analysis job runs.

Question

Is there a scenario in which an unweighted UniFrac value can be < 0? Which of the two distance metrics used produces more homogeneous results (e.g., smaller variance)?

Principal Coordinate Analysis

Clicking on the “pcoa (ordination_results)” (Principal Coordinate Analysis) artifact will open an interactive visualization of the similarity among your samples. Generally speaking, the more similar the samples are with respect to their features, the closer they are likely to be in the PCoA ordination plot. The Emperor visualization program offers a very useful way to explore how patterns of similarity in your data associate with different metadata categories.
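Behind the scenes, PCoA takes a distance matrix and embeds the samples into ordination axes ranked by the fraction of variation each explains. A minimal sketch with scikit-bio (toy distances, not this study’s data):

    from skbio import DistanceMatrix
    from skbio.stats.ordination import pcoa

    dm = DistanceMatrix(
        [[0.0, 0.2, 0.7],
         [0.2, 0.0, 0.6],
         [0.7, 0.6, 0.0]],
        ids=["s1", "s2", "s3"],
    )
    ordination = pcoa(dm)
    print(ordination.proportion_explained)  # variation explained per axis
    print(ordination.samples)               # per-sample coordinates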

Once the Emperor visualization program loads, the PCoA result will look like:

_images/full_pcoa2.png

You will see tabs including “Color”, “Visibility”, “Opacity”, “Scale”, “Shape”, “Axes”, and “Animations”.

Under “Color” you will notice two pull-down menus:

_images/color_tab2.png

Under “Select a Color Category” you can select how the samples will be grouped. Under “Classic QIIME Colors”, you can select how each group will be colored.

Under the “Visibility” tab you will notice 1 pull-down menu:

_images/visibility_tab2.png

Under “Select a Visibility Category” you can select which group will be displayed on the PCoA plot.

Under the “Opacity” tab you will notice 1 pull-down menu:

_images/opacity_tab.png

Under “Select an Opacity Category” you can select the categories in which the opacity will change on the PCoA plot. Once chosen, these groups will be displayed under “Global Scaling” and, when selected, you can change the opacity of each group separately. Under “Global Scaling” you can change the opacity of all of the samples.

Under the “Scale” tab you will notice 1 pull-down menu:

_images/scale_tab2.png

Under “Select a Scale Category” you can choose the grouping of your samples. Under “Global Scaling” you can change the point size for each group on the PCoA plot.

Under the “Shape” tab you will notice 1 pull-down menu:

_images/shape_tab2.png

Under “Select a Shape Category” you can alter the shape of each group on the PCoA plot to the following:

_images/shape_options.png

Under the “Axis” tab you will notice 5 pull-down menus:

_images/axis_tab2.png

The first 3 pull-down menus, located under “Visible”, allow you to change the axes that are being displayed. The “Axis and Labels Color” menu allows you to change the color of the axes and labels of the PCoA. The “Background Color” menu allows you to change the background color of the PCoA. The “% Variation Explained” graph displays, for each axis that can be used, how much of the variation among samples that axis explains.

Under the “Animations” tab you will notice 2 pull-down menus:

_images/animations_tab.png

Under “Category to sort samples” you can choose the category that you will be sorting the samples by. Under “Category to group sample” you can choose the category that you will be grouping the samples by.

Let’s take a few minutes now to explore the various features of Emperor. Open a new browser window with the Emperor tutorial and follow along with your test data.

Question

From the unweighted UniFrac PCoA plot, what is the main driver of bacterial community separation, subject (host_subject_id), body side (side), or phase of the experiment (phase_discreet)? Is the same true for Bray-Curtis results?

Beta Diversity Group Significance

Another way to study beta diversity is by measuring beta diversity group significance. This measures whether groups of samples are significantly different from one another using a permutation-based statistical test. Sample groups are designated by metadata variables.
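A permutation test such as PERMANOVA asks whether distances between groups exceed distances within groups more than expected by chance. A minimal sketch with scikit-bio (the distance matrix and grouping are toy values):

    from skbio import DistanceMatrix
    from skbio.stats.distance import permanova

    dm = DistanceMatrix(
        [[0.0, 0.1, 0.8, 0.7],
         [0.1, 0.0, 0.9, 0.8],
         [0.8, 0.9, 0.0, 0.2],
         [0.7, 0.8, 0.2, 0.0]],
        ids=["a1", "a2", "b1", "b2"],
    )
    grouping = ["left", "left", "right", "right"]  # made-up metadata column
    print(permanova(dm, grouping, permutations=999))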

If you have completed the tutorial up to this point, you can begin analysis of beta diversity group significance from one of your beta diversity distance matrices (jump down two paragraphs). Here we begin with the rarefied feature-table. To perform a beta group significance analysis, select the “rarefied table (BIOM)” artifact in the processing network and select “Process”. Select “Beta diversity” from the drop-down menu. The parameters will appear below the workflow diagram:

_images/beta_group_significance_beta4.png

Several parameters have been automatically selected for you. In the field, “The beta diversity metric… (metric)”, we will specify the beta diversity distance metric to use in our analysis. Note that if you attempt to create a distance matrix that already exists in the Processing network, you will get an error stating such. For example, if you have already created a beta diversity distance matrix using the Bray-Curtis dissimilarity metric, you will have to select a unique metric here (e.g., “Aitchison distance”). In the “Phylogenetic tree” field enter “/databases/gg/13_8/trees/97_otus.tree”, and click “Add Command”.

To create the beta group significance analysis, select the “distance_matrix (distance_matrix)” artifact of interest in the Processing network, and select “Beta diversity group significance” from the drop-down menu. The parameters will appear below the workflow diagram:

_images/significance_matrix4.png

Several parameters have been automatically selected for you. In the “Metadata column to use” field we will specify the category from the metadata file to be used for determining significance between groups (e.g., subject). Using the “Perform pairwise tests…” checkbox we can indicate whether we would like the group significance to be run pairwise; otherwise the analysis will be done across all groups (i.e., non-pairwise). Note that for metadata variables with only two groups, this distinction makes no difference. In the field, “The group significance test… (method)”, we will specify the statistical test that will be applied (e.g., PERMANOVA [permutational multivariate analysis of variance]). Then click “Add Command”. Once the command is added, the workflow should appear as follows:

_images/beta_group_significance_workflow4.png

Click the run button to start the process of the beta diversity group significance analysis. The view will return to the original screen, while the beta diversity group significance analysis job runs.

Beta Group Significance Output Analysis

Once the beta group significance “visualization (q2_visualization)” artifact is chosen in the network, the beta diversity group significance overview (which in our case shows results from the PERMANOVA, i.e., across all groups) and the group significance plots will appear:

_images/beta_significance_overview.png

The results from pairwise PERMANOVA tests will also be displayed if included in the analysis:

_images/permanova_results2.png

The command ‘Beta diversity group significance’ provides a PERMANOVA that can be run on a single categorical metadata variable. If you would instead like to provide multiple terms in the form of an equation, you can use the command ‘adonis PERMANOVA test for beta group significance’. This latter command implements the ‘adonis’ function from the R package vegan.

NOTE

The sections below are optional. You can do them only if you have the time.

Filtering Data

Using Qiita, you can also filter your data, for example to remove particular samples from a feature table.

To filter the data, select the “rarefied table (BIOM)” artifact in the processing network and select “Process”. Then select “Filter samples from table” from the drop-down menu. The parameters will appear below the workflow diagram:

_images/filtered_unweighted_filtering6.png

Several parameters have been automatically selected for you. In the “SQLite WHERE-clause” field, we specify which samples to retain. In this case, we want to keep only the samples for which subject = 'Volunteer 3'; enter that clause and click “Add Command”. If instead you want to filter out all of Volunteer 3’s samples, either use the SQLite WHERE-clause above while also checking the box “If true, the samples selected… will be excluded”, or alternatively use the SQLite WHERE-clause subject != 'Volunteer 3', and click “Add Command”. If you want to filter for samples containing an apostrophe, write it out in the following format: subject = \"Volunteer 3's samples\". Keep in mind that all fields are case sensitive.
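For intuition, the same WHERE-clause logic can be sketched with pandas (the metadata below is made up):

    import pandas as pd

    metadata = pd.DataFrame(
        {"subject": ["Volunteer 1", "Volunteer 3", "Volunteer 3"]},
        index=["s1", "s2", "s3"],
    )
    kept = metadata.query("subject == 'Volunteer 3'")     # subject = 'Volunteer 3'
    dropped = metadata.query("subject != 'Volunteer 3'")  # subject != 'Volunteer 3'
    print(kept.index.tolist())     # ['s2', 's3'] remain in the table
    print(dropped.index.tolist())  # ['s1'] is filtered out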

Click “Run” to execute the filtering process.

An example of how you can use filtering in your analysis is explained in the following “Filtered Unweighted UniFrac Analysis” section.

Filtered Unweighted UniFrac Analysis

By filtering, you can perform the unweighted UniFrac analysis again, but this time without certain samples.

After filtering your data (shown in the previous “Filtering Data” section), you can perform a beta diversity analysis by selecting the “filtered_table (BIOM)” in the Processing network and clicking “Process”. Select “Beta diversity (phylogenetic)” from the drop-down menu. The parameters will appear below the workflow diagram:

_images/unweighted_beta_diversity6.png

All of the parameters have been automatically selected for you; just click “Add Command”.

To create a principal coordinates plot of the unweighted Unifrac distance matrix, select the “distance_matrix (distance_matrix)” artifact that you set up above, and select “Perform Principal Coordinate Analysis (PCoA)” from the drop-down menu. The parameters will appear below the workflow diagram:

_images/filtered_unweighted_pcoa4.png

All of the parameters have been automatically selected for you; just click “Add Command”. Once the command is added, the workflow should appear as follows:

_images/filtered_unweighted_workflow4.png

Click the run button to start the process of the beta diversity analysis. The view will return to the original screen, while the beta diversity analysis job runs.

Altering Workflow Analysis Names

To alter the name of a result, click the artifact, then use the edit button on the processing network page.

_images/rename_data_on_workflow2.png

This will cause a window to pop up in which you can enter the new name.

_images/rename_data_popup.png