Computing Quantile Normalization in Python



When working with high-dimensional data, preprocessing and normalizing the data are key steps in data analysis. Quantile normalization is one such statistical method that can be helpful in analyzing high-dimensional datasets. One of the main goals of performing a normalization like quantile normalization is to transform the raw data so that we remove unwanted variation due to technical artifacts and preserve the actual variation we are interested in studying. Quantile normalization is widely adopted in fields like genomics, but it can be useful in any high-dimensional setting.

In this post, we will learn how to implement quantile normalization in Python using Pandas and NumPy. We will implement the quantile normalization algorithm step by step with a toy data set. Then we will wrap the steps into a function and apply it to a simulated dataset. Finally, we will see a couple of visualizations of how the data looks before and after quantile normalization.

Let us first load the packages needed for implementing quantile normalization in Python and illustrating the steps to compute it.

import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
from scipy.stats import poisson

Say you have hundreds or thousands of observations from multiple samples. Quantile normalization is a normalization method that assumes the statistical distribution of each sample is exactly the same.

Normalization is achieved by forcing the observed distributions to be the same: the average distribution, obtained by taking the average of each quantile across samples, is used as the reference.
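Before the step-by-step Pandas walkthrough, here is a compressed NumPy sketch of the whole procedure, just to fix the idea. Note that this sketch breaks ties by position, whereas the Pandas implementation below uses a "min" rank for ties, so tied values can receive slightly different replacements.

```python
import numpy as np

def quantile_normalize_np(X):
    """Quantile normalize the columns of a 2D array.

    Each column is a sample; every value is replaced by the mean of
    the values sharing its rank across the sorted columns.
    """
    # rank of each value within its column (0-based; ties broken by position)
    ranks = np.argsort(np.argsort(X, axis=0), axis=0)
    # reference distribution: mean of each quantile across samples
    reference = np.sort(X, axis=0).mean(axis=1)
    return reference[ranks]

# Wikipedia's toy example, also used step by step below
X = np.array([[5., 4., 3.],
              [2., 1., 4.],
              [3., 4., 6.],
              [4., 2., 8.]])
print(quantile_normalize_np(X))
```

After normalization, every column contains exactly the values of the reference distribution, just in different orders.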

The figure below nicely illustrates the steps needed to perform quantile normalization, and we will follow these steps to implement it in Python. The figure is taken from a paper on bioRxiv, titled “When to Use Quantile Normalization?”. Check out the paper for more details on quantile normalization.

Algorithm to Compute Quantile Normalization

Algorithm to Compute Quantile Normalization (Credit: Hicks and Irizarry, bioRxiv, 2014)

Let us create a dataframe with some toy data to do quantile normalization. The dataframe here contains the same data as the Wikipedia page on quantile normalization.

df = pd.DataFrame({'C1': {'A': 5, 'B': 2, 'C': 3, 'D': 4},
                   'C2': {'A': 4, 'B': 1, 'C': 4, 'D': 2},
                   'C3': {'A': 3, 'B': 4, 'C': 6, 'D': 8}})

Our toy dataframe has three columns and four rows.


       C1       C2      C3
A       5       4       3
B       2       1       4
C       3       4       6
D       4       2       8

Step 1: Order values in every column

The first step in performing quantile normalization is to sort each column (each sample) independently. To sort all the columns independently, we use the NumPy sort() function on the values from the dataframe. Since we lose the column and index names with NumPy, we create a new sorted dataframe using the sorted results together with the original index and column names.

df_sorted = pd.DataFrame(np.sort(df.values, axis=0), index=df.index, columns=df.columns)

The dataframe after sorting each column looks like this. By doing this, we are grouping observations with high/low values together.


        C1      C2      C3
A       2       1       3
B       3       2       4
C       4       4       6
D       5       4       8

Step 2: Compute Row Means

Since we have sorted each sample's data independently, the average value of each observation, i.e. each row, is in ascending order.

The next step is to compute the average of each observation. We use the sorted dataframe and compute the mean of each row using Pandas' mean() function with the axis=1 argument.

df_mean = df_sorted.mean(axis=1)

We get the mean value of each row after sorting, with the original index.


A    2.000000
B    3.000000
C    4.666667
D    5.666667
dtype: float64

These mean values will replace the original data in each column, such that we preserve the order of each observation or feature within the samples/columns. This basically forces all the samples to have the same distribution.

Note that the mean values are in ascending order: the first value has the lowest rank and the last the highest. Let us change the index to reflect that the means we computed are ranked from low to high. To do that, we reassign the index with ranks starting from 1. Note that our index starts at 1, reflecting that it is a rank.

df_mean.index = np.arange(1, len(df_mean) + 1)
1    2.000000
2    3.000000
3    4.666667
4    5.666667
dtype: float64

Step 3: Use Average Values for Each Sample in the Original Order

The third and final step is to use the row average values (the mean quantiles) to replace the raw data, keeping the original order. What this means is: if the original value of the first element of a sample is the smallest in that sample, we replace it with the smallest row mean.

In our toy example, we can see that the first element of the third column C3 is 3, and it is the smallest value in column C3. So we will use the smallest row mean, 2.0, as its replacement. Similarly, the second element of C3 in the original data is 4 and it is the second smallest in C3, so we will replace it with 3.0, the second smallest row mean.

To implement this, we need to get the rank of the original data in each column independently. We can use Pandas' rank() function with method="min" to get that.


        C1      C2      C3
A       4       3       1
B       1       1       2
C       2       3       3
D       3       2       4
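The rank table above can be reproduced with Pandas' rank(). Note the tie in column C2: with method="min", the two 4s (rows A and C) both get the lowest rank in their tie group, rank 3, so they will later map to the same mean value.

```python
import pandas as pd

# same toy data as the Wikipedia example above
df = pd.DataFrame({'C1': {'A': 5, 'B': 2, 'C': 3, 'D': 4},
                   'C2': {'A': 4, 'B': 1, 'C': 4, 'D': 2},
                   'C3': {'A': 3, 'B': 4, 'C': 6, 'D': 8}})

# method="min" assigns tied values the lowest rank in the tie group
df_rank = df.rank(method="min").astype(int)
print(df_rank)
```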

Now that we have the rank dataframe, we can use the ranks to look up the corresponding average values. One way to do that is to convert the rank dataframe from wide form to tidy/long form. We can use the stack() function to reshape the data from wide form to long form:

df.rank(method="min").stack().astype(int)

A  C1    4
   C2    3
   C3    1
B  C1    1
   C2    1
   C3    2
C  C1    2
   C2    3
   C3    3
D  C1    3
   C2    2
   C3    4
dtype: int64

Then all we need to do is map our row mean data, which has rank as its index, onto the rank column of the tidy data. We can nicely chain each operation and get the quantile normalized data. In the code below, we also reshape the tidy normalized data back to wide form with unstack().

df_qn = df.rank(method="min").stack().astype(int).map(df_mean).unstack()

Now we have our quantile normalized dataframe.

        C1      C2      C3
A       5.666667        4.666667        2.000000
B       2.000000        2.000000        3.000000
C       3.000000        4.666667        4.666667
D       4.666667        3.000000        5.666667

Python Function to Compute Quantile Normalization

Step-by-step code for the toy example is useful for understanding how quantile normalization is implemented. Let us wrap the statements into a function and try it on a slightly more realistic data set.

def quantile_normalize(df):
    """
    input: dataframe with numerical columns
    output: dataframe with quantile normalized values
    """
    df_sorted = pd.DataFrame(np.sort(df.values, axis=0),
                             index=df.index,
                             columns=df.columns)
    df_mean = df_sorted.mean(axis=1)
    df_mean.index = np.arange(1, len(df_mean) + 1)
    df_qn = df.rank(method="min").stack().astype(int).map(df_mean).unstack()
    return df_qn
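As a quick usage check, applying the function to the toy dataframe from earlier reproduces the quantile normalized table above (the function definition is repeated here so the snippet runs on its own):

```python
import numpy as np
import pandas as pd

def quantile_normalize(df):
    """
    input: dataframe with numerical columns
    output: dataframe with quantile normalized values
    """
    df_sorted = pd.DataFrame(np.sort(df.values, axis=0),
                             index=df.index,
                             columns=df.columns)
    df_mean = df_sorted.mean(axis=1)
    df_mean.index = np.arange(1, len(df_mean) + 1)
    df_qn = df.rank(method="min").stack().astype(int).map(df_mean).unstack()
    return df_qn

# toy data from the Wikipedia example
df = pd.DataFrame({'C1': {'A': 5, 'B': 2, 'C': 3, 'D': 4},
                   'C2': {'A': 4, 'B': 1, 'C': 4, 'D': 2},
                   'C3': {'A': 3, 'B': 4, 'C': 6, 'D': 8}})

df_qn = quantile_normalize(df)
print(df_qn)  # matches the quantile normalized table above
```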

Let us generate a dataset with three columns and 5000 rows/observations. We use Poisson random variables with different means to generate the three columns of data.

c1 = poisson.rvs(mu=10, size=5000)
c2 = poisson.rvs(mu=15, size=5000)
c3 = poisson.rvs(mu=20, size=5000)
# collect the columns into a dataframe for the plots below
df = pd.DataFrame({"C1": c1, "C2": c2, "C3": c3})

Visualizing the Effect of Quantile Normalization

One of the ways to visualize the original raw data is to make density plots. Here we use Pandas' plotting capability to make multiple density plots of the raw data.

df.plot.density()


We can see that each distribution is distinct, as we intended.

Density Plot of Raw Data

Density Plot of Raw Data: Before Quantile Normalization

Let us apply our function to compute the quantile normalized data.

# compute quantile normalized data
df_qn = quantile_normalize(df)

Let us make the density plot again, but this time with the quantile normalized data.

df_qn.plot.density()
plt.title("Density plot after Quantile Normalization")

We can see that the density plots of the quantile normalized data look very similar to each other, as we expected.

Density plot after Quantile Normalization

Density plot after Quantile Normalization

Another way to visualize the effect of quantile normalization on a data set is to use a boxplot of each column/variable.

Let us make boxplots of the original data before normalization. We use Seaborn's boxplot function with the wide form of the data.

# make boxplots with Seaborn using the wide-form data
sns.boxplot(data=df)
# set x-axis label
plt.xlabel("Samples", size=18)
# set y-axis label
plt.ylabel("Measurement", size=18)
plt.title("Boxplot of raw data before Quantile Normalization")

We can see that the three distributions have different means/medians.

Boxplot before Quantile Normalization Seaborn

Boxplot before Quantile Normalization with Seaborn

Now let us make boxplots using the quantile normalized data.

# make boxplots of the quantile normalized data
sns.boxplot(data=df_qn)
# set x-axis label
plt.xlabel("Samples", size=18)
# set y-axis label
plt.ylabel("Measurement", size=18)
plt.title("Boxplot after Quantile Normalization")

By design, we can see that all three boxplots corresponding to the three columns look very similar.

Boxplot after Quantile Normalization Seaborn

Boxplot after Quantile Normalization Seaborn