
Non-Parametric Tests

Master distribution-free statistical tests.

Using Non-Parametric Alternatives in DataStatPro: When and How to Apply Distribution-Free Tests

Learning Objectives

By the end of this tutorial, you will be able to:

  • Recognize when distribution-free tests are more appropriate than their parametric counterparts
  • Run the Mann-Whitney U, Kruskal-Wallis, Wilcoxon signed-rank, and Spearman correlation analyses in DataStatPro
  • Interpret test statistics and effect sizes, and report results in publication-ready form

When to Use Non-Parametric Tests

Non-parametric tests are preferred when:

  • Data are ordinal rather than interval or ratio
  • Distributions are markedly non-normal or skewed
  • Sample sizes are small
  • Extreme outliers are present

Advantages of Non-Parametric Tests

  • Fewer distributional assumptions
  • Robust to outliers and skewed data
  • Applicable to ordinal outcomes

Limitations of Non-Parametric Tests

  • Somewhat less power than parametric tests when parametric assumptions hold
  • Limited options for complex designs (e.g., factorial models)
  • Effect sizes are less familiar to many readers

Common Non-Parametric Tests and Their Parametric Equivalents

Parametric Test            | Non-Parametric Alternative | Use Case
One-sample t-test          | Wilcoxon signed-rank test  | Single group vs hypothesized median
Independent t-test         | Mann-Whitney U test        | Two independent groups
Paired t-test              | Wilcoxon signed-rank test  | Two related groups
One-way ANOVA              | Kruskal-Wallis test        | Multiple independent groups
Repeated measures ANOVA    | Friedman test              | Multiple related groups
Pearson correlation        | Spearman correlation       | Relationship between variables
Chi-square goodness of fit | Kolmogorov-Smirnov test    | Distribution comparison

Step-by-Step Guide: Mann-Whitney U Test

When to Use

Use the Mann-Whitney U test when:

  • You are comparing two independent groups
  • The outcome is ordinal, or continuous but non-normal
  • The assumptions of the independent t-test are violated

Step 1: Data Preparation

  1. Access Non-Parametric Tests

    • Navigate to Inference → Non-Parametric Tests
    • Select Mann-Whitney U Test
  2. Data Requirements

    • One grouping variable (2 levels)
    • One dependent variable (ordinal or continuous)
    • Independent observations

Step 2: Running the Analysis

  1. Variable Selection

    • Choose dependent variable (outcome measure)
    • Select grouping variable (group membership)
    • Verify group labels are correct
  2. Test Options

    • Choose two-tailed or one-tailed test
    • Set significance level (typically α = 0.05)
    • Request descriptive statistics

Step 3: Interpreting Results

  1. Test Statistic

    • U statistic (smaller of U₁ and U₂)
    • Z approximation for large samples
    • Exact p-value for small samples
  2. Effect Size

    • r = Z/√N (small: 0.1, medium: 0.3, large: 0.5)
    • Common language effect size
    • Probability of superiority

Example Output Interpretation

Mann-Whitney U Test Results:
U = 145.5, Z = -2.34, p = .019
Effect size r = .31 (medium effect)
Group 1 median = 23.5, Group 2 median = 18.0
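The same analysis can be sketched outside the GUI in Python with SciPy. The scores below are made up for illustration, and the normal-approximation Z is computed by hand (without a tie correction) so that r = Z/√N can be reported:

```python
import math
from scipy import stats

# Hypothetical outcome scores for two independent groups (illustrative only)
group1 = [23, 26, 21, 25, 30, 24, 22, 28, 27, 20]
group2 = [18, 17, 21, 15, 19, 22, 16, 20, 14, 18]

# U statistic and two-tailed p-value
res = stats.mannwhitneyu(group1, group2, alternative="two-sided")
n1, n2 = len(group1), len(group2)

# Normal approximation: Z = (U - mean_U) / sd_U (tie correction omitted)
mean_u = n1 * n2 / 2
sd_u = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
z = (res.statistic - mean_u) / sd_u

# Effect size r = |Z| / sqrt(N)
r = abs(z) / math.sqrt(n1 + n2)
print(f"U = {res.statistic:.1f}, p = {res.pvalue:.4f}, r = {r:.2f}")
```

DataStatPro's exact p-values for small samples may differ slightly from this asymptotic approximation.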

Step-by-Step Guide: Kruskal-Wallis Test

When to Use

Use the Kruskal-Wallis test when:

  • You are comparing three or more independent groups
  • The outcome is ordinal, or continuous but non-normal
  • The assumptions of one-way ANOVA are violated

Step 1: Analysis Setup

  1. Access Kruskal-Wallis Test

    • Go to Non-Parametric Tests → Kruskal-Wallis
    • Prepare data with grouping variable (3+ levels)
  2. Assumption Checking

    • Verify independence of observations
    • Check that group distributions have similar shapes
    • Ensure adequate sample sizes (5+ per group)

Step 2: Running the Test

  1. Variable Selection

    • Select dependent variable (outcome)
    • Choose grouping variable (3+ groups)
    • Request post-hoc comparisons if significant
  2. Post-Hoc Analysis

    • Dunn's test for pairwise comparisons
    • Bonferroni correction for multiple testing
    • Steel-Dwass method for all pairwise comparisons

Step 3: Interpretation

  1. Overall Test

    • H statistic (chi-square approximation)
    • Degrees of freedom = k - 1 (k = number of groups)
    • p-value for overall group differences
  2. Effect Size

    • Epsilon-squared (ε²) = (H - k + 1)/(N - k)
    • Eta-squared (η²) for comparison with ANOVA
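A minimal SciPy sketch of the H test and the epsilon-squared formula above, using invented scores for three groups:

```python
from scipy import stats

# Hypothetical scores for three independent groups (illustrative only)
a = [4, 5, 3, 6, 4, 5, 7]
b = [3, 4, 2, 5, 3, 4, 3]
c = [7, 6, 8, 6, 7, 9, 8]

# Kruskal-Wallis H (chi-square approximation, df = k - 1)
h, p = stats.kruskal(a, b, c)
k = 3
n = len(a) + len(b) + len(c)

# Epsilon-squared effect size: (H - k + 1) / (N - k)
eps_sq = (h - k + 1) / (n - k)
print(f"H({k - 1}) = {h:.2f}, p = {p:.4f}, epsilon^2 = {eps_sq:.2f}")
```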

Step-by-Step Guide: Wilcoxon Signed-Rank Test

When to Use

Use the Wilcoxon signed-rank test for:

  • Testing whether a single sample's median equals a hypothesized value
  • Comparing two related measurements (e.g., pre-post designs)

One-Sample Version

  1. Setup

    • Test if sample median equals hypothesized value
    • Null hypothesis: median = μ₀
    • Alternative: median ≠ μ₀ (or directional)
  2. Procedure

    • Calculate differences from hypothesized median
    • Rank absolute differences (excluding zeros)
    • Sum ranks for positive and negative differences
    • Compare to critical values or use normal approximation

Paired-Samples Version

  1. Setup

    • Compare two related measurements
    • Calculate difference scores (Time2 - Time1)
    • Test if median difference = 0
  2. Example: Pre-Post Treatment

Participant | Pre-Score | Post-Score | Difference | Rank
001         | 15        | 18         | +3         | 4
002         | 22        | 20         | -2         | 2.5
003         | 19        | 25         | +6         | 7
...
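The paired version can be run in one call with SciPy; the pre/post scores below are invented for illustration (zero differences are dropped automatically, matching the procedure above):

```python
from scipy import stats

# Hypothetical pre- and post-treatment scores for 8 participants
pre  = [15, 22, 19, 24, 17, 20, 18, 21]
post = [18, 20, 25, 27, 21, 24, 19, 26]

# Paired Wilcoxon signed-rank test on the difference scores (pre - post)
res = stats.wilcoxon(pre, post, alternative="two-sided")
print(f"W = {res.statistic:.1f}, p = {res.pvalue:.4f}")
```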

Step-by-Step Guide: Spearman Rank Correlation

When to Use

Use Spearman correlation when:

  • One or both variables are ordinal
  • The relationship is monotonic but not necessarily linear
  • Outliers distort the Pearson correlation

Step 1: Data Preparation

  1. Access Correlation Analysis

    • Navigate to Correlation & Regression → Correlation
    • Select Spearman Rank Correlation
  2. Variable Selection

    • Choose two or more variables
    • Variables can be ordinal or continuous
    • Check for missing data patterns

Step 2: Interpretation

  1. Correlation Coefficient (ρ)

    • Range: -1 to +1
    • Interpretation similar to Pearson r
    • Based on ranks rather than raw scores
  2. Significance Testing

    • t-test for significance (large samples)
    • Exact tables for small samples
    • Bootstrap confidence intervals

Comparison with Pearson Correlation

Pearson r = 0.45, p = .023
Spearman ρ = 0.62, p = .008

Interpretation: the relationship is strongly monotonic (Spearman)
but only moderately linear (Pearson)
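The Pearson-vs-Spearman contrast is easy to reproduce with SciPy on data that increase monotonically but not linearly (the values below are invented):

```python
from scipy import stats

# y grows monotonically with x, but with curvature and small rank swaps
x = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
y = [2, 1, 4, 3, 7, 8, 6, 12, 18, 25]

r_p, p_p = stats.pearsonr(x, y)     # linear association on raw scores
rho, p_s = stats.spearmanr(x, y)    # monotonic association on ranks
print(f"Pearson r = {r_p:.2f}, Spearman rho = {rho:.2f}")
```

Because Spearman works on ranks, it captures the monotonic trend that the curvature weakens in Pearson's r.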

Advanced Non-Parametric Techniques

Friedman Test (Repeated Measures)

  1. Use Case

    • Three or more related measurements
    • Alternative to repeated measures ANOVA
    • Ordinal data or violated ANOVA assumptions
  2. Post-Hoc Analysis

    • Nemenyi test for pairwise comparisons
    • Wilcoxon signed-rank for specific pairs
    • Bonferroni correction for multiple tests
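A minimal SciPy sketch of the Friedman test on three related measurements (hypothetical scores for the same six participants at three time points):

```python
from scipy import stats

# Hypothetical repeated measurements: 6 participants x 3 time points
t1 = [5, 6, 4, 7, 5, 6]
t2 = [6, 7, 5, 8, 6, 7]
t3 = [8, 9, 7, 9, 8, 9]

# Friedman chi-square statistic with df = k - 1 = 2
chi2, p = stats.friedmanchisquare(t1, t2, t3)
print(f"chi2(2) = {chi2:.2f}, p = {p:.4f}")
```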

Kendall's Tau

  1. Advantages over Spearman

    • Better for small samples
    • More robust to outliers
    • Easier interpretation (probability-based)
  2. Tau-b vs Tau-c

    • Tau-b: For square tables (equal categories)
    • Tau-c: For rectangular tables (unequal categories)
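Both variants are available in SciPy through the `variant` parameter of `kendalltau`; the paired rankings below are invented:

```python
from scipy import stats

# Two hypothetical rankings of 8 items
x = [1, 2, 3, 4, 5, 6, 7, 8]
y = [1, 3, 2, 4, 6, 5, 8, 7]

tau_b, p_b = stats.kendalltau(x, y, variant="b")  # tau-b (tie-adjusted)
tau_c, p_c = stats.kendalltau(x, y, variant="c")  # tau-c (rectangular tables)
print(f"tau-b = {tau_b:.2f}, tau-c = {tau_c:.2f}")
```

With no ties, tau-b reduces to (C - D) divided by the number of pairs: here 22/28 ≈ 0.79.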

Kolmogorov-Smirnov Tests

  1. One-Sample KS Test

    • Compare sample to theoretical distribution
    • Test normality, uniformity, etc.
    • Commonly used for large samples, though Shapiro-Wilk is often more powerful for detecting non-normality
  2. Two-Sample KS Test

    • Compare distributions of two groups
    • Tests for any distributional differences
    • Not just location differences
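Both KS variants are one-liners in SciPy; the simulated data below are illustrative:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
sample = rng.normal(loc=0, scale=1, size=200)

# One-sample KS: compare the sample to a standard normal CDF
d1, p1 = stats.kstest(sample, "norm")

# Two-sample KS: sensitive to any distributional difference, not just location
other = rng.exponential(scale=1.0, size=200)
d2, p2 = stats.ks_2samp(sample, other)
print(f"one-sample D = {d1:.3f} (p = {p1:.3f}); two-sample D = {d2:.3f}")
```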

Real-World Example: Clinical Trial Analysis

Scenario

Comparing pain reduction scores (0-10 scale) between three treatment groups with small sample sizes and skewed data.

Data Characteristics

  • Small groups (n = 15 per arm)
  • Skewed, bounded 0-10 pain-reduction scores
  • Ordinal-level measurement

Analysis Strategy

  1. Primary Analysis: Kruskal-Wallis test
  2. Post-Hoc: Dunn's test with Bonferroni correction
  3. Effect Size: Epsilon-squared
  4. Visualization: Box plots with individual points
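This strategy can be sketched in Python with invented scores. SciPy has no built-in Dunn's test, so pairwise Mann-Whitney U tests with a Bonferroni correction are used here as a common substitute for the post-hoc step:

```python
from itertools import combinations
from scipy import stats

# Hypothetical pain-reduction scores (0-10 scale) for three small groups
groups = {
    "Treatment A": [5, 6, 4, 7, 5, 8, 6],
    "Treatment B": [6, 7, 5, 8, 7, 9, 6],
    "Control":     [3, 2, 4, 3, 5, 2, 3],
}

# 1. Omnibus Kruskal-Wallis test
h, p = stats.kruskal(*groups.values())
print(f"H = {h:.2f}, p = {p:.4f}")

# 2. Post-hoc pairwise comparisons with Bonferroni-adjusted p-values
pairs = list(combinations(groups, 2))
for g1, g2 in pairs:
    u, p_raw = stats.mannwhitneyu(groups[g1], groups[g2],
                                  alternative="two-sided")
    p_adj = min(1.0, p_raw * len(pairs))  # Bonferroni: multiply, cap at 1
    print(f"{g1} vs {g2}: U = {u:.1f}, adjusted p = {p_adj:.4f}")
```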

Results Interpretation

Kruskal-Wallis H = 8.47, df = 2, p = .014
ε² = .19 (medium effect size)

Post-hoc comparisons (Dunn's test):
Treatment A vs Control: Z = 2.34, p = .057
Treatment B vs Control: Z = 2.89, p = .012*
Treatment A vs B: Z = 0.55, p = 1.000

Choosing Between Parametric and Non-Parametric Tests

Decision Framework

  1. Check Sample Size

    • n < 30: Consider non-parametric
    • n ≥ 30: Parametric may be robust
  2. Assess Normality

    • Shapiro-Wilk test (n < 50)
    • Kolmogorov-Smirnov test (n ≥ 50)
    • Visual inspection (Q-Q plots, histograms)
  3. Consider Data Type

    • Ordinal: Non-parametric preferred
    • Interval/Ratio: Either approach possible
  4. Evaluate Outliers

    • Extreme outliers: Non-parametric more robust
    • Mild outliers: Parametric may be acceptable
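Step 2 of this framework (assessing normality) can be sketched with SciPy; the skewed sample below is simulated for illustration:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
data = rng.exponential(scale=2.0, size=40)  # deliberately skewed sample

# Shapiro-Wilk test (suited to n < 50)
w, p = stats.shapiro(data)
print(f"Shapiro-Wilk W = {w:.3f}, p = {p:.4f}")
if p < 0.05:
    print("Normality rejected -> consider a non-parametric test")
```

A Q-Q plot or histogram should accompany the test, since formal normality tests are oversensitive at large n and undersensitive at small n.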

Power Considerations

When parametric assumptions are fully met, non-parametric tests give up some power; under normality, for example, the Mann-Whitney U test has roughly 95% of the efficiency of the independent t-test. When distributions are heavy-tailed or strongly skewed, the non-parametric alternatives can actually be more powerful.

Publication-Ready Reporting

Mann-Whitney U Test Example

"A Mann-Whitney U test was conducted to compare pain scores between treatment and control groups. The treatment group (Mdn = 3.5, IQR = 2.0-5.0) had significantly lower pain scores than the control group (Mdn = 6.0, IQR = 4.5-7.5), U = 45.5, z = -3.21, p = .001, r = .52, representing a large effect size."

Kruskal-Wallis Test Example

"A Kruskal-Wallis test revealed significant differences in satisfaction scores among the three treatment conditions, H(2) = 12.67, p = .002, ε² = .18. Post-hoc pairwise comparisons using Dunn's test with Bonferroni correction showed that Treatment A (mean rank = 28.5) and Treatment B (mean rank = 31.2) both differed significantly from Control (mean rank = 15.3), but did not differ from each other."

APA Style Table

Table 1
Descriptive Statistics and Non-Parametric Test Results

Group        n    Median   IQR      Mean Rank   Test Statistic
Treatment A  15   4.0      2.5-6.0  23.4       H = 8.47*
Treatment B  15   3.5      2.0-5.5  25.1       df = 2
Control      15   6.5      5.0-8.0  13.5       p = .014

Note. *p < .05. IQR = Interquartile Range.

Troubleshooting Common Issues

Problem: Tied Ranks

Solution: Most software handles ties automatically using average ranks or continuity corrections.

Problem: Very Small Samples

Solution: Use exact tests rather than normal approximations. Consider permutation tests.

Problem: Effect Size Interpretation

Solution: Use established guidelines (small/medium/large) and report confidence intervals when possible.

Problem: Multiple Comparisons

Solution: Apply appropriate corrections (Bonferroni, FDR) and report both corrected and uncorrected p-values.
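Both corrections are available in statsmodels; the uncorrected p-values below are hypothetical:

```python
from statsmodels.stats.multitest import multipletests

p_raw = [0.012, 0.034, 0.21]  # hypothetical uncorrected p-values

# Bonferroni: multiply each p-value by the number of tests (capped at 1)
reject_b, p_bonf, _, _ = multipletests(p_raw, alpha=0.05,
                                       method="bonferroni")

# Benjamini-Hochberg FDR: less conservative step-up procedure
reject_f, p_fdr, _, _ = multipletests(p_raw, alpha=0.05, method="fdr_bh")
print("Bonferroni:", [round(q, 3) for q in p_bonf])
print("FDR (BH): ", [round(q, 3) for q in p_fdr])
```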

Frequently Asked Questions

Q: Can I use non-parametric tests with normal data?

A: Yes, but you'll lose some statistical power. Parametric tests are generally preferred when assumptions are met.

Q: How do I calculate effect sizes for non-parametric tests?

A: Use r = Z/√N for Mann-Whitney, ε² for Kruskal-Wallis, and report medians and IQRs for descriptive effect sizes.

Q: What if my data have many ties?

A: Most non-parametric tests handle ties well. Extensive ties may reduce power but don't invalidate results.

Q: Should I transform data or use non-parametric tests?

A: Try transformations first if they make theoretical sense. Use non-parametric tests if transformations don't work or aren't appropriate.

Q: Can I use non-parametric tests for complex designs?

A: Options are limited. Consider robust regression, permutation tests, or mixed-effects models for complex designs.

Related Tutorials

Next Steps

After mastering non-parametric tests, consider exploring:

  • Permutation and bootstrap methods
  • Robust regression
  • Mixed-effects models for complex designs


This tutorial is part of DataStatPro's comprehensive statistical analysis guide. For more advanced techniques and personalized support, explore our Pro features.