
How to Check Normality in SPSS: A Step-by-Step Guide for Researchers
Your supervisor says "check normality first" but never explains how. Shapiro-Wilk, Kolmogorov-Smirnov, skewness, kurtosis, Q-Q plots — which test do you actually need? This practical guide walks you through every step in SPSS with real screenshots and decision rules.
The Normality Question Every Researcher Faces
Almost every parametric statistical test assumes that your data (or residuals) follow a normal distribution. Before running t-tests, ANOVA, regression, or SEM, you need to check this assumption. But normality testing is one of the most confusing topics for researchers — partly because there are multiple ways to assess it, and partly because textbooks disagree on which method to use. This guide gives you a clear, practical workflow you can follow in SPSS for any dataset.
Step 1: Visual Inspection with Histograms and Q-Q Plots
Start with Analyze → Descriptive Statistics → Explore in SPSS. Under Plots, tick the Normality plots with tests box. SPSS will produce histograms with normal curves and Q-Q (quantile-quantile) plots. In the Q-Q plot, your data points should fall approximately along the diagonal line. Large systematic deviations — S-curves, clusters away from the line, or heavy tails — suggest non-normality. Visual inspection is your first and most intuitive check, but never rely on it alone.
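If you also work outside SPSS, the same Q-Q logic can be reproduced programmatically. The sketch below (an illustration in Python with scipy, not an SPSS feature) uses scipy.stats.probplot, which computes the theoretical-vs-ordered quantile pairs a Q-Q plot displays; the correlation r between them is close to 1 when the points hug the diagonal. The simulated scores and their parameters are hypothetical.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
scores = rng.normal(loc=100, scale=15, size=80)  # simulated, roughly-normal test scores

# probplot returns the theoretical and ordered sample quantiles that a
# Q-Q plot displays, plus the fitted line; r is the correlation between
# the two quantile sets (near 1 = points fall along the diagonal).
(theoretical_q, ordered_sample), (slope, intercept, r) = stats.probplot(scores, dist="norm")

print(f"Q-Q correlation r = {r:.3f}")  # values close to 1 suggest normality
```

Pass the same result to matplotlib (or let probplot draw directly via its plot argument) if you want the visual chart rather than just the correlation.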
Step 2: Skewness and Kurtosis Values
Check the skewness and kurtosis statistics in your Explore output. The general rule: if skewness and kurtosis values fall between -2 and +2, normality is reasonable for most analyses. Some stricter guidelines suggest -1 to +1 for SEM. To get z-scores, divide each value by its standard error — if the z-score exceeds 1.96 in absolute value, the departure from normality is statistically significant at the 0.05 level. But with large samples (over 200), even trivial departures become significant, so always interpret skewness and kurtosis alongside visual plots.
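The z-score calculation above can be sketched in Python. This is an illustrative helper (the function name skew_kurt_z is ours, not an SPSS or scipy API); it uses the bias-corrected sample statistics and the exact standard-error formulas that SPSS reports for a simple random sample.

```python
import numpy as np
from scipy import stats

def skew_kurt_z(x):
    """Return z-scores for skewness and (excess) kurtosis.

    Uses bias-corrected sample statistics and the exact standard
    errors SPSS prints in the Explore / Descriptives output.
    """
    n = len(x)
    skew = stats.skew(x, bias=False)       # corresponds to SPSS 'Skewness'
    kurt = stats.kurtosis(x, bias=False)   # excess kurtosis, as in SPSS
    se_skew = np.sqrt(6 * n * (n - 1) / ((n - 2) * (n + 1) * (n + 3)))
    se_kurt = 2 * se_skew * np.sqrt((n ** 2 - 1) / ((n - 3) * (n + 5)))
    return skew / se_skew, kurt / se_kurt

rng = np.random.default_rng(1)
z_skew, z_kurt = skew_kurt_z(rng.normal(size=100))
print(f"z(skew) = {z_skew:.2f}, z(kurt) = {z_kurt:.2f}")  # |z| > 1.96 flags non-normality at .05
```

Note how the standard errors shrink as n grows — this is exactly why the z-score criterion becomes oversensitive in large samples.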
Step 3: Formal Tests — Shapiro-Wilk vs Kolmogorov-Smirnov
SPSS provides both tests in the Explore output. Use Shapiro-Wilk for samples under 50 (it is more powerful for small samples). Kolmogorov-Smirnov (with Lilliefors correction) is provided for larger samples, but it is less sensitive and often fails to detect non-normality. A non-significant result (p > 0.05) means normality is not rejected. However, with large samples (over 300), these tests almost always reject normality even when the deviation is trivially small. This is why you should never rely on formal tests alone — always combine them with visual inspection and skewness/kurtosis values.
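For readers who cross-check results outside SPSS, the Shapiro-Wilk test is available as scipy.stats.shapiro (a Lilliefors-corrected K-S test exists in statsmodels as statsmodels.stats.diagnostic.lilliefors, if you need it). A minimal sketch with simulated data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
small_sample = rng.normal(size=40)   # simulated small sample (n < 50)

# Shapiro-Wilk: the test SPSS reports alongside Kolmogorov-Smirnov
# in the Explore output, preferred for small samples.
w_stat, p_value = stats.shapiro(small_sample)
print(f"W = {w_stat:.3f}, p = {p_value:.3f}")

# Decision rule from the guide: p > 0.05 means normality is NOT rejected.
if p_value > 0.05:
    print("Normality not rejected at the .05 level")
else:
    print("Normality rejected at the .05 level")
```

Re-run this with a sample of several thousand observations drawn from an almost-normal distribution and you will see the large-sample over-sensitivity the guide warns about.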
What to Do When Data Is Not Normal
If your data violates normality, you have several options depending on the severity and your analysis type.
- Mild non-normality: most parametric tests (t-test, ANOVA, regression) are robust to mild violations, especially with samples over 30.
- Moderate non-normality: consider a data transformation (log, square root, or inverse), or use bootstrapping (available in SPSS under the Bootstrap button).
- Severe non-normality: switch to non-parametric alternatives (Mann-Whitney instead of the t-test, Kruskal-Wallis instead of ANOVA) or use robust estimators. For SEM, use the MLR (robust maximum likelihood) or WLSMV estimator instead of standard ML.
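Two of these remedies — the log transformation and the Mann-Whitney alternative — can be sketched briefly. The data below are hypothetical right-skewed measurements generated for illustration; the same menu-driven steps exist in SPSS (Transform → Compute Variable, and Analyze → Nonparametric Tests).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
# Hypothetical right-skewed outcome (e.g. reaction times) in two groups
group_a = rng.lognormal(mean=0.0, sigma=0.6, size=40)
group_b = rng.lognormal(mean=0.3, sigma=0.6, size=40)

# Option 1: a log transform often normalizes right-skewed data
log_a = np.log(group_a)
print(f"skewness before: {stats.skew(group_a):.2f}, after log: {stats.skew(log_a):.2f}")

# Option 2: non-parametric alternative to the independent-samples t-test
u_stat, p_value = stats.mannwhitneyu(group_a, group_b)
print(f"Mann-Whitney U = {u_stat:.0f}, p = {p_value:.4f}")
```

If you transform, remember to interpret and report results on the transformed scale (or back-transform), and apply the same transformation to every group being compared.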
Need Help With Your Data Analysis?
Normality checking is just one step in a proper analysis workflow. At Future House Academy, we handle the entire pipeline — from data screening and assumption testing to final analysis and APA 7 reporting. Whether you are working in SPSS, R, or AMOS, we ensure your analysis meets the standards reviewers and supervisors expect. If you are unsure whether your data meets the assumptions for your planned analysis, contact us for a free initial consultation. We will review your dataset and recommend the best approach for your specific situation.