OUP: Abdi: Experimental Design and Analysis for Psychology - Oxford University Press


A complete course in experimental design and analysis for those students looking to build a working understanding of data collection and analysis in a research context.

The authors' lively, entertaining writing style helps to engage and motivate students while they study these often challenging concepts and skills.

A focus on examples and exercises throughout the text encourages the development of a proper understanding through hands-on learning.

The development and use of definitional formulas throughout deepens understanding of statistical procedures and equips serious students to continue expanding their statistical knowledge.

Inclusion of Monte Carlo simulations and re-sampling techniques provides unique coverage of these topics in a student-focused text.

Extensive online support enhances the book's value as a teaching and learning tool, offering activities and problems as well as guidance on the use of key statistical software, to help readers gain a true working understanding of the subject.

Careful data collection and analysis lie at the heart of good research, through which our understanding of psychology is enhanced. Yet the students who will become the next generation of researchers need more exposure to statistics and experimental design than a typical introductory course provides.

Experimental Design and Analysis for Psychology provides a complete course in data collection and analysis for students who need to go beyond the basics.

Acting as a true course companion, the text's engaging writing style leads readers through a range of often challenging topics, blending examples and exercises with careful explanations and custom-drawn figures to ensure even the most daunting concepts can be fully understood.

Opening with a review of key concepts, including probability, correlation, and regression, the book goes on to explore the analysis of variance and factorial designs, before moving on to a range of more specialised yet powerful statistical tools, including the General Linear Model and unbalanced designs.
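As a flavour of the ground this review covers, the definitional formulas for the Pearson correlation coefficient and the one-way ANOVA F ratio can be sketched in a few lines. This sketch is illustrative only and is not taken from the book; the book's own software companions use SAS, SPSS and R.

```python
# Illustrative sketch (not from the book) of two core techniques it covers:
# the Pearson correlation coefficient and the one-way ANOVA F ratio,
# both computed from their definitional formulas.
import math
from statistics import mean

def pearson_r(x, y):
    """Pearson correlation: cross-product of deviations over the
    square root of the product of the sums of squared deviations."""
    mx, my = mean(x), mean(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

def one_way_f(*groups):
    """One-way ANOVA F ratio: between-group mean square
    divided by within-group mean square."""
    grand = mean([v for g in groups for v in g])
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((v - mean(g)) ** 2 for v in g) for g in groups)
    df_between = len(groups) - 1
    df_within = sum(len(g) for g in groups) - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)

# Tiny made-up data sets, for illustration only.
print(pearson_r([1, 2, 3, 4, 5], [2, 4, 5, 4, 5]))
print(one_way_f([3, 4, 5], [5, 6, 7], [7, 8, 9]))
```

Working directly from the definitional formulas, as the book does throughout, makes each term's role in the statistic visible rather than hiding it inside a computational shortcut.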

Not just a printed book, Experimental Design and Analysis for Psychology is enhanced by a range of online materials, all of which add to its value as an ideal teaching and learning resource.

The Online Resource Centre features:

For registered adopters: Figures from the book, available to download. Answers to exercises featured in the book. Online-only Part III: bonus chapters featuring more advanced material, to extend the coverage of the printed book.

For students: A downloadable workbook, featuring exercises for self-study. SAS, SPSS and R companions, featuring program code and output for all major examples in the book, tailored to these three software packages.

Readership: Upper-level undergraduates and beginning graduate students who are taking a second course in statistics or quantitative research methods, or who need to design studies or analyse data as part of a research project.

Hervé Abdi, University of Texas, Dallas, USA, Betty Edelman, University of Texas, Dallas, USA, Dominique Valentin, University of Bourgogne, Dijon, France, and W. Jay Dowling, University of Texas, Dallas, USA

"The structure of the book makes a lot of sense, and the chapters I have seen are well-written." - David Lane, Rice University

"Overall, I think the text has the potential to be an effective one...The writing style is excellent, and the amount and quality of in-text supporting material is excellent as well." - James Bovaird, University of Nebraska-Lincoln

1 Introduction to Experimental Design
1.1: Introduction
1.2: Independent and dependent variables
1.3: Independent variables
1.4: Dependent variables
1.5: Choice of subjects and representative design of experiments
1.7: Key notions of the chapter

2 Correlation
2.1: Introduction
2.2: Correlation: Overview and Example
2.3: Rationale and computation of the coefficient of correlation
2.4: Interpreting correlation and scatterplots
2.5: The importance of scatterplots
2.6: Correlation and similarity of distributions
2.7: Correlation and Z-scores
2.8: Correlation and causality
2.9: Squared correlation as common variance
2.10: Key notions of the chapter
2.11: Key formulas of the chapter
2.12: Key questions of the chapter

3 Statistical Test: The F test
3.1: Introduction
3.2: Statistical Test
3.3: Not zero is not enough!
3.4: Key notions of the chapter
3.5: New notations
3.6: Key formulas of the chapter
3.7: Key questions of the chapter

4 Simple Linear Regression
4.1: Introduction
4.2: Generalities
4.3: The regression line is the "best-fit" line
4.4: Example: Reaction Time and Memory Set
4.5: How to evaluate the quality of prediction
4.6: Partitioning the total sum of squares
4.7: Mathematical Digressions
4.8: Key notions of the chapter
4.9: New notations
4.10: Key formulas of the chapter
4.11: Key questions of the chapter

5 Orthogonal Multiple Regression
5.1: Introduction
5.2: Generalities
5.3: The regression plane is the "best-fit" plane
5.4: Back to the example: Retroactive interference
5.5: How to evaluate the quality of the prediction
5.6: F tests for the simple coefficients of correlation
5.7: Partitioning the sums of squares
5.8: Mathematical Digressions
5.9: Key notions of the chapter
5.10: New notations
5.11: Key formulas of the chapter
5.12: Key questions of the chapter

6 Non-Orthogonal Multiple Regression
6.1: Introduction
6.2: Example: Age, speech rate and memory span
6.3: Computation of the regression plane
6.4: How to evaluate the quality of the prediction
6.5: Semi-partial correlation as increment in explanation
6.6: F tests for the semi-partial correlation coefficients
6.7: What to do with more than two independent variables
6.8: Bonus: Partial correlation
6.9: Key notions of the chapter
6.10: New notations
6.11: Key formulas of the chapter
6.12: Key questions of the chapter

7 ANOVA One Factor: Intuitive Approach and Computation of F
7.1: Introduction
7.2: Intuitive approach
7.3: Computation of the F ratio
7.4: A bit of computation: Mental Imagery
7.5: Key notions of the chapter
7.6: New notations
7.7: Key formulas of the chapter
7.8: Key questions of the chapter

8 ANOVA, One Factor: Test, Computation, and Effect Size
8.1: Introduction
8.2: Statistical test: A refresher
8.3: Example: back to mental imagery
8.4: Another more general notation: A and S(A)
8.5: Presentation of the ANOVA results
8.6: ANOVA with two groups: F and t
8.7: Another example: Romeo and Juliet
8.8: How to estimate the effect size
8.9: Computational formulas
8.10: Key notions of the chapter
8.11: New notations
8.12: Key formulas of the chapter
8.13: Key questions of the chapter

9 ANOVA, one factor: Regression Point of View
9.1: Introduction
9.2: Example 1: Memory and Imagery
9.3: Analysis of variance for Example 1
9.4: Regression approach for Example 1: Mental Imagery
9.5: Equivalence between regression and analysis of variance
9.6: Example 2: Romeo and Juliet
9.7: If regression and analysis of variance are one thing, why keep two different techniques?
9.8: Digression: when predicting Y from M_a., b = 1
9.9: Multiple regression and analysis of variance
9.10: Key notions of the chapter
9.11: Key formulas of the chapter
9.12: Key questions of the chapter

10 ANOVA, one factor: Score Model
10.1: Introduction
10.2: ANOVA with one random factor (Model II)
10.3: The Score Model: Model II
10.4: F < 1 or The Strawberry Basket
10.5: Size effect coefficients derived from the score model: ω² and ρ²
10.6: Three exercises
10.7: Key notions of the chapter
10.8: New notations
10.9: Key formulas of the chapter
10.10: Key questions of the chapter

11 Assumptions of Analysis of Variance
11.1: Introduction
11.2: Validity assumptions
11.3: Testing the Homogeneity of variance assumption
11.4: Example
11.5: Testing Normality: Lilliefors
11.6: Notation
11.7: Numerical example
11.8: Numerical approximation
11.9: Transforming scores
11.10: Key notions of the chapter
11.11: New notations
11.12: Key formulas of the chapter
11.13: Key questions of the chapter

12 Analysis of Variance, one factor: Planned Orthogonal Comparisons
12.1: Introduction
12.2: What is a contrast?
12.3: The different meanings of alpha
12.4: An example: Context and Memory
12.5: Checking the independence of two contrasts
12.6: Computing the sum of squares for a contrast
12.7: Another view: Contrast analysis as regression
12.8: Critical values for the statistical index
12.9: Back to the Context
12.10: Significance of the omnibus F vs. significance of specific contrasts
12.11: How to present the results of orthogonal comparisons
12.12: The omnibus F is a mean
12.13: Sum of orthogonal contrasts: Subdesign analysis
12.14: Key notions of the chapter
12.15: New notations
12.16: Key formulas of the chapter
12.17: Key questions of the chapter

13 ANOVA, one factor: Planned Non-orthogonal Comparisons
13.1: Introduction
13.2: The classical approach
13.3: Multiple regression: The return!
13.4: Key notions of the chapter
13.5: New notations
13.6: Key formulas of the chapter
13.7: Key questions of the chapter

14 ANOVA, one factor: Post hoc or a posteriori analyses
14.1: Introduction
14.2: Scheffé's test: All possible contrasts
14.3: Pairwise comparisons
14.4: Key notions of the chapter
14.5: New notations
14.6: Key questions of the chapter

15 More on Experimental Design: Multi-Factorial Designs
15.1: Introduction
15.2: Notation of experimental designs
15.3: Writing down experimental designs
15.4: Basic experimental designs
15.5: Control factors and factors of interest
15.6: Key notions of the chapter
15.7: Key questions of the chapter

16 ANOVA, two factors: AxB or S(AxB)
16.1: Introduction
16.2: Organization of a two-factor design: AxB
16.3: Main effects and interaction
16.4: Partitioning the experimental sum of squares
16.5: Degrees of freedom and mean squares
16.6: The Score Model (Model I) and the sums of squares
16.7: An example: Cute Cued Recall
16.8: Score Model II: A and B random factors
16.9: ANOVA AxB (Model III): one factor fixed, one factor random
16.10: Index of effect size
16.11: Statistical assumptions and conditions of validity
16.12: Computational formulas
16.13: Relationship between the names of the sources of variability, df and SS
16.14: Key notions of the chapter
16.15: New notations
16.16: Key formulas of the chapter
16.17: Key questions of the chapter

17 Factorial designs and contrasts
17.1: Introduction
17.2: Vocabulary
17.3: Fine grained partition of the standard decomposition
17.4: Contrast analysis in lieu of the standard decomposition
17.5: What error term should be used?
17.6: Example: partitioning the standard decomposition
17.7: Example: a contrast non-orthogonal to the canonical decomposition
17.8: A posteriori Comparisons
17.9: Key notions of the chapter
17.10: Key questions of the chapter

18 ANOVA, one factor Repeated Measures design: SxA
18.1: Introduction
18.2: Advantages of repeated measurement designs
18.3: Examination of the F Ratio
18.4: Partitioning the within-group variability: S(A) = S + SA
18.5: Computing F in an SxA design
18.6: Numerical example: SxA design
18.7: Score Model: Models I and II for repeated measures designs
18.8: Effect size: R, R and R
18.9: Problems with repeated measures
18.10: Score model (Model I) SxA design: A fixed
18.11: Score model (Model II) SxA design: A random
18.12: A new assumption: sphericity (circularity)
18.13: An example with computational formulas
18.14: Another example: proactive interference
18.15: Key notions of the chapter
18.16: New notations
18.17: Key formulas of the chapter
18.18: Key questions of the chapter

19 ANOVA, Two Factors Completely Repeated Measures: SxAxB
19.1: Introduction
19.2: Example: Plungin'!
19.3: Sums of Squares, Mean Squares and F ratios
19.4: Score model (Model I), SxAxB design: A and B fixed
19.5: Results of the experiment: Plungin'
19.6: Score Model (Model II): SxAxB design, A and B random
19.7: Score Model (Model III): SxAxB design, A fixed, B random
19.8: Quasi-F: F'
19.9: A cousin F''
19.10: Validity assumptions, measures of intensity, key notions, etc
19.11: New notations
19.12: Key formulas of the chapter

20 ANOVA Two Factor Partially Repeated Measures: S(A)xB
20.1: Introduction
20.2: Example: Bat and Hat
20.3: Sums of Squares, Mean Squares, and F ratio
20.4: The comprehension formula routine
20.5: The 13 point computational routine
20.6: Score model (Model I), S(A)xB design: A and B fixed
20.7: Score model (Model II), S(A)xB design: A and B random
20.8: Score model (Model III), S(A)xB design: A fixed and B random
20.9: Coefficients of Intensity
20.10: Validity of S(A)xB designs
20.11: Prescription
20.12: New notations
20.13: Key formulas of the chapter
20.14: Key questions of the chapter

21 ANOVA, Nested Factorial Designs: SxA(B)
21.1: Introduction
21.2: Example: Faces in Space
21.3: How to analyze an SxA(B) design
21.4: Back to the example: Faces in Space
21.5: What to do with A fixed and B fixed
21.6: When A and B are random factors
21.7: When A is fixed and B is random
21.8: New notations
21.9: Key formulas of the chapter
21.10: Key questions of the chapter

22 How to derive expected values for any design
22.1: Introduction
22.2: Crossing and nesting refresher
22.3: Finding the sources of variation
22.4: Writing the score model
22.5: Degrees of freedom and sums of squares
22.6: Example
22.7: Expected values
22.8: Two additional exercises

A Descriptive Statistics
B The sum sign: Σ
C Elementary Probability: A Refresher
D Probability Distributions
E The Binomial Test
F Expected Values
Statistical tables

The specification in this catalogue, including without limitation price, format, extent, number of illustrations, and month of publication, was as accurate as possible at the time the catalogue was compiled. Occasionally, due to the nature of some contractual restrictions, we are unable to ship a specific product to a particular territory. Jacket images are provisional and liable to change before publication.