Introduction

This report describes the results of a preregistered study (preregistration available at: https://osf.io/gkd8s).


Note also that the data were cleaned beforehand. Several datasets spanning the three measurement times were merged through an inner join, so as to keep only participants who took part in each step of the study. Missing data will be imputed later on. Duplicates were addressed with the rempsyc::best_duplicate function, which keeps the duplicate with the least missing values and, in case of ties, the first occurrence. However, for duplicate participation in the activities and exercises, the occurrence with the higher completion percentage (% of total activity time) was kept instead of the first occurrence.
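To illustrate the merging and duplicate-handling logic, here is a minimal sketch. The data frames t1, t2, t3, and activities and the column duration_percentage are hypothetical stand-ins for the raw files, and the duplicate rule is reimplemented in plain dplyr rather than by calling rempsyc::best_duplicate.

library(dplyr)

# Inner joins keep only participants present at all three measurement times
# (hypothetical data frames t1, t2, t3, keyed by ID)
merged <- t1 %>% 
  inner_join(t2, by = "ID") %>% 
  inner_join(t3, by = "ID")

# For duplicate activity/exercise entries, keep the occurrence with the
# highest completion percentage rather than the first occurrence
activities <- activities %>% 
  group_by(ID) %>% 
  slice_max(duration_percentage, n = 1, with_ties = FALSE) %>% 
  ungroup()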

Packages & Data

Packages

library(dplyr)
library(interactions)
library(performance)
library(see)
library(report)
library(datawizard)
library(modelbased)
library(ggplot2)
library(bestNormalize)
library(psych)
library(GPArotation)
library(visdat)
library(missForest)
library(doParallel)
library(emmeans)
library(sjlabelled)
library(tidyr)
library(tools)
library(flextable)
library(pwr)
library(patchwork)
library(ggpubr)
library(survival)
library(survminer)
if(packageVersion("rempsyc") < "0.1.7.6") stop("Please install 'rempsyc' package version > '1.1.7.6'")
library(rempsyc)
# remotes::install_github("Buedenbender/datscience")
library(datscience)
# remotes::install_version("eefAnalytics", version = "1.0.6", repos = "http://cran.us.r-project.org")
if(packageVersion("eefAnalytics") > "1.0.6") stop("Please install 'eefAnalytics' package version '1.0.6'")
library(eefAnalytics)

Data

# Read data
# data <- readRDS("Data/finaldataset_n496.rds")
# inner.join <- FALSE
data <- readRDS("Data/finaldataset_n217.rds")
inner.join <- TRUE

report_participants(data, threshold = 1) %>% cat

217 participants (Mean age = 24.9, SD = 4.1, range: [18, 35]; Gender: 79.3% women, 20.7% men, 0.00% non-binary)

# Allocation ratio
report(data$T1_Group)

x: 3 levels, namely Meditation (n = 65, 29.95%), Reflection (n = 56, 25.81%) and Waitlist (n = 96, 44.24%)

report(data$T2_Condition)

x: 2 levels, namely Control (n = 119, 54.84%) and Depleted (n = 98, 45.16%)

Data Cleaning

In this section, we prepare the data for analysis: (a) taking care of preliminary exclusions, (b) checking for and exploring missing values, (c) imputing missing data with missForest, (d) computing scale means, and (e) extracting reliability indices for our scales.

Preliminary exclusions

Here, we report on participant exclusions and the corresponding changes to the sample.

Participation %

We want to keep only participants who had a participation level of at least 2/3 of all activities and exercises. Let’s look at the distribution of participation by group. Note, however, that we do not want to exclude participants from the control group, so we give them an artificial 100% participation rate. How many people would we exclude?

# Control participants have no participation data (NA), so give them 100%
data <- data %>% 
  mutate(part.percent = convert_na_to(part.percent, 1))

data %>% 
  filter(part.percent < 2/3) %>% 
  count(T1_Group)
T1_Group n
Meditation 6
Reflection 1

7 people. Let’s exclude them.

data2 <- data

data <- data %>% 
  filter(part.percent >= 2/3)

report_participants(data, threshold = 1) %>% cat
## 210 participants (Mean age = 24.8, SD = 4.0, range: [18, 35]; Gender: 80.5% women, 19.5% men, 0.00% non-binary)
# Allocation ratio
report(data$T1_Group)
## x: 3 levels, namely Meditation (n = 59, 28.10%), Reflection (n = 55, 26.19%)
## and Waitlist (n = 96, 45.71%)
report(data$T2_Condition)
## x: 2 levels, namely Control (n = 113, 53.81%) and Depleted (n = 97, 46.19%)

Attention Checks

Let’s also exclude those who failed two or more attention checks (i.e., keep those who passed at least two of the three).

data <- data %>% 
    mutate(att_check = rowSums(
      select(., T1_attention1, T2_attention1, T3_attention1), na.rm = TRUE))

data %>% 
  count(att_check)
att_check n
0 3
1 4
2 13
3 190
data %>% 
  group_by(T1_Group) %>% 
  count(att_check)
T1_Group att_check n
Meditation 1 1
Meditation 2 1
Meditation 3 57
Reflection 1 2
Reflection 2 5
Reflection 3 48
Waitlist 0 3
Waitlist 1 1
Waitlist 2 7
Waitlist 3 85

There are 7 more exclusions here.

data <- data %>% 
  filter(att_check >= 2)

Demographics

Sample

report_participants(data, threshold = 1) %>% cat

203 participants (Mean age = 24.8, SD = 3.9, range: [18, 35]; Gender: 81.3% women, 18.7% men, 0.00% non-binary)

report_participants(data, threshold = 1, by = "T1_Group") %>% cat

For the ‘T1_Group - Meditation’ group: 58 participants (Mean age = 25.2, SD = 4.5, range: [18, 35]; Gender: 82.8% women, 17.2% men, 0.00% non-binary), for the ‘T1_Group - Reflection’ group: 53 participants (Mean age = 25.1, SD = 3.7, range: [19, 35]; Gender: 69.8% women, 30.2% men, 0.00% non-binary) and for the ‘T1_Group - Waitlist’ group: 92 participants (Mean age = 24.4, SD = 3.7, range: [18, 35]; Gender: 87.0% women, 13.0% men, 0.00% non-binary)

# Allocation ratio
report(data$T1_Group)

x: 3 levels, namely Meditation (n = 58, 28.57%), Reflection (n = 53, 26.11%) and Waitlist (n = 92, 45.32%)

report(data$T2_Condition)

x: 2 levels, namely Control (n = 113, 55.67%) and Depleted (n = 90, 44.33%)

Recruitment

“How did you hear about the study?”

get_label(data$T1_recruitment) %>% cat
## Comment avez-vous entendu parler de l'étude? - Selected Choice
report(data$T1_recruitment)
## x: 8 levels, namely Email (n = 49, 24.14%), Facebook (n = 116, 57.14%), google
## (n = 5, 2.46%), Kijiji (n = 8, 3.94%), Other (n = 0, 0.00%), SQRP bulletin
## board (n = 10, 4.93%), SSMU Marketplace (n = 6, 2.96%) and Word of mouth (n =
## 9, 4.43%)
data %>% 
    count(T1_recruitment, sort = TRUE)
T1_recruitment n
Facebook 116
Email 49
SQRP bulletin board 10
Word of mouth 9
Kijiji 8
SSMU Marketplace 6
google 5

Age

data %>% 
  nice_density("age", histogram = TRUE)

Gender

data %>% 
    count(gender, sort = TRUE)
gender n
Female 165
Male 38

Psychology Classes

data <- data %>% 
  var_labels(T1_psycho.class = "Have you already completed a psychology course?")

get_label(data$T1_psycho.class) %>% cat
## Have you already completed a psychology course?
data %>% 
    count(T1_psycho.class, sort = TRUE)
T1_psycho.class n
Yes 124
No 79

Virtual Reality

“Have you ever tried virtual reality?”

get_label(data$T1_virtual.reality) %>% cat
## Avez-vous déjà essayé une sorte de réalité virtuelle?
data %>% 
    count(T1_virtual.reality, sort = TRUE)
T1_virtual.reality n
No 137
Yes 66

Meditation Experience

data <- data %>% 
  var_labels(T1_medi.experience = "What is your meditation experience?")

get_label(data$T1_medi.experience) %>% cat
## What is your meditation experience?
data %>% 
    count(T1_medi.experience, sort = TRUE, .drop = FALSE)
T1_medi.experience n
< 5 hours 161
Between 5 and 10 hours 42
> 10 hours 0

Disorders

“Do you have a history of psychiatric or neurological disorders?”

get_label(data$T1_disorders) %>% cat
## Avez-vous des antécédents de troubles psychiatriques ou neurologiques?
data %>% 
    count(T1_disorders, sort = TRUE)
T1_disorders n
No 203

Vision

“Do you have normal or corrected-to-normal vision?”

get_label(data$T1_vision) %>% cat
## Avez-vous une vision normale ou corrigée à la normale?
data %>% 
    count(T1_vision, sort = TRUE)
T1_vision n
Yes 203

Phone

“Do you own a smart phone of type iPhone or Android?”

get_label(data$T1_phone) %>% cat
## Possédez-vous un téléphone intelligent de type iPhone ou Android?
data %>% 
    count(T1_phone, sort = TRUE)
T1_phone n
Yes 201
No 2

Live in Quebec

“Do you live in Quebec at this time?”

get_label(data$T1_quebec) %>% cat
## Habitez-vous au Québec présentement?
data %>% 
    count(T1_quebec, sort = TRUE)
T1_quebec n
NA 168
Yes 34
No 1

Student

data <- data %>% 
  var_labels(T1_student = "Are you a student?")

get_label(data$T1_student) %>% cat
## Are you a student?
data %>% 
    count(T1_student, sort = TRUE)
T1_student n
Student 160
Non-student 43
get_label(data$T1_student.program_cat) %>% cat
## Dans quel programme étudiez-vous?
report(data$T1_student.program)
## x: 111 entries, such as psychologie (11.33%); baccalauréat en psychologie
## (2.96%); criminologie (2.96%) and 108 others (43 missing)
data %>% 
    count(T1_student.program_cat, sort = TRUE) %>% 
  filter(n > 3)
T1_student.program_cat n
Psychology 45
NA 43
Other 29
Biology 7
Teaching 7
Biomedical Sciences 6
Law 5
Nursing 5
Social Work 5
Arts 4
History 4
Neuroscience 4
Sexology 4

Workplace

“In which field do you work?”

get_label(data$T1_workplace) %>% cat
## Dans quel domaine travaillez-vous?
report(data$T1_workplace)
## x: 39 entries, such as informatique (1.48%); administration (0.99%);
## psychologie (0.99%) and 36 others (160 missing)
data %>% 
    count(T1_workplace_cat, sort = TRUE) %>% 
  filter(n > 1)
T1_workplace_cat n
NA 160
Other 12
Health 5
Information and Technology 5
Psychology 5
Admin 4
Arts & Culture 4

Meditation Practice

Participants were asked questions about their meditation practice during the 13 weeks.

get_label(data$T3_post.medipractice) %>% cat
## Avez-vous pratiqué la méditation au cours des 13 dernières semaines?
data %>% 
    count(T3_post.medipractice, sort = TRUE)
T3_post.medipractice n
No 107
Yes 95
NA 1
get_label(data$T3_medipractice.which) %>% cat
## Quel type de méditation avez-vous pratiqué? - Selected Choice
report(data$T3_medipractice.which)
## x: 21 entries, such as NA (53.20%); mindfulness (20.69%); LKM (7.88%) and 18
## others (0 missing)
data %>% 
    count(T3_medipractice.which, sort = TRUE)
T3_medipractice.which n
NA 108
mindfulness 42
LKM 16
mindfulness, LKM 16
chakras 3
mindfulness, LKM, vipassana 2
other, (Guidée) 2
LKM, chakras 1
LKM, chakras, transcendental 1
mindfulness, LKM, chakras 1
mindfulness, LKM, chakras, other, (amour-propre et healing) 1
mindfulness, LKM, other, (J’ai fait les 42 jours de méditation par l’étude et de la méditation seul (avec bol)) 1
mindfulness, LKM, other, (en marchant) 1
mindfulness, chakras 1
mindfulness, other, (Respiration) 1
mindfulness, transcendental 1
other, (Générique, juste se coucher et se vider l’esprit) 1
other, (Libre pensée / se-laisser-porter) 1
other, (Relaxation) 1
other, (Respiration-visualisation) 1
vipassana 1
get_label(data$T3_medipractice.time) %>% cat
## À quelle fréquence avez-vous médité par semaine?
data %>% 
    count(T3_medipractice.time, sort = TRUE)
T3_medipractice.time n
NA 108
10-20 min 35
< 10 min 33
20-30 min 10
1-2 h 8
30-60 min 8
> 2 h 1
get_label(data$T3_choice.medicomp) %>% cat
## Aimeriez-vous recevoir le programme de méditation gratuitement?
data %>% 
    count(T3_choice.medicomp, sort = TRUE)
T3_choice.medicomp n
Oui 104
Non 93
already received 5
NA 1
data <- data %>% 
  mutate(T3_medipractice.time = ifelse(
    is.na(T3_medipractice.time), "NA", T3_medipractice.time))

Survival Analysis

Eysenbach (2005) notes:

Thus, in any longitudinal eHealth study, we can draw two kinds of attrition curves: (1) proportion of users who are lost to follow-up over time, and (2) proportion of users who do not drop out (eg, who are still filling in questionnaires), but who are no longer using the application, plotted over time. My hypothesis is that the loss-to-follow-up attrition curve usually follows the nonusage attrition curve because a high proportion of loss to follow-up is a result of nonusage (“losing interest” is the underlying variable which explains both curves). In longitudinal studies with control groups, for example randomized trials, a third curve can be drawn to illustrate loss-to-follow-up rate in the comparison group.

Loss-to-Follow-Up Attrition Curve

These are the participants who stopped completing questionnaires at T2 or T3 (for all three groups).

# Part1: setting up the data ----------------------------------------------
data_surv <- readRDS("Data/finaldataset_n496.rds")

# Add Time points completions
any_not_na <- \(x) any(!is.na(x))

data_surv <- data_surv %>% 
  rowwise() %>% 
  mutate(T1_completed = any_not_na(pick(contains("T1_"))),
         T2_completed = any_not_na(pick(contains("T2_"))),
         T3_completed = any_not_na(pick(contains("T3_"))),
         T_completed = rowSums(pick(T1_completed:T3_completed)))

## Step 1 ------------------------------------------------------------------

# Step 1 = calculate number of questionnaires completed...
# I think we can make the decision here as NOT MISSING, instead of
# completed more than 70%... it's not about how much they engaged in the
# treatment here... but rather about whether they opened the survey at all,
# that they are still checking their emails.

# For now we should only focus on the intervention groups and ignore waitlist...
# So we should have...
n1 <- data_surv %>% 
  filter(T1_Group != "Waitlist") %>% 
  count(T1_Group)
n1
T1_Group n
Meditation 152
Reflection 154
sum(n1$n)
## [1] 306
# 306 people
# But the participation file only has 285 rows... hmm
# Importantly, data_surv doesn't contain the raw exercise or activity data,
# just the questionnaires... so likely more people did the questionnaires but
# didn't do the exercises...
# And only 154 in the reflection group

# ChatGPT also suggested excluding participants who failed the attention
# checks BEFORE doing the survival analysis, for consistency with my
# inclusion criteria for the main analyses... so here we go

# data_surv <- data_surv %>% 
#   mutate(att_check = rowSums(
#     select(., T1_attention1, T2_attention1, T3_attention1), na.rm = TRUE)) %>% 
#   filter(att_check >= 2)
# But that wouldn't work, because it would underestimate dropout since all the
# dropouts of course couldn't do the attention checks

# We would also have to deal with activities (weekly) vs. exercises (daily)...
# but maybe ignore activities for now for the sake of simplicity...

# For the exercises for reflection group for instance, 
# maybe that makes sense that we have a small sample, because 
# med.ex2 gives us only 124 unique rows of all surveys
# > lapply(med.ex2, \(x) nrow(x)) %>% unlist %>% max()
# [1] 124
# And even med.ex gives us 137(not necessarily unique) rows of all surveys
# > lapply(med.ex, \(x) nrow(x)) %>% unlist %>% max()
# [1] 137
# So I think we can go for the 124 reflection group exercises number...

D_duration_percentage <- paste0("D", 1:42, "_duration_percentage")
W_duration_percentage <- paste0("W", 1:6, "_duration_percentage")
duration_percentage <- c(D_duration_percentage, W_duration_percentage)
non_na_to_1 <- \(x) {ifelse(is.na(x), x, 1)}

data_surv <- data_surv %>% 
  mutate(across(all_of(duration_percentage), non_na_to_1),
  exercise_completed_discrete = rowSums(across(all_of(
    D_duration_percentage)), na.rm = TRUE),
  activity_completed_discrete = rowSums(across(all_of(
    W_duration_percentage)), na.rm = TRUE)) %>% 
  select(T1_Group, part.percent, 
         exercise_completed_discrete, 
         activity_completed_discrete,
         T_completed,
         age:T1_quebec)

## Step 2 ------------------------------------------------------------------

# Step 2 = determine binary status of inclusion or exclusion... (0 and 1)

data_surv <- data_surv %>% 
  mutate(Status = ifelse(part.percent >= 2/3, 0, 1),
         Status = ifelse(is.na(Status), 1, Status),
         completed_discrete = exercise_completed_discrete + activity_completed_discrete,
         .after = "T1_Group") %>% 
  as.data.frame()

# Part 2: Analysis proper -------------------------------------------------

## Time points only ------------------------------------------------------------------

# Next, we look at survival curves by treatment.
fit <- survfit(Surv(T_completed, Status) ~ T1_Group, data = data_surv)

# add method to grid.draw to avoid error in survminer when printing
grid.draw.ggsurvplot <- function(x){
  survminer:::print.ggsurvplot(x, newpage = FALSE)
}

# Enhanced plot using survminer
loss_to_followup <- ggsurvplot(
  fit, 
  data = data_surv,
  xlab = "Number of Surveys",
  ylab = "Dropout Probability (questionnaires only)",
  title = "Survival Curve Based on Kaplan-Meier Estimates (Thériault et al., 2024)",
  caption = "Error bands = 95% confidence bands. Dotted lines = median survival values.",
  font.caption = 13,
  conf.int = TRUE,
  pval = TRUE,
  pval.method = TRUE,
  risk.table = TRUE,
  legend.title = "",
  legend.labs = c("Meditation", "Reflection", "Waitlist"),
  font.main = c(14, "bold"),
  font.x = c(12, "plain"),
  font.y = c(12, "plain"),
  font.tickslab = c(10, "plain"),
  break.x.by = 1,
  surv.median.line = "hv"
)
loss_to_followup

# Save high-res .pdf and .png image files
ggsave("Figure X_survival_T1-T3.pdf", plot = loss_to_followup, 
       width = 9, height = 7, unit = "in", dpi = 300)
ggsave("Figure X_survival_T1-T3.png", plot = loss_to_followup, 
       width = 9, height = 7, unit = "in", dpi = 300)

# Perform the log-rank test
survdiff(Surv(T_completed, Status) ~ T1_Group, data = data_surv)
## Call:
## survdiff(formula = Surv(T_completed, Status) ~ T1_Group, data = data_surv)
## 
##                       N Observed Expected (O-E)^2/E (O-E)^2/V
## T1_Group=Meditation 152       69      103     11.10      25.2
## T1_Group=Reflection 154       78       96      3.38       7.5
## T1_Group=Waitlist   190      190      138     19.39      52.2
## 
##  Chisq= 53.7  on 2 degrees of freedom, p= 2e-12
# Fit a Cox model
cox_model <- coxph(Surv(T_completed, Status) ~ T1_Group + age + gender +
                     #T1_already.participated + 
                     T1_psycho.class + 
                     T1_virtual.reality + T1_medi.experience #+ 
                   #T1_phone #+ 
                   #T1_quebec
                   , 
                   data = data_surv)

# View the summary
summary(cox_model)
## Call:
## coxph(formula = Surv(T_completed, Status) ~ T1_Group + age + 
##     gender + T1_psycho.class + T1_virtual.reality + T1_medi.experience, 
##     data = data_surv)
## 
##   n= 496, number of events= 337 
## 
##                                              coef exp(coef) se(coef)      z
## T1_GroupReflection                        0.27656   1.31859  0.16795  1.647
## T1_GroupWaitlist                          1.00123   2.72163  0.14741  6.792
## age                                      -0.02592   0.97441  0.01477 -1.755
## genderMale                               -0.09380   0.91046  0.15041 -0.624
## genderNon-Binary                          1.49574   4.46264  0.52385  2.855
## T1_psycho.classYes                       -0.06550   0.93660  0.11584 -0.565
## T1_virtual.realityYes                     0.13698   1.14680  0.11394  1.202
## T1_medi.experience> 10 hours              0.62021   1.85932  0.71422  0.868
## T1_medi.experienceBetween 5 and 10 hours -0.11797   0.88873  0.14631 -0.806
##                                          Pr(>|z|)    
## T1_GroupReflection                         0.0996 .  
## T1_GroupWaitlist                         1.11e-11 ***
## age                                        0.0792 .  
## genderMale                                 0.5329    
## genderNon-Binary                           0.0043 ** 
## T1_psycho.classYes                         0.5718    
## T1_virtual.realityYes                      0.2293    
## T1_medi.experience> 10 hours               0.3852    
## T1_medi.experienceBetween 5 and 10 hours   0.4201    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
##                                          exp(coef) exp(-coef) lower .95
## T1_GroupReflection                          1.3186     0.7584    0.9487
## T1_GroupWaitlist                            2.7216     0.3674    2.0387
## age                                         0.9744     1.0263    0.9466
## genderMale                                  0.9105     1.0983    0.6780
## genderNon-Binary                            4.4626     0.2241    1.5984
## T1_psycho.classYes                          0.9366     1.0677    0.7464
## T1_virtual.realityYes                       1.1468     0.8720    0.9173
## T1_medi.experience> 10 hours                1.8593     0.5378    0.4586
## T1_medi.experienceBetween 5 and 10 hours    0.8887     1.1252    0.6672
##                                          upper .95
## T1_GroupReflection                           1.833
## T1_GroupWaitlist                             3.633
## age                                          1.003
## genderMale                                   1.223
## genderNon-Binary                            12.459
## T1_psycho.classYes                           1.175
## T1_virtual.realityYes                        1.434
## T1_medi.experience> 10 hours                 7.539
## T1_medi.experienceBetween 5 and 10 hours     1.184
## 
## Concordance= 0.59  (se = 0.02 )
## Likelihood ratio test= 72.63  on 9 df,   p=5e-12
## Wald test            = 70.55  on 9 df,   p=1e-11
## Score (logrank) test = 75.07  on 9 df,   p=2e-12
# Visualize the Cox model
ggforest(cox_model, data = data_surv)

Nonusage Attrition Curve

These are the participants who stopped adhering to treatment (only the experimental groups, with daily exercises and weekly activities).

## By treatment ------------------------------------------------------------------

# We have to exclude waitlist for the time being
data_surv <- data_surv %>%
  filter(T1_Group != "Waitlist")
# Or instead recode it properly...
# data_surv <- data_surv %>%
#   mutate(completed_discrete = case_when(
#     T1_Group == "Waitlist" ~ T_completed * 20,
#     TRUE ~ completed_discrete))

# Next, we look at survival curves by treatment.
fit <- survfit(Surv(completed_discrete, Status) ~ T1_Group, data = data_surv)

# Enhanced plot using survminer
nonusage <- ggsurvplot(
  fit, 
  data = data_surv,
  xlab = "Number of Surveys",
  ylab = "Dropout Probability",
  title = "Survival Curve Based on Kaplan-Meier Estimates (Thériault et al., 2024)",
  caption = "Error bands = 95% confidence bands. Dotted lines = median survival values.",
  font.caption = 13,
  conf.int = TRUE,
  pval = TRUE,
  pval.method = TRUE,
  risk.table = TRUE,
  palette = c("#F8766D", "#53B400"),
  legend.title = "",
  legend.labs = c("Meditation", "Reflection"),
  # legend.labs = c("Meditation", "Reflection", "Waitlist"),
  font.main = c(14, "bold"),
  font.x = c(12, "plain"),
  font.y = c(12, "plain"),
  font.tickslab = c(10, "plain"),
  surv.median.line = "hv",
  break.time.by = 8
)
nonusage

# Save high-res .pdf and .png image files
ggsave("Figure X_survival.pdf", plot = nonusage, width = 9, height = 7, unit = "in", dpi = 300)
ggsave("Figure X_survival.png", plot = nonusage, width = 9, height = 7, unit = "in", dpi = 300)

# Perform the log-rank test
survdiff(Surv(completed_discrete, Status) ~ T1_Group, data = data_surv)
## Call:
## survdiff(formula = Surv(completed_discrete, Status) ~ T1_Group, 
##     data = data_surv)
## 
##                       N Observed Expected (O-E)^2/E (O-E)^2/V
## T1_Group=Meditation 152       69     78.5      1.15      2.65
## T1_Group=Reflection 154       78     68.5      1.32      2.65
## 
##  Chisq= 2.6  on 1 degrees of freedom, p= 0.1
# Fit a Cox model
cox_model <- coxph(Surv(completed_discrete, Status) ~ T1_Group + age + gender +
                     #T1_already.participated + 
                     T1_psycho.class + 
                     T1_virtual.reality + T1_medi.experience #+ 
                   #T1_phone #+ 
                   #T1_quebec
                   , 
                   data = data_surv)

# View the summary
summary(cox_model)
## Call:
## coxph(formula = Surv(completed_discrete, Status) ~ T1_Group + 
##     age + gender + T1_psycho.class + T1_virtual.reality + T1_medi.experience, 
##     data = data_surv)
## 
##   n= 306, number of events= 147 
## 
##                                              coef exp(coef) se(coef)      z
## T1_GroupReflection                        0.31514   1.37045  0.16988  1.855
## T1_GroupWaitlist                               NA        NA  0.00000     NA
## age                                      -0.02972   0.97072  0.02061 -1.442
## genderMale                               -0.17662   0.83810  0.22692 -0.778
## genderNon-Binary                          1.19673   3.30927  0.52980  2.259
## T1_psycho.classYes                       -0.11426   0.89203  0.17376 -0.658
## T1_virtual.realityYes                     0.31005   1.36349  0.17110  1.812
## T1_medi.experience> 10 hours              1.24837   3.48465  1.04862  1.190
## T1_medi.experienceBetween 5 and 10 hours -0.27797   0.75732  0.22197 -1.252
##                                          Pr(>|z|)  
## T1_GroupReflection                         0.0636 .
## T1_GroupWaitlist                               NA  
## age                                        0.1494  
## genderMale                                 0.4364  
## genderNon-Binary                           0.0239 *
## T1_psycho.classYes                         0.5108  
## T1_virtual.realityYes                      0.0700 .
## T1_medi.experience> 10 hours               0.2339  
## T1_medi.experienceBetween 5 and 10 hours   0.2105  
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
##                                          exp(coef) exp(-coef) lower .95
## T1_GroupReflection                          1.3705     0.7297    0.9823
## T1_GroupWaitlist                                NA         NA        NA
## age                                         0.9707     1.0302    0.9323
## genderMale                                  0.8381     1.1932    0.5372
## genderNon-Binary                            3.3093     0.3022    1.1716
## T1_psycho.classYes                          0.8920     1.1210    0.6346
## T1_virtual.realityYes                       1.3635     0.7334    0.9750
## T1_medi.experience> 10 hours                3.4847     0.2870    0.4462
## T1_medi.experienceBetween 5 and 10 hours    0.7573     1.3204    0.4902
##                                          upper .95
## T1_GroupReflection                           1.912
## T1_GroupWaitlist                                NA
## age                                          1.011
## genderMale                                   1.308
## genderNon-Binary                             9.348
## T1_psycho.classYes                           1.254
## T1_virtual.realityYes                        1.907
## T1_medi.experience> 10 hours                27.211
## T1_medi.experienceBetween 5 and 10 hours     1.170
## 
## Concordance= 0.587  (se = 0.025 )
## Likelihood ratio test= 14.18  on 8 df,   p=0.08
## Wald test            = 14.89  on 8 df,   p=0.06
## Score (logrank) test = 15.3  on 8 df,   p=0.05
# Visualize the Cox model
ggforest(cox_model, data = data_surv)

## Exercises only ------------------------------------------------------------------

# Or instead recode it properly...
# data_surv <- data_surv %>%
#   mutate(exercise_completed_discrete = case_when(
#     T1_Group == "Waitlist" ~ T_completed * 14,
#     TRUE ~ exercise_completed_discrete))

# Next, we look at survival curves by treatment.
fit <- survfit(Surv(exercise_completed_discrete, Status) ~ T1_Group, data = data_surv)

# Enhanced plot using survminer
survp <- ggsurvplot(
  fit, 
  data = data_surv,
  xlab = "Number of Surveys",
  ylab = "Dropout Probability (exercises only)",
  title = "Survival Curve Based on Kaplan-Meier Estimates (Thériault et al., 2024)",
  caption = "Error bands = 95% confidence bands. Dotted lines = median survival values.",
  font.caption = 13,
  conf.int = TRUE,
  pval = TRUE,
  pval.method = TRUE,
  risk.table = TRUE,
  palette = c("#F8766D", "#53B400"),
  legend.title = "",
  legend.labs = c("Meditation", "Reflection"),
  # legend.labs = c("Meditation", "Reflection", "Waitlist"),
  font.main = c(14, "bold"),
  font.x = c(12, "plain"),
  font.y = c(12, "plain"),
  font.tickslab = c(10, "plain"),
  surv.median.line = "hv",
  break.time.by = 7,
  xlim = c(0, 43)
)
survp

# Save high-res .pdf and .png image files
ggsave("Figure X_survival_exercise.pdf", plot = survp, 
       width = 9, height = 7, unit = "in", dpi = 300)
ggsave("Figure X_survival_exercise.png", plot = survp, 
       width = 9, height = 7, unit = "in", dpi = 300)

# Perform the log-rank test
survdiff(Surv(exercise_completed_discrete, Status) ~ T1_Group, data = data_surv)
## Call:
## survdiff(formula = Surv(exercise_completed_discrete, Status) ~ 
##     T1_Group, data = data_surv)
## 
##                       N Observed Expected (O-E)^2/E (O-E)^2/V
## T1_Group=Meditation 152       69     78.5      1.14      2.68
## T1_Group=Reflection 154       78     68.5      1.31      2.68
## 
##  Chisq= 2.7  on 1 degrees of freedom, p= 0.1
# Fit a Cox model
cox_model <- coxph(Surv(exercise_completed_discrete, Status) ~ T1_Group + age + gender +
                     #T1_already.participated + 
                     T1_psycho.class + 
                     T1_virtual.reality + T1_medi.experience #+ 
                   #T1_phone #+ 
                   #T1_quebec
                   , 
                   data = data_surv)

# View the summary
summary(cox_model)
## Call:
## coxph(formula = Surv(exercise_completed_discrete, Status) ~ T1_Group + 
##     age + gender + T1_psycho.class + T1_virtual.reality + T1_medi.experience, 
##     data = data_surv)
## 
##   n= 306, number of events= 147 
## 
##                                              coef exp(coef) se(coef)      z
## T1_GroupReflection                        0.31037   1.36393  0.16985  1.827
## T1_GroupWaitlist                               NA        NA  0.00000     NA
## age                                      -0.02956   0.97087  0.02062 -1.434
## genderMale                               -0.17268   0.84141  0.22683 -0.761
## genderNon-Binary                          1.19509   3.30386  0.52990  2.255
## T1_psycho.classYes                       -0.11074   0.89517  0.17371 -0.638
## T1_virtual.realityYes                     0.31835   1.37486  0.17112  1.860
## T1_medi.experience> 10 hours              1.24411   3.46984  1.04849  1.187
## T1_medi.experienceBetween 5 and 10 hours -0.27346   0.76074  0.22192 -1.232
##                                          Pr(>|z|)  
## T1_GroupReflection                         0.0677 .
## T1_GroupWaitlist                               NA  
## age                                        0.1516  
## genderMale                                 0.4465  
## genderNon-Binary                           0.0241 *
## T1_psycho.classYes                         0.5238  
## T1_virtual.realityYes                      0.0628 .
## T1_medi.experience> 10 hours               0.2354  
## T1_medi.experienceBetween 5 and 10 hours   0.2179  
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
##                                          exp(coef) exp(-coef) lower .95
## T1_GroupReflection                          1.3639     0.7332    0.9777
## T1_GroupWaitlist                                NA         NA        NA
## age                                         0.9709     1.0300    0.9324
## genderMale                                  0.8414     1.1885    0.5394
## genderNon-Binary                            3.3039     0.3027    1.1694
## T1_psycho.classYes                          0.8952     1.1171    0.6369
## T1_virtual.realityYes                       1.3749     0.7273    0.9831
## T1_medi.experience> 10 hours                3.4698     0.2882    0.4445
## T1_medi.experienceBetween 5 and 10 hours    0.7607     1.3145    0.4924
##                                          upper .95
## T1_GroupReflection                           1.903
## T1_GroupWaitlist                                NA
## age                                          1.011
## genderMale                                   1.312
## genderNon-Binary                             9.334
## T1_psycho.classYes                           1.258
## T1_virtual.realityYes                        1.923
## T1_medi.experience> 10 hours                27.088
## T1_medi.experienceBetween 5 and 10 hours     1.175
## 
## Concordance= 0.588  (se = 0.025 )
## Likelihood ratio test= 14.16  on 8 df,   p=0.08
## Wald test            = 14.9  on 8 df,   p=0.06
## Score (logrank) test = 15.31  on 8 df,   p=0.05
# Visualize the Cox model
ggforest(cox_model, data = data_surv)

## Activities only ------------------------------------------------------------------

# Next, we look at survival curves by treatment.
fit <- survfit(Surv(activity_completed_discrete, Status) ~ T1_Group, data = data_surv)

# Enhanced plot using survminer
survp <- ggsurvplot(
  fit, 
  data = data_surv,
  xlab = "Number of Surveys",
  ylab = "Dropout Probability (activities only)",
  title = "Survival Curve Based on Kaplan-Meier Estimates (Thériault et al., 2024)",
  caption = "Error bands = 95% confidence bands. Dotted lines = median survival values.",
  font.caption = 13,
  conf.int = TRUE,
  pval = TRUE,
  pval.method = TRUE,
  risk.table = TRUE,
  palette = c("#F8766D", "#53B400"),
  legend.title = "",
  legend.labs = c("Meditation", "Reflection"),
  font.main = c(14, "bold"),
  font.x = c(12, "plain"),
  font.y = c(12, "plain"),
  font.tickslab = c(10, "plain"),
  surv.median.line = "hv"
)
survp

# Save high-res .pdf and .png image files
ggsave("Figure X_survival_activity.pdf", plot = survp, 
       width = 9, height = 7, unit = "in", dpi = 300)
ggsave("Figure X_survival_activity.png", plot = survp, 
       width = 9, height = 7, unit = "in", dpi = 300)

# Perform the log-rank test
survdiff(Surv(activity_completed_discrete, Status) ~ T1_Group, data = data_surv)
## Call:
## survdiff(formula = Surv(activity_completed_discrete, Status) ~ 
##     T1_Group, data = data_surv)
## 
##                       N Observed Expected (O-E)^2/E (O-E)^2/V
## T1_Group=Meditation 152       69     77.2     0.873       2.1
## T1_Group=Reflection 154       78     69.8     0.966       2.1
## 
##  Chisq= 2.1  on 1 degrees of freedom, p= 0.1
# Fit a Cox model
cox_model <- coxph(Surv(activity_completed_discrete, Status) ~ T1_Group + age + gender +
                     #T1_already.participated + 
                     T1_psycho.class + 
                     T1_virtual.reality + T1_medi.experience #+ 
                   #T1_phone #+ 
                   #T1_quebec
                   , 
                   data = data_surv)

# View the summary
summary(cox_model)
## Call:
## coxph(formula = Surv(activity_completed_discrete, Status) ~ T1_Group + 
##     age + gender + T1_psycho.class + T1_virtual.reality + T1_medi.experience, 
##     data = data_surv)
## 
##   n= 306, number of events= 147 
## 
##                                              coef exp(coef) se(coef)      z
## T1_GroupReflection                        0.30040   1.35040  0.16928  1.775
## T1_GroupWaitlist                               NA        NA  0.00000     NA
## age                                      -0.02754   0.97284  0.02051 -1.343
## genderMale                               -0.20807   0.81215  0.22753 -0.914
## genderNon-Binary                          1.09894   3.00098  0.52768  2.083
## T1_psycho.classYes                       -0.12212   0.88504  0.17362 -0.703
## T1_virtual.realityYes                     0.28395   1.32837  0.17072  1.663
## T1_medi.experience> 10 hours              1.24777   3.48258  1.04844  1.190
## T1_medi.experienceBetween 5 and 10 hours -0.26423   0.76779  0.22156 -1.193
##                                          Pr(>|z|)  
## T1_GroupReflection                         0.0760 .
## T1_GroupWaitlist                               NA  
## age                                        0.1793  
## genderMale                                 0.3605  
## genderNon-Binary                           0.0373 *
## T1_psycho.classYes                         0.4818  
## T1_virtual.realityYes                      0.0963 .
## T1_medi.experience> 10 hours               0.2340  
## T1_medi.experienceBetween 5 and 10 hours   0.2330  
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
##                                          exp(coef) exp(-coef) lower .95
## T1_GroupReflection                          1.3504     0.7405    0.9691
## T1_GroupWaitlist                                NA         NA        NA
## age                                         0.9728     1.0279    0.9345
## genderMale                                  0.8122     1.2313    0.5200
## genderNon-Binary                            3.0010     0.3332    1.0668
## T1_psycho.classYes                          0.8850     1.1299    0.6298
## T1_virtual.realityYes                       1.3284     0.7528    0.9506
## T1_medi.experience> 10 hours                3.4826     0.2871    0.4461
## T1_medi.experienceBetween 5 and 10 hours    0.7678     1.3024    0.4973
##                                          upper .95
## T1_GroupReflection                           1.882
## T1_GroupWaitlist                                NA
## age                                          1.013
## genderMale                                   1.269
## genderNon-Binary                             8.442
## T1_psycho.classYes                           1.244
## T1_virtual.realityYes                        1.856
## T1_medi.experience> 10 hours                27.185
## T1_medi.experienceBetween 5 and 10 hours     1.185
## 
## Concordance= 0.585  (se = 0.025 )
## Likelihood ratio test= 12.74  on 8 df,   p=0.1
## Wald test            = 13.29  on 8 df,   p=0.1
## Score (logrank) test = 13.6  on 8 df,   p=0.09
# Visualize the Cox model
ggforest(cox_model, data = data_surv)

Flowchart

Here, we report the survival analysis plots and sample demographics by group.

Survival Analysis Plots

This is the Loss-to-Follow-Up Attrition Curve: participants who stopped completing questionnaires at T2 or T3 (for all three groups). There are three units on the x-axis because there are three questionnaire sessions (T1, T2, and T3). The strategy is cumulative rather than temporal: instead of marking someone who skipped a given session as having dropped out at that session, we count the total number of surveys completed. For example, someone who did T1 and T3 but not T2 is counted as having dropped out after two surveys (i.e., at T3) rather than at T2.
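To make the counting rule concrete, here is a toy illustration with hypothetical completion indicators:

# Toy example: a participant who did T1 and T3 but skipped T2 still counts
# as having completed two surveys (dropout after the second survey, not at T2)
toy <- data.frame(T1_completed = TRUE, T2_completed = FALSE, T3_completed = TRUE)
rowSums(toy)  # 2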

loss_to_followup

This is the Nonusage Attrition Curve: participants who stopped adhering to treatment (so, only using data from the experimental groups, which had daily exercises and weekly activities). There are 48 units on the x-axis because there are 42 daily exercises and 6 weekly activities. The same cumulative (rather than temporal) strategy is used here: the number of surveys represents the total number of surveys completed, regardless of whether some were skipped in the middle.

nonusage

Although the weekly activities are technically three times as long as the daily exercises (30 min vs. 10 min), each weekly activity counts as a single survey here, with the same weight as a daily exercise. The reason is that it is conceptually easier to think of each survey as one unit. The alternative, weighting some surveys more heavily (e.g., multiplying each completed weekly activity by three), would add little clarity to the graph: it would only stretch the horizontal bars without allowing units in between.
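For reference, here is a minimal sketch of that alternative weighting, using the completion counts computed earlier; the weighted variable is hypothetical and not used in any analysis.

# Alternative (not used): weight each completed weekly activity as three
# units, since it is about three times as long as a daily exercise
data_surv_weighted <- data_surv %>%
  mutate(completed_weighted = exercise_completed_discrete +
           3 * activity_completed_discrete)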

Sample demographics by group

str_formula <- "~ age + gender + T1_psycho.class + T1_medi.experience | T1_Group"
table_caption <- c("Table 1", "Sample demographics split by intervention group")

data <- data %>% 
  var_labels(
    age = "Age",
    gender = "Gender",
    T1_medi.experience = "Meditation experience",
    T3_post.medipractice = "Practiced meditation over last 13 weeks",
    T1_psycho.class = "Already completed a psychology course")

x <- flex_table1(str_formula, data = data, table_caption = table_caption)
x

Table 1

Sample demographics split by intervention group

Characteristic                           Meditation (N = 58)   Reflection (N = 53)   Waitlist (N = 92)   F / χ²   df          p
Age                                      25 (± 4.5)            25 (± 3.7)            24 (± 3.7)          0.89     2, 113.74   .413
Gender *                                                                                                 6.58     2           .0372
  Female                                 48 (83 %)             37 (70 %)             80 (87 %)
  Male                                   10 (17 %)             16 (30 %)             12 (13 %)
Already completed a psychology course                                                                    5.1      2           .0781
  No                                     29 (50 %)             21 (40 %)             29 (32 %)
  Yes                                    29 (50 %)             32 (60 %)             63 (68 %)
Meditation experience                                                                                    0.75     2           .688
  < 5 hours                              46 (79 %)             40 (75 %)             75 (82 %)
  Between 5 and 10 hours                 12 (21 %)             13 (25 %)             17 (18 %)

Note. Differences are determined by one-way ANOVA or Pearson's χ²-test.

save_flextable(x, "Results/table1.docx", overwrite = TRUE)

Explore missing data

Here, we explore missing data, first by item and questionnaire, then using the visdat package, and finally using Little’s MCAR test.

Missing items

# Check missing values per scale with nice_na
nice_na(data, scales = c(
  "T1_BSCS", "T1_BAQ", "T1_NOBAGS", "T1_attitude", "T1_dehumanization", 
  "T1_WHS", "T1_CLS", "T2_NOBAGS", "T2_attitude", "T2_dehumanization", 
  "T2_SMS5", "T2_PANAS", "T2_WHS", "T2_CLS", "T2_charity", "T3_NOBAGS", 
  "T3_attitude", "T3_dehumanization", "T3_WHS", "T3_CLS"))
var items na cells na_percent na_max na_max_percent all_na
T1_BSCS_1:T1_BSCS_7 7 0 1421 0.00 0 0.00 0
T1_BAQ_1:T1_BAQ_13 13 1 2639 0.04 1 7.69 0
T1_NOBAGS.1_1:T1_NOBAGS.16_1 20 0 4060 0.00 0 0.00 0
T1_attitude_1:T1_attitude_9 9 0 1827 0.00 0 0.00 0
T1_dehumanization_1:T1_dehumanization_9 9 1 1827 0.05 1 11.11 0
T1_WHS_1:T1_WHS_6 6 0 1218 0.00 0 0.00 0
T1_CLS_1:T1_CLS_21 21 0 4263 0.00 0 0.00 0
T2_NOBAGS.1_1:T2_NOBAGS.16_1 20 0 4060 0.00 0 0.00 0
T2_attitude_1:T2_attitude_9 9 0 1827 0.00 0 0.00 0
T2_dehumanization_1:T2_dehumanization_9 9 1 1827 0.05 1 11.11 0
T2_SMS5_1:T2_SMS5_6 6 0 1218 0.00 0 0.00 0
T2_PANAS_1:T2_PANAS_10 10 0 2030 0.00 0 0.00 0
T2_WHS_1:T2_WHS_6 6 0 1218 0.00 0 0.00 0
T2_CLS_1:T2_CLS_21 21 0 4263 0.00 0 0.00 0
T2_charity.moisson1_1:T2_charity.armee2_1 48 62 9744 0.64 22 45.83 0
T3_NOBAGS.1_1:T3_NOBAGS.16_1 20 0 4060 0.00 0 0.00 0
T3_attitude_1:T3_attitude_9 9 0 1827 0.00 0 0.00 0
T3_dehumanization_1:T3_dehumanization_9 9 0 1827 0.00 0 0.00 0
T3_WHS_1:T3_WHS_6 6 0 1218 0.00 0 0.00 0
T3_CLS_1:T3_CLS_21 21 21 4263 0.49 21 100.00 1
Total 431 12893 87493 14.74 129 29.93 0

A few items are missing here and there.

Patterns of missing data

Let’s check for patterns of missing data.

# Smaller subset of data for easier inspection
data %>%
  # select(manualworkerId:att_check2_raw, 
  #        condition:condition_dum) %>%
  vis_miss

Little’s MCAR test

# Let's use Little's MCAR test to confirm
# We have to proceed by "scale" because the function can only
# support 30 variables max at a time
library(naniar)

# We only check the charity items, which had the most missing data

# Have to divide this one in two because it is too large for the function
data %>% 
  select(T2_charity.moisson1_1:T2_charity.suzuki2_1) %>% 
  mcar_test
statistic df p.value missing.patterns
214.1051 181 0.0466264 8
data %>% 
  select(T2_charity.conserv1_1:T2_charity.armee2_1) %>% 
  mcar_test
statistic df p.value missing.patterns
69.62351 61 0.2099961 6

Impute missing data

Here, we impute missing data with the missForest package, as it is one of the best imputation methods.

Imputation

# Need logical and character variables as factors for missForest
# "Error: Can not handle categorical predictors with more than 53 categories."
new.data <- data %>% 
  select(-c(T1_student.program, # T1_student.program = Too many categories (> 53)
            contains("text.answer"),
            T1_already.participated, # T1_already.participated = lead to error
            T1_workplace, # T1_workplace = lead to error (only when n = 496)
            ID, # have to remove
            T3_consent)) %>% # T3_consent = lead to error
  mutate(across(c(where(is.character), where(is.logical)), as.factor)) %>% 
  as.data.frame()
# sapply(data %>% select(!where(is.numeric)), \(x) length(unique(x)))

# Parallel processing
registerDoParallel(cores = 11)

# Impute, parallelizing over variables
set.seed(100)
time1 <- Sys.time()
data.imp <- missForest(new.data, verbose = TRUE, parallelize = "variables")
##   removed variable(s) 4 due to the missingness of all entries
##   parallelizing over the variables of the input data matrix 'xmis'
##   missForest iteration 1 in progress...done!
##     estimated error(s): 0.01476409 0.1850773 
##     difference(s): 0.0001179016 0.07799672 
##     time: 10.63 seconds
## 
##   missForest iteration 2 in progress...done!
##     estimated error(s): 0.01459879 0.1853274 
##     difference(s): 1.322876e-05 0.08682266 
##     time: 16.7 seconds
## 
##   missForest iteration 3 in progress...done!
##     estimated error(s): 0.0143588 0.1819373 
##     difference(s): 1.283222e-05 0.08969622 
##     time: 14.25 seconds
## 
##   missForest iteration 4 in progress...done!
##     estimated error(s): 0.014449 0.1855504 
##     difference(s): 1.038234e-05 0.08641215 
##     time: 12.78 seconds
## 
##   missForest iteration 5 in progress...done!
##     estimated error(s): 0.01456685 0.1866562 
##     difference(s): 1.066632e-05 0.08210181 
##     time: 14.28 seconds
## 
##   missForest iteration 6 in progress...done!
##     estimated error(s): 0.01444488 0.1848978 
##     difference(s): 8.098704e-06 0.08949097 
##     time: 13.84 seconds
## 
##   missForest iteration 7 in progress...done!
##     estimated error(s): 0.01446103 0.1860107 
##     difference(s): 9.250454e-06 0.09544335 
##     time: 19.85 seconds
time2 <- Sys.time()
total_time <- time2-time1
total_time
## Time difference of 1.707786 mins
# 2 cores = 2 min
# 4 cores = 41 sec
# 6 cores = 49 sec
# 8 cores = 1 min
# 9 cores = 57 sec
# 10 cores = 25 sec
# 11 cores = 18 sec
# 12 cores = 1 min
# 15 cores = 42 sec

# Add back ID
data.imp$ximp$ID <- data$ID

# Extract imputed dataset
data <- data.imp$ximp
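As a quick check on imputation quality, missForest also returns out-of-bag error estimates alongside the imputed data (NRMSE for continuous variables and PFC, the proportion of falsely classified entries, for categorical ones); these correspond to the “estimated error(s)” printed in the verbose output above.

# Out-of-bag imputation error estimates (NRMSE for continuous variables,
# PFC for categorical variables)
data.imp$OOBerror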

Details

Why impute the data? van Ginkel and colleagues (2020) explain:

Regardless of the missingness mechanism, multiple imputation is always to be preferred over listwise deletion. Under MCAR it is preferred because it results in more statistical power, under MAR it is preferred because besides more power it will give unbiased results whereas listwise deletion may not, and under NMAR it is also the preferred method because it will give less biased results than listwise deletion.

van Ginkel, J. R., Linting, M., Rippe, R. C. A., & van der Voort, A. (2020). Rebutting existing misconceptions about multiple imputation as a method for handling missing data. Journal of Personality Assessment, 102(3), 297-308. https://doi.org/10.1080/00223891.2018.1530680

Why missForest? It outperforms other imputation methods, including the popular MICE (multiple imputation by chained equations). It also yields a single completed dataset rather than several, which simplifies subsequent analyses. Finally, it can be applied to mixed data types (missing values in both numeric and categorical variables).

Waljee, A. K., Mukherjee, A., Singal, A. G., Zhang, Y., Warren, J., Balis, U., … & Higgins, P. D. (2013). Comparison of imputation methods for missing laboratory data in medicine. BMJ Open, 3(8), e002847. https://doi.org/10.1136/bmjopen-2013-002847

Stekhoven, D. J., & Bühlmann, P. (2012). MissForest—non-parametric missing value imputation for mixed-type data. Bioinformatics, 28(1), 112-118. https://doi.org/10.1093/bioinformatics/btr597

Scale Means & Reliability

Now that we have imputed the missing data, we are ready to calculate our scale means. After reversing our items, we can then get the alphas and omegas for our different scales.

Self-control

# First, copy all BSCS items with an "r" suffix (identity copies)
data <- data %>% 
  mutate(across(contains("BSCS"), .names = "{col}r"))

# Then reverse-code items 2, 4, 6, 7 (5-point scale), overwriting their copies
data <- data %>% 
  mutate(across(ends_with(paste("BSCS", c(2, 4, 6, 7), sep = "_")), 
                ~nice_reverse(.x, 5), .names = "{col}r"))

# Get mean BSCS
data <- data %>% 
  mutate(T1_BSCS = rowMeans(pick(T1_BSCS_1r:T1_BSCS_7r)))

# Get alpha & omega
data %>% 
  select(T1_BSCS_1r:T1_BSCS_7r) %>% 
  omega(nfactors = 1)
## Omega_h for 1 factor is not meaningful, just omega_t
## Warning in schmid(m, nfactors, fm, digits, rotate = rotate, n.obs = n.obs, :
## Omega_h and Omega_asymptotic are not meaningful with one factor
## Omega 
## Call: omegah(m = m, nfactors = nfactors, fm = fm, key = key, flip = flip, 
##     digits = digits, title = title, sl = sl, labels = labels, 
##     plot = plot, n.obs = n.obs, rotate = rotate, Phi = Phi, option = option, 
##     covar = covar)
## Alpha:                 0.75 
## G.6:                   0.75 
## Omega Hierarchical:    0.75 
## Omega H asymptotic:    1 
## Omega Total            0.75 
## 
## Schmid Leiman Factor loadings greater than  0.2 
##               g  F1*   h2   h2   u2 p2 com
## T1_BSCS_1r 0.54      0.29 0.29 0.71  1   1
## T1_BSCS_2r 0.55      0.30 0.30 0.70  1   1
## T1_BSCS_3r 0.69      0.47 0.47 0.53  1   1
## T1_BSCS_4r 0.59      0.35 0.35 0.65  1   1
## T1_BSCS_5r 0.48      0.23 0.23 0.77  1   1
## T1_BSCS_6r 0.61      0.37 0.37 0.63  1   1
## T1_BSCS_7r 0.37           0.14 0.86  1   1
## 
## With Sums of squares  of:
##    g  F1*   h2 
## 2.15 0.00 0.73 
## 
## general/max  2.95   max/min =   6.585015e+15
## mean percent general =  1    with sd =  0 and cv of  0 
## Explained Common Variance of the general factor =  1 
## 
## The degrees of freedom are 14  and the fit is  0.39 
## The number of observations was  203  with Chi Square =  77.58  with prob <  7.9e-11
## The root mean square of the residuals is  0.09 
## The df corrected root mean square of the residuals is  0.12
## RMSEA index =  0.149  and the 10 % confidence intervals are  0.118 0.183
## BIC =  3.19
## 
## Compare this with the adequacy of just a general factor and no group factors
## The degrees of freedom for just the general factor are 14  and the fit is  0.39 
## The number of observations was  203  with Chi Square =  77.58  with prob <  7.9e-11
## The root mean square of the residuals is  0.09 
## The df corrected root mean square of the residuals is  0.12 
## 
## RMSEA index =  0.149  and the 10 % confidence intervals are  0.118 0.183
## BIC =  3.19 
## 
## Measures of factor score adequacy             
##                                                  g F1*
## Correlation of scores with factors            0.88   0
## Multiple R square of scores with factors      0.77   0
## Minimum correlation of factor score estimates 0.54  -1
## 
##  Total, General and Subset omega for each subset
##                                                  g  F1*
## Omega total for total scores and subscales    0.75 0.75
## Omega general for total scores and subscales  0.75 0.75
## Omega group for total scores and subscales    0.00 0.00

Trait aggression

# First, copy all BAQ items with an "r" suffix (identity copies)
data <- data %>% 
  mutate(across(contains("BAQ"), .names = "{col}r"))

# Then reverse-code item 7 (7-point scale), overwriting its copy
data <- data %>% 
  mutate(across(T1_BAQ_7, ~nice_reverse(.x, 7), .names = "{col}r"))

# Get mean BAQ
data <- data %>% 
  mutate(T1_BAQ = rowMeans(pick(T1_BAQ_1r:T1_BAQ_12r)))

# Get alpha & omega
data %>% 
  select(T1_BAQ_1r:T1_BAQ_12r) %>% 
  omega(nfactors = 1)
## Omega_h for 1 factor is not meaningful, just omega_t
## Warning in schmid(m, nfactors, fm, digits, rotate = rotate, n.obs = n.obs, :
## Omega_h and Omega_asymptotic are not meaningful with one factor
## Omega 
## Call: omegah(m = m, nfactors = nfactors, fm = fm, key = key, flip = flip, 
##     digits = digits, title = title, sl = sl, labels = labels, 
##     plot = plot, n.obs = n.obs, rotate = rotate, Phi = Phi, option = option, 
##     covar = covar)
## Alpha:                 0.78 
## G.6:                   0.82 
## Omega Hierarchical:    0.79 
## Omega H asymptotic:    1 
## Omega Total            0.79 
## 
## Schmid Leiman Factor loadings greater than  0.2 
##               g  F1*   h2   h2   u2 p2 com
## T1_BAQ_1r  0.65      0.42 0.42 0.58  1   1
## T1_BAQ_2r  0.57      0.32 0.32 0.68  1   1
## T1_BAQ_3r  0.56      0.31 0.31 0.69  1   1
## T1_BAQ_4r                 0.03 0.97  1   1
## T1_BAQ_5r  0.56      0.31 0.31 0.69  1   1
## T1_BAQ_6r  0.46      0.21 0.21 0.79  1   1
## T1_BAQ_7r  0.41           0.17 0.83  1   1
## T1_BAQ_8r  0.62      0.39 0.39 0.61  1   1
## T1_BAQ_9r  0.72      0.52 0.52 0.48  1   1
## T1_BAQ_10r 0.33           0.11 0.89  1   1
## T1_BAQ_11r 0.45      0.20 0.20 0.80  1   1
## T1_BAQ_12r 0.28           0.08 0.92  1   1
## 
## With Sums of squares  of:
##   g F1*  h2 
## 3.1 0.0 1.0 
## 
## general/max  2.98   max/min =   1.858259e+16
## mean percent general =  1    with sd =  0 and cv of  0 
## Explained Common Variance of the general factor =  1 
## 
## The degrees of freedom are 54  and the fit is  1.38 
## The number of observations was  203  with Chi Square =  270.72  with prob <  1.3e-30
## The root mean square of the residuals is  0.12 
## The df corrected root mean square of the residuals is  0.13
## RMSEA index =  0.141  and the 10 % confidence intervals are  0.125 0.158
## BIC =  -16.19
## 
## Compare this with the adequacy of just a general factor and no group factors
## The degrees of freedom for just the general factor are 54  and the fit is  1.38 
## The number of observations was  203  with Chi Square =  270.72  with prob <  1.3e-30
## The root mean square of the residuals is  0.12 
## The df corrected root mean square of the residuals is  0.13 
## 
## RMSEA index =  0.141  and the 10 % confidence intervals are  0.125 0.158
## BIC =  -16.19 
## 
## Measures of factor score adequacy             
##                                                  g F1*
## Correlation of scores with factors            0.91   0
## Multiple R square of scores with factors      0.83   0
## Minimum correlation of factor score estimates 0.66  -1
## 
##  Total, General and Subset omega for each subset
##                                                  g  F1*
## Omega total for total scores and subscales    0.79 0.79
## Omega general for total scores and subscales  0.79 0.79
## Omega group for total scores and subscales    0.00 0.00

Explicit Attitudes

data <- data %>% 
  mutate(T1_attitude = rowMeans(pick(T1_attitude_1:T1_attitude_9)),
         T2_attitude = rowMeans(pick(T2_attitude_1:T2_attitude_9)),
         T3_attitude = rowMeans(pick(T3_attitude_1:T3_attitude_9)))

# Get alpha & omega
data %>% 
  select(T1_attitude_1:T1_attitude_9) %>% 
  omega(nfactors = 1)
## Omega_h for 1 factor is not meaningful, just omega_t
## Warning in schmid(m, nfactors, fm, digits, rotate = rotate, n.obs = n.obs, :
## Omega_h and Omega_asymptotic are not meaningful with one factor
## Warning in cov2cor(t(w) %*% r %*% w): diag(V) had non-positive or NA entries;
## the non-finite result may be dubious
## Omega 
## Call: omegah(m = m, nfactors = nfactors, fm = fm, key = key, flip = flip, 
##     digits = digits, title = title, sl = sl, labels = labels, 
##     plot = plot, n.obs = n.obs, rotate = rotate, Phi = Phi, option = option, 
##     covar = covar)
## Alpha:                 0.93 
## G.6:                   0.94 
## Omega Hierarchical:    0.94 
## Omega H asymptotic:    1 
## Omega Total            0.94 
## 
## Schmid Leiman Factor loadings greater than  0.2 
##                  g  F1*   h2   h2   u2 p2 com
## T1_attitude_1 0.92      0.84 0.84 0.16  1   1
## T1_attitude_2 0.78      0.61 0.61 0.39  1   1
## T1_attitude_3 0.91      0.82 0.82 0.18  1   1
## T1_attitude_4 0.85      0.73 0.73 0.27  1   1
## T1_attitude_5 0.83      0.68 0.68 0.32  1   1
## T1_attitude_6 0.83      0.68 0.68 0.32  1   1
## T1_attitude_7 0.39           0.15 0.85  1   1
## T1_attitude_8 0.83      0.69 0.69 0.31  1   1
## T1_attitude_9 0.69      0.47 0.47 0.53  1   1
## 
## With Sums of squares  of:
##   g F1*  h2 
## 5.7 0.0 3.9 
## 
## general/max  1.44   max/min =   Inf
## mean percent general =  1    with sd =  0 and cv of  0 
## Explained Common Variance of the general factor =  1 
## 
## The degrees of freedom are 27  and the fit is  0.73 
## The number of observations was  203  with Chi Square =  143.81  with prob <  6.8e-18
## The root mean square of the residuals is  0.05 
## The df corrected root mean square of the residuals is  0.06
## RMSEA index =  0.146  and the 10 % confidence intervals are  0.123 0.17
## BIC =  0.35
## 
## Compare this with the adequacy of just a general factor and no group factors
## The degrees of freedom for just the general factor are 27  and the fit is  0.73 
## The number of observations was  203  with Chi Square =  143.81  with prob <  6.8e-18
## The root mean square of the residuals is  0.05 
## The df corrected root mean square of the residuals is  0.06 
## 
## RMSEA index =  0.146  and the 10 % confidence intervals are  0.123 0.17
## BIC =  0.35 
## 
## Measures of factor score adequacy             
##                                                  g F1*
## Correlation of scores with factors            0.98   0
## Multiple R square of scores with factors      0.96   0
## Minimum correlation of factor score estimates 0.91  -1
## 
##  Total, General and Subset omega for each subset
##                                                  g  F1*
## Omega total for total scores and subscales    0.94 0.94
## Omega general for total scores and subscales  0.94 0.94
## Omega group for total scores and subscales    0.00 0.00
data %>% 
  select(T2_attitude_1:T2_attitude_9) %>% 
  omega(nfactors = 1)
## Omega_h for 1 factor is not meaningful, just omega_t
## Warning in schmid(m, nfactors, fm, digits, rotate = rotate, n.obs = n.obs, :
## Omega_h and Omega_asymptotic are not meaningful with one factor
## Omega 
## Call: omegah(m = m, nfactors = nfactors, fm = fm, key = key, flip = flip, 
##     digits = digits, title = title, sl = sl, labels = labels, 
##     plot = plot, n.obs = n.obs, rotate = rotate, Phi = Phi, option = option, 
##     covar = covar)
## Alpha:                 0.96 
## G.6:                   0.97 
## Omega Hierarchical:    0.96 
## Omega H asymptotic:    1 
## Omega Total            0.96 
## 
## Schmid Leiman Factor loadings greater than  0.2 
##                  g  F1*   h2   h2   u2 p2 com
## T2_attitude_1 0.92      0.85 0.85 0.15  1   1
## T2_attitude_2 0.87      0.75 0.75 0.25  1   1
## T2_attitude_3 0.94      0.89 0.89 0.11  1   1
## T2_attitude_4 0.88      0.77 0.77 0.23  1   1
## T2_attitude_5 0.91      0.82 0.82 0.18  1   1
## T2_attitude_6 0.91      0.83 0.83 0.17  1   1
## T2_attitude_7 0.56      0.32 0.32 0.68  1   1
## T2_attitude_8 0.93      0.86 0.86 0.14  1   1
## T2_attitude_9 0.84      0.71 0.71 0.29  1   1
## 
## With Sums of squares  of:
##   g F1*  h2 
## 6.8 0.0 5.4 
## 
## general/max  1.26   max/min =   9.715827e+16
## mean percent general =  1    with sd =  0 and cv of  0 
## Explained Common Variance of the general factor =  1 
## 
## The degrees of freedom are 27  and the fit is  0.83 
## The number of observations was  203  with Chi Square =  164.1  with prob <  1.3e-21
## The root mean square of the residuals is  0.03 
## The df corrected root mean square of the residuals is  0.03
## RMSEA index =  0.158  and the 10 % confidence intervals are  0.136 0.182
## BIC =  20.65
## 
## Compare this with the adequacy of just a general factor and no group factors
## The degrees of freedom for just the general factor are 27  and the fit is  0.83 
## The number of observations was  203  with Chi Square =  164.1  with prob <  1.3e-21
## The root mean square of the residuals is  0.03 
## The df corrected root mean square of the residuals is  0.03 
## 
## RMSEA index =  0.158  and the 10 % confidence intervals are  0.136 0.182
## BIC =  20.65 
## 
## Measures of factor score adequacy             
##                                                  g F1*
## Correlation of scores with factors            0.99   0
## Multiple R square of scores with factors      0.98   0
## Minimum correlation of factor score estimates 0.95  -1
## 
##  Total, General and Subset omega for each subset
##                                                  g  F1*
## Omega total for total scores and subscales    0.96 0.96
## Omega general for total scores and subscales  0.96 0.96
## Omega group for total scores and subscales    0.00 0.00
data %>% 
  select(T3_attitude_1:T3_attitude_9) %>% 
  omega(nfactors = 1)
## Omega_h for 1 factor is not meaningful, just omega_t
## Warning in schmid(m, nfactors, fm, digits, rotate = rotate, n.obs = n.obs, :
## Omega_h and Omega_asymptotic are not meaningful with one factor
## Warning in schmid(m, nfactors, fm, digits, rotate = rotate, n.obs = n.obs, :
## diag(V) had non-positive or NA entries; the non-finite result may be dubious
## Omega 
## Call: omegah(m = m, nfactors = nfactors, fm = fm, key = key, flip = flip, 
##     digits = digits, title = title, sl = sl, labels = labels, 
##     plot = plot, n.obs = n.obs, rotate = rotate, Phi = Phi, option = option, 
##     covar = covar)
## Alpha:                 0.96 
## G.6:                   0.97 
## Omega Hierarchical:    0.96 
## Omega H asymptotic:    1 
## Omega Total            0.96 
## 
## Schmid Leiman Factor loadings greater than  0.2 
##                  g  F1*   h2   h2   u2 p2 com
## T3_attitude_1 0.96      0.93 0.93 0.07  1   1
## T3_attitude_2 0.83      0.68 0.68 0.32  1   1
## T3_attitude_3 0.94      0.88 0.88 0.12  1   1
## T3_attitude_4 0.86      0.74 0.74 0.26  1   1
## T3_attitude_5 0.92      0.85 0.85 0.15  1   1
## T3_attitude_6 0.88      0.78 0.78 0.22  1   1
## T3_attitude_7 0.61      0.37 0.37 0.63  1   1
## T3_attitude_8 0.90      0.81 0.81 0.19  1   1
## T3_attitude_9 0.79      0.62 0.62 0.38  1   1
## 
## With Sums of squares  of:
##   g F1*  h2 
## 6.7 0.0 5.2 
## 
## general/max  1.29   max/min =   Inf
## mean percent general =  1    with sd =  0 and cv of  0 
## Explained Common Variance of the general factor =  1 
## 
## The degrees of freedom are 27  and the fit is  0.98 
## The number of observations was  203  with Chi Square =  193.76  with prob <  3.8e-27
## The root mean square of the residuals is  0.04 
## The df corrected root mean square of the residuals is  0.04
## RMSEA index =  0.174  and the 10 % confidence intervals are  0.152 0.198
## BIC =  50.31
## 
## Compare this with the adequacy of just a general factor and no group factors
## The degrees of freedom for just the general factor are 27  and the fit is  0.98 
## The number of observations was  203  with Chi Square =  193.76  with prob <  3.8e-27
## The root mean square of the residuals is  0.04 
## The df corrected root mean square of the residuals is  0.04 
## 
## RMSEA index =  0.174  and the 10 % confidence intervals are  0.152 0.198
## BIC =  50.31 
## 
## Measures of factor score adequacy             
##                                                  g F1*
## Correlation of scores with factors            0.99   0
## Multiple R square of scores with factors      0.98   0
## Minimum correlation of factor score estimates 0.96  -1
## 
##  Total, General and Subset omega for each subset
##                                                  g  F1*
## Omega total for total scores and subscales    0.96 0.96
## Omega general for total scores and subscales  0.96 0.96
## Omega group for total scores and subscales    0.00 0.00

Dehumanization

data <- data %>% 
  mutate(T1_dehumanization = rowMeans(pick(T1_dehumanization_1:T1_dehumanization_9)),
         T2_dehumanization = rowMeans(pick(T2_dehumanization_1:T2_dehumanization_9)),
         T3_dehumanization = rowMeans(pick(T3_dehumanization_1:T3_dehumanization_9)))

# Get alpha & omega
data %>% 
  select(T1_dehumanization_1:T1_dehumanization_9) %>% 
  omega(nfactors = 1)
## Omega_h for 1 factor is not meaningful, just omega_t
## Warning in schmid(m, nfactors, fm, digits, rotate = rotate, n.obs = n.obs, :
## Omega_h and Omega_asymptotic are not meaningful with one factor
## Omega 
## Call: omegah(m = m, nfactors = nfactors, fm = fm, key = key, flip = flip, 
##     digits = digits, title = title, sl = sl, labels = labels, 
##     plot = plot, n.obs = n.obs, rotate = rotate, Phi = Phi, option = option, 
##     covar = covar)
## Alpha:                 0.94 
## G.6:                   0.96 
## Omega Hierarchical:    0.96 
## Omega H asymptotic:    1 
## Omega Total            0.96 
## 
## Schmid Leiman Factor loadings greater than  0.2 
##                        g  F1*   h2   h2   u2 p2 com
## T1_dehumanization_1 0.91      0.83 0.83 0.17  1   1
## T1_dehumanization_2 0.93      0.87 0.87 0.13  1   1
## T1_dehumanization_3 0.93      0.86 0.86 0.14  1   1
## T1_dehumanization_4 0.93      0.86 0.86 0.14  1   1
## T1_dehumanization_5 0.95      0.90 0.90 0.10  1   1
## T1_dehumanization_6 0.87      0.76 0.76 0.24  1   1
## T1_dehumanization_7                0.00 1.00  1   1
## T1_dehumanization_8 0.92      0.85 0.85 0.15  1   1
## T1_dehumanization_9 0.81      0.66 0.66 0.34  1   1
## 
## With Sums of squares  of:
##   g F1*  h2 
## 6.6 0.0 5.5 
## 
## general/max  1.21   max/min =   1.533958e+17
## mean percent general =  1    with sd =  0 and cv of  0 
## Explained Common Variance of the general factor =  1 
## 
## The degrees of freedom are 27  and the fit is  1.24 
## The number of observations was  203  with Chi Square =  244.35  with prob <  6.9e-37
## The root mean square of the residuals is  0.03 
## The df corrected root mean square of the residuals is  0.04
## RMSEA index =  0.199  and the 10 % confidence intervals are  0.177 0.223
## BIC =  100.89
## 
## Compare this with the adequacy of just a general factor and no group factors
## The degrees of freedom for just the general factor are 27  and the fit is  1.24 
## The number of observations was  203  with Chi Square =  244.35  with prob <  6.9e-37
## The root mean square of the residuals is  0.03 
## The df corrected root mean square of the residuals is  0.04 
## 
## RMSEA index =  0.199  and the 10 % confidence intervals are  0.177 0.223
## BIC =  100.89 
## 
## Measures of factor score adequacy             
##                                                  g F1*
## Correlation of scores with factors            0.99   0
## Multiple R square of scores with factors      0.98   0
## Minimum correlation of factor score estimates 0.96  -1
## 
##  Total, General and Subset omega for each subset
##                                                  g  F1*
## Omega total for total scores and subscales    0.96 0.96
## Omega general for total scores and subscales  0.96 0.96
## Omega group for total scores and subscales    0.00 0.00
data %>% 
  select(T2_dehumanization_1:T2_dehumanization_9) %>% 
  omega(nfactors = 1)
## Omega_h for 1 factor is not meaningful, just omega_t
## Warning in schmid(m, nfactors, fm, digits, rotate = rotate, n.obs = n.obs, :
## Omega_h and Omega_asymptotic are not meaningful with one factor
## Omega 
## Call: omegah(m = m, nfactors = nfactors, fm = fm, key = key, flip = flip, 
##     digits = digits, title = title, sl = sl, labels = labels, 
##     plot = plot, n.obs = n.obs, rotate = rotate, Phi = Phi, option = option, 
##     covar = covar)
## Alpha:                 0.96 
## G.6:                   0.97 
## Omega Hierarchical:    0.97 
## Omega H asymptotic:    1 
## Omega Total            0.97 
## 
## Schmid Leiman Factor loadings greater than  0.2 
##                        g  F1*   h2   h2   u2 p2 com
## T2_dehumanization_1 0.98      0.96 0.96 0.04  1   1
## T2_dehumanization_2 0.94      0.89 0.89 0.11  1   1
## T2_dehumanization_3 0.94      0.89 0.89 0.11  1   1
## T2_dehumanization_4 0.92      0.84 0.84 0.16  1   1
## T2_dehumanization_5 0.96      0.92 0.92 0.08  1   1
## T2_dehumanization_6 0.92      0.85 0.85 0.15  1   1
## T2_dehumanization_7                0.00 1.00  1   1
## T2_dehumanization_8 0.95      0.91 0.91 0.09  1   1
## T2_dehumanization_9 0.94      0.88 0.88 0.12  1   1
## 
## With Sums of squares  of:
##   g F1*  h2 
## 7.1 0.0 6.4 
## 
## general/max  1.12   max/min =   1.334418e+17
## mean percent general =  1    with sd =  0 and cv of  0 
## Explained Common Variance of the general factor =  1 
## 
## The degrees of freedom are 27  and the fit is  1.09 
## The number of observations was  203  with Chi Square =  215.85  with prob <  2.3e-31
## The root mean square of the residuals is  0.02 
## The df corrected root mean square of the residuals is  0.02
## RMSEA index =  0.186  and the 10 % confidence intervals are  0.163 0.21
## BIC =  72.4
## 
## Compare this with the adequacy of just a general factor and no group factors
## The degrees of freedom for just the general factor are 27  and the fit is  1.09 
## The number of observations was  203  with Chi Square =  215.85  with prob <  2.3e-31
## The root mean square of the residuals is  0.02 
## The df corrected root mean square of the residuals is  0.02 
## 
## RMSEA index =  0.186  and the 10 % confidence intervals are  0.163 0.21
## BIC =  72.4 
## 
## Measures of factor score adequacy             
##                                                  g F1*
## Correlation of scores with factors            0.99   0
## Multiple R square of scores with factors      0.99   0
## Minimum correlation of factor score estimates 0.98  -1
## 
##  Total, General and Subset omega for each subset
##                                                  g  F1*
## Omega total for total scores and subscales    0.97 0.97
## Omega general for total scores and subscales  0.97 0.97
## Omega group for total scores and subscales    0.00 0.00
data %>% 
  select(T3_dehumanization_1:T3_dehumanization_9) %>% 
  omega(nfactors = 1)
## Omega_h for 1 factor is not meaningful, just omega_t
## Warning in schmid(m, nfactors, fm, digits, rotate = rotate, n.obs = n.obs, :
## Omega_h and Omega_asymptotic are not meaningful with one factor
## Warning in cov2cor(t(w) %*% r %*% w): diag(V) had non-positive or NA entries;
## the non-finite result may be dubious
## Omega 
## Call: omegah(m = m, nfactors = nfactors, fm = fm, key = key, flip = flip, 
##     digits = digits, title = title, sl = sl, labels = labels, 
##     plot = plot, n.obs = n.obs, rotate = rotate, Phi = Phi, option = option, 
##     covar = covar)
## Alpha:                 0.95 
## G.6:                   0.97 
## Omega Hierarchical:    0.96 
## Omega H asymptotic:    1 
## Omega Total            0.96 
## 
## Schmid Leiman Factor loadings greater than  0.2 
##                        g  F1*   h2   h2   u2 p2 com
## T3_dehumanization_1 0.95      0.90 0.90 0.10  1   1
## T3_dehumanization_2 0.93      0.87 0.87 0.13  1   1
## T3_dehumanization_3 0.93      0.87 0.87 0.13  1   1
## T3_dehumanization_4 0.95      0.91 0.91 0.09  1   1
## T3_dehumanization_5 0.95      0.91 0.91 0.09  1   1
## T3_dehumanization_6 0.89      0.78 0.78 0.22  1   1
## T3_dehumanization_7                0.01 0.99  1   1
## T3_dehumanization_8 0.93      0.86 0.86 0.14  1   1
## T3_dehumanization_9 0.84      0.70 0.70 0.30  1   1
## 
## With Sums of squares  of:
##   g F1*  h2 
## 6.8 0.0 5.8 
## 
## general/max  1.17   max/min =   Inf
## mean percent general =  1    with sd =  0 and cv of  0 
## Explained Common Variance of the general factor =  1 
## 
## The degrees of freedom are 27  and the fit is  2.32 
## The number of observations was  203  with Chi Square =  457.5  with prob <  8.7e-80
## The root mean square of the residuals is  0.03 
## The df corrected root mean square of the residuals is  0.04
## RMSEA index =  0.28  and the 10 % confidence intervals are  0.259 0.304
## BIC =  314.04
## 
## Compare this with the adequacy of just a general factor and no group factors
## The degrees of freedom for just the general factor are 27  and the fit is  2.32 
## The number of observations was  203  with Chi Square =  457.5  with prob <  8.7e-80
## The root mean square of the residuals is  0.03 
## The df corrected root mean square of the residuals is  0.04 
## 
## RMSEA index =  0.28  and the 10 % confidence intervals are  0.259 0.304
## BIC =  314.04 
## 
## Measures of factor score adequacy             
##                                                  g F1*
## Correlation of scores with factors            0.99   0
## Multiple R square of scores with factors      0.98   0
## Minimum correlation of factor score estimates 0.96  -1
## 
##  Total, General and Subset omega for each subset
##                                                  g  F1*
## Omega total for total scores and subscales    0.96 0.96
## Omega general for total scores and subscales  0.96 0.96
## Omega group for total scores and subscales    0.00 0.00

Aggression Attitude (NOBAGS)

# First copy all NOBAGS items to new columns ending in "r" (identity copy),
# so that reversed and non-reversed items share a common suffix and can be
# selected as a single range below
data <- data %>%
  mutate(across(contains("NOBAGS"), .names = "{col}r"))

# Reverse code negatively worded NOBAGS items (original items 1-2, 5-6,
# 10, 12, 14, 16, and 20; i.e., the variable suffixes listed below)
data <- data %>%
  mutate(across(ends_with(paste0("NOBAGS.", c(
    "1_1", "1_2", "3_1", "3_2", "6_1", "8_1", "10_1", "12_1", "16_1"))), 
    ~nice_reverse(.x, 4), .names = "{col}r"))
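
For reference, rempsyc::nice_reverse recodes each response as max + min - x, with min defaulting to 1. A minimal illustration on the 4-point NOBAGS scale (not part of the original analysis):

# Illustrative only: on a 4-point scale, each score x becomes 5 - x
nice_reverse(1:4, 4)
## [1] 4 3 2 1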

# Get mean NOBAGS
data <- data %>% 
  mutate(T1_NOBAGS = rowMeans(pick(T1_NOBAGS.1_1r:T1_NOBAGS.16_1r)),
         T2_NOBAGS = rowMeans(pick(T2_NOBAGS.1_1r:T2_NOBAGS.16_1r)),
         T3_NOBAGS = rowMeans(pick(T3_NOBAGS.1_1r:T3_NOBAGS.16_1r)))

# Get alpha & omega
data %>% 
  select(T1_NOBAGS.1_1r:T1_NOBAGS.16_1r) %>% 
  omega(nfactors = 1)
## Omega_h for 1 factor is not meaningful, just omega_t
## Warning in schmid(m, nfactors, fm, digits, rotate = rotate, n.obs = n.obs, :
## Omega_h and Omega_asymptotic are not meaningful with one factor
## Omega 
## Call: omegah(m = m, nfactors = nfactors, fm = fm, key = key, flip = flip, 
##     digits = digits, title = title, sl = sl, labels = labels, 
##     plot = plot, n.obs = n.obs, rotate = rotate, Phi = Phi, option = option, 
##     covar = covar)
## Alpha:                 0.85 
## G.6:                   0.91 
## Omega Hierarchical:    0.84 
## Omega H asymptotic:    0.99 
## Omega Total            0.85 
## 
## Schmid Leiman Factor loadings greater than  0.2 
##                     g  F1*   h2   h2   u2 p2 com
## T1_NOBAGS.1_1r   0.63      0.40 0.40 0.60  1   1
## T1_NOBAGS.1_2r   0.52      0.27 0.27 0.73  1   1
## T1_NOBAGS.2_1r   0.53      0.28 0.28 0.72  1   1
## T1_NOBAGS.2_2r   0.46      0.21 0.21 0.79  1   1
## T1_NOBAGS.3_1r   0.64      0.41 0.41 0.59  1   1
## T1_NOBAGS.3_2r   0.50      0.25 0.25 0.75  1   1
## T1_NOBAGS.4_1r   0.67      0.45 0.45 0.55  1   1
## T1_NOBAGS.4_2r   0.47      0.22 0.22 0.78  1   1
## T1_NOBAGS.5_1r   0.51      0.26 0.26 0.74  1   1
## T1_NOBAGS.6_1r   0.42           0.18 0.82  1   1
## T1_NOBAGS.7_1r   0.57      0.33 0.33 0.67  1   1
## T1_NOBAGS.8_1r   0.45      0.20 0.20 0.80  1   1
## T1_NOBAGS.9_1r   0.44      0.20 0.20 0.80  1   1
## T1_NOBAGS.10_1r  0.34           0.12 0.88  1   1
## T1_NOBAGS.11_1r- 0.27           0.07 0.93  1   1
## T1_NOBAGS.12_1r  0.30           0.09 0.91  1   1
## T1_NOBAGS.13_1r  0.43           0.18 0.82  1   1
## T1_NOBAGS.14_1r  0.45      0.21 0.21 0.79  1   1
## T1_NOBAGS.15_1r  0.47      0.22 0.22 0.78  1   1
## T1_NOBAGS.16_1r                 0.03 0.97  1   1
## 
## With Sums of squares  of:
##   g F1*  h2 
## 4.6 0.0 1.3 
## 
## general/max  3.59   max/min =   7.648515e+15
## mean percent general =  1    with sd =  0 and cv of  0 
## Explained Common Variance of the general factor =  1 
## 
## The degrees of freedom are 170  and the fit is  6.58 
## The number of observations was  203  with Chi Square =  1276.17  with prob <  1.1e-168
## The root mean square of the residuals is  0.16 
## The df corrected root mean square of the residuals is  0.17
## RMSEA index =  0.179  and the 10 % confidence intervals are  0.17 0.189
## BIC =  372.93
## 
## Compare this with the adequacy of just a general factor and no group factors
## The degrees of freedom for just the general factor are 170  and the fit is  6.58 
## The number of observations was  203  with Chi Square =  1276.17  with prob <  1.1e-168
## The root mean square of the residuals is  0.16 
## The df corrected root mean square of the residuals is  0.17 
## 
## RMSEA index =  0.179  and the 10 % confidence intervals are  0.17 0.189
## BIC =  372.93 
## 
## Measures of factor score adequacy             
##                                                  g F1*
## Correlation of scores with factors            0.93   0
## Multiple R square of scores with factors      0.87   0
## Minimum correlation of factor score estimates 0.73  -1
## 
##  Total, General and Subset omega for each subset
##                                                  g  F1*
## Omega total for total scores and subscales    0.85 0.84
## Omega general for total scores and subscales  0.84 0.84
## Omega group for total scores and subscales    0.00 0.00
data %>% 
  select(T2_NOBAGS.1_1r:T2_NOBAGS.16_1r) %>% 
  omega(nfactors = 1)
## Omega_h for 1 factor is not meaningful, just omega_t
## Warning in schmid(m, nfactors, fm, digits, rotate = rotate, n.obs = n.obs, :
## Omega_h and Omega_asymptotic are not meaningful with one factor
## Omega 
## Call: omegah(m = m, nfactors = nfactors, fm = fm, key = key, flip = flip, 
##     digits = digits, title = title, sl = sl, labels = labels, 
##     plot = plot, n.obs = n.obs, rotate = rotate, Phi = Phi, option = option, 
##     covar = covar)
## Alpha:                 0.88 
## G.6:                   0.94 
## Omega Hierarchical:    0.88 
## Omega H asymptotic:    0.99 
## Omega Total            0.88 
## 
## Schmid Leiman Factor loadings greater than  0.2 
##                     g  F1*   h2   h2   u2 p2 com
## T2_NOBAGS.1_1r   0.53      0.28 0.28 0.72  1   1
## T2_NOBAGS.1_2r   0.41           0.17 0.83  1   1
## T2_NOBAGS.2_1r   0.68      0.46 0.46 0.54  1   1
## T2_NOBAGS.2_2r   0.69      0.47 0.47 0.53  1   1
## T2_NOBAGS.3_1r   0.63      0.40 0.40 0.60  1   1
## T2_NOBAGS.3_2r   0.54      0.29 0.29 0.71  1   1
## T2_NOBAGS.4_1r   0.64      0.41 0.41 0.59  1   1
## T2_NOBAGS.4_2r   0.61      0.37 0.37 0.63  1   1
## T2_NOBAGS.5_1r   0.68      0.47 0.47 0.53  1   1
## T2_NOBAGS.6_1r   0.45      0.20 0.20 0.80  1   1
## T2_NOBAGS.7_1r   0.68      0.46 0.46 0.54  1   1
## T2_NOBAGS.8_1r   0.42           0.18 0.82  1   1
## T2_NOBAGS.9_1r   0.46      0.21 0.21 0.79  1   1
## T2_NOBAGS.10_1r  0.37           0.13 0.87  1   1
## T2_NOBAGS.11_1r- 0.38           0.15 0.85  1   1
## T2_NOBAGS.12_1r  0.46      0.21 0.21 0.79  1   1
## T2_NOBAGS.13_1r  0.47      0.22 0.22 0.78  1   1
## T2_NOBAGS.14_1r  0.47      0.22 0.22 0.78  1   1
## T2_NOBAGS.15_1r  0.42           0.18 0.82  1   1
## T2_NOBAGS.16_1r  0.42           0.18 0.82  1   1
## 
## With Sums of squares  of:
##   g F1*  h2 
## 5.7 0.0 1.9 
## 
## general/max  3   max/min =   5.240896e+15
## mean percent general =  1    with sd =  0 and cv of  0 
## Explained Common Variance of the general factor =  1 
## 
## The degrees of freedom are 170  and the fit is  7.91 
## The number of observations was  203  with Chi Square =  1532.35  with prob <  1.2e-217
## The root mean square of the residuals is  0.18 
## The df corrected root mean square of the residuals is  0.19
## RMSEA index =  0.199  and the 10 % confidence intervals are  0.19 0.208
## BIC =  629.1
## 
## Compare this with the adequacy of just a general factor and no group factors
## The degrees of freedom for just the general factor are 170  and the fit is  7.91 
## The number of observations was  203  with Chi Square =  1532.35  with prob <  1.2e-217
## The root mean square of the residuals is  0.18 
## The df corrected root mean square of the residuals is  0.19 
## 
## RMSEA index =  0.199  and the 10 % confidence intervals are  0.19 0.208
## BIC =  629.1 
## 
## Measures of factor score adequacy             
##                                                  g F1*
## Correlation of scores with factors            0.95   0
## Multiple R square of scores with factors      0.90   0
## Minimum correlation of factor score estimates 0.79  -1
## 
##  Total, General and Subset omega for each subset
##                                                  g  F1*
## Omega total for total scores and subscales    0.88 0.88
## Omega general for total scores and subscales  0.88 0.88
## Omega group for total scores and subscales    0.00 0.00
data %>% 
  select(T3_NOBAGS.1_1r:T3_NOBAGS.16_1r) %>% 
  omega(nfactors = 1)
## Omega_h for 1 factor is not meaningful, just omega_t
## Warning in schmid(m, nfactors, fm, digits, rotate = rotate, n.obs = n.obs, :
## Omega_h and Omega_asymptotic are not meaningful with one factor
## Omega 
## Call: omegah(m = m, nfactors = nfactors, fm = fm, key = key, flip = flip, 
##     digits = digits, title = title, sl = sl, labels = labels, 
##     plot = plot, n.obs = n.obs, rotate = rotate, Phi = Phi, option = option, 
##     covar = covar)
## Alpha:                 0.89 
## G.6:                   0.94 
## Omega Hierarchical:    0.89 
## Omega H asymptotic:    0.99 
## Omega Total            0.89 
## 
## Schmid Leiman Factor loadings greater than  0.2 
##                     g  F1*   h2   h2   u2 p2 com
## T3_NOBAGS.1_1r   0.63      0.40 0.40 0.60  1   1
## T3_NOBAGS.1_2r   0.44      0.20 0.20 0.80  1   1
## T3_NOBAGS.2_1r   0.75      0.56 0.56 0.44  1   1
## T3_NOBAGS.2_2r   0.69      0.48 0.48 0.52  1   1
## T3_NOBAGS.3_1r   0.64      0.41 0.41 0.59  1   1
## T3_NOBAGS.3_2r   0.55      0.30 0.30 0.70  1   1
## T3_NOBAGS.4_1r   0.72      0.52 0.52 0.48  1   1
## T3_NOBAGS.4_2r   0.61      0.37 0.37 0.63  1   1
## T3_NOBAGS.5_1r   0.68      0.46 0.46 0.54  1   1
## T3_NOBAGS.6_1r   0.48      0.23 0.23 0.77  1   1
## T3_NOBAGS.7_1r   0.68      0.46 0.46 0.54  1   1
## T3_NOBAGS.8_1r   0.43           0.19 0.81  1   1
## T3_NOBAGS.9_1r   0.51      0.26 0.26 0.74  1   1
## T3_NOBAGS.10_1r  0.42           0.18 0.82  1   1
## T3_NOBAGS.11_1r- 0.35           0.12 0.88  1   1
## T3_NOBAGS.12_1r  0.42           0.18 0.82  1   1
## T3_NOBAGS.13_1r  0.46      0.21 0.21 0.79  1   1
## T3_NOBAGS.14_1r  0.54      0.29 0.29 0.71  1   1
## T3_NOBAGS.15_1r  0.47      0.22 0.22 0.78  1   1
## T3_NOBAGS.16_1r  0.34           0.12 0.88  1   1
## 
## With Sums of squares  of:
##   g F1*  h2 
## 6.1 0.0 2.2 
## 
## general/max  2.73   max/min =   5.220342e+15
## mean percent general =  1    with sd =  0 and cv of  0 
## Explained Common Variance of the general factor =  1 
## 
## The degrees of freedom are 170  and the fit is  7.67 
## The number of observations was  203  with Chi Square =  1487.54  with prob <  5.2e-209
## The root mean square of the residuals is  0.17 
## The df corrected root mean square of the residuals is  0.18
## RMSEA index =  0.195  and the 10 % confidence intervals are  0.187 0.205
## BIC =  584.29
## 
## Compare this with the adequacy of just a general factor and no group factors
## The degrees of freedom for just the general factor are 170  and the fit is  7.67 
## The number of observations was  203  with Chi Square =  1487.54  with prob <  5.2e-209
## The root mean square of the residuals is  0.17 
## The df corrected root mean square of the residuals is  0.18 
## 
## RMSEA index =  0.195  and the 10 % confidence intervals are  0.187 0.205
## BIC =  584.29 
## 
## Measures of factor score adequacy             
##                                                  g F1*
## Correlation of scores with factors            0.95   0
## Multiple R square of scores with factors      0.91   0
## Minimum correlation of factor score estimates 0.82  -1
## 
##  Total, General and Subset omega for each subset
##                                                  g  F1*
## Omega total for total scores and subscales    0.89 0.89
## Omega general for total scores and subscales  0.89 0.89
## Omega group for total scores and subscales    0.00 0.00

Willingness to Help

data <- data %>% 
  mutate(T1_WHS = rowMeans(pick(T1_WHS_1:T1_WHS_6)),
         T2_WHS = rowMeans(pick(T2_WHS_1:T2_WHS_6)),
         T3_WHS = rowMeans(pick(T3_WHS_1:T3_WHS_6)))

# Get alpha & omega
data %>% 
  select(T1_WHS_1:T1_WHS_6) %>% 
  omega(nfactors = 1)
## Omega_h for 1 factor is not meaningful, just omega_t
## Warning in schmid(m, nfactors, fm, digits, rotate = rotate, n.obs = n.obs, :
## Omega_h and Omega_asymptotic are not meaningful with one factor
## Omega 
## Call: omegah(m = m, nfactors = nfactors, fm = fm, key = key, flip = flip, 
##     digits = digits, title = title, sl = sl, labels = labels, 
##     plot = plot, n.obs = n.obs, rotate = rotate, Phi = Phi, option = option, 
##     covar = covar)
## Alpha:                 0.65 
## G.6:                   0.66 
## Omega Hierarchical:    0.64 
## Omega H asymptotic:    0.96 
## Omega Total            0.67 
## 
## Schmid Leiman Factor loadings greater than  0.2 
##             g  F1*   h2   h2   u2 p2 com
## T1_WHS_1 0.72      0.52 0.52 0.48  1   1
## T1_WHS_2 0.57      0.32 0.32 0.68  1   1
## T1_WHS_3 0.24           0.06 0.94  1   1
## T1_WHS_4 0.46      0.21 0.21 0.79  1   1
## T1_WHS_5 0.25           0.06 0.94  1   1
## T1_WHS_6 0.67      0.45 0.45 0.55  1   1
## 
## With Sums of squares  of:
##    g  F1*   h2 
## 1.62 0.00 0.62 
## 
## general/max  2.6   max/min =   8.978249e+15
## mean percent general =  1    with sd =  0 and cv of  0 
## Explained Common Variance of the general factor =  1 
## 
## The degrees of freedom are 9  and the fit is  0.26 
## The number of observations was  203  with Chi Square =  50.63  with prob <  8.2e-08
## The root mean square of the residuals is  0.11 
## The df corrected root mean square of the residuals is  0.15
## RMSEA index =  0.151  and the 10 % confidence intervals are  0.112 0.193
## BIC =  2.82
## 
## Compare this with the adequacy of just a general factor and no group factors
## The degrees of freedom for just the general factor are 9  and the fit is  0.26 
## The number of observations was  203  with Chi Square =  50.63  with prob <  8.2e-08
## The root mean square of the residuals is  0.11 
## The df corrected root mean square of the residuals is  0.15 
## 
## RMSEA index =  0.151  and the 10 % confidence intervals are  0.112 0.193
## BIC =  2.82 
## 
## Measures of factor score adequacy             
##                                                  g F1*
## Correlation of scores with factors            0.86   0
## Multiple R square of scores with factors      0.73   0
## Minimum correlation of factor score estimates 0.46  -1
## 
##  Total, General and Subset omega for each subset
##                                                  g  F1*
## Omega total for total scores and subscales    0.67 0.64
## Omega general for total scores and subscales  0.64 0.64
## Omega group for total scores and subscales    0.00 0.00
data %>% 
  select(T2_WHS_1:T2_WHS_6) %>% 
  omega(nfactors = 1)
## Omega_h for 1 factor is not meaningful, just omega_t
## Warning in schmid(m, nfactors, fm, digits, rotate = rotate, n.obs = n.obs, :
## Omega_h and Omega_asymptotic are not meaningful with one factor
## Warning in cov2cor(t(w) %*% r %*% w): diag(V) had non-positive or NA entries;
## the non-finite result may be dubious
## Omega 
## Call: omegah(m = m, nfactors = nfactors, fm = fm, key = key, flip = flip, 
##     digits = digits, title = title, sl = sl, labels = labels, 
##     plot = plot, n.obs = n.obs, rotate = rotate, Phi = Phi, option = option, 
##     covar = covar)
## Alpha:                 0.67 
## G.6:                   0.67 
## Omega Hierarchical:    0.67 
## Omega H asymptotic:    0.98 
## Omega Total            0.68 
## 
## Schmid Leiman Factor loadings greater than  0.2 
##             g  F1*   h2   h2   u2 p2 com
## T2_WHS_1 0.68      0.47 0.47 0.53  1   1
## T2_WHS_2 0.60      0.36 0.36 0.64  1   1
## T2_WHS_3 0.38           0.14 0.86  1   1
## T2_WHS_4 0.41           0.16 0.84  1   1
## T2_WHS_5 0.34           0.11 0.89  1   1
## T2_WHS_6 0.62      0.39 0.39 0.61  1   1
## 
## With Sums of squares  of:
##    g  F1*   h2 
## 1.63 0.00 0.56 
## 
## general/max  2.93   max/min =   Inf
## mean percent general =  1    with sd =  0 and cv of  0 
## Explained Common Variance of the general factor =  1 
## 
## The degrees of freedom are 9  and the fit is  0.26 
## The number of observations was  203  with Chi Square =  51.27  with prob <  6.2e-08
## The root mean square of the residuals is  0.11 
## The df corrected root mean square of the residuals is  0.14
## RMSEA index =  0.152  and the 10 % confidence intervals are  0.113 0.194
## BIC =  3.45
## 
## Compare this with the adequacy of just a general factor and no group factors
## The degrees of freedom for just the general factor are 9  and the fit is  0.26 
## The number of observations was  203  with Chi Square =  51.27  with prob <  6.2e-08
## The root mean square of the residuals is  0.11 
## The df corrected root mean square of the residuals is  0.14 
## 
## RMSEA index =  0.152  and the 10 % confidence intervals are  0.113 0.194
## BIC =  3.45 
## 
## Measures of factor score adequacy             
##                                                  g F1*
## Correlation of scores with factors            0.85   0
## Multiple R square of scores with factors      0.72   0
## Minimum correlation of factor score estimates 0.43  -1
## 
##  Total, General and Subset omega for each subset
##                                                  g  F1*
## Omega total for total scores and subscales    0.68 0.67
## Omega general for total scores and subscales  0.67 0.67
## Omega group for total scores and subscales    0.00 0.00
data %>% 
  select(T3_WHS_1:T3_WHS_6) %>% 
  omega(nfactors = 1)
## Omega_h for 1 factor is not meaningful, just omega_t
## Warning in schmid(m, nfactors, fm, digits, rotate = rotate, n.obs = n.obs, :
## Omega_h and Omega_asymptotic are not meaningful with one factor
## Omega 
## Call: omegah(m = m, nfactors = nfactors, fm = fm, key = key, flip = flip, 
##     digits = digits, title = title, sl = sl, labels = labels, 
##     plot = plot, n.obs = n.obs, rotate = rotate, Phi = Phi, option = option, 
##     covar = covar)
## Alpha:                 0.69 
## G.6:                   0.69 
## Omega Hierarchical:    0.69 
## Omega H asymptotic:    0.99 
## Omega Total            0.69 
## 
## Schmid Leiman Factor loadings greater than  0.2 
##             g  F1*   h2   h2   u2 p2 com
## T3_WHS_1 0.50      0.25 0.25 0.75  1   1
## T3_WHS_2 0.57      0.33 0.33 0.67  1   1
## T3_WHS_3 0.37           0.14 0.86  1   1
## T3_WHS_4 0.45      0.20 0.20 0.80  1   1
## T3_WHS_5 0.50      0.25 0.25 0.75  1   1
## T3_WHS_6 0.71      0.50 0.50 0.50  1   1
## 
## With Sums of squares  of:
##    g  F1*   h2 
## 1.67 0.00 0.54 
## 
## general/max  3.08   max/min =   9.736251e+15
## mean percent general =  1    with sd =  0 and cv of  0 
## Explained Common Variance of the general factor =  1 
## 
## The degrees of freedom are 9  and the fit is  0.33 
## The number of observations was  203  with Chi Square =  65.07  with prob <  1.4e-10
## The root mean square of the residuals is  0.12 
## The df corrected root mean square of the residuals is  0.16
## RMSEA index =  0.175  and the 10 % confidence intervals are  0.137 0.217
## BIC =  17.25
## 
## Compare this with the adequacy of just a general factor and no group factors
## The degrees of freedom for just the general factor are 9  and the fit is  0.33 
## The number of observations was  203  with Chi Square =  65.07  with prob <  1.4e-10
## The root mean square of the residuals is  0.12 
## The df corrected root mean square of the residuals is  0.16 
## 
## RMSEA index =  0.175  and the 10 % confidence intervals are  0.137 0.217
## BIC =  17.25 
## 
## Measures of factor score adequacy             
##                                                  g F1*
## Correlation of scores with factors            0.85   0
## Multiple R square of scores with factors      0.72   0
## Minimum correlation of factor score estimates 0.44  -1
## 
##  Total, General and Subset omega for each subset
##                                                  g  F1*
## Omega total for total scores and subscales    0.69 0.69
## Omega general for total scores and subscales  0.69 0.69
## Omega group for total scores and subscales    0.00 0.00

Compassionate Love

data <- data %>% 
  mutate(T1_CLS = rowMeans(pick(T1_CLS_1:T1_CLS_21)),
         T2_CLS = rowMeans(pick(T2_CLS_1:T2_CLS_21)),
         T3_CLS = rowMeans(pick(T3_CLS_1:T3_CLS_21)))

# Get alpha & omega
data %>% 
  select(T1_CLS_1:T1_CLS_21) %>% 
  omega(nfactors = 1)
## Omega_h for 1 factor is not meaningful, just omega_t
## Warning in schmid(m, nfactors, fm, digits, rotate = rotate, n.obs = n.obs, :
## Omega_h and Omega_asymptotic are not meaningful with one factor
## Omega 
## Call: omegah(m = m, nfactors = nfactors, fm = fm, key = key, flip = flip, 
##     digits = digits, title = title, sl = sl, labels = labels, 
##     plot = plot, n.obs = n.obs, rotate = rotate, Phi = Phi, option = option, 
##     covar = covar)
## Alpha:                 0.94 
## G.6:                   0.95 
## Omega Hierarchical:    0.94 
## Omega H asymptotic:    1 
## Omega Total            0.94 
## 
## Schmid Leiman Factor loadings greater than  0.2 
##              g  F1*   h2   h2   u2 p2 com
## T1_CLS_1  0.67      0.44 0.44 0.56  1   1
## T1_CLS_2  0.64      0.41 0.41 0.59  1   1
## T1_CLS_3  0.74      0.55 0.55 0.45  1   1
## T1_CLS_4  0.65      0.42 0.42 0.58  1   1
## T1_CLS_5  0.71      0.50 0.50 0.50  1   1
## T1_CLS_6  0.74      0.55 0.55 0.45  1   1
## T1_CLS_7  0.54      0.29 0.29 0.71  1   1
## T1_CLS_8  0.72      0.51 0.51 0.49  1   1
## T1_CLS_9  0.77      0.59 0.59 0.41  1   1
## T1_CLS_10 0.67      0.45 0.45 0.55  1   1
## T1_CLS_11 0.60      0.35 0.35 0.65  1   1
## T1_CLS_12 0.72      0.52 0.52 0.48  1   1
## T1_CLS_13 0.42           0.18 0.82  1   1
## T1_CLS_14 0.53      0.28 0.28 0.72  1   1
## T1_CLS_15 0.70      0.49 0.49 0.51  1   1
## T1_CLS_16 0.60      0.36 0.36 0.64  1   1
## T1_CLS_17 0.70      0.49 0.49 0.51  1   1
## T1_CLS_18 0.63      0.40 0.40 0.60  1   1
## T1_CLS_19 0.59      0.35 0.35 0.65  1   1
## T1_CLS_20 0.65      0.43 0.43 0.57  1   1
## T1_CLS_21 0.66      0.44 0.44 0.56  1   1
## 
## With Sums of squares  of:
##   g F1*  h2 
## 9.0 0.0 4.1 
## 
## general/max  2.21   max/min =   1.226371e+16
## mean percent general =  1    with sd =  0 and cv of  0 
## Explained Common Variance of the general factor =  1 
## 
## The degrees of freedom are 189  and the fit is  1.91 
## The number of observations was  203  with Chi Square =  369.45  with prob <  8.9e-14
## The root mean square of the residuals is  0.06 
## The df corrected root mean square of the residuals is  0.06
## RMSEA index =  0.068  and the 10 % confidence intervals are  0.058 0.079
## BIC =  -634.75
## 
## Compare this with the adequacy of just a general factor and no group factors
## The degrees of freedom for just the general factor are 189  and the fit is  1.91 
## The number of observations was  203  with Chi Square =  369.45  with prob <  8.9e-14
## The root mean square of the residuals is  0.06 
## The df corrected root mean square of the residuals is  0.06 
## 
## RMSEA index =  0.068  and the 10 % confidence intervals are  0.058 0.079
## BIC =  -634.75 
## 
## Measures of factor score adequacy             
##                                                  g F1*
## Correlation of scores with factors            0.97   0
## Multiple R square of scores with factors      0.94   0
## Minimum correlation of factor score estimates 0.89  -1
## 
##  Total, General and Subset omega for each subset
##                                                  g  F1*
## Omega total for total scores and subscales    0.94 0.94
## Omega general for total scores and subscales  0.94 0.94
## Omega group for total scores and subscales    0.00 0.00
data %>% 
  select(T2_CLS_1:T2_CLS_21) %>% 
  omega(nfactors = 1)
## Omega_h for 1 factor is not meaningful, just omega_t
## Warning in schmid(m, nfactors, fm, digits, rotate = rotate, n.obs = n.obs, :
## Omega_h and Omega_asymptotic are not meaningful with one factor
## Omega 
## Call: omegah(m = m, nfactors = nfactors, fm = fm, key = key, flip = flip, 
##     digits = digits, title = title, sl = sl, labels = labels, 
##     plot = plot, n.obs = n.obs, rotate = rotate, Phi = Phi, option = option, 
##     covar = covar)
## Alpha:                 0.94 
## G.6:                   0.96 
## Omega Hierarchical:    0.95 
## Omega H asymptotic:    1 
## Omega Total            0.95 
## 
## Schmid Leiman Factor loadings greater than  0.2 
##              g  F1*   h2   h2   u2 p2 com
## T2_CLS_1  0.69      0.48 0.48 0.52  1   1
## T2_CLS_2  0.71      0.50 0.50 0.50  1   1
## T2_CLS_3  0.79      0.62 0.62 0.38  1   1
## T2_CLS_4  0.63      0.40 0.40 0.60  1   1
## T2_CLS_5  0.68      0.46 0.46 0.54  1   1
## T2_CLS_6  0.77      0.59 0.59 0.41  1   1
## T2_CLS_7  0.62      0.38 0.38 0.62  1   1
## T2_CLS_8  0.72      0.51 0.51 0.49  1   1
## T2_CLS_9  0.77      0.60 0.60 0.40  1   1
## T2_CLS_10 0.73      0.53 0.53 0.47  1   1
## T2_CLS_11 0.64      0.41 0.41 0.59  1   1
## T2_CLS_12 0.76      0.57 0.57 0.43  1   1
## T2_CLS_13 0.30           0.09 0.91  1   1
## T2_CLS_14 0.57      0.32 0.32 0.68  1   1
## T2_CLS_15 0.79      0.62 0.62 0.38  1   1
## T2_CLS_16 0.61      0.37 0.37 0.63  1   1
## T2_CLS_17 0.70      0.49 0.49 0.51  1   1
## T2_CLS_18 0.62      0.39 0.39 0.61  1   1
## T2_CLS_19 0.64      0.40 0.40 0.60  1   1
## T2_CLS_20 0.64      0.42 0.42 0.58  1   1
## T2_CLS_21 0.66      0.43 0.43 0.57  1   1
## 
## With Sums of squares  of:
##   g F1*  h2 
## 9.6 0.0 4.7 
## 
## general/max  2.05   max/min =   2.584538e+16
## mean percent general =  1    with sd =  0 and cv of  0 
## Explained Common Variance of the general factor =  1 
## 
## The degrees of freedom are 189  and the fit is  2.68 
## The number of observations was  203  with Chi Square =  517.68  with prob <  2.2e-32
## The root mean square of the residuals is  0.06 
## The df corrected root mean square of the residuals is  0.07
## RMSEA index =  0.092  and the 10 % confidence intervals are  0.083 0.102
## BIC =  -486.51
## 
## Compare this with the adequacy of just a general factor and no group factors
## The degrees of freedom for just the general factor are 189  and the fit is  2.68 
## The number of observations was  203  with Chi Square =  517.68  with prob <  2.2e-32
## The root mean square of the residuals is  0.06 
## The df corrected root mean square of the residuals is  0.07 
## 
## RMSEA index =  0.092  and the 10 % confidence intervals are  0.083 0.102
## BIC =  -486.51 
## 
## Measures of factor score adequacy             
##                                                  g F1*
## Correlation of scores with factors            0.98   0
## Multiple R square of scores with factors      0.95   0
## Minimum correlation of factor score estimates 0.90  -1
## 
##  Total, General and Subset omega for each subset
##                                                  g  F1*
## Omega total for total scores and subscales    0.95 0.95
## Omega general for total scores and subscales  0.95 0.95
## Omega group for total scores and subscales    0.00 0.00
data %>% 
  select(T3_CLS_1:T3_CLS_21) %>% 
  omega(nfactors = 1)
## Omega_h for 1 factor is not meaningful, just omega_t
## Warning in schmid(m, nfactors, fm, digits, rotate = rotate, n.obs = n.obs, :
## Omega_h and Omega_asymptotic are not meaningful with one factor
## Omega 
## Call: omegah(m = m, nfactors = nfactors, fm = fm, key = key, flip = flip, 
##     digits = digits, title = title, sl = sl, labels = labels, 
##     plot = plot, n.obs = n.obs, rotate = rotate, Phi = Phi, option = option, 
##     covar = covar)
## Alpha:                 0.95 
## G.6:                   0.96 
## Omega Hierarchical:    0.95 
## Omega H asymptotic:    1 
## Omega Total            0.95 
## 
## Schmid Leiman Factor loadings greater than  0.2 
##              g  F1*   h2   h2   u2 p2 com
## T3_CLS_1  0.73      0.53 0.53 0.47  1   1
## T3_CLS_2  0.76      0.57 0.57 0.43  1   1
## T3_CLS_3  0.77      0.59 0.59 0.41  1   1
## T3_CLS_4  0.71      0.50 0.50 0.50  1   1
## T3_CLS_5  0.69      0.48 0.48 0.52  1   1
## T3_CLS_6  0.79      0.63 0.63 0.37  1   1
## T3_CLS_7  0.65      0.42 0.42 0.58  1   1
## T3_CLS_8  0.71      0.50 0.50 0.50  1   1
## T3_CLS_9  0.77      0.59 0.59 0.41  1   1
## T3_CLS_10 0.73      0.54 0.54 0.46  1   1
## T3_CLS_11 0.66      0.43 0.43 0.57  1   1
## T3_CLS_12 0.79      0.62 0.62 0.38  1   1
## T3_CLS_13 0.46      0.21 0.21 0.79  1   1
## T3_CLS_14 0.55      0.30 0.30 0.70  1   1
## T3_CLS_15 0.80      0.64 0.64 0.36  1   1
## T3_CLS_16 0.61      0.38 0.38 0.62  1   1
## T3_CLS_17 0.74      0.55 0.55 0.45  1   1
## T3_CLS_18 0.69      0.48 0.48 0.52  1   1
## T3_CLS_19 0.61      0.37 0.37 0.63  1   1
## T3_CLS_20 0.63      0.39 0.39 0.61  1   1
## T3_CLS_21 0.68      0.46 0.46 0.54  1   1
## 
## With Sums of squares  of:
##    g  F1*   h2 
## 10.2  0.0  5.2 
## 
## general/max  1.96   max/min =   3.731544e+16
## mean percent general =  1    with sd =  0 and cv of  0 
## Explained Common Variance of the general factor =  1 
## 
## The degrees of freedom are 189  and the fit is  2.78 
## The number of observations was  203  with Chi Square =  537.02  with prob <  4.3e-35
## The root mean square of the residuals is  0.06 
## The df corrected root mean square of the residuals is  0.07
## RMSEA index =  0.095  and the 10 % confidence intervals are  0.086 0.105
## BIC =  -467.17
## 
## Compare this with the adequacy of just a general factor and no group factors
## The degrees of freedom for just the general factor are 189  and the fit is  2.78 
## The number of observations was  203  with Chi Square =  537.02  with prob <  4.3e-35
## The root mean square of the residuals is  0.06 
## The df corrected root mean square of the residuals is  0.07 
## 
## RMSEA index =  0.095  and the 10 % confidence intervals are  0.086 0.105
## BIC =  -467.17 
## 
## Measures of factor score adequacy             
##                                                  g F1*
## Correlation of scores with factors            0.98   0
## Multiple R square of scores with factors      0.96   0
## Minimum correlation of factor score estimates 0.91  -1
## 
##  Total, General and Subset omega for each subset
##                                                  g  F1*
## Omega total for total scores and subscales    0.95 0.95
## Omega general for total scores and subscales  0.95 0.95
## Omega group for total scores and subscales    0.00 0.00

Situational Self-Control

# Copy all SMS5 items to new columns ending in "r" (identity copy),
# so reversed and non-reversed items share a common suffix
data <- data %>%
  mutate(across(contains("SMS5"), .names = "{col}r"))

# Reverse code SMS5 (items 3 and 5)
data <- data %>%
  mutate(across(ends_with(paste0("SMS5_", c(3, 5))), 
                ~nice_reverse(.x, 5), .names = "{col}r"))

# Get mean SMS5
data <- data %>% 
  mutate(T2_SMS5 = rowMeans(pick(T2_SMS5_1r:T2_SMS5_6r)))

# Get alpha & omega
data %>% 
  select(T2_SMS5_1r:T2_SMS5_6r) %>% 
  omega(nfactors = 1)
## Omega_h for 1 factor is not meaningful, just omega_t
## Warning in schmid(m, nfactors, fm, digits, rotate = rotate, n.obs = n.obs, :
## Omega_h and Omega_asymptotic are not meaningful with one factor
## Omega 
## Call: omegah(m = m, nfactors = nfactors, fm = fm, key = key, flip = flip, 
##     digits = digits, title = title, sl = sl, labels = labels, 
##     plot = plot, n.obs = n.obs, rotate = rotate, Phi = Phi, option = option, 
##     covar = covar)
## Alpha:                 0.65 
## G.6:                   0.65 
## Omega Hierarchical:    0.67 
## Omega H asymptotic:    1 
## Omega Total            0.67 
## 
## Schmid Leiman Factor loadings greater than  0.2 
##               g  F1*   h2   h2   u2 p2 com
## T2_SMS5_1r 0.63      0.40 0.40 0.60  1   1
## T2_SMS5_2r                0.00 1.00  1   1
## T2_SMS5_3r 0.50      0.25 0.25 0.75  1   1
## T2_SMS5_4r 0.58      0.34 0.34 0.66  1   1
## T2_SMS5_5r 0.59      0.35 0.35 0.65  1   1
## T2_SMS5_6r 0.62      0.39 0.39 0.61  1   1
## 
## With Sums of squares  of:
##    g  F1*   h2 
## 1.72 0.00 0.61 
## 
## general/max  2.84   max/min =   1.412284e+16
## mean percent general =  1    with sd =  0 and cv of  0 
## Explained Common Variance of the general factor =  1 
## 
## The degrees of freedom are 9  and the fit is  0.15 
## The number of observations was  203  with Chi Square =  30.12  with prob <  0.00042
## The root mean square of the residuals is  0.07 
## The df corrected root mean square of the residuals is  0.09
## RMSEA index =  0.107  and the 10 % confidence intervals are  0.067 0.151
## BIC =  -17.7
## 
## Compare this with the adequacy of just a general factor and no group factors
## The degrees of freedom for just the general factor are 9  and the fit is  0.15 
## The number of observations was  203  with Chi Square =  30.12  with prob <  0.00042
## The root mean square of the residuals is  0.07 
## The df corrected root mean square of the residuals is  0.09 
## 
## RMSEA index =  0.107  and the 10 % confidence intervals are  0.067 0.151
## BIC =  -17.7 
## 
## Measures of factor score adequacy             
##                                                  g F1*
## Correlation of scores with factors            0.85   0
## Multiple R square of scores with factors      0.73   0
## Minimum correlation of factor score estimates 0.46  -1
## 
##  Total, General and Subset omega for each subset
##                                                  g  F1*
## Omega total for total scores and subscales    0.67 0.67
## Omega general for total scores and subscales  0.67 0.67
## Omega group for total scores and subscales    0.00 0.00

Positive and Negative Affect

# Get mean of PANAS
# Positive affect = 1, 3, 5, 7, 9
# Negative affect = 2, 4, 6, 8, 10
data <- data %>% mutate(
  T2_PANAS_pos = rowMeans(pick(paste0("T2_PANAS_", seq(1, 9, 2)))),
  T2_PANAS_neg = rowMeans(pick(paste0("T2_PANAS_", seq(2, 10, 2)))))

# Get alpha & omega
# Positive affect
data %>% 
  select(all_of(paste0("T2_PANAS_", seq(1, 9, 2)))) %>% 
  omega(nfactors = 1)
## Omega_h for 1 factor is not meaningful, just omega_t
## Warning in schmid(m, nfactors, fm, digits, rotate = rotate, n.obs = n.obs, :
## Omega_h and Omega_asymptotic are not meaningful with one factor
## Omega 
## Call: omegah(m = m, nfactors = nfactors, fm = fm, key = key, flip = flip, 
##     digits = digits, title = title, sl = sl, labels = labels, 
##     plot = plot, n.obs = n.obs, rotate = rotate, Phi = Phi, option = option, 
##     covar = covar)
## Alpha:                 0.86 
## G.6:                   0.84 
## Omega Hierarchical:    0.86 
## Omega H asymptotic:    1 
## Omega Total            0.86 
## 
## Schmid Leiman Factor loadings greater than  0.2 
##               g  F1*   h2   h2   u2 p2 com
## T2_PANAS_1 0.73      0.53 0.53 0.47  1   1
## T2_PANAS_3 0.70      0.49 0.49 0.51  1   1
## T2_PANAS_5 0.79      0.63 0.63 0.37  1   1
## T2_PANAS_7 0.67      0.45 0.45 0.55  1   1
## T2_PANAS_9 0.84      0.70 0.70 0.30  1   1
## 
## With Sums of squares  of:
##   g F1*  h2 
## 2.8 0.0 1.6 
## 
## general/max  1.74   max/min =   2.911861e+16
## mean percent general =  1    with sd =  0 and cv of  0 
## Explained Common Variance of the general factor =  1 
## 
## The degrees of freedom are 5  and the fit is  0.05 
## The number of observations was  203  with Chi Square =  10.65  with prob <  0.059
## The root mean square of the residuals is  0.03 
## The df corrected root mean square of the residuals is  0.04
## RMSEA index =  0.074  and the 10 % confidence intervals are  0 0.138
## BIC =  -15.91
## 
## Compare this with the adequacy of just a general factor and no group factors
## The degrees of freedom for just the general factor are 5  and the fit is  0.05 
## The number of observations was  203  with Chi Square =  10.65  with prob <  0.059
## The root mean square of the residuals is  0.03 
## The df corrected root mean square of the residuals is  0.04 
## 
## RMSEA index =  0.074  and the 10 % confidence intervals are  0 0.138
## BIC =  -15.91 
## 
## Measures of factor score adequacy             
##                                                  g F1*
## Correlation of scores with factors            0.94   0
## Multiple R square of scores with factors      0.88   0
## Minimum correlation of factor score estimates 0.75  -1
## 
##  Total, General and Subset omega for each subset
##                                                  g  F1*
## Omega total for total scores and subscales    0.86 0.86
## Omega general for total scores and subscales  0.86 0.86
## Omega group for total scores and subscales    0.00 0.00
# Negative affect
data %>% 
  select(all_of(paste0("T2_PANAS_", seq(2, 10, 2)))) %>% 
  omega(nfactors = 1)
## Omega_h for 1 factor is not meaningful, just omega_t
## Warning in schmid(m, nfactors, fm, digits, rotate = rotate, n.obs = n.obs, :
## Omega_h and Omega_asymptotic are not meaningful with one factor
## Omega 
## Call: omegah(m = m, nfactors = nfactors, fm = fm, key = key, flip = flip, 
##     digits = digits, title = title, sl = sl, labels = labels, 
##     plot = plot, n.obs = n.obs, rotate = rotate, Phi = Phi, option = option, 
##     covar = covar)
## Alpha:                 0.8 
## G.6:                   0.8 
## Omega Hierarchical:    0.82 
## Omega H asymptotic:    1 
## Omega Total            0.82 
## 
## Schmid Leiman Factor loadings greater than  0.2 
##                g  F1*   h2   h2   u2 p2 com
## T2_PANAS_2  0.85      0.73 0.73 0.27  1   1
## T2_PANAS_4  0.34           0.11 0.89  1   1
## T2_PANAS_6  0.79      0.62 0.62 0.38  1   1
## T2_PANAS_8  0.67      0.45 0.45 0.55  1   1
## T2_PANAS_10 0.73      0.54 0.54 0.46  1   1
## 
## With Sums of squares  of:
##   g F1*  h2 
## 2.5 0.0 1.4 
## 
## general/max  1.72   max/min =   1.02867e+17
## mean percent general =  1    with sd =  0 and cv of  0 
## Explained Common Variance of the general factor =  1 
## 
## The degrees of freedom are 5  and the fit is  0.16 
## The number of observations was  203  with Chi Square =  31.84  with prob <  6.4e-06
## The root mean square of the residuals is  0.07 
## The df corrected root mean square of the residuals is  0.1
## RMSEA index =  0.163  and the 10 % confidence intervals are  0.112 0.219
## BIC =  5.28
## 
## Compare this with the adequacy of just a general factor and no group factors
## The degrees of freedom for just the general factor are 5  and the fit is  0.16 
## The number of observations was  203  with Chi Square =  31.84  with prob <  6.4e-06
## The root mean square of the residuals is  0.07 
## The df corrected root mean square of the residuals is  0.1 
## 
## RMSEA index =  0.163  and the 10 % confidence intervals are  0.112 0.219
## BIC =  5.28 
## 
## Measures of factor score adequacy             
##                                                  g F1*
## Correlation of scores with factors            0.93   0
## Multiple R square of scores with factors      0.87   0
## Minimum correlation of factor score estimates 0.73  -1
## 
##  Total, General and Subset omega for each subset
##                                                  g  F1*
## Omega total for total scores and subscales    0.82 0.82
## Omega general for total scores and subscales  0.82 0.82
## Omega group for total scores and subscales    0.00 0.00

Charity

# Get mean charity ratings (items ending in "1_1") and
# familiarity ratings (items ending in "2_1")
data <- data %>% mutate(
  T2_Charity = rowMeans(pick(contains("charity") & ends_with("1_1"))),
  T2_Familiarity = rowMeans(pick(contains("charity") & ends_with("2_1"))))

# Get alpha & omega
data %>% 
  select(contains("charity") & ends_with("1_1")) %>% 
  omega(nfactors = 1)
## Omega_h for 1 factor is not meaningful, just omega_t
## Warning in schmid(m, nfactors, fm, digits, rotate = rotate, n.obs = n.obs, :
## Omega_h and Omega_asymptotic are not meaningful with one factor
## Warning in cov2cor(t(w) %*% r %*% w): diag(V) had non-positive or NA entries;
## the non-finite result may be dubious
## Omega 
## Call: omegah(m = m, nfactors = nfactors, fm = fm, key = key, flip = flip, 
##     digits = digits, title = title, sl = sl, labels = labels, 
##     plot = plot, n.obs = n.obs, rotate = rotate, Phi = Phi, option = option, 
##     covar = covar)
## Alpha:                 0.98 
## G.6:                   0.99 
## Omega Hierarchical:    0.98 
## Omega H asymptotic:    1 
## Omega Total            0.98 
## 
## Schmid Leiman Factor loadings greater than  0.2 
##                            g  F1*   h2   h2   u2 p2 com
## T2_charity.moisson1_1   0.84      0.71 0.71 0.29  1   1
## T2_charity.bonneau1_1   0.85      0.72 0.72 0.28  1   1
## T2_charity.AMDI1_1      0.86      0.73 0.73 0.27  1   1
## T2_charity.CAAM1_1      0.86      0.74 0.74 0.26  1   1
## T2_charity.centrefem1_1 0.80      0.64 0.64 0.36  1   1
## T2_charity.LGBTQ1_1     0.74      0.55 0.55 0.45  1   1
## T2_charity.equiterre1_1 0.87      0.75 0.75 0.25  1   1
## T2_charity.dejeuner1_1  0.80      0.64 0.64 0.36  1   1
## T2_charity.cancer1_1    0.87      0.75 0.75 0.25  1   1
## T2_charity.parkinson1_1 0.87      0.75 0.75 0.25  1   1
## T2_charity.OXFAM1_1     0.81      0.65 0.65 0.35  1   1
## T2_charity.papillon1_1  0.85      0.72 0.72 0.28  1   1
## T2_charity.croix1_1     0.86      0.74 0.74 0.26  1   1
## T2_charity.centraide1_1 0.87      0.76 0.76 0.24  1   1
## T2_charity.autisme1_1   0.82      0.68 0.68 0.32  1   1
## T2_charity.suzuki1_1    0.78      0.62 0.62 0.38  1   1
## T2_charity.conserv1_1   0.81      0.65 0.65 0.35  1   1
## T2_charity.coeur1_1     0.85      0.73 0.73 0.27  1   1
## T2_charity.UNICEF1_1    0.82      0.67 0.67 0.33  1   1
## T2_charity.amnistie1_1  0.81      0.66 0.66 0.34  1   1
## T2_charity.green1_1     0.77      0.59 0.59 0.41  1   1
## T2_charity.WWF1_1       0.79      0.62 0.62 0.38  1   1
## T2_charity.MSF1_1       0.86      0.74 0.74 0.26  1   1
## T2_charity.armee1_1     0.75      0.56 0.56 0.44  1   1
## 
## With Sums of squares  of:
##   g F1*  h2 
##  16   0  11 
## 
## general/max  1.45   max/min =   Inf
## mean percent general =  1    with sd =  0 and cv of  0 
## Explained Common Variance of the general factor =  1 
## 
## The degrees of freedom are 252  and the fit is  4.85 
## The number of observations was  203  with Chi Square =  934.44  with prob <  4.3e-79
## The root mean square of the residuals is  0.05 
## The df corrected root mean square of the residuals is  0.05
## RMSEA index =  0.115  and the 10 % confidence intervals are  0.108 0.124
## BIC =  -404.48
## 
## Compare this with the adequacy of just a general factor and no group factors
## The degrees of freedom for just the general factor are 252  and the fit is  4.85 
## The number of observations was  203  with Chi Square =  934.44  with prob <  4.3e-79
## The root mean square of the residuals is  0.05 
## The df corrected root mean square of the residuals is  0.05 
## 
## RMSEA index =  0.115  and the 10 % confidence intervals are  0.108 0.124
## BIC =  -404.48 
## 
## Measures of factor score adequacy             
##                                                  g F1*
## Correlation of scores with factors            0.99   0
## Multiple R square of scores with factors      0.98   0
## Minimum correlation of factor score estimates 0.96  -1
## 
##  Total, General and Subset omega for each subset
##                                                  g  F1*
## Omega total for total scores and subscales    0.98 0.98
## Omega general for total scores and subscales  0.98 0.98
## Omega group for total scores and subscales    0.00 0.00

Intensity * Duration

# Create new variable blastintensity.duration
data <- data %>% 
  mutate(T1_blastintensity.duration = T1_blastintensity * T1_blastduration,
         T2_blastintensity.duration = T2_blastintensity * T2_blastduration,
         T3_blastintensity.duration = T3_blastintensity * T3_blastduration)

Assumptions

In this section, we: (a) test assumptions of normality, (b) transform variables violating assumptions, (c) test assumptions of homoscedasticity, and (d) identify and winsorize outliers.

At this stage, we define a list of our relevant variables.

# Make list of DVs
col.list <- c("T1_blastintensity", "T1_blastduration", "T1_blastintensity.duration",
              "T2_blastintensity", "T2_blastduration", "T2_blastintensity.duration",
              "T3_blastintensity", "T3_blastduration", "T3_blastintensity.duration",
              "T1_BSCS", "T1_BAQ", 
              "T1_attitude", "T2_attitude", "T3_attitude",
              "T1_dehumanization", "T2_dehumanization", "T3_dehumanization", 
              "T1_NOBAGS", "T2_NOBAGS", "T3_NOBAGS", 
              "T1_WHS", "T2_WHS", "T3_WHS", 
              "T1_CLS", "T2_CLS", "T3_CLS", 
              "T2_SMS5", "T2_PANAS_pos", "T2_PANAS_neg", "T2_Charity", 
              "T2_Familiarity", "T2_memory.altruistic", "T2_memory.aggressive")

Power Analysis

We report below how we ran our power analysis for this study.

# According to Cohen:
# d = 0.2 == 'small'
# d = 0.5 == 'medium'
# d = 0.8 == 'large'

# Sensitivity analysis
pwr.t.test(n = 50, d = , sig.level = .05, power = .80, type = c("two.sample"))
## 
##      Two-sample t test power calculation 
## 
##               n = 50
##               d = 0.565858
##       sig.level = 0.05
##           power = 0.8
##     alternative = two.sided
## 
## NOTE: n is number in *each* group
# A sample size of 50 only allows us to detect effect sizes greater than d = 0.57 (medium-to-large)

###########################
# Required sample size for large effect
pwr.t.test(n = , d = 0.80, sig.level = .05, power = .80, type = c("two.sample"))
## 
##      Two-sample t test power calculation 
## 
##               n = 25.52458
##               d = 0.8
##       sig.level = 0.05
##           power = 0.8
##     alternative = two.sided
## 
## NOTE: n is number in *each* group
# Required sample size: 26 PER GROUP.

# Required sample size for medium effect
pwr.t.test(n = , d = 0.50, sig.level = .05, power = .80, type = c("two.sample"))
## 
##      Two-sample t test power calculation 
## 
##               n = 63.76561
##               d = 0.5
##       sig.level = 0.05
##           power = 0.8
##     alternative = two.sided
## 
## NOTE: n is number in *each* group
# Required sample size: 64 PER GROUP.

# Required sample size for small effect
pwr.t.test(n = , d = 0.20, sig.level = .05, power = .80, type = c("two.sample"))
## 
##      Two-sample t test power calculation 
## 
##               n = 393.4057
##               d = 0.2
##       sig.level = 0.05
##           power = 0.8
##     alternative = two.sided
## 
## NOTE: n is number in *each* group
# Required sample size: 394 PER GROUP.

# Function to compute Cohen's D from means and SDs for Kang, Gray, & Dovidio (2014)
get.d <- function(m1, s1, m2, s2) {
  SDpooled <- sqrt((s1^2 + s2^2) / 2)
  (d <- (m2 - m1) / SDpooled)
}

### Race IAT ###
# Meditation vs (discussion + waitlist): d = 0.80 (large)
d = get.d(m1 = -0.163, s1 = 0.33, m2 = mean(c(0.050, 0.096)), s2 = mean(c(0.31, 0.20)))
(n = ceiling(pwr.t.test(n = , d = d, sig.level = .05, power = .80, type = c("two.sample"))$n))
## [1] 26
n*3
## [1] 78
# [n >= 26 per group, T=78]

# Meditation vs waitlist: d = 0.95 (large)
d = get.d(m1 = -0.163, s1 = 0.33, m2 = 0.096, s2 = 0.20)
(n = ceiling(pwr.t.test(n = , d = d, sig.level = .05, power = .80, type = c("two.sample"))$n))
## [1] 19
n*3
## [1] 57
# [n >= 19 per group, T=57]

# Meditation vs discussion: d = 0.67 (medium-large) 
d = get.d(m1 = -0.163, s1 = 0.33, m2 = 0.050, s2 = 0.31)
(n = ceiling(pwr.t.test(n = , d = d, sig.level = .05, power = .80, type = c("two.sample"))$n))
## [1] 37
n*3
## [1] 111
# [n >= 37 per group, T=111]

### Homeless IAT ###
# Meditation vs (discussion + waitlist): d = 0.42 (medium)
d = get.d(m1 = -0.021, s1 = 0.29, m2 = mean(c(0.056, 0.149)), s2 = mean(c(0.31, 0.31)))
(n = ceiling(pwr.t.test(n = , d = d, sig.level = .05, power = .80, type = c("two.sample"))$n))
## [1] 94
n*3
## [1] 282
# [n >= 94 per group, T=282]

# Meditation vs waitlist : d = 0.57 (medium)
d = get.d(m1 = -0.021, s1 = 0.29, m2 = 0.149, s2 = 0.31)
(n = ceiling(pwr.t.test(n = , d = d, sig.level = .05, power = .80, type = c("two.sample"))$n))
## [1] 50
n*3
## [1] 150
# [n >= 50 per group, T=150]

# Meditation vs discussion : d = 0.26 (small)
d = get.d(m1 = -0.021, s1 = 0.29, m2 = 0.056, s2 = 0.31)
(n = ceiling(pwr.t.test(n = , d = d, sig.level = .05, power = .80, type = c("two.sample"))$n))
## [1] 240
n*3
## [1] 720
# [n >= 240 per group, T=720]


###########################
### FOR REGRESSIONS #######
###########################

# According to Cohen:
# f2 = 0.02 == 'small'
# f2 = 0.15 == 'medium'
# f2 = 0.35 == 'large'

# Sensitivity analysis
# n = number of observations
n = 150
# p = the number of predictors
p = 7 # number of coefficients (one line/term per group (3) + one for T1)
# u = numerator degrees of freedom = number of predictors - 1
u = p - 1
# v = denominator degrees of freedom = sample size - number of predictors
v = n - u - 1
# f2 = effect size measure
round(pwr.f2.test(u = u, v = v, f2 = , sig.level = .05, power = .80)$f2,2)
## [1] 0.09
# A sample size of 150 only allows us to detect effect sizes greater than f2 = 0.09 (between small and medium)

# BUT note that Aguinis, Beaty, Boik, and Pierce (2005) have shown that the average effect size in tests of moderation is only f2 = 0.009.
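# As a sketch, the required sample size for f2 = 0.009 (using the same u and p
# as above) would be:
v = pwr.f2.test(u = u, v = , f2 = 0.009, sig.level = .05, power = .80)$v
ceiling(v + p)
# This comes out to roughly 1,500 participants in total, far beyond what is
# feasible here.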

###########################
# Required sample size for large effect
v = pwr.f2.test(u = u, v = , f2 = .25, sig.level = .05, power = .80)$v
# Required sample size: v + p
ceiling(v + p)
## [1] 62
# 62 PER GROUP.

# Required sample size for medium effect
v = pwr.f2.test(u = u, v = , f2 = .15, sig.level = .05, power = .80)$v
# Required sample size: v + p
ceiling(v + p)
## [1] 98
# 98 PER GROUP.

# Required sample size for small effect
v = pwr.f2.test(u = u, v = , f2 = .02, sig.level = .05, power = .80)$v
# Required sample size: v + p
ceiling(v + p)
## [1] 688
# 688 PER GROUP.

###############################################
### Calculating f2 from Kang et al. (2014) ###
# Cohen's d to Cohen's f = d/2
f = 0.57/2

# OR calculate f with function rather than d/2
pwr.anova.test(k = 3, n = 34, f = , sig.level = .05, power = .80)$f
## [1] 0.3120502
# According to G*Power
f = 0.06976247

f2 = f^2
v = pwr.f2.test(u = u, v = , f2 = f2, sig.level = .05, power = .80)$v
# Required sample size: v + p
ceiling(v + p)
## [1] 2806
# 2806 PER GROUP with the G*Power f (using f = 0.57/2 = 0.285 instead would give 175).

### Calculating f2 ###
# Existing R2 for moderated regressions on ego depletion are:

# 1) R2 = .21 (Hofmann, Rauch, & Gawronski, 2007; food + depletion). Note: they got one R2 for both automatic and explicit attitudes because they included both in the same model...
# 2) R2 = .15 + .22 = .37 (Schmidt, Zimmermann, Banse, & Imhoff, 2015; aggression + depletion) Note: this is the R2 from the second step of the three-step hierarchical regression + the first step (because the R2 provided for step 2 is delta R2 so step 1 was already subtracted...)
# 3) R2 = .27 (Friese, Hofmann, & Wänke, 2008, study 1; food + cognitive load)
# 4) R2 = .30 (Friese, Hofmann, & Wänke, 2008, study 2; food + depletion)
# 5) R2 = .40 (Friese, Hofmann, & Wänke, 2008, study 3; food + depletion)
# 6) R2 = .12 (Ostafin, Marlatt, & Greenwald, 2008; alcohol + depletion)
# 7) R2 = .42 (Hofmann & Friese, 2008; food + alcohol). Note : they got one R2 for both automatic and explicit attitudes because they included both in the same model... they also included positive affect, negative affect, and alcohol-related problems as covariates.
# 8) R2 = .22 (Hofmann, Gschwendner, Friese, Wiers, & Schmitt, 2008, study 1; sex + WMC)
# 9) R2 = .14 (Hofmann, Gschwendner, Friese, Wiers, & Schmitt, 2008, study 2; food + WMC)
# 10) R2 = .27 (Hofmann, Gschwendner, Friese, Wiers, & Schmitt, 2008, study 3; anger behaviour + WMC)
# If we are evaluating the impact of a set of predictors on an outcome, then the f2 formula is:
f2 = R2/(1 - R2)

# Else, if we are evaluating the impact of one set of predictors above and beyond a second set of predictors (or covariates), then the f2 formula is:
f2 = (R2AB - R2A)/(1 - R2AB)
# p = the number of predictors
p = 3 # one per predictor (attitude, condition) + interaction (if we don't add explicit and implicit in the same model... else + 1)
# u = numerator degrees of freedom = number of predictors - 1
u = p - 1

# Required sample size for effect size of Hofmann, Rauch, & Gawronski (2007; food + depletion)
R2 = 0.21
f2 = R2/(1 - R2)
v = pwr.f2.test(u = u, v = , f2 = f2, sig.level = .05, power = .80)$v
# Required sample size: v + p
ceiling(v + p)
## [1] 40
# 40 PER GROUP.

# Required sample size for effect size of Schmidt, Zimmermann, Banse, & Imhoff (2015; aggression + depletion)
R2 = .22 + 0.15
f2 = R2/(1 - R2)
v = pwr.f2.test(u = u, v = , f2 = f2, sig.level = .05, power = .80)$v
# Required sample size: v + p
ceiling(v + p)
## [1] 20
# 20 PER GROUP.

# For this paper only (for the sake of time), we will also do it the "right" way since they did a hierarchical regression (but this is not the analysis we are doing, so it might not apply...)
R2A = .22
R2AB = R2A + 0.15
f2 = (R2AB - R2A)/(1 - R2AB)
v = pwr.f2.test(u = u, v = , f2 = f2, sig.level = .05, power = .80)$v
# Required sample size: v + p
ceiling(v + p)
## [1] 44
# 44 PER GROUP.

# Required sample size for effect size of Friese, Hofmann, & Wänke (2008, study 1; food + cognitive load)
R2 = .27
f2 = R2/(1 - R2)
v = pwr.f2.test(u = u, v = , f2 = f2, sig.level = .05, power = .80)$v
# Required sample size: v + p
ceiling(v + p)
## [1] 30
# 30 PER GROUP.

# Required sample size for effect size of Friese, Hofmann, & Wänke (2008, study 2; food + depletion)
R2 = .30
f2 = R2/(1 - R2)
v = pwr.f2.test(u = u, v = , f2 = f2, sig.level = .05, power = .80)$v
# Required sample size: v + p
ceiling(v + p)
## [1] 26
# 26 PER GROUP.

# Required sample size for effect size of Friese, Hofmann, & Wänke (2008, study 3; food + depletion)
R2 = .40
f2 = R2/(1 - R2)
v = pwr.f2.test(u = u, v = , f2 = f2, sig.level = .05, power = .80)$v
# Required sample size: v + p
ceiling(v + p)
## [1] 18
# 18 PER GROUP.

# Required sample size for effect size of Ostafin, Marlatt, & Greenwald (2008; alcohol + depletion)
R2 = .12
f2 = R2/(1 - R2)
v = pwr.f2.test(u = u, v = , f2 = f2, sig.level = .05, power = .80)$v
# Required sample size: v + p
ceiling(v + p)
## [1] 74
# 74 PER GROUP.

# Required sample size for effect size of Hofmann & Friese (2008; food + alcohol)
R2 = .42
f2 = R2/(1 - R2)
v = pwr.f2.test(u = u, v = , f2 = f2, sig.level = .05, power = .80)$v
# Required sample size: v + p
ceiling(v + p)
## [1] 17
# 17 PER GROUP.

# Required sample size for effect size of Hofmann, Gschwendner, Friese, Wiers, & Schmitt (2008, study 1; sex + WMC)
R2 = .22
f2 = R2/(1 - R2)
v = pwr.f2.test(u = u, v = , f2 = f2, sig.level = .05, power = .80)$v
# Required sample size: v + p
ceiling(v + p)
## [1] 38
# 38 PER GROUP.

# Required sample size for effect size of Hofmann, Gschwendner, Friese, Wiers, & Schmitt (2008, study 2; food + WMC)
R2 = .14
f2 = R2/(1 - R2)
v = pwr.f2.test(u = u, v = , f2 = f2, sig.level = .05, power = .80)$v
# Required sample size: v + p
ceiling(v + p)
## [1] 63
# 63 PER GROUP.

# Required sample size for effect size of Hofmann, Gschwendner, Friese, Wiers, & Schmitt (2008, study 3; anger behaviour + WMC)
R2 = .27
f2 = R2/(1 - R2)
v = pwr.f2.test(u = u, v = , f2 = f2, sig.level = .05, power = .80)$v
# Required sample size: v + p
ceiling(v + p)
## [1] 30
# 30 PER GROUP.

# Average sample size needed:
ceiling(mean(c(40, 20, 30, 26, 18, 74, 17, 38, 63, 30)))
## [1] 36
# 36 per group

Normality

lapply(col.list, function(x) 
  nice_normality(data, 
                 variable = x, 
                 title = x,
                 group = "T1_Group",
                 shapiro = TRUE,
                 histogram = TRUE))
## (Plots [[1]] to [[33]] displayed here: normality diagnostics for each variable in col.list.)

Several variables are clearly skewed. Let’s apply transformations. But first, let’s deal with the working memory task, SOPT (Self-Ordered Pointing Task). It is clearly problematic.

Transformation

The function below transforms variables according to the best possible transformation (via the bestNormalize package) and records the chosen transformation as an attribute. Standardization is deliberately left for a later step (standardize = FALSE).

predict_bestNormalize <- function(var) {
  # Find the best normalizing transformation (order-norm excluded, no standardization)
  x <- bestNormalize(var, standardize = FALSE, allow_orderNorm = FALSE)
  # Log which column is being processed and which transformation was chosen
  print(cur_column())
  print(x$chosen_transform)
  cat("\n")
  # Apply the chosen transformation and record its name as a "transform" attribute
  y <- predict(x)
  attr(y, "transform") <- c(attributes(y), attributes(x$chosen_transform)$class[1])
  y
}

set.seed(42)
data <- data %>% 
  mutate(across(all_of(col.list), 
                predict_bestNormalize,
                .names = "{.col}.t"))
## [1] "T1_blastintensity"
## I(x) Transformation with 203 nonmissing obs.
## 
## [1] "T1_blastduration"
## Non-Standardized Yeo-Johnson Transformation with 203 nonmissing obs.:
##  Estimated statistics:
##  - lambda = 0.8480719 
##  - mean (before standardization) = 372.6267 
##  - sd (before standardization) = 153.6659 
## 
## [1] "T1_blastintensity.duration"
## Non-Standardized sqrt(x + a) Transformation with 203 nonmissing obs.:
##  Relevant statistics:
##  - a = 0 
##  - mean (before standardization) = 57.79789 
##  - sd (before standardization) = 29.92885 
## 
## [1] "T2_blastintensity"
## I(x) Transformation with 203 nonmissing obs.
## 
## [1] "T2_blastduration"
## Non-Standardized double reversed Log_b(x + a) Transformation with 203 nonmissing obs.:
##  Relevant statistics:
##  - a = 
##  - b = 10 
##  - max(x) = 2200 ; min(x) = -200 
##  - mean (before standardization) = 0.2664483 
##  - sd (before standardization) = 0.1552362 
## 
## [1] "T2_blastintensity.duration"
## Non-Standardized sqrt(x + a) Transformation with 203 nonmissing obs.:
##  Relevant statistics:
##  - a = 0 
##  - mean (before standardization) = 50.70594 
##  - sd (before standardization) = 30.79619 
## 
## [1] "T3_blastintensity"
## Non-Standardized Yeo-Johnson Transformation with 203 nonmissing obs.:
##  Estimated statistics:
##  - lambda = 0.4908571 
##  - mean (before standardization) = 1.885324 
##  - sd (before standardization) = 1.194873 
## 
## [1] "T3_blastduration"
## I(x) Transformation with 203 nonmissing obs.
## 
## [1] "T3_blastintensity.duration"
## Non-Standardized sqrt(x + a) Transformation with 203 nonmissing obs.:
##  Relevant statistics:
##  - a = 0 
##  - mean (before standardization) = 49.61076 
##  - sd (before standardization) = 33.52397 
## 
## [1] "T1_BSCS"
## I(x) Transformation with 203 nonmissing obs.
## 
## [1] "T1_BAQ"
## Non-Standardized Log_b(x + a) Transformation with 203 nonmissing obs.:
##  Relevant statistics:
##  - a = 0 
##  - b = 10 
##  - mean (before standardization) = 0.5121964 
##  - sd (before standardization) = 0.1264322 
## 
## [1] "T1_attitude"
## Non-Standardized double reversed Log_b(x + a) Transformation with 203 nonmissing obs.:
##  Relevant statistics:
##  - a = 
##  - b = 10 
##  - max(x) = 107.6111 ; min(x) = 16.27778 
##  - mean (before standardization) = 0.700763 
##  - sd (before standardization) = 0.2840707 
## 
## [1] "T2_attitude"
## Non-Standardized double reversed Log_b(x + a) Transformation with 203 nonmissing obs.:
##  Relevant statistics:
##  - a = 
##  - b = 10 
##  - max(x) = 109.2444 ; min(x) = -1.688889 
##  - mean (before standardization) = 0.7724832 
##  - sd (before standardization) = 0.2719986 
## 
## [1] "T3_attitude"
## Non-Standardized double reversed Log_b(x + a) Transformation with 203 nonmissing obs.:
##  Relevant statistics:
##  - a = 
##  - b = 10 
##  - max(x) = 109.7778 ; min(x) = -7.555556 
##  - mean (before standardization) = 0.8077354 
##  - sd (before standardization) = 0.2579719 
## 
## [1] "T1_dehumanization"
## Non-Standardized Yeo-Johnson Transformation with 203 nonmissing obs.:
##  Estimated statistics:
##  - lambda = 4.309397 
##  - mean (before standardization) = 62422039 
##  - sd (before standardization) = 26502006 
## 
## [1] "T2_dehumanization"
## Non-Standardized double reversed Log_b(x + a) Transformation with 203 nonmissing obs.:
##  Relevant statistics:
##  - a = 
##  - b = 10 
##  - max(x) = 109.2 ; min(x) = -1.2 
##  - mean (before standardization) = 0.7993443 
##  - sd (before standardization) = 0.2122099 
## 
## [1] "T3_dehumanization"
## Non-Standardized double reversed Log_b(x + a) Transformation with 203 nonmissing obs.:
##  Relevant statistics:
##  - a = 
##  - b = 10 
##  - max(x) = 108.8889 ; min(x) = 2.222222 
##  - mean (before standardization) = 0.8057941 
##  - sd (before standardization) = 0.2225451 
## 
## [1] "T1_NOBAGS"
## Non-Standardized double reversed Log_b(x + a) Transformation with 203 nonmissing obs.:
##  Relevant statistics:
##  - a = 
##  - b = 10 
##  - max(x) = 3.355 ; min(x) = 0.895 
##  - mean (before standardization) = 0.2961235 
##  - sd (before standardization) = 0.1574908 
## 
## [1] "T2_NOBAGS"
## I(x) Transformation with 203 nonmissing obs.
## 
## [1] "T3_NOBAGS"
## Non-Standardized sqrt(x + a) Transformation with 203 nonmissing obs.:
##  Relevant statistics:
##  - a = 0 
##  - mean (before standardization) = 1.362011 
##  - sd (before standardization) = 0.1775818 
## 
## [1] "T1_WHS"
## Non-Standardized sqrt(x + a) Transformation with 203 nonmissing obs.:
##  Relevant statistics:
##  - a = 0 
##  - mean (before standardization) = 2.292266 
##  - sd (before standardization) = 0.2011462 
## 
## [1] "T2_WHS"
## I(x) Transformation with 203 nonmissing obs.
## 
## [1] "T3_WHS"
## Non-Standardized Box Cox Transformation with 203 nonmissing obs.:
##  Estimated statistics:
##  - lambda = 1.915833 
##  - mean (before standardization) = 13.07348 
##  - sd (before standardization) = 4.077983 
## 
## [1] "T1_CLS"
## Non-Standardized Yeo-Johnson Transformation with 203 nonmissing obs.:
##  Estimated statistics:
##  - lambda = 1.765136 
##  - mean (before standardization) = 12.70066 
##  - sd (before standardization) = 3.619577 
## 
## [1] "T2_CLS"
## Non-Standardized Box Cox Transformation with 203 nonmissing obs.:
##  Estimated statistics:
##  - lambda = 1.650611 
##  - mean (before standardization) = 8.289506 
##  - sd (before standardization) = 2.738414 
## 
## [1] "T3_CLS"
## I(x) Transformation with 203 nonmissing obs.
## 
## [1] "T2_SMS5"
## Non-Standardized sqrt(x + a) Transformation with 203 nonmissing obs.:
##  Relevant statistics:
##  - a = 0 
##  - mean (before standardization) = 1.4883 
##  - sd (before standardization) = 0.205304 
## 
## [1] "T2_PANAS_pos"
## Non-Standardized Yeo-Johnson Transformation with 203 nonmissing obs.:
##  Estimated statistics:
##  - lambda = 0.5858628 
##  - mean (before standardization) = 1.880847 
##  - sd (before standardization) = 0.5664185 
## 
## [1] "T2_PANAS_neg"
## Non-Standardized Yeo-Johnson Transformation with 203 nonmissing obs.:
##  Estimated statistics:
##  - lambda = -1.930842 
##  - mean (before standardization) = 0.4269545 
##  - sd (before standardization) = 0.03505011 
## 
## [1] "T2_Charity"
## I(x) Transformation with 203 nonmissing obs.
## 
## [1] "T2_Familiarity"
## Non-Standardized Log_b(x + a) Transformation with 203 nonmissing obs.:
##  Relevant statistics:
##  - a = 0 
##  - b = 10 
##  - mean (before standardization) = 0.3878763 
##  - sd (before standardization) = 0.1357485 
## 
## [1] "T2_memory.altruistic"
## Non-Standardized Box Cox Transformation with 203 nonmissing obs.:
##  Estimated statistics:
##  - lambda = -0.0579946 
##  - mean (before standardization) = 6.526004 
##  - sd (before standardization) = 0.4827819 
## 
## [1] "T2_memory.aggressive"
## Non-Standardized Box Cox Transformation with 203 nonmissing obs.:
##  Estimated statistics:
##  - lambda = -0.1672025 
##  - mean (before standardization) = 4.518254 
##  - sd (before standardization) = 0.2219788
col.list <- paste0(col.list, ".t")

Note. The I(x) "transformations" above are not actually transformations, but a shorthand for passing the data as is: for those variables, none of the attempted transformations improved normality, so none is applied. This label only appears when standardize is set to FALSE. When it is set to TRUE, those variables instead show center_scale(x), meaning the data are only centered and scaled, not otherwise transformed.
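A minimal illustration of this behavior, using simulated data (a sketch; the chosen transformation can vary with the random cross-validation folds, but for already-normal data it will typically be the identity, I(x)):

set.seed(1)
bestNormalize(rnorm(100), standardize = FALSE, allow_orderNorm = FALSE)$chosen_transform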

Let’s check if normality was corrected.

# Group normality
named.col.list <- setNames(col.list, unlist(lapply(data, function(x) attributes(x)$transform)))
lapply(named.col.list, function(x) 
  nice_normality(data, 
                 x, 
                 "T1_Group",
                 shapiro = TRUE,
                 title = x,
                 histogram = TRUE))
## (Plots displayed here for each variable, named by the transformation applied: no_transform, yeojohnson, sqrt_x, double_reverse_log, log_x, or boxcox.)

Looks rather reasonable now, though not perfect (fortunately contrasts are quite robust against violations of normality).

We can now resume with the next step: checking variance.

Homoscedasticity

# Plotting variance
plots(lapply(col.list, function(x) {
  nice_varplot(data, x, group = "T1_Group")
  }),
  n_columns = 2)
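As a complement to the plots, the rule of thumb can be spot-checked numerically. A quick sketch for a single variable, computing the maximum-to-minimum variance ratio across groups (values under 4 are generally considered acceptable):

data %>% 
  group_by(T1_Group) %>% 
  summarize(variance = var(T1_BSCS.t, na.rm = TRUE)) %>% 
  summarize(ratio = max(variance) / min(variance))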

Variance looks good. No group has four times the variance of any other group. We can now resume with checking outliers.

Outliers

We check outliers visually with the plot_outliers function, which draws red lines at +/- 3 median absolute deviations.

plots(lapply(col.list, function(x) {
  plot_outliers(data, x, group = "T1_Group", ytitle = x, binwidth = 0.3)
  }),
  n_columns = 2)

There are some outliers, but nothing unreasonable. Let’s still check with the 3 median absolute deviations (MAD) method.
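Conceptually, find_mad flags observations whose robust z score, based on the median and the median absolute deviation, exceeds the criterion. A minimal sketch of the same logic for a single variable (base R's mad() applies the usual 1.4826 consistency constant):

xx <- data$T1_BSCS.t
which(abs(xx - median(xx, na.rm = TRUE)) / mad(xx, na.rm = TRUE) > 3)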

data %>% 
  as.data.frame %>% 
  filter(T1_Group == "Waitlist") %>% 
  find_mad(col.list, criteria = 3)
## 11 outlier(s) based on 3 median absolute deviations for variable(s): 
##  T1_blastintensity.t, T1_blastduration.t, T1_blastintensity.duration.t, T2_blastintensity.t, T2_blastduration.t, T2_blastintensity.duration.t, T3_blastintensity.t, T3_blastduration.t, T3_blastintensity.duration.t, T1_BSCS.t, T1_BAQ.t, T1_attitude.t, T2_attitude.t, T3_attitude.t, T1_dehumanization.t, T2_dehumanization.t, T3_dehumanization.t, T1_NOBAGS.t, T2_NOBAGS.t, T3_NOBAGS.t, T1_WHS.t, T2_WHS.t, T3_WHS.t, T1_CLS.t, T2_CLS.t, T3_CLS.t, T2_SMS5.t, T2_PANAS_pos.t, T2_PANAS_neg.t, T2_Charity.t, T2_Familiarity.t, T2_memory.altruistic.t, T2_memory.aggressive.t 
## 
## The following participants were considered outliers for more than one variable: 
## 
##   Row n
## 1   6 4
## 2  10 2
## 3  20 3
## 4  87 2
## 
## Outliers per variable: 
## 
## $T2_blastduration.t
##   Row T2_blastduration.t_mad
## 1   6               3.556668
## 
## $T1_dehumanization.t
##    Row T1_dehumanization.t_mad
## 1    5               -3.838126
## 2    6               -4.126899
## 3    7               -3.201162
## 4   16               -3.669448
## 5   20               -3.507809
## 6   43               -3.586169
## 7   51               -3.214970
## 8   58               -3.332422
## 9   69               -3.380825
## 10  87               -3.255470
## 
## $T2_dehumanization.t
##   Row T2_dehumanization.t_mad
## 1   6               -5.417989
## 2  20               -3.326447
## 3  87               -3.141619
## 
## $T3_dehumanization.t
##   Row T3_dehumanization.t_mad
## 1   6               -4.240110
## 2  20               -3.163079
## 
## $T3_CLS.t
##   Row T3_CLS.t_mad
## 1  10    -3.035208
## 
## $T2_memory.aggressive.t
##   Row T2_memory.aggressive.t_mad
## 1  10                  -3.355042
data %>% 
  as.data.frame %>% 
  filter(T1_Group == "Reflection") %>% 
  find_mad(col.list, criteria = 3)
## 9 outlier(s) based on 3 median absolute deviations for variable(s): 
##  T1_blastintensity.t, T1_blastduration.t, T1_blastintensity.duration.t, T2_blastintensity.t, T2_blastduration.t, T2_blastintensity.duration.t, T3_blastintensity.t, T3_blastduration.t, T3_blastintensity.duration.t, T1_BSCS.t, T1_BAQ.t, T1_attitude.t, T2_attitude.t, T3_attitude.t, T1_dehumanization.t, T2_dehumanization.t, T3_dehumanization.t, T1_NOBAGS.t, T2_NOBAGS.t, T3_NOBAGS.t, T1_WHS.t, T2_WHS.t, T3_WHS.t, T1_CLS.t, T2_CLS.t, T3_CLS.t, T2_SMS5.t, T2_PANAS_pos.t, T2_PANAS_neg.t, T2_Charity.t, T2_Familiarity.t, T2_memory.altruistic.t, T2_memory.aggressive.t 
## 
## The following participants were considered outliers for more than one variable: 
## 
##   Row n
## 1  50 2
## 
## Outliers per variable: 
## 
## $T1_blastduration.t
##   Row T1_blastduration.t_mad
## 1  25              -3.069514
## 2  28              -3.365391
## 3  29              -3.465695
## 4  50              -3.587106
## 
## $T2_blastduration.t
##   Row T2_blastduration.t_mad
## 1   8               4.860844
## 
## $T2_attitude.t
##   Row T2_attitude.t_mad
## 1  18         -3.000369
## 
## $T1_WHS.t
##   Row T1_WHS.t_mad
## 1  52    -4.343345
## 
## $T2_SMS5.t
##   Row T2_SMS5.t_mad
## 1  50      3.019472
## 
## $T2_Familiarity.t
##   Row T2_Familiarity.t_mad
## 1  10            -3.277611
## 2  16            -3.136634
data %>% 
  as.data.frame %>% 
  filter(T1_Group == "Meditation") %>% 
  find_mad(col.list, criteria = 3)
## 12 outlier(s) based on 3 median absolute deviations for variable(s): 
##  T1_blastintensity.t, T1_blastduration.t, T1_blastintensity.duration.t, T2_blastintensity.t, T2_blastduration.t, T2_blastintensity.duration.t, T3_blastintensity.t, T3_blastduration.t, T3_blastintensity.duration.t, T1_BSCS.t, T1_BAQ.t, T1_attitude.t, T2_attitude.t, T3_attitude.t, T1_dehumanization.t, T2_dehumanization.t, T3_dehumanization.t, T1_NOBAGS.t, T2_NOBAGS.t, T3_NOBAGS.t, T1_WHS.t, T2_WHS.t, T3_WHS.t, T1_CLS.t, T2_CLS.t, T3_CLS.t, T2_SMS5.t, T2_PANAS_pos.t, T2_PANAS_neg.t, T2_Charity.t, T2_Familiarity.t, T2_memory.altruistic.t, T2_memory.aggressive.t 
## 
## The following participants were considered outliers for more than one variable: 
## 
##   Row n
## 1  35 2
## 2  38 2
## 3  48 2
## 4  58 4
## 
## Outliers per variable: 
## 
## $T1_blastintensity.t
##   Row T1_blastintensity.t_mad
## 1  58                3.102657
## 
## $T1_blastduration.t
##   Row T1_blastduration.t_mad
## 1  19              -3.651525
## 2  27              -3.651525
## 3  38              -3.651525
## 4  58               3.025181
## 
## $T1_blastintensity.duration.t
##   Row T1_blastintensity.duration.t_mad
## 1  58                          3.12631
## 
## $T3_attitude.t
##   Row T3_attitude.t_mad
## 1   3         -3.869917
## 2   5         -3.851400
## 3  16         -3.014938
## 4  35         -3.851400
## 5  48         -3.627971
## 6  55         -3.860675
## 7  58         -3.929166
## 
## $T2_dehumanization.t
##   Row T2_dehumanization.t_mad
## 1  42               -3.630039
## 
## $T3_dehumanization.t
##   Row T3_dehumanization.t_mad
## 1  35               -3.404316
## 
## $T1_NOBAGS.t
##   Row T1_NOBAGS.t_mad
## 1  31        4.672522
## 
## $T1_CLS.t
##   Row T1_CLS.t_mad
## 1  48    -3.235848
## 
## $T3_CLS.t
##   Row T3_CLS.t_mad
## 1  38    -3.059297

After our transformations, there are 11 outliers in the waitlist group, 9 in the reflection group, and 12 in the meditation group.

Multivariate outliers

For multivariate outliers, it is recommended to use the Minimum Covariance Determinant, a robust version of the Mahalanobis distance (MCD, Leys et al., 2019; Thériault et al., 2024). However, when the N/p (N = sample size; p = number of parameters) ratio is smaller than 10, the MCD becomes unreliable (Thériault et al., 2024), which is the case here, so we use the Mahalanobis distance instead.

Leys, C., Delacre, M., Mora, Y. L., Lakens, D., & Ley, C. (2019). How to classify, detect, and manage univariate and multivariate outliers, with emphasis on pre-registration. International Review of Social Psychology, 32(1).

Thériault, R., Ben-Shachar, M. S., Patil, I., Lüdecke, D., Wiernik, B. M., & Makowski, D. (2024). Check your outliers! An introduction to identifying statistical outliers in R with easystats. Behavior Research Methods, 56(4), 4162-4172. https://doi.org/10.3758/s13428-024-02356-w
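To make this concrete, the N/p ratio can be computed directly (a quick sketch; with about 203 complete observations and 33 variables, it is around 6, well below 10):

nrow(na.omit(data[col.list])) / length(col.list)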

# We have to exclude two variables (the T1 and T2 dehumanization scores), whose
# values are too extreme following the transformations.
# Otherwise we get an error:
# Error in solve.default(cov, ...) : 
#  system is computationally singular: reciprocal condition number = 8.99059e-20
data.na <- na.omit(data[col.list])
x <- check_outliers(data.na[-c(15:16)], method = "mcd")
## Warning: The sample size is too small in your data, relative to the number of
##   variables, for MCD to be reliable.
##   You may try to increase the `percentage_central` argument (must be
##   between 0 and 1), or choose another method.
x
## 45 outliers detected: cases 7, 10, 14, 18, 21, 23, 24, 32, 36, 37, 38,
##   42, 54, 55, 58, 64, 71, 73, 76, 81, 82, 83, 86, 88, 89, 91, 94, 101,
##   103, 113, 114, 125, 127, 131, 132, 134, 145, 151, 153, 158, 160, 175,
##   182, 199, 200.
## - Based on the following method and threshold: mcd (61.098).
## - For variables: T1_blastintensity.t, T1_blastduration.t,
##   T1_blastintensity.duration.t, T2_blastintensity.t, T2_blastduration.t,
##   T2_blastintensity.duration.t, T3_blastintensity.t, T3_blastduration.t,
##   T3_blastintensity.duration.t, T1_BSCS.t, T1_BAQ.t, T1_attitude.t,
##   T2_attitude.t, T3_attitude.t, T3_dehumanization.t, T1_NOBAGS.t,
##   T2_NOBAGS.t, T3_NOBAGS.t, T1_WHS.t, T2_WHS.t, T3_WHS.t, T1_CLS.t,
##   T2_CLS.t, T3_CLS.t, T2_SMS5.t, T2_PANAS_pos.t, T2_PANAS_neg.t,
##   T2_Charity.t, T2_Familiarity.t, T2_memory.altruistic.t,
##   T2_memory.aggressive.t.

# So we have to rely on Mahalanobis instead

x <- check_outliers(data.na[-c(15:16)], method = "mahalanobis")
x
## 5 outliers detected: cases 23, 38, 58, 76, 182.
## - Based on the following method and threshold: mahalanobis (61.098).
## - For variables: T1_blastintensity.t, T1_blastduration.t,
##   T1_blastintensity.duration.t, T2_blastintensity.t, T2_blastduration.t,
##   T2_blastintensity.duration.t, T3_blastintensity.t, T3_blastduration.t,
##   T3_blastintensity.duration.t, T1_BSCS.t, T1_BAQ.t, T1_attitude.t,
##   T2_attitude.t, T3_attitude.t, T3_dehumanization.t, T1_NOBAGS.t,
##   T2_NOBAGS.t, T3_NOBAGS.t, T1_WHS.t, T2_WHS.t, T3_WHS.t, T1_CLS.t,
##   T2_CLS.t, T3_CLS.t, T2_SMS5.t, T2_PANAS_pos.t, T2_PANAS_neg.t,
##   T2_Charity.t, T2_Familiarity.t, T2_memory.altruistic.t,
##   T2_memory.aggressive.t.
# 5 outliers only! That's more reasonable!

There are 5 multivariate outliers according to the Mahalanobis distance. Let’s exclude them.

data <- data[-which(x), ]

Winsorization

Visual assessment and the MAD method confirm we have some outlier values (given we already excluded the multivariate outliers, only univariate ones remain). We could ignore them, but because they could have a disproportionate influence on the models, one recommendation is to winsorize them by bringing the values back to within 3 SD. Instead of the standard deviation around the mean, however, we use the absolute deviation around the median, as it is more robust to extreme observations. For a discussion, see:

Leys, C., Klein, O., Bernard, P., & Licata, L. (2013). Detecting outliers: Do not use standard deviation around the mean, use absolute deviation around the median. Journal of Experimental Social Psychology, 49(4), 764–766. https://doi.org/10.1016/j.jesp.2013.03.013
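Conceptually, winsorize_mad caps each value at the median plus or minus 3 MAD. A minimal sketch of that logic (we use rempsyc::winsorize_mad below, applied within each group):

winsorize_sketch <- function(x, criteria = 3) {
  med <- median(x, na.rm = TRUE)
  bound <- criteria * mad(x, na.rm = TRUE)
  # Cap values outside [med - bound, med + bound] at the boundaries
  pmin(pmax(x, med - bound), med + bound)
}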

# Winsorize variables of interest with MAD
data <- data %>% 
  group_by(T1_Group) %>% 
  mutate(across(all_of(col.list), 
                winsorize_mad,
                .names = "{.col}.w")) %>% 
  ungroup()

# Update col.list
col.list <- paste0(col.list, ".w")

Outliers are still present but were brought back within reasonable limits, where applicable.

Standardization

We can now standardize our variables.

data <- data %>%
  mutate(across(all_of(col.list), standardize, .names = "{col}.s"))

# Update col.list
col.list <- paste0(col.list, ".s")

We are now ready to move to the analyses proper.

# Let's replace original variables with the transformed variables
data[gsub(".t.w.s", "", col.list)] <- data[col.list]
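# (Caution: gsub() treats ".t.w.s" as a regular expression, in which "." matches
# any character; fixed = TRUE would be stricter, though the pattern is
# unambiguous enough here.)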

Baseline Group Differences

str_formula <- "~ T1_BSCS + T1_BAQ + T1_attitude + T1_dehumanization + T1_NOBAGS + T1_WHS + T1_CLS + T1_blastintensity.duration | T1_Group"
table_caption <- c("Table 2", "Baseline Differences by intervention group")

x <- flex_table1(str_formula, data = data, table_caption = table_caption)
x

Table 2
Baseline Differences by intervention group

Characteristic                Meditation (N = 56)   Reflection (N = 50)   Waitlist (N = 92)   F / χ²   df       p
T1_BSCS                       0.26 (± 0.96)         -0.086 (± 0.80)       -0.11 (± 1.1)       2.75     2, 195   .0664
T1_BAQ                        -0.24 (± 0.97)        0.18 (± 1.0)          0.045 (± 0.98)      2.53     2, 195   .0825
T1_attitude                   0.19 (± 1.1)          -0.22 (± 0.90)        0.0088 (± 0.99)     2.24     2, 195   .109
T1_dehumanization             0.096 (± 1.0)         -0.14 (± 1.0)         0.016 (± 0.97)      0.73     2, 195   .484
T1_NOBAGS                     -0.13 (± 1.0)         0.12 (± 1.1)          0.017 (± 0.93)      0.88     2, 195   .416
T1_WHS                        0.065 (± 1.1)         -0.17 (± 0.96)        0.051 (± 0.95)      0.94     2, 195   .393
T1_CLS                        0.18 (± 1.2)          -0.10 (± 0.95)        -0.055 (± 0.92)     1.31     2, 195   .272
T1_blastintensity.duration    0.031 (± 0.98)        0.21 (± 1.0)          -0.13 (± 0.99)      1.97     2, 195   .142

Note. Differences are determined by one-way ANOVA or Pearson's χ²-test.

save_flextable(x, "Results/table2.docx", overwrite = TRUE)

Differential Attrition

Are the dropout rates equivalent between groups? For this Differential Attrition Rate Test, we run a simple chi-squared test of independence on the dropout counts.

data_chi2 <- data.frame(
  group = c("Meditation", "Waitlist", "Reflection"),
  pre = c(152, 190, 154),
  post = c(104, 146, 78),
  follow = c(91, 124, 73)
)

data_chi2 <- data_chi2 %>% 
  mutate(dropout_pre_post = pre - post,
         dropout_post_follow = post - follow)
data_chi2
group pre post follow dropout_pre_post dropout_post_follow
Meditation 152 104 91 48 13
Waitlist 190 146 124 44 22
Reflection 154 78 73 76 5
percentages <- data_chi2 %>% 
  mutate(t2_drop_p = round(dropout_pre_post / pre * 100, 2),
         t3_drop_p = round(dropout_post_follow / post * 100, 2))
percentages
group pre post follow dropout_pre_post dropout_post_follow t2_drop_p t3_drop_p
Meditation 152 104 91 48 13 31.58 12.50
Waitlist 190 146 124 44 22 23.16 15.07
Reflection 154 78 73 76 5 49.35 6.41
x <- data_chi2[c(3, 5)]
x
post dropout_pre_post
104 48
146 44
78 76
chisq.test(x) %>% 
  report
## Effect sizes were labelled following Funder's (2019) recommendations.
## 
## The Pearson's Chi-squared test of independence between suggests that the effect
## is statistically significant, and medium (chi2 = 26.57, p < .001; Adjusted
## Cramer's v = 0.22, 95% CI [0.14, 1.00])
x <- data_chi2[c(4, 6)]
x
follow dropout_post_follow
91 13
124 22
73 5
chisq.test(x) %>% 
  report
## Effect sizes were labelled following Funder's (2019) recommendations.
## 
## The Pearson's Chi-squared test of independence between suggests that the effect
## is statistically not significant, and very small (chi2 = 3.57, p = 0.168;
## Adjusted Cramer's v = 0.07, 95% CI [0.00, 1.00])

Conclusion: There is indeed differential attrition, but only from Time 1 (pre) to Time 2 (post), not from Time 2 to Time 3 (follow-up), and the effect size is modest (adjusted Cramér's v = .22). This suggests that part of the results could be explained by who dropped out of the study, and from which group, as these participants may have had different characteristics. To answer this question, we conduct a new test.

For the Selective Attrition Test, which assesses whether the characteristics of the dropouts differ from those of the participants who remained in the study, we use a logistic regression model of the probability of dropout as a function of group and other relevant covariates.

data_496 <- readRDS("Data/finaldataset_n496.rds")

data_496 <- data_496 %>% 
  mutate(included = ID %in% data$ID,
         drop = as.numeric(!included))

x <- glm(drop ~ T1_Group + age + gender #+ T1_psycho.class
         #+ T1_virtual.reality + T1_medi.experience 
         #+ T1_quebec
        , data = data_496, family = binomial)

x %>% 
  report() %>% 
  nice_table(highlight = TRUE)
## Non-dataframe detected. Attempting to coerce to dataframe

Parameter               b       95% CI (b)       z       p        b*      95% CI (b*)
(Intercept)             1.72    [0.54, 2.93]     2.83    .005**   0.52    [0.18, 0.87]
T1 Group [Reflection]   0.22    [-0.26, 0.69]    0.89    .373     0.22    [-0.26, 0.69]
T1 Group [Waitlist]     -0.49   [-0.93, -0.05]   -2.16   .031*    -0.49   [-0.93, -0.05]
age                     -0.05   [-0.10, -0.00]   -2.08   .037*    -0.20   [-0.39, -0.01]
gender [Male]           0.05    [-0.42, 0.54]    0.22    .825     0.05    [-0.42, 0.54]
gender [Non-Binary]     14.84   [-68.36, NA]     0.02    .984     14.84   [-68.36, NA]

Fit: AIC = 661.47, AICc = 661.64, BIC = 686.71, Tjur's R2 = 0.03, Sigma = 1.00, Log_loss = 0.65
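Since logistic regression coefficients are on the log-odds scale, they can be exponentiated into odds ratios for easier interpretation (a quick sketch; confint() profiles the likelihood, so it can take a moment):

exp(cbind(OR = coef(x), confint(x)))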

# plot logistic regression curve
ggplot(data_496, aes(x=age, y=drop)) + 
  geom_point(alpha=.5) +
  stat_smooth(method="glm", se=FALSE, method.args = list(family=binomial))
## `geom_smooth()` using formula = 'y ~ x'

# Are groups from the whole sample equivalent on age?
str_formula <- "~ age + gender + T1_psycho.class + T1_medi.experience | T1_Group"
table_caption <- c("Table 1", "Sample demographics split by intervention group")

data_496 <- data_496 %>% 
  var_labels(
    age = "Age",
    gender = "Gender",
    T1_medi.experience = "Meditation experience",
    T3_post.medipractice = "Practiced meditation over last 13 weeks",
    T1_psycho.class = "Already completed a psychology course")

x <- flex_table1(str_formula, data = data_496, table_caption = table_caption)
x

Table 1
Sample demographics split by intervention group

Characteristic                             Meditation (N = 152)   Reflection (N = 154)   Waitlist (N = 190)   F / χ²   df          p
Age                                        25 (± 4.4)             25 (± 4.1)             24 (± 3.5)           2.08     2, 309.47   .126
Gender                                                                                                        5.1      4           .277
  Female                                   121 (80 %)             121 (79 %)             158 (83 %)
  Male                                     28 (18 %)              32 (21 %)              32 (17 %)
  Non-Binary                               3 (2 %)                1 (1 %)                0 (0 %)
Already completed a psychology course *                                                                       9        2           .0111
  No                                       78 (51 %)              68 (44 %)              67 (35 %)
  Yes                                      74 (49 %)              86 (56 %)              123 (65 %)
Meditation experience                                                                                                              .855 ͣ
  < 5 hours                                123 (81 %)             122 (79 %)             156 (82 %)
  > 10 hours                               1 (1 %)                0 (0 %)                1 (1 %)
  Between 5 and 10 hours                   28 (18 %)              32 (21 %)              33 (17 %)

Note. Differences are determined by one-way ANOVA or Pearson's χ²-test.

ͣ Fisher's exact test, expected cell-count ≤ 1.

# data_496 %>% 
#   mutate(gender = as.factor(gender)) %>% 
#   group_by(gender) %>% 
#   summarize(drop = sum(drop) / n() * 100)

Conclusion: Group does predict dropout: relative to the meditation group (the reference level), participants in the waitlist group were significantly less likely to drop out, and older participants were also slightly less likely to drop out. Gender, in contrast, was not a significant predictor.

To check whether other variables relate to dropout rates, we would need to compute the scale means again, with transformations, winsorization, and standardization. However, that does not seem necessary, since the main question was whether group predicted who dropped out.

Another, perhaps more appropriate possibility would be a survival analysis, as suggested by Demetri Pananos (see the reference below). This would likely require reshaping the data, but survival analysis is specifically designed for "time to event" data, which is what we have here.

library(survival)
library(ranger)
library(ggplot2)
library(dplyr)
library(ggfortify)

For a survival analysis, we would need a "Time" variable indicating the last questionnaire session each participant completed (T2 or T3), but we do not currently have this information.
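If such a variable existed, the analysis could look like the following (a hypothetical sketch: "Time" and "status" do not exist in the current dataset, so the code is not run):

# fit <- survfit(Surv(Time, status) ~ T1_Group, data = data_496)
# ggsurvplot(fit)  # Kaplan-Meier curves by group (survminer)
# coxph(Surv(Time, status) ~ T1_Group + age + gender, data = data_496)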

Reference for this section: https://stats.stackexchange.com/a/631061/195567

Let’s now simply compare the means of dropouts and non-dropouts.

means_comparisons <- data_496 %>% 
  mutate(included = ifelse(included, "included", "drop"),
         across(any_of(col.list_short), standardize)) %>% 
  filter(T1_Group == "Reflection") %>% 
  summarize(across(any_of(col.list_short), \(x) mean(x, na.rm = TRUE)),
            .by = "included") %>% 
  pivot_longer(-included)

means_comparisons2 <- cbind(
  filter(means_comparisons[1:3], included == "included"),
  filter(means_comparisons[c(1, 3)], included == "drop"))
means_comparisons2 <- means_comparisons2[-c(1, 4)]
names(means_comparisons2)[2:3] <- c("value_included", "value_drop")

means_comparisons2 %>% 
  mutate(diff = value_included - value_drop) %>% 
  nice_table()

name                          value_included   value_drop   diff
T1_blastintensity             0.20             0.01         0.19
T1_blastduration              0.10             0.05         0.06
T1_blastintensity.duration    0.15             0.06         0.09
T2_blastintensity             -0.22            0.15         -0.37
T2_blastduration              -0.26            0.01         -0.28
T2_blastintensity.duration    -0.23            0.24         -0.47
T3_blastintensity             -0.09            0.43         -0.52
T3_blastduration              -0.13            0.05         -0.18
T3_blastintensity.duration    -0.08            0.61         -0.69
T2_memory.altruistic          -0.20            -0.01        -0.18
T2_memory.aggressive          -0.04            -0.04        -0.01

Conclusion: Participants from the reflection group who dropped out were LESS aggressive at Time 1 but MORE aggressive at Times 2 and 3. This supports the interpretation that the dropouts are why the reflection group consistently appears to start on the "bad end" of variables and to end up on the "right end" at Times 2 and 3 (although, alternatively, perhaps their scores are worse at Times 2 and 3 precisely because they stopped the training). Let's now test this statistically.

col.list_496 <- data_496 %>% 
  select(any_of(col.list_short)) %>% 
  names

nice_t_test(
  data = data_496,
  response = col.list_496,
  group = "included"
) %>% 
  nice_table(highlight = TRUE)
## Using independent samples t-test. 
## 
## Using Welch t-test (base R's default; cf. https://doi.org/10.5334/irsp.82).
## For the Student t-test, use `var.equal = TRUE`. 
## 

Dependent Variable            t       df       p      d       95% CI
T1_blastintensity             0.71    443.29   .476   0.06    [-0.12, 0.24]
T1_blastduration              0.85    449.97   .395   0.08    [-0.10, 0.26]
T1_blastintensity.duration    1.13    453.46   .258   0.10    [-0.08, 0.28]
T2_blastintensity             1.72    157.59   .088   0.23    [-0.02, 0.47]
T2_blastduration              0.93    168.73   .354   0.12    [-0.12, 0.37]
T2_blastintensity.duration    1.90    149.26   .060   0.26    [0.01, 0.50]
T3_blastintensity             0.49    38.30    .625   0.11    [-0.27, 0.48]
T3_blastduration              -0.11   39.30    .912   -0.02   [-0.40, 0.35]
T3_blastintensity.duration    0.55    37.48    .587   0.12    [-0.25, 0.50]
T2_memory.altruistic          1.11    102.97   .271   0.19    [-0.06, 0.43]
T2_memory.aggressive          0.05    154.90   .961   0.01    [-0.24, 0.25]

Conclusion: None of the t tests comparing dropouts and completers is significant, indicating that dropouts and completers did not differ on these variables, although the compound variable blast intensity × duration at Time 2 was marginally significant (p = .060).

Regression to the Mean

data %>% 
  filter(T1_Group == "Reflection") %>% 
  describe_distribution(col.list) %>% 
  write.csv("reg_to_mean.csv")

cov(data$T1_CLS, data$T2_CLS)
## [1] 0.7575881
cov(data$T1_blastintensity.duration, data$T2_blastintensity.duration)
## [1] 0.7169151
cor(data$T1_blastintensity.duration, data$T2_blastintensity.duration)
## [1] 0.7169151
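# Note: cov() equals cor() here because these variables were standardized
# earlier (SD = 1), so covariance and correlation coincide.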
# library(regtomean)
# 
# mee_chua <- replicate_data(50,60,"Before","After",data=language_test)
# ## sort mu ##
# mee_chua_sort <- mee_chua[with(mee_chua,order(mu)),]
# 
# meechua_reg(mee_chua_sort)

Contrasts (Group Differences)

In this section, we perform the planned contrasts.

Time 2

# Specify the order of factor levels for "Group". 
# Otherwise R will alphabetize them.
data$T1_Group <- factor(data$T1_Group, levels = c("Meditation", "Reflection", "Waitlist"))

# Define our dependent variables
DV <- data %>% select(T2_NOBAGS:T2_Charity) %>% names

# First column (which variable)
Variable <- rep(DV, each = 3)

# Second column (which comparison)
Comparison <- rep(c("MeditationvsCTR", 
                    "ReflectionvsCTR", 
                    "MeditationvsReflection"), 
                  length(DV))
# length(DV) == number of DVs (one comparison triplet per DV)

# Make list of all formulas
formulas <- c(
  "T2_NOBAGS ~ T1_Group * T1_NOBAGS",
  "T2_attitude ~ T1_Group * T1_attitude",
  "T2_dehumanization ~ T1_Group * T1_dehumanization",
  "T2_IAT ~ T1_Group * T1_IAT",
  "T2_SMS5 ~ T1_Group",
  "T2_blastintensity ~ T1_Group * T1_blastintensity",
  "T2_blastduration ~ T1_Group * T1_blastduration",
  "T2_blastintensity.duration ~ T1_Group * T1_blastintensity.duration",
  "T2_memory.altruistic ~ T1_Group",
  "T2_memory.aggressive ~ T1_Group",
  "T2_WHS ~ T1_Group * T1_WHS",
  "T2_CLS ~ T1_Group * T1_CLS",
  "T2_Charity ~ T1_Group * T2_Familiarity",
  "T2_PANAS_pos ~ T1_Group",
  "T2_PANAS_neg ~ T1_Group"
)

# Make list of all models
models.list <- sapply(formulas, lm, data = data, simplify = FALSE, USE.NAMES = TRUE)
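# (simplify = FALSE with USE.NAMES = TRUE returns a named list, so each model
# can later be retrieved by its formula string, e.g.,
# models.list$`T2_attitude ~ T1_Group * T1_attitude`)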

# Attempt with nice_lm_contrasts 
set.seed(100)
x <- nice_lm_contrasts(models.list, group = "T1_Group", data = data)

table3 <- nice_table(x, highlight = TRUE)
table3

Dependent Variable            Comparison                df    t       p           d       95% CI
T2_NOBAGS                     Meditation - Reflection   192   0.02    .987        -0.12   [-0.53, 0.28]
                              Meditation - Waitlist     192   -1.54   .124        -0.30   [-0.64, 0.04]
                              Reflection - Waitlist     192   -1.51   .133        -0.18   [-0.51, 0.21]
T2_attitude                   Meditation - Reflection   192   -0.79   .431        0.14    [-0.26, 0.55]
                              Meditation - Waitlist     192   1.52    .129        0.31    [-0.04, 0.63]
                              Reflection - Waitlist     192   2.33    .021*       0.17    [-0.20, 0.51]
T2_dehumanization             Meditation - Reflection   192   0.45    .651        0.21    [-0.18, 0.64]
                              Meditation - Waitlist     192   0.67    .503        0.12    [-0.21, 0.43]
                              Reflection - Waitlist     192   0.14    .887        -0.10   [-0.49, 0.25]
T2_IAT                        Meditation - Reflection   192   -0.71   .480        -0.34   [-0.72, 0.02]
                              Meditation - Waitlist     192   0.45    .652        0.07    [-0.26, 0.38]
                              Reflection - Waitlist     192   1.21    .228        0.41    [-0.01, 0.77]
T2_SMS5                       Meditation - Reflection   195   -0.98   .326        -0.19   [-0.56, 0.15]
                              Meditation - Waitlist     195   -1.81   .072        -0.31   [-0.64, 0.04]
                              Reflection - Waitlist     195   -0.66   .512        -0.12   [-0.46, 0.23]
T2_blastintensity             Meditation - Reflection   192   2.52    .012*       0.20    [-0.19, 0.62]
                              Meditation - Waitlist     192   -1.32   .188        -0.01   [-0.32, 0.36]
                              Reflection - Waitlist     192   -4.03   < .001***   -0.21   [-0.56, 0.14]
T2_blastduration              Meditation - Reflection   192   2.60    .010**      0.27    [-0.11, 0.63]
                              Meditation - Waitlist     192   -1.33   .185        -0.06   [-0.38, 0.27]
                              Reflection - Waitlist     192   -4.14   < .001***   -0.33   [-0.66, 0.03]
T2_blastintensity.duration    Meditation - Reflection   192   2.73    .007**      0.24    [-0.16, 0.66]
                              Meditation - Waitlist     192   -1.37   .173        -0.03   [-0.38, 0.30]
                              Reflection - Waitlist     192   -4.31   < .001***   -0.27   [-0.62, 0.10]
T2_memory.altruistic          Meditation - Reflection   195   1.58    .116        0.31    [-0.09, 0.66]
                              Meditation - Waitlist     195   -0.71   .481        -0.12   [-0.45, 0.22]
                              Reflection - Waitlist     195   -2.43   .016*       -0.43   [-0.77, -0.08]
T2_memory.aggressive          Meditation - Reflection   195   0.69    .492        0.13    [-0.28, 0.53]
                              Meditation - Waitlist     195   -0.16   .876        -0.03   [-0.36, 0.30]
                              Reflection - Waitlist     195   -0.91   .362        -0.16   [-0.52, 0.18]
T2_WHS                        Meditation - Reflection   192   -0.85   .396        0.03    [-0.35, 0.45]
                              Meditation - Waitlist     192   0.39    .694        0.06    [-0.30, 0.41]
                              Reflection - Waitlist     192   1.32    .189        0.02    [-0.31, 0.37]
T2_CLS                        Meditation - Reflection   192   -1.96   .051        -0.03   [-0.45, 0.39]
                              Meditation - Waitlist     192   1.99    .048*       0.40    [0.07, 0.71]
                              Reflection - Waitlist     192   4.11    < .001***   0.43    [0.05, 0.78]
T2_Charity                    Meditation - Reflection   192   -0.43   .665        -0.10   [-0.51, 0.34]
                              Meditation - Waitlist     192   -1.38   .169        -0.19   [-0.55, 0.16]
                              Reflection - Waitlist     192   -0.85   .397        -0.10   [-0.44, 0.26]
T2_PANAS_pos                  Meditation - Reflection   195   1.22    .225        0.24    [-0.13, 0.64]
                              Meditation - Waitlist     195   2.74    .007**      0.46    [0.12, 0.79]
                              Reflection - Waitlist     195   1.30    .197        0.23    [-0.12, 0.56]
T2_PANAS_neg                  Meditation - Reflection   195   1.08    .282        0.21    [-0.22, 0.57]
                              Meditation - Waitlist     195   1.45    .148        0.25    [-0.09, 0.59]
                              Reflection - Waitlist     195   0.21    .837        0.04    [-0.30, 0.39]

Time 3

# Make list of all formulas
formulas2 <- c(
  "T3_NOBAGS ~ T1_Group * T1_NOBAGS",
  "T3_attitude ~ T1_Group * T1_attitude",
  "T3_dehumanization ~ T1_Group * T1_dehumanization",
  "T3_IAT ~ T1_Group * T1_IAT",
  "T3_blastintensity ~ T1_Group * T1_blastintensity",
  "T3_blastduration ~ T1_Group * T1_blastduration",
  "T3_blastintensity.duration ~ T1_Group * T1_blastintensity.duration",
  "T3_WHS ~ T1_Group * T1_WHS",
  "T3_CLS ~ T1_Group * T1_CLS"
)

# Make list of all models
models.list2 <- sapply(formulas2, lm, data = data, simplify = FALSE, USE.NAMES = TRUE)

## Attempt with nice_lm_contrasts
set.seed(100)
x2 <- nice_lm_contrasts(models.list2, group = "T1_Group", data = data)

table4 <- nice_table(x2, highlight = TRUE)
table4

Dependent Variable            Comparison                df    t       p         d       95% CI
T3_NOBAGS                     Meditation - Reflection   192   -1.43   .154      -0.37   [-0.80, 0.04]
                              Meditation - Waitlist     192   -1.42   .157      -0.28   [-0.63, 0.03]
                              Reflection - Waitlist     192   0.22    .828      0.09    [-0.27, 0.44]
T3_attitude                   Meditation - Reflection   192   -0.34   .736      0.23    [-0.16, 0.62]
                              Meditation - Waitlist     192   1.88    .062      0.35    [0.02, 0.68]
                              Reflection - Waitlist     192   2.17    .031*     0.12    [-0.25, 0.49]
T3_dehumanization             Meditation - Reflection   192   1.00    .318      0.29    [-0.11, 0.69]
                              Meditation - Waitlist     192   0.82    .416      0.12    [-0.19, 0.41]
                              Reflection - Waitlist     192   -0.33   .745      -0.17   [-0.53, 0.21]
T3_IAT                        Meditation - Reflection   192   -0.05   .961      -0.18   [-0.56, 0.20]
                              Meditation - Waitlist     192   1.23    .220      0.20    [-0.11, 0.51]
                              Reflection - Waitlist     192   1.22    .225      0.38    [-0.01, 0.73]
T3_blastintensity             Meditation - Reflection   192   1.31    .191      0.07    [-0.29, 0.46]
                              Meditation - Waitlist     192   -1.65   .100      -0.08   [-0.39, 0.27]
                              Reflection - Waitlist     192   -3.01   .003**    -0.16   [-0.52, 0.19]
T3_blastduration              Meditation - Reflection   192   1.49    .138      0.15    [-0.23, 0.58]
                              Meditation - Waitlist     192   -1.13   .261      -0.05   [-0.38, 0.29]
                              Reflection - Waitlist     192   -2.72   .007**    -0.20   [-0.55, 0.16]
T3_blastintensity.duration    Meditation - Reflection   192   1.32    .189      0.10    [-0.31, 0.47]
                              Meditation - Waitlist     192   -1.38   .169      -0.06   [-0.39, 0.29]
                              Reflection - Waitlist     192   -2.77   .006**    -0.16   [-0.51, 0.19]
T3_WHS                        Meditation - Reflection   192   -1.22   .226      0.03    [-0.38, 0.48]
                              Meditation - Waitlist     192   1.85    .066      0.21    [-0.14, 0.52]
                              Reflection - Waitlist     192   3.11    .002**    0.18    [-0.17, 0.54]
T3_CLS                        Meditation - Reflection   192   -1.79   .075      -0.03   [-0.43, 0.39]
                              Meditation - Waitlist     192   1.60    .112      0.35    [-0.00, 0.65]
                              Reflection - Waitlist     192   3.53    .001***   0.38    [-0.01, 0.71]

Marginal Means

In this section, we generate two plot types based on marginal means: marginal means plots and lighthouse plots, to better illustrate the pairwise contrasts (only for significant contrasts from the previous step).

Time 2

Attitudes

The first plot below represents the adjusted (marginal) means, after taking into account covariates such as Time 1 measurements.

means <- estimate_means(models.list$`T2_attitude ~ T1_Group * T1_attitude`)

T2_attitude_means <- means

ggplot(means, aes(x = T1_Group, y = Mean)) +
  geom_line(aes(group = 1)) +
  geom_pointrange(aes(color = T1_Group, ymin = CI_low, ymax = CI_high)) +
  ylab(paste("Adjusted", "Attitude", "Mean")) +
  theme_modern()

contrasts <- estimate_contrasts(models.list$`T2_attitude ~ T1_Group * T1_attitude`)
plot(contrasts, estimate_means(models.list$`T2_attitude ~ T1_Group * T1_attitude`)) +
  ylab(paste("Adjusted", "Attitude", "Mean")) +
  theme_modern()

The plot above is a lighthouse plot. These plots (here and in the following subsections) represent the estimated means and their CI range (in black), while the grey areas show the CI range of the difference (as compared to the point estimate).

PANAS (Positive Affect)

means <- estimate_means(models.list$`T2_PANAS_pos ~ T1_Group`)

ggplot(means, aes(x = T1_Group, y = Mean)) +
  geom_line(aes(group = 1)) +
  geom_pointrange(aes(color = T1_Group, ymin = CI_low, ymax = CI_high)) +
  ylab(paste("Adjusted", "Positive Affect", "Mean")) +
  theme_modern()

contrasts <- estimate_contrasts(models.list$`T2_PANAS_pos ~ T1_Group`)
plot(contrasts, estimate_means(models.list$`T2_PANAS_pos ~ T1_Group`)) +
  ylab(paste("Adjusted", "Positive Affect", "Mean")) +
  theme_modern()

violin_PANAS_pos <- nice_violin(data, 
            group = "T1_Group", 
            response = "T2_PANAS_pos",
            obs = TRUE,
            comp1 = 1,
            comp2 = 3,
            has.d = TRUE,
            d.x = 3.2,
            d.y = 2,
            ytitle = "Positive Affect")
violin_PANAS_pos

violin_PANAS_neg <- nice_violin(data, 
            group = "T1_Group", 
            response = "T2_PANAS_neg",
            obs = TRUE,
            ytitle = "Negative Affect")
violin_PANAS_neg


Aggression (Blast Intensity)

means <- estimate_means(models.list$`T2_blastintensity ~ T1_Group * T1_blastintensity`)

ggplot(means, aes(x = T1_Group, y = Mean)) +
  geom_line(aes(group = 1)) +
  geom_pointrange(aes(color = T1_Group, ymin = CI_low, ymax = CI_high)) +
  ylab(paste("Adjusted", "Blast Intensity", "Mean")) +
  theme_modern()

contrasts <- estimate_contrasts(
  models.list$`T2_blastintensity ~ T1_Group * T1_blastintensity`)
plot(contrasts, estimate_means(
  models.list$`T2_blastintensity ~ T1_Group * T1_blastintensity`)) +
  ylab(paste("Adjusted", "Blast Intensity", "Mean")) +
  theme_modern()


Aggression (Blast Duration)

means <- estimate_means(models.list$`T2_blastduration ~ T1_Group * T1_blastduration`)

ggplot(means, aes(x = T1_Group, y = Mean)) +
  geom_line(aes(group = 1)) +
  geom_pointrange(aes(color = T1_Group, ymin = CI_low, ymax = CI_high)) +
  ylab(paste("Adjusted", "Blast Duration", "Mean")) +
  theme_modern()

contrasts <- estimate_contrasts(
  models.list$`T2_blastduration ~ T1_Group * T1_blastduration`)
plot(contrasts, estimate_means(
  models.list$`T2_blastduration ~ T1_Group * T1_blastduration`)) +
  ylab(paste("Adjusted", "Blast Duration", "Mean")) +
  theme_modern()


Aggression (Blast Intensity * Duration)

means <- estimate_means(
  models.list$`T2_blastintensity.duration ~ T1_Group * T1_blastintensity.duration`)

ggplot(means, aes(x = T1_Group, y = Mean)) +
  geom_line(aes(group = 1)) +
  geom_pointrange(aes(color = T1_Group, ymin = CI_low, ymax = CI_high)) +
  ylab(paste("Adjusted", "Blast Intensity * Duration", "Mean")) +
  theme_modern()

contrasts <- estimate_contrasts(
  models.list$`T2_blastintensity.duration ~ T1_Group * T1_blastintensity.duration`)
plot(contrasts, estimate_means(
  models.list$`T2_blastintensity.duration ~ T1_Group * T1_blastintensity.duration`)) +
  ylab(paste("Adjusted", "Blast Intensity * Duration", "Mean")) +
  theme_modern()

Lighthouse plots represent the estimated means and their CI range (in black), while the grey areas show the CI range of the difference (as compared to the point estimate).

Altruistic Memory

means <- estimate_means(
  models.list$`T2_memory.altruistic ~ T1_Group`)

ggplot(means, aes(x = T1_Group, y = Mean)) +
  geom_line(aes(group = 1)) +
  geom_pointrange(aes(color = T1_Group, ymin = CI_low, ymax = CI_high)) +
  ylab(paste("Adjusted", "ms to remember altruistic event", "Mean")) +
  theme_modern()

contrasts <- estimate_contrasts(
  models.list$`T2_memory.altruistic ~ T1_Group`)
plot(contrasts, estimate_means(
  models.list$`T2_memory.altruistic ~ T1_Group`)) +
  ylab(paste("Adjusted", "ms to remember altruistic event", "Mean")) +
  theme_modern()

violin_memory.altruistic <- nice_violin(data, 
            group = "T1_Group", 
            response = "T2_memory.altruistic",
            obs = TRUE,
            has.d = TRUE,
            d.x = 2.75,
            d.y = 2,
            signif_annotation = c("*"),
            signif_yposition = 3,
            signif_xmin = 2,
            signif_xmax = 3,
            ytitle = "ms to Remember Altruistic Event")
violin_memory.altruistic

violin_memory.aggressive <- nice_violin(data, 
            group = "T1_Group", 
            response = "T2_memory.aggressive",
            obs = TRUE,
            ytitle = "ms to Remember Aggressive Event")
violin_memory.aggressive

Lighthouse plots represent the estimated means and their CI range (in black), while the grey areas show the CI range of the difference (as compared to the point estimate).

Charity

means <- estimate_means(
  models.list$`T2_Charity ~ T1_Group`)

ggplot(means, aes(x = T1_Group, y = Mean)) +
  geom_line(aes(group = 1)) +
  geom_pointrange(aes(color = T1_Group, ymin = CI_low, ymax = CI_high)) +
  ylab(paste("Adjusted", "Charity Donation", "Mean")) +
  theme_modern()

contrasts <- estimate_contrasts(
  models.list$`T2_Charity ~ T1_Group`)
plot(contrasts, estimate_means(
  models.list$`T2_Charity ~ T1_Group`)) +
  ylab(paste("Adjusted", "Charity Donation", "Mean")) +
  theme_modern()

violin_charity <- nice_violin(data, 
            group = "T1_Group", 
            response = "T2_Charity",
            obs = TRUE,
            ytitle = "Charity Donation")
violin_charity

Lighthouse plots represent the estimated means and their CI range (in black), while the grey areas show the CI range of the difference (as compared to the point estimate).

Compassion

means <- estimate_means(models.list$`T2_CLS ~ T1_Group * T1_CLS`)

ggplot(means, aes(x = T1_Group, y = Mean)) +
  geom_line(aes(group = 1)) +
  geom_pointrange(aes(color = T1_Group, ymin = CI_low, ymax = CI_high)) +
  ylab(paste("Adjusted", "Compassion", "Mean")) +
  theme_modern()

contrasts <- estimate_contrasts(models.list$`T2_CLS ~ T1_Group * T1_CLS`)
plot(contrasts, estimate_means(models.list$`T2_CLS ~ T1_Group * T1_CLS`)) +
  ylab(paste("Adjusted", "Compassion", "Mean")) +
  theme_modern()

Lighthouse plots represent the estimated means and their CI range (in black), while the grey areas show the CI range of the difference (as compared to the point estimate).

Time 3

Aggression (Blast Intensity)

means <- estimate_means(models.list2$`T3_blastintensity ~ T1_Group * T1_blastintensity`)

ggplot(means, aes(x = T1_Group, y = Mean)) +
  geom_line(aes(group = 1)) +
  geom_pointrange(aes(color = T1_Group, ymin = CI_low, ymax = CI_high)) +
  ylab(paste("Adjusted", "Blast Intensity", "Mean")) +
  theme_modern()

contrasts <- estimate_contrasts(
  models.list2$`T3_blastintensity ~ T1_Group * T1_blastintensity`)
plot(contrasts, estimate_means(
  models.list2$`T3_blastintensity ~ T1_Group * T1_blastintensity`)) +
  ylab(paste("Adjusted", "Blast Intensity", "Mean")) +
  theme_modern()

Lighthouse plots represent the estimated means and their CI range (in black), while the grey areas show the CI range of the difference (as compared to the point estimate).

Aggression (Blast Duration)

means <- estimate_means(models.list2$`T3_blastduration ~ T1_Group * T1_blastduration`)

ggplot(means, aes(x = T1_Group, y = Mean)) +
  geom_line(aes(group = 1)) +
  geom_pointrange(aes(color = T1_Group, ymin = CI_low, ymax = CI_high)) +
  ylab(paste("Adjusted", "Blast Duration", "Mean")) +
  theme_modern()

contrasts <- estimate_contrasts(
  models.list2$`T3_blastduration ~ T1_Group * T1_blastduration`)
plot(contrasts, estimate_means(
  models.list2$`T3_blastduration ~ T1_Group * T1_blastduration`)) +
  ylab(paste("Adjusted", "Blast Duration", "Mean")) +
  theme_modern()

Aggression (Blast Intensity * Duration)

means <- estimate_means(
  models.list2$`T3_blastintensity.duration ~ T1_Group * T1_blastintensity.duration`)

ggplot(means, aes(x = T1_Group, y = Mean)) +
  geom_line(aes(group = 1)) +
  geom_pointrange(aes(color = T1_Group, ymin = CI_low, ymax = CI_high)) +
  ylab(paste("Adjusted", "Blast Intensity * Duration", "Mean")) +
  theme_modern()

contrasts <- estimate_contrasts(
  models.list2$`T3_blastintensity.duration ~ T1_Group * T1_blastintensity.duration`)
plot(contrasts, estimate_means(
  models.list2$`T3_blastintensity.duration ~ T1_Group * T1_blastintensity.duration`)) +
  ylab(paste("Adjusted", "Blast Intensity * Duration", "Mean")) +
  theme_modern()

Lighthouse plots represent the estimated means and their CI range (in black), while the grey areas show the CI range of the difference (as compared to the point estimate).

Helping

means <- estimate_means(models.list2$`T3_WHS ~ T1_Group * T1_WHS`)

ggplot(means, aes(x = T1_Group, y = Mean)) +
  geom_line(aes(group = 1)) +
  geom_pointrange(aes(color = T1_Group, ymin = CI_low, ymax = CI_high)) +
  ylab(paste("Adjusted", "Willingness to Help", "Mean")) +
  theme_modern()

contrasts <- estimate_contrasts(
  models.list2$`T3_WHS ~ T1_Group * T1_WHS`)
plot(contrasts, estimate_means(
  models.list2$`T3_WHS ~ T1_Group * T1_WHS`)) +
  ylab(paste("Adjusted", "Willingness to Help", "Mean")) +
  theme_modern()

Lighthouse plots represent the estimated means and their CI range (in black), while the grey areas show the CI range of the difference (as compared to the point estimate).

Compassion

means <- estimate_means(models.list2$`T3_CLS ~ T1_Group * T1_CLS`)

ggplot(means, aes(x = T1_Group, y = Mean)) +
  geom_line(aes(group = 1)) +
  geom_pointrange(aes(color = T1_Group, ymin = CI_low, ymax = CI_high)) +
  ylab(paste("Adjusted", "Compassion", "Mean")) +
  theme_modern()

contrasts <- estimate_contrasts(models.list2$`T3_CLS ~ T1_Group * T1_CLS`)
plot(contrasts, estimate_means(models.list2$`T3_CLS ~ T1_Group * T1_CLS`)) +
  ylab(paste("Adjusted", "Compassion", "Mean")) +
  theme_modern()

Lighthouse plots represent the estimated means and their CI range (in black), while the grey areas show the CI range of the difference (as compared to the point estimate).

Over time (T1, T2, T3)

NOBAGS

time <- data %>% select(ends_with("NOBAGS")) %>% names()
p_NOBAGS <- plot_means_over_time(
  data,
  response = time,
  group = "T1_Group",
  ytitle = "Aggression Attitude",
  error_bars = TRUE)
p_NOBAGS

Error bars represent 95% confidence intervals adjusted for within-subject variance using the method of Morey (2008).
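
For reference, here is a minimal sketch of what this adjustment does, assuming wide-format wave columns for a single outcome (the column names in the example are illustrative, and this is not the exact internals of plot_means_over_time): each participant's scores are centred on their own mean and re-centred on the grand mean to remove between-subject variability, and the resulting variances are then inflated by M / (M - 1), where M is the number of repeated measures (Morey, 2008).

# Minimal sketch of Morey (2008)-adjusted 95% CIs for repeated measures
morey_ci <- function(df, cols) {
  x <- as.matrix(df[cols])
  n <- nrow(x)
  M <- length(cols)
  # Cousineau normalization: remove between-subject variability
  norm <- x - rowMeans(x) + mean(x)
  # Morey (2008) correction: inflate normalized variances by M / (M - 1)
  se <- sqrt(apply(norm, 2, var) * M / (M - 1) / n)
  data.frame(time = cols,
             mean = colMeans(x),
             ci_low = colMeans(x) - qt(.975, n - 1) * se,
             ci_high = colMeans(x) + qt(.975, n - 1) * se)
}
# e.g., per group: morey_ci(subset(data, T1_Group == "Meditation"),
#                           c("T1_NOBAGS", "T2_NOBAGS", "T3_NOBAGS"))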

Attitudes

time <- data %>% select(ends_with("attitude")) %>% names()
p_attitude <- plot_means_over_time(
  data,
  response = time,
  group = "T1_Group",
  error_bars = TRUE,
  ytitle = "Intergroup Attitude (Positive)",
  significance_bars_x = c(2.15, 3.15),
  significance_stars = c("*", "*"),
  significance_stars_x = c(2.25, 3.25),
  significance_stars_y = list(c("Reflection", "Waitlist", time = 2),
                              c("Reflection", "Waitlist", time = 3)))
p_attitude

Error bars represent 95% confidence intervals adjusted for within-subject variance using the method of Morey (2008).

Dehumanization

time <- data %>% select(ends_with("dehumanization")) %>% names()
p_dehumanization <- plot_means_over_time(
  data,
  response = time,
  group = "T1_Group",
  error_bars = TRUE,
  ytitle = "Humanization")
p_dehumanization

Error bars represent 95% confidence intervals adjusted for within-subject variance using the method of Morey (2008).

IAT

time <- data %>% select(ends_with("IAT")) %>% names()
p_IAT <- plot_means_over_time(
  data,
  response = time,
  group = "T1_Group",
  ytitle = "Implicit Aggression (IAT)",
  error_bars = TRUE)
p_IAT

Error bars represent 95% confidence intervals adjusted for within-subject variance using the method of Morey (2008).

Aggression (Blast Intensity)

time <- data %>% select(ends_with("blastintensity")) %>% names()
p_blastintensity <- plot_means_over_time(
  data,
  response = time,
  group = "T1_Group",
  error_bars = TRUE,
  significance_bars_x = c(1.82, 2.16, 3.17),
  significance_stars = c("**", "***", "**"),
  significance_stars_x = c(1.62, 2.45, 3.35),
  significance_stars_y = list(c("Meditation", "Reflection", time = 2),
                              c("Reflection", "Waitlist", time = 2),
                              c("Reflection", "Waitlist", time = 3)))
p_blastintensity

Error bars represent 95% confidence intervals adjusted for within-subject variance using the method of Morey (2008).

Aggression (Blast Duration)

time <- data %>% select(ends_with("blastduration")) %>% names()
p_blastduration <- plot_means_over_time(
  data,
  response = time,
  group = "T1_Group",
  error_bars = TRUE,
  significance_bars_x = c(1.82, 2.16, 3.17),
  significance_stars = c("**", "***", "**"),
  significance_stars_x = c(1.62, 2.45, 3.35),
  significance_stars_y = list(c("Meditation", "Reflection", time = 2),
                              c("Reflection", "Waitlist", time = 2),
                              c("Reflection", "Waitlist", time = 3)))
p_blastduration

Error bars represent 95% confidence intervals adjusted for within-subject variance using the method of Morey (2008).

Aggression (Blast Intensity * Duration)

time <- data %>% select(ends_with("blastintensity.duration")) %>% names()
p_blastintensity.duration <- plot_means_over_time(
  data,
  response = time,
  group = "T1_Group",
  error_bars = TRUE,
  ytitle = "Behavioural Aggression",
  significance_bars_x = c(1.82, 2.16, 3.17),
  significance_stars = c("**", "***", "**"),
  significance_stars_x = c(1.66, 2.40, 3.33),
  significance_stars_y = list(c("Meditation", "Reflection", time = 2),
                              c("Reflection", "Waitlist", time = 2),
                              c("Reflection", "Waitlist", time = 3)))
p_blastintensity.duration

Error bars represent 95% confidence intervals adjusted for within-subject variance using the method of Morey (2008).

Helping

time <- data %>% select(ends_with("WHS")) %>% names()
p_WHS <- plot_means_over_time(
  data,
  response = time,
  group = "T1_Group",
  error_bars = TRUE,
  ytitle = "Willingness to Help",
  significance_bars_x = 3.2,
  significance_stars = "**",
  significance_stars_x = 3.35,
  significance_stars_y = list(c("Reflection", "Waitlist", time = 3)))
p_WHS

Error bars represent 95% confidence intervals adjusted for within-subject variance using the method of Morey (2008).

Compassion

time <- data %>% select(ends_with("CLS")) %>% names()
p_CLS <- plot_means_over_time(
  data,
  response = time,
  group = "T1_Group",
  error_bars = TRUE,
  ytitle = "Compassionate Love",
  significance_bars_x = c(1.85, 2.15, 3.15),
  significance_stars = c("**", "***", "*"),
  significance_stars_x = c(1.68, 2.38, 3.25),
  significance_stars_y = list(c("Meditation", "Waitlist", time = 2),
                              c("Reflection", "Waitlist", time = 2),
                              c("Reflection", "Waitlist", time = 3)))
p_CLS

Error bars represent 95% confidence intervals adjusted for within-subject variance using the method of Morey (2008).

Over time (change scores)

Data preparation

data_delta <- data %>% 
  group_by(T1_Group) %>% 
  mutate(
    across(ends_with("NOBAGS"), \(x) {x - T1_NOBAGS}),
    across(ends_with("attitude"), \(x) {x - T1_attitude}),
    across(ends_with("dehumanization"), \(x) {x - T1_dehumanization}),
    across(ends_with("IAT"), \(x) {x - T1_IAT}),
    across(ends_with("blastintensity"), \(x) {x - T1_blastintensity}),
    across(ends_with("blastduration"), \(x) {x - T1_blastduration}),
    across(ends_with("blastintensity.duration"), 
           \(x) {x - T1_blastintensity.duration}),
    across(ends_with("WHS"), \(x) {x - T1_WHS}),
    across(ends_with("CLS"), \(x) {x - T1_CLS})
    )

# Check it worked correctly
data_delta %>%
  summarize(m1 = mean(T1_NOBAGS),
            m2 = mean(T2_NOBAGS),
            m3 = mean(T3_NOBAGS))
T1_Group m1 m2 m3
Meditation 0 -0.0329012 -0.0899125
Reflection 0 -0.1679799 0.0279841
Waitlist 0 0.1113202 0.0395206
data_delta %>%
  summarize(m1 = mean(T1_attitude),
            m2 = mean(T2_attitude),
            m3 = mean(T3_attitude))
T1_Group m1 m2 m3
Meditation 0 -0.0060096 0.0361046
Reflection 0 0.2595302 0.2166312
Waitlist 0 -0.1373909 -0.1397111
# OK

NOBAGS

time <- data %>% select(ends_with("NOBAGS")) %>% names()
p_NOBAGS <- plot_means_over_time(
  data_delta,
  response = time,
  group = "T1_Group",
  ytitle = "Aggression Attitude",
  error_bars = TRUE)
p_NOBAGS

Error bars represent 95% confidence intervals adjusted for within-subject variance using the method of Morey (2008).

Attitudes

time <- data %>% select(ends_with("attitude")) %>% names()
p_attitude <- plot_means_over_time(
  data_delta,
  response = time,
  group = "T1_Group",
  error_bars = TRUE,
  ytitle = "Intergroup Attitude (Positive)",
  significance_bars_x = c(2.15, 3.15),
  significance_stars = c("*", "*"),
  significance_stars_x = c(2.25, 3.25),
  significance_stars_y = list(c("Reflection", "Waitlist", time = 2),
                              c("Reflection", "Waitlist", time = 3)))
p_attitude

Error bars represent 95% confidence intervals adjusted for within-subject variance using the method of Morey (2008).

Dehumanization

time <- data %>% select(ends_with("dehumanization")) %>% names()
p_dehumanization <- plot_means_over_time(
  data_delta,
  response = time,
  group = "T1_Group",
  error_bars = TRUE,
  ytitle = "Humanization")
p_dehumanization

Error bars represent 95% confidence intervals adjusted for within-subject variance using the method of Morey (2008).

IAT

time <- data %>% select(ends_with("IAT")) %>% names()
p_IAT <- plot_means_over_time(
  data_delta,
  response = time,
  group = "T1_Group",
  ytitle = "Implicit Aggression (IAT)",
  error_bars = TRUE)
p_IAT

Error bars represent 95% confidence intervals adjusted for within-subject variance using the method of Morey (2008).

Aggression (Blast Intensity)

time <- data %>% select(ends_with("blastintensity")) %>% names()
p_blastintensity <- plot_means_over_time(
  data_delta,
  response = time,
  group = "T1_Group",
  error_bars = TRUE,
  significance_bars_x = c(1.82, 2.16, 3.17),
  significance_stars = c("**", "***", "**"),
  significance_stars_x = c(1.62, 2.45, 3.35),
  significance_stars_y = list(c("Meditation", "Reflection", time = 2),
                              c("Reflection", "Waitlist", time = 2),
                              c("Reflection", "Waitlist", time = 3)))
p_blastintensity

Error bars represent 95% confidence intervals adjusted for within-subject variance using the method of Morey (2008).

Aggression (Blast Duration)

time <- data %>% select(ends_with("blastduration")) %>% names()
p_blastduration <- plot_means_over_time(
  data_delta,
  response = time,
  group = "T1_Group",
  error_bars = TRUE,
  significance_bars_x = c(1.82, 2.16, 3.17),
  significance_stars = c("**", "***", "**"),
  significance_stars_x = c(1.62, 2.45, 3.35),
  significance_stars_y = list(c("Meditation", "Reflection", time = 2),
                              c("Reflection", "Waitlist", time = 2),
                              c("Reflection", "Waitlist", time = 3)))
p_blastduration

Error bars represent 95% confidence intervals adjusted for within-subject variance using the method of Morey (2008).

Aggression (Blast Intensity * Duration)

time <- data %>% select(ends_with("blastintensity.duration")) %>% names()
p_blastintensity.duration <- plot_means_over_time(
  data_delta,
  response = time,
  group = "T1_Group",
  error_bars = TRUE,
  ytitle = "Behavioural Aggression",
  significance_bars_x = c(1.82, 2.16, 3.17),
  significance_stars = c("**", "***", "**"),
  significance_stars_x = c(1.66, 2.40, 3.33),
  significance_stars_y = list(c("Meditation", "Reflection", time = 2),
                              c("Reflection", "Waitlist", time = 2),
                              c("Reflection", "Waitlist", time = 3)))
p_blastintensity.duration

Error bars represent 95% confidence intervals adjusted for within-subject variance using the method of Morey (2008).

Helping

time <- data %>% select(ends_with("WHS")) %>% names()
p_WHS <- plot_means_over_time(
  data_delta,
  response = time,
  group = "T1_Group",
  error_bars = TRUE,
  ytitle = "Willingness to Help",
  significance_bars_x = 3.2,
  significance_stars = "**",
  significance_stars_x = 3.35,
  significance_stars_y = list(c("Reflection", "Waitlist", time = 3)))
p_WHS

Error bars represent 95% confidence intervals adjusted for within-subject variance using the method of Morey (2008).

Compassion

time <- data %>% select(ends_with("CLS")) %>% names()
p_CLS <- plot_means_over_time(
  data_delta,
  response = time,
  group = "T1_Group",
  error_bars = TRUE,
  ytitle = "Compassionate Love",
  significance_bars_x = c(1.85, 2.15, 3.15),
  significance_stars = c("**", "***", "*"),
  significance_stars_x = c(1.68, 2.38, 3.25),
  significance_stars_y = list(c("Meditation", "Waitlist", time = 2),
                              c("Reflection", "Waitlist", time = 2),
                              c("Reflection", "Waitlist", time = 3)))
p_CLS

data_delta_long <- data_delta %>% 
  select(T1_Group, all_of(time)) %>% 
  mutate(across(all_of(time), as.numeric)) %>% 
  pivot_longer(all_of(time), names_to = "Time", values_to = "CLS") %>% 
  mutate(Time = gsub("[^0-9]", "", Time))
ggline(data_delta_long, x = "Time", y = "CLS", group = "T1_Group",
       color = "T1_Group",
       shape = "T1_Group", 
       add = "mean",  # "mean_ci"
       size = 2,
       plot.type = "l"#,
       # point.color = "white",
       # stroke = 1.5
       )

Error bars represent 95% confidence intervals adjusted for within-subject variance using the method of Morey (2008).

Compliance Analysis (CACE)

When conducting a randomized controlled trial (RCT), some participants are randomly assigned to a treatment condition, and others to a control group. However, not everyone assigned to the treatment follows the treatment protocol (an issue known as "treatment compliance").

According to Sagarin et al. (2014), one sensible approach to this problem is the complier average causal effect (CACE), also sometimes known as the local average treatment effect (LATE). According to Wikipedia, it is "the treatment effect for the subset of the sample that takes the treatment if and only if they were assigned to the treatment, otherwise known as the compliers." In other words, it is useful when a proportion of the participants assigned to the treatment group did not follow the treatment protocol.
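
Intuitively, when compliance is treated as all-or-nothing and there are no "defiers", the CACE reduces to the intention-to-treat effect divided by the proportion of compliers in the treatment arm (the Bloom estimator). Below is a minimal sketch of that logic, dichotomizing compliance at our 2/3 participation threshold; this is an illustrative simplification, as caceSRTBoot instead estimates the effect across several compliance thresholds with bootstrapped confidence intervals.

# Sketch of the CACE logic (Bloom estimator): ITT effect scaled by the
# compliance rate among those assigned to treatment
cace_bloom <- function(y, assigned, complied) {
  itt <- mean(y[assigned == 1]) - mean(y[assigned == 0])
  itt / mean(complied[assigned == 1])
}
# e.g., Reflection vs. Waitlist, using data3 as prepared below:
# sub <- subset(data3, T1_Group != "Meditation")
# cace_bloom(sub$T2_attitude, sub$T1_GroupReflection,
#            as.numeric(sub$part.percent >= 100 * 2/3))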

In the following figures, the "P >" labels in the compliance column represent the compliance percentage thresholds. The numbers on the right represent Hedges' g (analogous to Cohen's d) and its 95% confidence interval.
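
As a reminder, Hedges' g is Cohen's d multiplied by a small-sample bias correction; a quick sketch using the common approximation:

# Hedges' g: Cohen's d times a small-sample correction factor
# (approximation to the exact correction)
hedges_g <- function(d, n1, n2) d * (1 - 3 / (4 * (n1 + n2) - 9))
# e.g., hedges_g(d = 0.50, n1 = 56, n2 = 96)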

Time 2

Attitudes

data3 <- data2 %>% 
  mutate(
    # Waitlist participants get an artificial 0% compliance (no activities assigned)
    part.percent = ifelse(T1_Group == "Waitlist", 0, part.percent),
    # Express compliance as a percentage
    part.percent = part.percent * 100,
    # Reverse-code reaction time so that higher scores are desirable
    T2_memory.altruistic = T2_memory.altruistic * -1,
    # Dummy-code the intervention groups for the CACE models
    T1_GroupReflection = as.numeric(T1_Group == "Reflection"),
    T1_GroupMeditation = as.numeric(T1_Group == "Meditation"),
    # Reverse-code the blast variables so that higher scores are desirable
    across(contains("blast"), \(x) x * -1)) %>% 
  as.data.frame()

caceOutput <- caceSRTBoot(
  T2_attitude ~ T1_attitude + T1_GroupReflection,
  intervention = "T1_GroupReflection",
  compliance = "part.percent",
  nBoot = 2000,
  data = data3)
plot(caceOutput)
title(main = "Attitude (Reflection VS Waitlist)")

PANAS (Positive Affect)

caceOutput <- caceSRTBoot(
  T2_PANAS_pos ~ T1_GroupMeditation,
  intervention = "T1_GroupMeditation",
  compliance = "part.percent",
  nBoot = 2000,
  data = data3)
plot(caceOutput)
title(main = "Positive Affect (Meditation VS Waitlist)")

Aggression (Blast Intensity)

caceOutput <- caceSRTBoot(
  T2_blastintensity ~ T1_blastintensity + T1_GroupReflection,
  intervention = "T1_GroupReflection",
  compliance = "part.percent",
  nBoot = 2000,
  data = data3)
plot(caceOutput)
title(main = "Blast Intensity (Reflection VS Waitlist)")

Aggression (Blast Duration)

caceOutput <- caceSRTBoot(
  T2_blastduration ~ T1_blastduration + T1_GroupReflection,
  intervention = "T1_GroupReflection",
  compliance = "part.percent",
  nBoot = 2000,
  data = data3)
plot(caceOutput)
title(main = "Blast Duration (Reflection VS Waitlist)")

Aggression (Blast Intensity * Duration)

caceOutput <- caceSRTBoot(
  T2_blastintensity.duration ~ T1_blastintensity.duration + T1_GroupReflection,
  intervention = "T1_GroupReflection",
  compliance = "part.percent",
  nBoot = 2000,
  data = data3)
plot(caceOutput)
title(main = "Blast Intensity * Duration (Reflection VS Waitlist)")

Altruistic Memory

caceOutput <- caceSRTBoot(
  T2_memory.altruistic ~ T1_GroupReflection,
  intervention = "T1_GroupReflection",
  compliance = "part.percent",
  nBoot = 2000,
  data = data3)
plot(caceOutput)
title(main = "ms to remember altruistic event (Reflection VS Waitlist)")

Compassion

caceOutput <- caceSRTBoot(
  T2_CLS ~ T1_CLS + T1_GroupReflection,
  intervention = "T1_GroupReflection",
  compliance = "part.percent",
  nBoot = 2000,
  data = data3)
plot(caceOutput)
title(main = "Compassionate Love (Reflection VS Waitlist)")

caceOutput <- caceSRTBoot(
  T2_CLS ~ T1_CLS + T1_GroupMeditation,
  intervention = "T1_GroupMeditation",
  compliance = "part.percent",
  nBoot = 2000,
  data = data3)
plot(caceOutput)
title(main = "Compassionate Love (Meditation VS Waitlist)")

Time 3

Attitudes

caceOutput <- caceSRTBoot(
  T3_attitude ~ T1_attitude + T1_GroupReflection,
  intervention = "T1_GroupReflection",
  compliance = "part.percent",
  nBoot = 2000,
  data = data3)
plot(caceOutput)
title(main = "Attitude (Reflection VS Waitlist)")

Aggression (Blast Intensity)

caceOutput <- caceSRTBoot(
  T3_blastintensity ~ T1_blastintensity + T1_GroupReflection,
  intervention = "T1_GroupReflection",
  compliance = "part.percent",
  nBoot = 2000,
  data = data3)
plot(caceOutput)
title(main = "Blast Intensity (Reflection VS Waitlist)")

Aggression (Blast Duration)

caceOutput <- caceSRTBoot(
  T3_blastduration ~ T1_blastduration + T1_GroupReflection,
  intervention = "T1_GroupReflection",
  compliance = "part.percent",
  nBoot = 2000,
  data = data3)
plot(caceOutput)
title(main = "Blast Duration (Reflection VS Waitlist)")

Aggression (Blast Intensity * Duration)

caceOutput <- caceSRTBoot(
  T3_blastintensity.duration ~ T1_blastintensity.duration + T1_GroupReflection,
  intervention = "T1_GroupReflection",
  compliance = "part.percent",
  nBoot = 2000,
  data = data3)
plot(caceOutput)
title(main = "Blast Intensity * Duration (Reflection VS Waitlist)")

Helping

caceOutput <- caceSRTBoot(
  T3_WHS ~ T1_WHS + T1_GroupReflection,
  intervention = "T1_GroupReflection",
  compliance = "part.percent",
  nBoot = 2000,
  data = data3)
plot(caceOutput)
title(main = "Willingness to Help (Reflection VS Waitlist)")

Compassion

caceOutput <- caceSRTBoot(
  T3_CLS ~ T1_CLS + T1_GroupReflection,
  intervention = "T1_GroupReflection",
  compliance = "part.percent",
  nBoot = 2000,
  data = data3)
plot(caceOutput)
title(main = "Compassionate Love (Reflection VS Waitlist)")

caceOutput <- caceSRTBoot(
  T3_CLS ~ T1_CLS + T1_GroupMeditation,
  intervention = "T1_GroupMeditation",
  compliance = "part.percent",
  nBoot = 2000,
  data = data3)
plot(caceOutput)
title(main = "Compassionate Love (Meditation VS Waitlist)")

What we see is a general trend toward larger effects with higher compliance, although this trend does not appear especially strong.

Moderations (Ego Depletion)

It seems that only the IAT (implicit aggression), and not NOBAGS (explicit attitudes toward aggression), is a significant moderator.

Moderator: IAT*

blastintensity.duration

# CREATE OUR DUMMY VARIABLES FOR T1_Group!

data$T1_GroupReflection <- as.numeric(data$T1_Group == "Reflection")
data$T1_GroupMeditation <- as.numeric(data$T1_Group == "Meditation")

################################################
################################################

# T2_blastintensity.duration - IAT
T2_blastintensity.duration.IAT <- lm(
  T2_blastintensity.duration ~ T1_GroupReflection + T1_GroupMeditation +
    T2_IAT + T2_Condition + T1_GroupReflection:T2_IAT + T1_GroupMeditation:T2_IAT +
    T1_GroupReflection:T2_Condition + T1_GroupMeditation:T2_Condition + T2_IAT:T2_Condition +
    T1_GroupReflection:T2_IAT:T2_Condition + T1_GroupMeditation:T2_IAT:T2_Condition,
  data = data
)

check_model(T2_blastintensity.duration.IAT)

T2_blastintensity.duration.IAT %>%
  nice_lm() %>%
  nice_table(highlight = TRUE)

Dependent Variable: T2_blastintensity.duration

Predictor df b t p sr2 95% CI
T1_GroupReflection 186 -0.31 -0.77 .444 .00 [0.00, 0.02]
T1_GroupMeditation 186 0.01 0.03 .978 .00 [0.00, 0.00]
T2_IAT 186 -0.09 -0.20 .844 .00 [0.00, 0.00]
T2_ConditionDepleted 186 0.49 1.10 .274 .01 [0.00, 0.03]
T1_GroupReflection × T2_IAT 186 -0.19 -0.30 .768 .00 [0.00, 0.01]
T1_GroupMeditation × T2_IAT 186 0.08 0.10 .924 .00 [0.00, 0.00]
T1_GroupReflection × T2_ConditionDepleted 186 -0.61 -0.77 .443 .00 [0.00, 0.02]
T1_GroupMeditation × T2_ConditionDepleted 186 -1.30 -1.67 .096 .01 [0.00, 0.05]
T2_IAT × T2_ConditionDepleted 186 0.60 0.99 .322 .01 [0.00, 0.02]
T1_GroupReflection × T2_IAT × T2_ConditionDepleted 186 -0.80 -0.72 .475 .00 [0.00, 0.02]
T1_GroupMeditation × T2_IAT × T2_ConditionDepleted 186 -1.91 -1.72 .087 .02 [0.00, 0.05]

# Make interaction plot
interact_plot(T2_blastintensity.duration.IAT,
  pred = "T2_IAT", modx = "T2_Condition",
  interval = TRUE, mod2 = "T1_GroupMeditation",
  x.label = "Implicit Aggression (IAT)",
  y.label = "Blast Intensity * Duration (Taylor)",
  mod2.labels = c("Waitlist Group", "Meditation Group"),
  legend.main = "Condition"
) +
  theme(text = element_text(size = 25))

What we see here is that for the waitlist group, there is no interaction between depletion and implicit aggression, whereas for the meditation group, the higher participants' implicit aggression, the less aggressive they become after being depleted.
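
To probe this pattern numerically, one could run a simple-slopes analysis with the interactions package loaded earlier; a sketch mirroring the plot's specification (an exploratory check, not a preregistered test):

# Simple slopes of implicit aggression within each depletion condition,
# at the waitlist (0) and meditation (1) values of the dummy code
sim_slopes(T2_blastintensity.duration.IAT,
           pred = "T2_IAT", modx = "T2_Condition",
           mod2 = "T1_GroupMeditation", mod2.values = c(0, 1))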

blastintensity

# T2_blastintensity - IAT
T2_blastintensity.IAT <- lm(
  T2_blastintensity ~ T1_GroupReflection + T1_GroupMeditation +
    T2_IAT + T2_Condition + T1_GroupReflection:T2_IAT + T1_GroupMeditation:T2_IAT +
    T1_GroupReflection:T2_Condition + T1_GroupMeditation:T2_Condition +
    T2_IAT:T2_Condition + T1_GroupReflection:T2_IAT:T2_Condition +
    T1_GroupMeditation:T2_IAT:T2_Condition,
  data = data
)

check_model(T2_blastintensity.IAT)

T2_blastintensity.IAT %>%
  nice_lm() %>%
  nice_table(highlight = TRUE)

Dependent Variable: T2_blastintensity

Predictor df b t p sr2 95% CI
T1_GroupReflection 186 -0.26 -0.64 .522 .00 [0.00, 0.01]
T1_GroupMeditation 186 -0.13 -0.25 .801 .00 [0.00, 0.01]
T2_IAT 186 -0.04 -0.09 .925 .00 [0.00, 0.00]
T2_ConditionDepleted 186 0.40 0.91 .365 .00 [0.00, 0.02]
T1_GroupReflection × T2_IAT 186 -0.17 -0.26 .792 .00 [0.00, 0.01]
T1_GroupMeditation × T2_IAT 186 -0.08 -0.10 .917 .00 [0.00, 0.00]
T1_GroupReflection × T2_ConditionDepleted 186 -0.57 -0.71 .478 .00 [0.00, 0.02]
T1_GroupMeditation × T2_ConditionDepleted 186 -1.08 -1.38 .169 .01 [0.00, 0.04]
T2_IAT × T2_ConditionDepleted 186 0.52 0.86 .391 .00 [0.00, 0.02]
T1_GroupReflection × T2_IAT × T2_ConditionDepleted 186 -0.73 -0.66 .510 .00 [0.00, 0.02]
T1_GroupMeditation × T2_IAT × T2_ConditionDepleted 186 -1.75 -1.58 .117 .01 [0.00, 0.04]

blastduration*

# T2_blastduration - IAT
T2_blastduration.IAT <- lm(
  T2_blastduration ~ T1_GroupReflection + T1_GroupMeditation +
    T2_IAT + T2_Condition + T1_GroupReflection:T2_IAT + T1_GroupMeditation:T2_IAT +
    T1_GroupReflection:T2_Condition + T1_GroupMeditation:T2_Condition +
    T2_IAT:T2_Condition + T1_GroupReflection:T2_IAT:T2_Condition +
    T1_GroupMeditation:T2_IAT:T2_Condition,
  data = data
)

check_model(T2_blastduration.IAT)

T2_blastduration.IAT %>%
  nice_lm() %>%
  nice_table(highlight = TRUE)

Dependent Variable: T2_blastduration

Predictor df b t p sr2 95% CI
T1_GroupReflection 186 -0.26 -0.65 .520 .00 [0.00, 0.01]
T1_GroupMeditation 186 0.25 0.48 .631 .00 [0.00, 0.01]
T2_IAT 186 -0.17 -0.38 .708 .00 [0.00, 0.01]
T2_ConditionDepleted 186 0.86 1.96 .051 .02 [0.00, 0.06]
T1_GroupReflection × T2_IAT 186 -0.10 -0.17 .869 .00 [0.00, 0.00]
T1_GroupMeditation × T2_IAT 186 0.33 0.42 .678 .00 [0.00, 0.01]
T1_GroupReflection × T2_ConditionDepleted 186 -0.96 -1.22 .223 .01 [0.00, 0.03]
T1_GroupMeditation × T2_ConditionDepleted 186 -1.95 -2.53 .012* .03 [0.00, 0.08]
T2_IAT × T2_ConditionDepleted 186 0.98 1.65 .101 .01 [0.00, 0.04]
T1_GroupReflection × T2_IAT × T2_ConditionDepleted 186 -1.11 -1.01 .315 .01 [0.00, 0.02]
T1_GroupMeditation × T2_IAT × T2_ConditionDepleted 186 -2.54 -2.32 .021* .03 [0.00, 0.07]

# Make interaction plot
interact_plot(T2_blastduration.IAT,
  pred = "T2_IAT", modx = "T2_Condition",
  interval = TRUE, mod2 = "T1_GroupMeditation",
  x.label = "Implicit Aggression (IAT)",
  y.label = "Blast Duration (Taylor)",
  mod2.labels = c("Waitlist Group", "Meditation Group"),
  legend.main = "Condition"
) +
  theme(text = element_text(size = 25))

What we see here is that for the waitlist group, there is no interaction between depletion and implicit aggression, whereas for the meditation group, the higher participants' implicit aggression, the less aggressive they become after being depleted. However, that variable (blast duration alone) was not in the preregistration, so we might not report this finding.

Charity

# T2_Charity - IAT
T2_Charity.IAT <- lm(
  T2_Charity ~ T1_GroupReflection + T1_GroupMeditation + T2_IAT +
    T2_Condition + T1_GroupReflection:T2_IAT + T1_GroupMeditation:T2_IAT +
    T1_GroupReflection:T2_Condition + T1_GroupMeditation:T2_Condition +
    T2_IAT:T2_Condition + T1_GroupReflection:T2_IAT:T2_Condition +
    T1_GroupMeditation:T2_IAT:T2_Condition + T2_Familiarity,
  data = data
)

check_model(T2_Charity.IAT)

T2_Charity.IAT %>%
  nice_lm() %>%
  nice_table(highlight = TRUE)

Dependent Variable: T2_Charity

Predictor df b t p sr2 95% CI
T1_GroupReflection 185 0.37 0.93 .354 .00 [0.00, 0.02]
T1_GroupMeditation 185 0.10 0.19 .852 .00 [0.00, 0.00]
T2_IAT 185 -0.16 -0.37 .712 .00 [0.00, 0.01]
T2_ConditionDepleted 185 0.40 0.92 .357 .00 [0.00, 0.02]
T2_Familiarity 185 0.25 3.48 .001*** .06 [0.00, 0.12]
T1_GroupReflection × T2_IAT 185 0.65 1.05 .297 .01 [0.00, 0.02]
T1_GroupMeditation × T2_IAT 185 0.30 0.38 .706 .00 [0.00, 0.01]
T1_GroupReflection × T2_ConditionDepleted 185 -1.06 -1.37 .172 .01 [0.00, 0.03]
T1_GroupMeditation × T2_ConditionDepleted 185 -0.43 -0.57 .569 .00 [0.00, 0.01]
T2_IAT × T2_ConditionDepleted 185 0.47 0.79 .430 .00 [0.00, 0.02]
T1_GroupReflection × T2_IAT × T2_ConditionDepleted 185 -0.89 -0.82 .412 .00 [0.00, 0.02]
T1_GroupMeditation × T2_IAT × T2_ConditionDepleted 185 -0.27 -0.25 .806 .00 [0.00, 0.00]

Compassion*

# Compassion - IAT
T2_CLS.IAT <- lm(
  T2_CLS ~ T1_GroupReflection + T1_GroupMeditation + T2_IAT +
    T2_Condition + T1_GroupReflection:T2_IAT + T1_GroupMeditation:T2_IAT +
    T1_GroupReflection:T2_Condition + T1_GroupMeditation:T2_Condition +
    T2_IAT:T2_Condition + T1_GroupReflection:T2_IAT:T2_Condition +
    T1_GroupMeditation:T2_IAT:T2_Condition,
  data = data
)

check_model(T2_CLS.IAT)

table5 <- T2_CLS.IAT %>%
  nice_lm() %>%
  nice_table(highlight = TRUE)
table5

Dependent Variable: T2_CLS

Predictor df b t p sr2 95% CI
T1_GroupReflection 186 1.48 3.86 < .001*** .07 [0.00, 0.13]
T1_GroupMeditation 186 1.25 2.53 .012* .03 [0.00, 0.07]
T2_IAT 186 -1.48 -3.54 .001*** .06 [0.00, 0.12]
T2_ConditionDepleted 186 1.16 2.80 .006** .04 [0.00, 0.08]
T1_GroupReflection × T2_IAT 186 1.83 3.06 .003** .04 [0.00, 0.09]
T1_GroupMeditation × T2_IAT 186 1.54 2.02 .045* .02 [0.00, 0.05]
T1_GroupReflection × T2_ConditionDepleted 186 0.22 0.29 .771 .00 [0.00, 0.01]
T1_GroupMeditation × T2_ConditionDepleted 186 -1.65 -2.26 .025* .02 [0.00, 0.06]
T2_IAT × T2_ConditionDepleted 186 1.98 3.51 .001*** .06 [0.00, 0.11]
T1_GroupReflection × T2_IAT × T2_ConditionDepleted 186 0.02 0.02 .986 .00 [0.00, 0.00]
T1_GroupMeditation × T2_IAT × T2_ConditionDepleted 186 -2.87 -2.75 .006** .03 [0.00, 0.08]

# Make interaction plot
p_CLS <- interact_plot(T2_CLS.IAT,
  pred = "T2_IAT", modx = "T2_Condition",
  interval = TRUE, mod2 = "T1_GroupMeditation",
  x.label = "Implicit Aggression (IAT)",
  y.label = "Compassionate Love",
  mod2.labels = c("Waitlist Group", "Meditation Group"),
  legend.main = "Condition"
) +
  theme(text = element_text(size = 25))
p_CLS

What we see here is that for the waitlist group, the effect of implicit aggression clearly depends on depletion: implicit aggression leads to lower compassion in the control condition (expected), but to higher compassion in the depleted condition (unexpected). For the meditation group, in contrast, depletion seems to have little effect on compassion (although the trend is reversed relative to the waitlist group).
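
These condition-specific slopes can also be quantified with emmeans (loaded earlier); a sketch, evaluating the IAT slope at the waitlist (0) and meditation (1) values of the dummy code:

# Slope of compassion on implicit aggression within each depletion
# condition, at the waitlist (0) vs. meditation (1) dummy codes
emtrends(T2_CLS.IAT, ~ T2_Condition | T1_GroupMeditation, var = "T2_IAT",
         at = list(T1_GroupMeditation = c(0, 1)))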

Helping

# T2_WHS - IAT
T2_WHS.IAT <- lm(
  T2_WHS ~ T1_GroupReflection + T1_GroupMeditation + T2_IAT + T2_Condition +
    T1_GroupReflection:T2_IAT + T1_GroupMeditation:T2_IAT + T1_GroupReflection:T2_Condition +
    T1_GroupMeditation:T2_Condition + T2_IAT:T2_Condition + T1_GroupReflection:T2_IAT:T2_Condition +
    T1_GroupMeditation:T2_IAT:T2_Condition,
  data = data
)

check_model(T2_WHS.IAT)

T2_WHS.IAT %>%
  nice_lm() %>%
  nice_table(highlight = TRUE)

Dependent Variable: T2_WHS

Predictor df b t p sr2 95% CI
T1_GroupReflection 186 0.60 1.45 .148 .01 [0.00, 0.04]
T1_GroupMeditation 186 0.60 1.13 .258 .01 [0.00, 0.03]
T2_IAT 186 -0.81 -1.81 .072 .02 [0.00, 0.05]
T2_ConditionDepleted 186 0.56 1.25 .214 .01 [0.00, 0.03]
T1_GroupReflection × T2_IAT 186 0.93 1.46 .147 .01 [0.00, 0.04]
T1_GroupMeditation × T2_IAT 186 0.85 1.05 .296 .01 [0.00, 0.03]
T1_GroupReflection × T2_ConditionDepleted 186 0.44 0.55 .580 .00 [0.00, 0.01]
T1_GroupMeditation × T2_ConditionDepleted 186 -0.82 -1.05 .294 .01 [0.00, 0.03]
T2_IAT × T2_ConditionDepleted 186 0.94 1.55 .123 .01 [0.00, 0.04]
T1_GroupReflection × T2_IAT × T2_ConditionDepleted 186 0.61 0.54 .588 .00 [0.00, 0.01]
T1_GroupMeditation × T2_IAT × T2_ConditionDepleted 186 -1.25 -1.12 .262 .01 [0.00, 0.03]

Altruistic Memory*

# T2_memory.altruistic - IAT
T2_memory.altruistic.IAT <- lm(
  T2_memory.altruistic ~ T1_GroupReflection + T1_GroupMeditation + T2_IAT +
    T2_Condition + T1_GroupReflection:T2_IAT + T1_GroupMeditation:T2_IAT +
    T1_GroupReflection:T2_Condition + T1_GroupMeditation:T2_Condition +
    T2_IAT:T2_Condition + T1_GroupReflection:T2_IAT:T2_Condition +
    T1_GroupMeditation:T2_IAT:T2_Condition,
  data = data
)

check_model(T2_memory.altruistic.IAT)

table6 <- T2_memory.altruistic.IAT %>%
  nice_lm() %>%
  nice_table(highlight = TRUE)
table6

Dependent Variable: T2_memory.altruistic

Predictor df b t p sr2 95% CI
T1_GroupReflection 186 -0.95 -2.41 .017* .03 [0.00, 0.07]
T1_GroupMeditation 186 -1.11 -2.17 .031* .02 [0.00, 0.06]
T2_IAT 186 0.25 0.57 .566 .00 [0.00, 0.01]
T2_ConditionDepleted 186 -1.15 -2.68 .008** .03 [0.00, 0.08]
T1_GroupReflection × T2_IAT 186 -0.81 -1.30 .194 .01 [0.00, 0.03]
T1_GroupMeditation × T2_IAT 186 -1.56 -1.98 .049* .02 [0.00, 0.05]
T1_GroupReflection × T2_ConditionDepleted 186 1.41 1.83 .068 .02 [0.00, 0.05]
T1_GroupMeditation × T2_ConditionDepleted 186 1.86 2.46 .015* .03 [0.00, 0.07]
T2_IAT × T2_ConditionDepleted 186 -1.31 -2.25 .026* .02 [0.00, 0.07]
T1_GroupReflection × T2_IAT × T2_ConditionDepleted 186 1.88 1.74 .084 .01 [0.00, 0.05]
T1_GroupMeditation × T2_IAT × T2_ConditionDepleted 186 2.67 2.48 .014* .03 [0.00, 0.07]

# Make interaction plot
p_memalt <- interact_plot(T2_memory.altruistic.IAT,
  pred = "T2_IAT", modx = "T2_Condition",
  interval = TRUE, mod2 = "T1_GroupMeditation",
  x.label = "Implicit Aggression (IAT)",
  y.label = "Reaction Time (Altruistic Memory)",
  mod2.labels = c("Waitlist Group", "Meditation Group"),
  legend.main = "Condition"
) +
  theme(text = element_text(size = 25))
p_memalt

What we see here is that for the waitlist group, depletion seems to have little effect on the reaction time to remember an altruistic event. For the meditation group, in contrast, higher implicit aggression relates to shorter reaction times (unexpected), unless participants are depleted (again, the trend is reversed relative to the waitlist group). This result is somewhat unexpected, but it could be that in their case, the heart takes over the mind.

Aggressive Memory

# T2_memory.aggressive - IAT
T2_memory.aggressive.IAT <- lm(
  T2_memory.aggressive ~ T1_GroupReflection + T1_GroupMeditation + T2_IAT +
    T2_Condition + T1_GroupReflection:T2_IAT + T1_GroupMeditation:T2_IAT +
    T1_GroupReflection:T2_Condition + T1_GroupMeditation:T2_Condition +
    T2_IAT:T2_Condition + T1_GroupReflection:T2_IAT:T2_Condition +
    T1_GroupMeditation:T2_IAT:T2_Condition,
  data = data
)

check_model(T2_memory.aggressive.IAT)

T2_memory.aggressive.IAT %>%
  nice_lm() %>%
  nice_table(highlight = TRUE)

Dependent Variable: T2_memory.aggressive

Predictor df b t p sr2 95% CI
T1_GroupReflection 186 -0.08 -0.19 .848 .00 [0.00, 0.00]
T1_GroupMeditation 186 0.20 0.37 .710 .00 [0.00, 0.01]
T2_IAT 186 -0.43 -0.97 .334 .00 [0.00, 0.02]
T2_ConditionDepleted 186 -0.19 -0.44 .661 .00 [0.00, 0.01]
T1_GroupReflection × T2_IAT 186 0.08 0.12 .904 .00 [0.00, 0.00]
T1_GroupMeditation × T2_IAT 186 0.48 0.59 .555 .00 [0.00, 0.01]
T1_GroupReflection × T2_ConditionDepleted 186 0.12 0.16 .876 .00 [0.00, 0.00]
T1_GroupMeditation × T2_ConditionDepleted 186 0.26 0.33 .744 .00 [0.00, 0.01]
T2_IAT × T2_ConditionDepleted 186 -0.39 -0.65 .518 .00 [0.00, 0.01]
T1_GroupReflection × T2_IAT × T2_ConditionDepleted 186 0.08 0.07 .946 .00 [0.00, 0.00]
T1_GroupMeditation × T2_IAT × T2_ConditionDepleted 186 0.13 0.12 .905 .00 [0.00, 0.00]

Moderator: NOBAGS

blastintensity.duration

# T2_blastintensity.duration - NOBAGS
T2_blastintensity.duration.NOBAGS <- lm(
  T2_blastintensity.duration ~ T1_GroupReflection + T1_GroupMeditation +
    T2_NOBAGS + T2_Condition + T1_GroupReflection:T2_NOBAGS + T1_GroupMeditation:T2_NOBAGS +
    T1_GroupReflection:T2_Condition + T1_GroupMeditation:T2_Condition +
    T2_NOBAGS:T2_Condition + T1_GroupReflection:T2_NOBAGS:T2_Condition +
    T1_GroupMeditation:T2_NOBAGS:T2_Condition,
  data = data
)

check_model(T2_blastintensity.duration.NOBAGS)

T2_blastintensity.duration.NOBAGS %>%
  nice_lm() %>%
  nice_table(highlight = TRUE)

Dependent Variable: T2_blastintensity.duration

Predictor df b* t p sr2 95% CI
T1_GroupReflection 186 -0.28 -1.23 .219 .01 [0.00, 0.03]
T1_GroupMeditation 186 -0.05 -0.21 .835 .00 [0.00, 0.00]
T2_NOBAGS 186 -0.12 -0.81 .417 .00 [0.00, 0.02]
T2_ConditionDepleted 186 0.04 0.21 .836 .00 [0.00, 0.00]
T1_GroupReflection × T2_NOBAGS 186 0.19 0.89 .372 .00 [0.00, 0.02]
T1_GroupMeditation × T2_NOBAGS 186 0.01 0.06 .951 .00 [0.00, 0.00]
T1_GroupReflection × T2_ConditionDepleted 186 0.11 0.31 .755 .00 [0.00, 0.01]
T1_GroupMeditation × T2_ConditionDepleted 186 0.24 0.70 .487 .00 [0.00, 0.02]
T2_NOBAGS × T2_ConditionDepleted 186 0.46 2.08 .039* .02 [0.00, 0.06]
T1_GroupReflection × T2_NOBAGS × T2_ConditionDepleted 186 -0.17 -0.49 .627 .00 [0.00, 0.01]
T1_GroupMeditation × T2_NOBAGS × T2_ConditionDepleted 186 0.12 0.33 .743 .00 [0.00, 0.01]

blastintensity

# T2_blastintensity - NOBAGS
T2_blastintensity.NOBAGS <- lm(
  T2_blastintensity ~ T1_GroupReflection + T1_GroupMeditation +
    T2_NOBAGS + T2_Condition + T1_GroupReflection:T2_NOBAGS +
    T1_GroupMeditation:T2_NOBAGS + T1_GroupReflection:T2_Condition +
    T1_GroupMeditation:T2_Condition + T2_NOBAGS:T2_Condition +
    T1_GroupReflection:T2_NOBAGS:T2_Condition +
    T1_GroupMeditation:T2_NOBAGS:T2_Condition,
  data = data
)

check_model(T2_blastintensity.NOBAGS)

T2_blastintensity.NOBAGS %>%
  nice_lm() %>%
  nice_table(highlight = TRUE)

Dependent Variable: T2_blastintensity

Predictor df b* t p sr2 95% CI
T1_GroupReflection 186 -0.23 -1.01 .316 .01 [0.00, 0.02]
T1_GroupMeditation 186 -0.11 -0.45 .657 .00 [0.00, 0.01]
T2_NOBAGS 186 -0.13 -0.84 .403 .00 [0.00, 0.02]
T2_ConditionDepleted 186 0.01 0.06 .955 .00 [0.00, 0.00]
T1_GroupReflection × T2_NOBAGS 186 0.23 1.08 .283 .01 [0.00, 0.03]
T1_GroupMeditation × T2_NOBAGS 186 0.04 0.18 .856 .00 [0.00, 0.00]
T1_GroupReflection × T2_ConditionDepleted 186 0.10 0.27 .790 .00 [0.00, 0.01]
T1_GroupMeditation × T2_ConditionDepleted 186 0.37 1.08 .283 .01 [0.00, 0.03]
T2_NOBAGS × T2_ConditionDepleted 186 0.44 1.99 .048* .02 [0.00, 0.06]
T1_GroupReflection × T2_NOBAGS × T2_ConditionDepleted 186 -0.22 -0.61 .544 .00 [0.00, 0.01]
T1_GroupMeditation × T2_NOBAGS × T2_ConditionDepleted 186 0.09 0.26 .796 .00 [0.00, 0.01]

blastduration*

# T2_blastduration - NOBAGS
T2_blastduration.NOBAGS <- lm(
  T2_blastduration ~ T1_GroupReflection + T1_GroupMeditation +
    T2_NOBAGS + T2_Condition + T1_GroupReflection:T2_NOBAGS +
    T1_GroupMeditation:T2_NOBAGS + T1_GroupReflection:T2_Condition +
    T1_GroupMeditation:T2_Condition + T2_NOBAGS:T2_Condition +
    T1_GroupReflection:T2_NOBAGS:T2_Condition +
    T1_GroupMeditation:T2_NOBAGS:T2_Condition,
  data = data
)

check_model(T2_blastduration.NOBAGS)

T2_blastduration.NOBAGS %>%
  nice_lm() %>%
  nice_table(highlight = TRUE)

Dependent Variable: T2_blastduration

Predictor df b* t p sr2 95% CI
T1_GroupReflection 186 -0.27 -1.23 .222 .01 [0.00, 0.03]
T1_GroupMeditation 186 0.04 0.17 .868 .00 [0.00, 0.00]
T2_NOBAGS 186 -0.12 -0.79 .431 .00 [0.00, 0.02]
T2_ConditionDepleted 186 0.16 0.78 .437 .00 [0.00, 0.02]
T1_GroupReflection × T2_NOBAGS 186 0.16 0.73 .464 .00 [0.00, 0.02]
T1_GroupMeditation × T2_NOBAGS 186 0.01 0.05 .959 .00 [0.00, 0.00]
T1_GroupReflection × T2_ConditionDepleted 186 -0.03 -0.09 .927 .00 [0.00, 0.00]
T1_GroupMeditation × T2_ConditionDepleted 186 -0.01 -0.03 .973 .00 [0.00, 0.00]
T2_NOBAGS × T2_ConditionDepleted 186 0.48 2.15 .033* .02 [0.00, 0.06]
T1_GroupReflection × T2_NOBAGS × T2_ConditionDepleted 186 -0.16 -0.45 .656 .00 [0.00, 0.01]
T1_GroupMeditation × T2_NOBAGS × T2_ConditionDepleted 186 0.07 0.20 .843 .00 [0.00, 0.00]

# Make interaction plot
interact_plot(T2_blastduration.NOBAGS,
  pred = "T2_NOBAGS", modx = "T2_Condition",
  interval = TRUE,
  x.label = "Normative beliefs about aggression (NOBAGS)",
  y.label = "Blast Duration (Taylor)",
  legend.main = "Condition"
) +
  theme(text = element_text(size = 25))

What we see here is that there seems to be no relation between attitudes toward aggression and behavioural aggression in the control condition, whereas when depleted, more favourable attitudes toward aggression relate to higher behavioural aggression. Although this result is theoretically consistent with the literature, it is likely a false positive given our high number of tests, the fact that this is the only variable that NOBAGS moderates, and the fact that the p value is relatively close to .05. Furthermore, that variable (blast duration alone) was not in the preregistration, so we might not report this finding.
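
One quick way to formalize this multiple-testing concern is to Holm-adjust the focal NOBAGS × Condition p values across the three blast outcomes (transcribed from the tables above):

# Holm correction for the three focal T2_NOBAGS x T2_ConditionDepleted tests
p_focal <- c(intensity.duration = .039, intensity = .048, duration = .033)
p.adjust(p_focal, method = "holm")
# None fall below .05 after correction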

Charity

# T2_Charity - NOBAGS
T2_Charity.NOBAGS <- lm(
  T2_Charity ~ T1_GroupReflection + T1_GroupMeditation + T2_NOBAGS +
    T2_Condition + T1_GroupReflection:T2_NOBAGS + T1_GroupMeditation:T2_NOBAGS +
    T1_GroupReflection:T2_Condition + T1_GroupMeditation:T2_Condition +
    T2_NOBAGS:T2_Condition + T1_GroupReflection:T2_NOBAGS:T2_Condition +
    T1_GroupMeditation:T2_NOBAGS:T2_Condition + T2_Familiarity,
  data = data
)

check_model(T2_Charity.NOBAGS)

T2_Charity.NOBAGS %>%
  nice_lm() %>%
  nice_table(highlight = TRUE)

Dependent Variable: T2_Charity

Predictor df b* t p sr2 95% CI
T1_GroupReflection 185 0.07 0.33 .741 .00 [0.00, 0.01]
T1_GroupMeditation 185 -0.06 -0.24 .810 .00 [0.00, 0.00]
T2_NOBAGS 185 0.09 0.61 .544 .00 [0.00, 0.01]
T2_ConditionDepleted 185 0.12 0.58 .560 .00 [0.00, 0.01]
T2_Familiarity 185 0.26 3.57 < .001*** .06 [0.00, 0.13]
T1_GroupReflection × T2_NOBAGS 185 0.03 0.12 .902 .00 [0.00, 0.00]
T1_GroupMeditation × T2_NOBAGS 185 -0.20 -0.84 .403 .00 [0.00, 0.02]
T1_GroupReflection × T2_ConditionDepleted 185 -0.59 -1.62 .106 .01 [0.00, 0.04]
T1_GroupMeditation × T2_ConditionDepleted 185 -0.32 -0.93 .352 .00 [0.00, 0.02]
T2_NOBAGS × T2_ConditionDepleted 185 -0.25 -1.12 .264 .01 [0.00, 0.03]
T1_GroupReflection × T2_NOBAGS × T2_ConditionDepleted 185 0.27 0.75 .451 .00 [0.00, 0.02]
T1_GroupMeditation × T2_NOBAGS × T2_ConditionDepleted 185 0.37 1.05 .296 .01 [0.00, 0.02]

Compassion

# Compassion - NOBAGS
T2_CLS.NOBAGS <- lm(
  T2_CLS ~ T1_GroupReflection + T1_GroupMeditation + T2_NOBAGS +
    T2_Condition + T1_GroupReflection:T2_NOBAGS + T1_GroupMeditation:T2_NOBAGS +
    T1_GroupReflection:T2_Condition + T1_GroupMeditation:T2_Condition +
    T2_NOBAGS:T2_Condition + T1_GroupReflection:T2_NOBAGS:T2_Condition +
    T1_GroupMeditation:T2_NOBAGS:T2_Condition,
  data = data
)

check_model(T2_CLS.NOBAGS)

T2_CLS.NOBAGS %>%
  nice_lm() %>%
  nice_table(highlight = TRUE)

Dependent Variable: T2_CLS

Predictor df b* t p sr2 95% CI
T1_GroupReflection 186 0.38 1.71 .089 .01 [0.00, 0.05]
T1_GroupMeditation 186 0.29 1.22 .224 .01 [0.00, 0.03]
T2_NOBAGS 186 -0.12 -0.83 .406 .00 [0.00, 0.02]
T2_ConditionDepleted 186 -0.11 -0.52 .602 .00 [0.00, 0.01]
T1_GroupReflection × T2_NOBAGS 186 0.02 0.11 .911 .00 [0.00, 0.00]
T1_GroupMeditation × T2_NOBAGS 186 -0.21 -0.89 .374 .00 [0.00, 0.02]
T1_GroupReflection × T2_ConditionDepleted 186 -0.01 -0.02 .985 .00 [0.00, 0.00]
T1_GroupMeditation × T2_ConditionDepleted 186 0.15 0.42 .674 .00 [0.00, 0.01]
T2_NOBAGS × T2_ConditionDepleted 186 -0.15 -0.68 .498 .00 [0.00, 0.01]
T1_GroupReflection × T2_NOBAGS × T2_ConditionDepleted 186 0.05 0.14 .886 .00 [0.00, 0.00]
T1_GroupMeditation × T2_NOBAGS × T2_ConditionDepleted 186 0.34 0.98 .328 .00 [0.00, 0.02]

Helping

# T2_WHS - NOBAGS
T2_WHS.NOBAGS <- lm(
  T2_WHS ~ T1_GroupReflection + T1_GroupMeditation + T2_NOBAGS + T2_Condition +
    T1_GroupReflection:T2_NOBAGS + T1_GroupMeditation:T2_NOBAGS +
    T1_GroupReflection:T2_Condition + T1_GroupMeditation:T2_Condition +
    T2_NOBAGS:T2_Condition + T1_GroupReflection:T2_NOBAGS:T2_Condition +
    T1_GroupMeditation:T2_NOBAGS:T2_Condition,
  data = data
)

check_model(T2_WHS.NOBAGS)

T2_WHS.NOBAGS %>%
  nice_lm() %>%
  nice_table(highlight = TRUE)

Dependent Variable: T2_WHS

Predictor df b* t p sr2 95% CI
T1_GroupReflection 186 -0.01 -0.04 .966 .00 [0.00, 0.00]
T1_GroupMeditation 186 0.05 0.22 .827 .00 [0.00, 0.00]
T2_NOBAGS 186 -0.25 -1.63 .105 .01 [0.00, 0.04]
T2_ConditionDepleted 186 -0.09 -0.45 .655 .00 [0.00, 0.01]
T1_GroupReflection × T2_NOBAGS 186 0.23 1.04 .301 .01 [0.00, 0.03]
T1_GroupMeditation × T2_NOBAGS 186 -0.17 -0.72 .470 .00 [0.00, 0.02]
T1_GroupReflection × T2_ConditionDepleted 186 -0.01 -0.03 .973 .00 [0.00, 0.00]
T1_GroupMeditation × T2_ConditionDepleted 186 0.07 0.21 .836 .00 [0.00, 0.00]
T2_NOBAGS × T2_ConditionDepleted 186 0.42 1.86 .064 .02 [0.00, 0.05]
T1_GroupReflection × T2_NOBAGS × T2_ConditionDepleted 186 -0.51 -1.41 .160 .01 [0.00, 0.04]
T1_GroupMeditation × T2_NOBAGS × T2_ConditionDepleted 186 0.15 0.43 .666 .00 [0.00, 0.01]

Altruistic Memory

# T2_memory.altruistic - NOBAGS
T2_memory.altruistic.NOBAGS <- lm(
  T2_memory.altruistic ~ T1_GroupReflection + T1_GroupMeditation +
    T2_NOBAGS + T2_Condition + T1_GroupReflection:T2_NOBAGS +
    T1_GroupMeditation:T2_NOBAGS + T1_GroupReflection:T2_Condition +
    T1_GroupMeditation:T2_Condition + T2_NOBAGS:T2_Condition +
    T1_GroupReflection:T2_NOBAGS:T2_Condition +
    T1_GroupMeditation:T2_NOBAGS:T2_Condition,
  data = data
)

check_model(T2_memory.altruistic.NOBAGS)

T2_memory.altruistic.NOBAGS %>%
  nice_lm() %>%
  nice_table(highlight = TRUE)

Dependent Variable: T2_memory.altruistic

Predictor df b* t p sr2 95% CI
T1_GroupReflection 186 -0.54 -2.37 .019* .03 [0.00, 0.07]
T1_GroupMeditation 186 -0.20 -0.82 .413 .00 [0.00, 0.02]
T2_NOBAGS 186 0.16 1.03 .304 .01 [0.00, 0.03]
T2_ConditionDepleted 186 -0.27 -1.29 .198 .01 [0.00, 0.03]
T1_GroupReflection × T2_NOBAGS 186 -0.10 -0.44 .659 .00 [0.00, 0.01]
T1_GroupMeditation × T2_NOBAGS 186 -0.19 -0.81 .419 .00 [0.00, 0.02]
T1_GroupReflection × T2_ConditionDepleted 186 0.24 0.66 .511 .00 [0.00, 0.01]
T1_GroupMeditation × T2_ConditionDepleted 186 0.23 0.66 .512 .00 [0.00, 0.01]
T2_NOBAGS × T2_ConditionDepleted 186 0.09 0.38 .701 .00 [0.00, 0.01]
T1_GroupReflection × T2_NOBAGS × T2_ConditionDepleted 186 -0.40 -1.12 .265 .01 [0.00, 0.03]
T1_GroupMeditation × T2_NOBAGS × T2_ConditionDepleted 186 -0.02 -0.06 .949 .00 [0.00, 0.00]

Aggressive Memory

# T2_memory.aggressive - NOBAGS
T2_memory.aggressive.NOBAGS <- lm(
  T2_memory.aggressive ~ T1_GroupReflection + T1_GroupMeditation + T2_NOBAGS +
    T2_Condition + T1_GroupReflection:T2_NOBAGS + T1_GroupMeditation:T2_NOBAGS +
    T1_GroupReflection:T2_Condition + T1_GroupMeditation:T2_Condition +
    T2_NOBAGS:T2_Condition + T1_GroupReflection:T2_NOBAGS:T2_Condition +
    T1_GroupMeditation:T2_NOBAGS:T2_Condition,
  data = data
)

check_model(T2_memory.aggressive.NOBAGS)

T2_memory.aggressive.NOBAGS %>%
  nice_lm() %>%
  nice_table(highlight = TRUE)

Dependent Variable: T2_memory.aggressive

Predictor df b* t p sr2 95% CI
T1_GroupReflection 186 -0.20 -0.88 .380 .00 [0.00, 0.02]
T1_GroupMeditation 186 -0.10 -0.41 .682 .00 [0.00, 0.01]
T2_NOBAGS 186 -0.04 -0.26 .795 .00 [0.00, 0.01]
T2_ConditionDepleted 186 0.09 0.40 .687 .00 [0.00, 0.01]
T1_GroupReflection × T2_NOBAGS 186 -0.18 -0.79 .430 .00 [0.00, 0.02]
T1_GroupMeditation × T2_NOBAGS 186 -0.10 -0.40 .691 .00 [0.00, 0.01]
T1_GroupReflection × T2_ConditionDepleted 186 0.13 0.35 .724 .00 [0.00, 0.01]
T1_GroupMeditation × T2_ConditionDepleted 186 0.05 0.14 .889 .00 [0.00, 0.00]
T2_NOBAGS × T2_ConditionDepleted 186 -0.07 -0.29 .775 .00 [0.00, 0.01]
T1_GroupReflection × T2_NOBAGS × T2_ConditionDepleted 186 0.22 0.60 .548 .00 [0.00, 0.01]
T1_GroupMeditation × T2_NOBAGS × T2_ConditionDepleted 186 -0.01 -0.02 .984 .00 [0.00, 0.00]

Change scores 1

Compassion

plot_means_over_time_depletion <- function(variable, 
                                           ytitle, 
                                           error_bars = FALSE, 
                                           data_delta,
                                           T1_Group = "T1_Group",
                                           T2_Condition = "T2_Condition") {
  time <- data_delta %>%
    ungroup() %>%
    select(ends_with(variable)) %>%
    names()
  data_delta[time] <- lapply(data_delta[time], as.numeric)

  data_control <- data_delta %>%
    filter(T2_Condition == "Control")

  data_depleted <- data_delta %>%
    filter(T2_Condition == "Depleted") %>%
    mutate(subject_ID = seq(n())) %>%
    select(subject_ID, T1_Group, all_of(time)) %>%
    pivot_longer(
      cols = all_of(time), names_to = "Time",
      names_ptypes = factor()
    )

  data_depleted_summary <- data_depleted %>%
    # group_by(Time, T1_Group) %>%
    filter(Time == time[2]) %>%
    summarize(value = mean(value), .groups = "keep") %>%
    # mutate(Time = gsub("[^0-9]", "", .data$Time))
    mutate(Time = 2)

  # Duplicate the Time-2 summary rows and anchor the first copy at 0 at
  # Time 1 (one row per group), so the dotted depletion lines start at
  # the change-score baseline
  data_depleted_summary <-
    bind_rows(data_depleted_summary, data_depleted_summary)

  data_depleted_summary$value[1:3] <- 0
  data_depleted_summary$Time[1:3] <- 1

  # Basic attempt 1!
  pd <- ggplot2::position_dodge(0.2) # dodge overlapping points slightly left and right
  plot_means_over_time(
    data_control,
    response = time,
    group = "T1_Group",
    error_bars = error_bars,
    ytitle = ytitle
  ) +
    ggplot2::geom_line(
      data = data_depleted_summary,
      ggplot2::aes(color = .data$T1_Group),
      linewidth = 3,
      linetype = "dotted",
      position = pd
    ) +
    ggplot2::geom_point(
      data = data_depleted_summary,
      size = 4,
      fill = "white",
      stroke = 1.5,
      position = pd
    ) #+
  # annotate(geom = "text",
  #          label = c("Depletion"),
  #          x = c(2.25),
  #          y = c(-0.1, 0.2, 0.57),
  #          size = 6)
}

p_CLS <- plot_means_over_time_depletion(
  data_delta = data_delta,
  variable = "CLS",
  ytitle = "Compassionate Love"
)
p_CLS

NOBAGS

p_NOBAGS <- plot_means_over_time_depletion(
  variable = "NOBAGS",
  ytitle = "Aggression Attitude",
  data_delta = data_delta
)
p_NOBAGS

Attitudes

p_attitude <- plot_means_over_time_depletion(
  variable = "attitude",
  ytitle = "Intergroup Attitude (Positive)",
  data_delta = data_delta
)
p_attitude

Dehumanization

p_dehumanization <- plot_means_over_time_depletion(
  variable = "dehumanization",
  ytitle = "Humanization",
  data_delta = data_delta
)
p_dehumanization

IAT

p_IAT <- plot_means_over_time_depletion(
  variable = "IAT",
  ytitle = "Implicit Aggression (IAT)",
  data_delta = data_delta
)
p_IAT

Helping

p_WHS <- plot_means_over_time_depletion(
  variable = "WHS",
  ytitle = "Willingness to Help",
  data_delta = data_delta
)
p_WHS

Aggression (Blast Intensity * Duration)

p_blastintensity.duration <- plot_means_over_time_depletion(
  variable = "blastintensity.duration",
  ytitle = "Behavioural Aggression",
  data_delta = data_delta
)
p_blastintensity.duration

Hypothesis

# Simulated data
toy_data <- data.frame(
  Time = factor(c(1, 2, 3)),
  Prosociality = c(10, 20, 20, 10, 15, 15, 10, 10, 10),
  Group = factor(
    c(
      rep("Meditation", 3),
      rep("Reflection", 3),
      rep("Waitlist", 3)
    ),
    levels = c("Meditation", "Reflection", "Waitlist")
  )
)

# Depletion
deplete <- data.frame(
  Time = factor(c(1, 2)),
  Prosociality = c(10, 19.3, 10, 12.5, 10, 7.5),
  Group = factor(
    c(
      rep("Meditation", 2),
      rep("Reflection", 2),
      rep("Waitlist", 2)
    ),
    levels = c("Meditation", "Reflection", "Waitlist")
  )
)

data_deplete <- bind_rows("Non-Depleted" = toy_data, "Depleted" = deplete, .id = "Depletion")
data_deplete$Depletion <- factor(data_deplete$Depletion, levels = c("Non-Depleted", "Depleted"))

data_deplete2 <- data_deplete %>%
  rename(
    T1_Group = "Group",
    T2_Condition = "Depletion",
    mean = "Prosociality"
  ) %>%
  select(T1_Group, T2_Condition, Time, mean) %>%
  arrange(Time)

plot_means_over_time_depletion_summary <- function(data_summary,
                                                   T1_Group = "T1_Group",
                                                   T2_Condition = "T2_Condition",
                                                   mean = "mean",
                                                   ytitle = "") {
  ggplot(data_summary, aes(
    x = Time, y = mean, color = T1_Group,
    shape = T1_Group, linetype = T2_Condition,
    group = interaction(T1_Group, T2_Condition)
  )) +
    geom_line(linewidth = 3) +
    geom_point(size = 4, fill = "white", stroke = 1.5) +
    ggplot2::discrete_scale("shape",
      palette = function(n) {
        # Prefer fillable shapes (21-25) so fill = "white" renders hollow points
        c(21:25, 0:20)[1:n]
      }
    ) +
    theme_bw(base_size = 24) +
    theme(
      axis.text.x = element_text(colour = "black"),
      axis.text.y = element_text(colour = "black"),
      panel.grid.major = element_blank(),
      panel.grid.minor = element_blank(),
      panel.border = element_blank(),
      axis.line = element_line(colour = "black"),
      axis.ticks = element_line(colour = "black")
    ) +
    scale_linetype_manual(values = c("solid", "dotted"), name = "Depletion") +
    # Fix the order of the legends
    guides(
      linetype = guide_legend(order = 2), # "Depletion" second
      color = guide_legend(order = 1), # "Group" first
      shape = guide_legend(order = 1) # "Group" legend for shape, same order
    ) +
    theme(legend.title = element_blank()) +
    labs(y = ytitle)
}

hypothesis <- plot_means_over_time_depletion_summary(
  data_summary = data_deplete2,
  ytitle = "Prosociality (Hypothesis)"
) +
  scale_y_continuous(limits = c(7.5, 20), breaks = seq(7.5, 20, by = 2.5)) +
  geom_segment(x = 2, y = 14.3, xend = 2, yend = 13.2, linewidth = 1, arrow = arrow(
    length = unit(0.2, "cm"), type = "closed"
  ), colour = "black", show.legend = FALSE) +
  geom_segment(x = 2, y = 9.3, xend = 2, yend = 8.2, linewidth = 1, arrow = arrow(
    length = unit(0.2, "cm"), type = "closed"
  ), colour = "black", show.legend = FALSE) +
  annotate(geom = "text", label = c("Depletion"), x = c(2.35), y = c(19.3, 12.5, 7.5), size = 6)
hypothesis

All together

Change scores 2

Compassion

Need to correct this so that Times 1 and 3 include all data, not just data from participants who were not depleted!

Or, we let the depleted group run its course until Time 3 too, to see how the conditions differ at T3…

plot_means_over_time_depletion_real <- function(data_delta,
                                                variable,
                                                T1_Group = "T1_Group",
                                                T2_Condition = "T2_Condition",
                                                mean = "mean",
                                                ytitle = "") {
  time <- data_delta %>%
    ungroup() %>%
    select(ends_with(variable)) %>%
    names()
  data_long <- data_delta %>%
    mutate(subject_ID = row_number(),
           across(all_of(time), as.numeric)) %>%
    pivot_longer(
      cols = all_of(time),
      names_to = "Time",
      values_to = "mean"
    ) %>%
    mutate(Time = factor(gsub("[^0-9]", "", .data$Time)))

  data_summary <- data_long %>% 
    group_by(T1_Group, T2_Condition, Time) %>% 
    summarize(mean = mean(mean), .groups = "keep")
  
  plot_means_over_time_depletion_summary(
    data_summary = data_summary,
    T1_Group = T1_Group,
    T2_Condition = T2_Condition,
    ytitle = ytitle
  )
}

# Plot
p_CLS <- plot_means_over_time_depletion_real(
  data_delta = data_delta,
  variable = "CLS",
  ytitle = "Compassionate Love"
)
p_CLS

Conclusion: the conditions are all pretty similar at Time 3! So I think we can go with the final version of using overall values for Times 1 and 3, and values by condition only for Time 2… Let's try it at least…

NOBAGS

p_NOBAGS <- plot_means_over_time_depletion_real(
  data_delta = data_delta,
  variable = "NOBAGS",
  ytitle = "Aggression Attitude"
)
p_NOBAGS

Attitudes

p_attitude <- plot_means_over_time_depletion_real(
  data_delta = data_delta,
  variable = "attitude",
  ytitle = "Intergroup Attitude (Positive)"
)
p_attitude

Dehumanization

p_dehumanization <- plot_means_over_time_depletion_real(
  data_delta = data_delta,
  variable = "dehumanization",
  ytitle = "Humanization"
)
p_dehumanization

IAT

p_IAT <- plot_means_over_time_depletion_real(
  data_delta = data_delta,
  variable = "IAT",
  ytitle = "Implicit Aggression (IAT)"
)
p_IAT

Helping

p_WHS <- plot_means_over_time_depletion_real(
  data_delta = data_delta,
  variable = "WHS",
  ytitle = "Willingness to Help"
)
p_WHS

Aggression (Blast Intensity * Duration)

p_blastintensity.duration <- plot_means_over_time_depletion_real(
  data_delta = data_delta,
  variable = "blastintensity.duration",
  ytitle = "Behavioural Aggression"
)
p_blastintensity.duration

All together

Change scores 3

Basically, the solution here is to replace the values at Times 1 and 3, for all conditions, with their overall group-average values. Since these are change scores, every condition already sits at 0 at Time 1, so in practice only Time 3 needs the replacement.
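
A compact equivalent of that replacement, using a join rather than a case_when (a sketch only; data_summary and data_summary2 refer to the objects computed inside the function below):

# Sketch (assumes data_summary and data_summary2 as computed below):
# overwrite the Time 3 condition-specific means with the overall group means
data_summary %>%
  left_join(data_summary2, by = c("T1_Group", "Time"),
            suffix = c("", "_overall")) %>%
  mutate(mean = if_else(Time == 3, mean_overall, mean)) %>%
  select(-mean_overall)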

Compassion

plot_means_over_time_depletion_real_final <- function(data_delta,
                                                      variable,
                                                      T1_Group = "T1_Group",
                                                      T2_Condition = "T2_Condition",
                                                      mean = "mean",
                                                      ytitle = "") {
  time <- data_delta %>%
    ungroup() %>%
    select(ends_with(variable)) %>%
    names()
  data_long <- data_delta %>%
    mutate(subject_ID = row_number(),
           across(all_of(time), as.numeric)) %>%
    pivot_longer(
      cols = all_of(time),
      names_to = "Time",
      values_to = "mean"
    ) %>%
    mutate(Time = factor(gsub("[^0-9]", "", .data$Time)))

  data_summary <- data_long %>% 
    group_by(T1_Group, T2_Condition, Time) %>% 
    summarize(mean = mean(mean), .groups = "keep")
  
  # Overall group means, collapsed across depletion conditions
  data_summary2 <- data_long %>% 
    group_by(T1_Group, Time) %>% 
    summarize(mean = mean(mean), .groups = "keep")
  
  # Replace the Time 3 condition-specific means with the overall group
  # means; Time 1 needs no replacement (the change scores are all 0
  # there) and Time 2 stays split by depletion condition
  data_summary <- data_summary %>% 
    mutate(mean = case_when(
      T1_Group == "Meditation" & Time == 3 ~
        data_summary2 %>% 
        filter(T1_Group == "Meditation" & Time == 3) %>% 
        pull(mean),
      T1_Group == "Reflection" & Time == 3 ~
        data_summary2 %>% 
        filter(T1_Group == "Reflection" & Time == 3) %>% 
        pull(mean),
      T1_Group == "Waitlist" & Time == 3 ~
        data_summary2 %>% 
        filter(T1_Group == "Waitlist" & Time == 3) %>% 
        pull(mean),
      .default = mean
    ))
  
  # Plot
  plot_means_over_time_depletion_summary(
    data_summary = data_summary,
    T1_Group = T1_Group,
    T2_Condition = T2_Condition,
    ytitle = ytitle
  )
}

p_CLS <- plot_means_over_time_depletion_real_final(
  data_delta = data_delta,
  variable = "CLS",
  ytitle = "Compassionate Love"
  )
p_CLS

NOBAGS

p_NOBAGS <- plot_means_over_time_depletion_real_final(
  data_delta = data_delta,
  variable = "NOBAGS",
  ytitle = "Aggression Attitude"
)
p_NOBAGS

Attitudes

p_attitude <- plot_means_over_time_depletion_real_final(
  data_delta = data_delta,
  variable = "attitude",
  ytitle = "Intergroup Attitude (Positive)"
)
p_attitude

Dehumanization

p_dehumanization <- plot_means_over_time_depletion_real_final(
  data_delta = data_delta,
  variable = "dehumanization",
  ytitle = "Humanization"
)
p_dehumanization

IAT

p_IAT <- plot_means_over_time_depletion_real_final(
  data_delta = data_delta,
  variable = "IAT",
  ytitle = "Implicit Aggression (IAT)"
)
p_IAT

Helping

p_WHS <- plot_means_over_time_depletion_real_final(
  data_delta = data_delta,
  variable = "WHS",
  ytitle = "Willingness to Help"
)
p_WHS

Aggression (Blast Intensity * Duration)

p_blastintensity.duration <- plot_means_over_time_depletion_real_final(
  data_delta = data_delta,
  variable = "blastintensity.duration",
  ytitle = "Behavioural Aggression"
)
p_blastintensity.duration

All together

Depletion Bar Charts

Try bar charts…

# Visualization
library(ggpubr)
data_delta %>%
  # group_by(T1_Group) %>%
  ungroup() %>%
  # mutate(T2_CLS = T2_CLS - min(T2_CLS)) %>%
  mutate(
    T2_CLS = as.numeric(T2_CLS),
    T2_NOBAGS = as.numeric(T2_NOBAGS)
  ) %>%
  ggbarplot(
    x = "T1_Group", y = c(
      "T2_CLS", "T2_WHS", "T2_blastintensity.duration", "T2_attitude",
      "T2_dehumanization", "T2_NOBAGS", "T2_IAT"
    ),
    fill = "T2_Condition", palette = "grey",
    add = "mean_ci", add.params = list(group = "T2_Condition"),
    position = position_dodge(0.8)
  )
## (One bar chart per response variable: $T2_CLS, $T2_WHS,
## $T2_blastintensity.duration, $T2_attitude, $T2_dehumanization,
## $T2_NOBAGS, and $T2_IAT; the plots appear in the rendered report.)

# group <- "T1_Group"
# response = "T2_CLS"
#
# data_delta %>%
#   group_by(T1_Group, T2_Condition) %>%
#   mutate(T2_CLS = T2_CLS - min(T2_CLS)) %>%
#   summarize(
#     n = n(),
#     mean = mean(T2_CLS),
#     sd = sd(T2_CLS)) %>%
#   mutate(se = sd / sqrt(n),
#          ic = se * qt((1 - 0.05) / 2 + .5, n - 1)) %>%
#   ggplot2::ggplot(ggplot2::aes(x = T1_Group, fill = T2_Condition,
#     y = mean)) +
#   ggplot2::geom_bar(stat = "identity",
#     position = ggplot2::position_dodge()) +
#   geom_errorbar(stat = "identity", aes(ymin = mean - ic, ymax = mean + ic),
#                 width = 0.1, group = c("T1_Group", "T2_Condition")) +
#   ylab("Effects of ego depletion on compassionate love")

# zz <- data_delta %>%
#   ungroup() %>%
#   select(all_of(T1_Group, T2_Condition, T2_CLS)) %>%
#   mutate(ID = row_number(),
#          T2_CLS = T2_CLS - min(T2_CLS)) %>%
#   pivot_wider(id_cols = c("ID", "T1_Group"),
#               names_from = "T2_Condition",
#               values_from = "T2_CLS") %>%
#   select(-ID)
#
# View(zz)
#
# zz <- zz %>%
#   group_by(T1_Group) %>%
#   summarize(Control = mean(Control, na.rm = TRUE),
#             Depleted = mean(Depleted, na.rm = TRUE)) %>%
#   mutate(diff = Depleted - Control)
#
# zz %>%
#   group_by(T1_Group) %>%
#   ggplot2::ggplot(ggplot2::aes(x = T1_Group, fill = T1_Group, y = diff)) +
#   ggplot2::geom_bar(stat = "identity",
#     position = ggplot2::position_dodge())

Attitudes & Charity

Let us explore some differences in attitudes toward various social groups and charities.

Social Groups

data %>%
  select(starts_with("T1_attitude")) %>% 
  get_label %>% 
  lapply(function(x) gsub(".*- ", "", x)) %>% 
  unlist() %>% unname
##  [1] "" "" "" "" "" "" "" "" "" "" "" "" ""
social.groups <- c("Blacks", "Homeless", "Native", "Muslims", "Refugees", "Women",
                   "Animals", "Elderly", "Whites")

charities <- data %>%
  select(ends_with("1_1") & contains("charity")) %>% 
  get_label %>% 
  lapply(function(x) gsub(".*- ", "", x)) %>% 
  unlist() %>% unname
charities
##  [1] "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" ""
regions <- c("Montreal", "Quebec", "Canada", "International")
regions
## [1] "Montreal"      "Quebec"        "Canada"        "International"

Time 1

Explicit Attitude

data$ID <- seq_len(nrow(data))

T1_attitude <- data %>%
  select(ID, T1_Group, T1_attitude_1:T1_attitude_9) %>% 
  pivot_longer(cols = T1_attitude_1:T1_attitude_9, 
               names_to = "Group",
               values_to = "attitude") %>% 
  mutate(Group = factor(Group, labels = social.groups),
         Time = 1)

nice_violin(T1_attitude,
            group = "Group",
            response = "attitude",
            ytitle = "Positive Explicit Attitude",
            CIcap.width = 0.3,
            obs = "jitter",
            border.size = 1,
            alpha = 0.7,
            groups.order = "increasing")

Dehumanization

T1_dehumanization <- data %>%
  select(ID, T1_Group, T1_dehumanization_1:T1_dehumanization_9) %>% 
  pivot_longer(cols = T1_dehumanization_1:T1_dehumanization_9, 
               names_to = "Group",
               values_to = "attitude") %>% 
  mutate(Group = factor(Group, labels = social.groups),
         Time = 1)

nice_violin(T1_dehumanization,
            group = "Group",
            response = "attitude",
            ytitle = "Humanization",
            CIcap.width = 0.3,
            obs = "jitter",
            border.size = 1,
            alpha = 0.7,
            groups.order = "increasing")

Time 2

Explicit Attitude

T2_attitude <- data %>%
  select(ID, T1_Group, T2_attitude_1:T2_attitude_9) %>% 
  pivot_longer(cols = T2_attitude_1:T2_attitude_9, 
               names_to = "Group",
               values_to = "attitude") %>% 
  mutate(Group = factor(Group, labels = social.groups),
         Time = 2)

nice_violin(T2_attitude,
            group = "Group",
            response = "attitude",
            ytitle = "Positive Explicit Attitude",
            CIcap.width = 0.3,
            obs = "jitter",
            border.size = 1,
            alpha = 0.7,
            groups.order = "increasing")

Dehumanization

T2_dehumanization <- data %>%
  select(ID, T1_Group, T2_dehumanization_1:T2_dehumanization_9) %>% 
  pivot_longer(cols = T2_dehumanization_1:T2_dehumanization_9, 
               names_to = "Group",
               values_to = "attitude") %>% 
  mutate(Group = factor(Group, labels = social.groups),
         Time = 2)

nice_violin(T2_dehumanization,
            group = "Group",
            response = "attitude",
            ytitle = "Humanization",
            CIcap.width = 0.3,
            obs = "jitter",
            border.size = 1,
            alpha = 0.7,
            groups.order = "increasing")

Amount Donated

data %>%
  select(contains("charity") & ends_with("1_1")) %>% 
  pivot_longer(cols = everything(), 
               names_to = "charity",
               values_to = "donation") %>% 
  mutate(charity = factor(charity, labels = charities)) %>% 
  nice_violin(group = "charity",
              response = "donation",
              ytitle = "Amount Donated",
              CIcap.width = 0.5,
              obs = "jitter",
              border.size = 1,
              alpha = 0.7,
              groups.order = "increasing",
              xlabels.angle = 75)

Charity Familiarity

data %>%
  select(contains("charity") & ends_with("2_1")) %>% 
  pivot_longer(cols = everything(), 
               names_to = "charity",
               values_to = "familiarity") %>% 
  mutate(charity = factor(charity, labels = charities)) %>% 
  nice_violin(group = "charity",
              response = "familiarity",
              ytitle = "Familiarity with Charity",
              CIcap.width = 0.5,
              obs = "jitter",
              border.size = 1,
              alpha = 0.7,
              groups.order = "increasing",
              xlabels.angle = 75)

Familiarity & Donation

data %>%
  nice_scatter(predictor = "T2_Familiarity",
               response = "T2_Charity",
               ytitle = "Donation Amount",
               xtitle = "Familiarity with Charity",
               has.jitter = TRUE,
               has.legend = TRUE,
               has.r = TRUE,
               has.p = TRUE)

Region & Donation

data %>%
  select(contains("charity") & ends_with("1_1")) %>% 
  pivot_longer(cols = everything(), 
               names_to = "charity",
               values_to = "donation") %>% 
  mutate(charity = factor(charity, labels = charities),
         region = case_match(
           charity,
           charities[1:6] ~ regions[1],
           charities[7:12] ~ regions[2],
           charities[13:18] ~ regions[3],
           charities[19:24] ~ regions[4]
         )) %>% 
  nice_violin(group = "region",
              response = "donation",
              ytitle = "Amount Donated",
              obs = "jitter",
              groups.order = "increasing")

Region & Familiarity

data %>%
  select(contains("charity") & ends_with("2_1")) %>% 
  pivot_longer(cols = everything(), 
               names_to = "charity",
               values_to = "familiarity") %>% 
  mutate(charity = factor(charity, labels = charities),
         region = case_match(
           charity,
           charities[1:6] ~ regions[1],
           charities[7:12] ~ regions[2],
           charities[13:18] ~ regions[3],
           charities[19:24] ~ regions[4]
         )) %>% 
  nice_violin(group = "region",
              response = "familiarity",
              ytitle = "Familiarity with Charity",
              obs = "jitter",
              groups.order = "increasing")

Time 3

Explicit Attitude

T3_attitude <- data %>%
  select(ID, T1_Group, T3_attitude_1:T3_attitude_9) %>% 
  pivot_longer(cols = T3_attitude_1:T3_attitude_9, 
               names_to = "Group",
               values_to = "attitude") %>% 
  mutate(Group = factor(Group, labels = social.groups),
         Time = 3)

nice_violin(T3_attitude,
            group = "Group",
            response = "attitude",
            ytitle = "Positive Explicit Attitude",
            CIcap.width = 0.3,
            obs = "jitter",
            border.size = 1,
            alpha = 0.7,
            groups.order = "increasing")

Dehumanization

T3_dehumanization <- data %>%
  select(ID, T1_Group, T3_dehumanization_1:T3_dehumanization_9) %>% 
  pivot_longer(cols = T3_dehumanization_1:T3_dehumanization_9, 
               names_to = "Group",
               values_to = "attitude") %>% 
  mutate(Group = factor(Group, labels = social.groups),
         Time = 3)

nice_violin(T3_dehumanization,
            group = "Group",
            response = "attitude",
            ytitle = "Humanization",
            CIcap.width = 0.3,
            obs = "jitter",
            border.size = 1,
            alpha = 0.7,
            groups.order = "increasing")

Over time (change scores)

Explicit Attitude

attitudes <- rbind(T1_attitude, T2_attitude, T3_attitude) %>% 
  pivot_wider(values_from = attitude,
              names_from = Time,
              names_prefix = "attitude_T")

# Correct scores for baseline (change scores) -- left disabled here, so
# attitudes are plotted on the raw scale
# attitudes <- attitudes %>% 
#   mutate(across(starts_with("attitude"), \(x) {x - attitude_T1}))

time <- attitudes %>% select(starts_with("attitude")) %>% names()
p_attitudes <- plot_means_over_time(
  attitudes,
  response = time,
  group = "Group",
  error_bars = TRUE,
  ytitle = "Intergroup Attitude (Positive)",
  groups.order = "decreasing")
p_attitudes

p_attitudes <- plot_means_over_time(
  attitudes %>% filter(T1_Group == "Meditation"),
  response = time,
  group = "Group",
  error_bars = TRUE,
  ytitle = "Attitude (Meditation)",
  groups.order = "decreasing")
p_attitudes

p_attitudes <- plot_means_over_time(
  attitudes %>% filter(T1_Group == "Reflection"),
  response = time,
  group = "Group",
  error_bars = TRUE,
  ytitle = "Attitude (Reflection)",
  groups.order = "decreasing")
p_attitudes

p_attitudes <- plot_means_over_time(
  attitudes %>% filter(T1_Group == "Waitlist"),
  response = time,
  group = "Group",
  error_bars = TRUE,
  ytitle = "Attitude (Waitlist)",
  groups.order = "decreasing")
p_attitudes

Dehumanization

dehumanization <- rbind(T1_dehumanization, T2_dehumanization, T3_dehumanization) %>% 
  pivot_wider(values_from = attitude,
              names_from = Time,
              names_prefix = "attitude_T")

# Correct scores for baseline (change scores); list attitude_T1 last so
# the baseline is subtracted from T2/T3 before being zeroed itself
dehumanization <- dehumanization %>%
  mutate(across(c(attitude_T3, attitude_T2, attitude_T1),
                \(x) {x - attitude_T1}))

time <- dehumanization %>% select(starts_with("attitude")) %>% names()
p_dehumanization <- plot_means_over_time(
  dehumanization,
  response = time,
  group = "Group",
  error_bars = TRUE,
  ytitle = "Humanization",
  groups.order = "decreasing")
p_dehumanization

p_dehumanization <- plot_means_over_time(
  dehumanization %>% filter(T1_Group == "Meditation"),
  response = time,
  group = "Group",
  error_bars = TRUE,
  ytitle = "Humanization - Meditation",
  groups.order = "decreasing")
p_dehumanization

p_dehumanization <- plot_means_over_time(
  dehumanization %>% filter(T1_Group == "Reflection"),
  response = time,
  group = "Group",
  error_bars = TRUE,
  ytitle = "Humanization - Reflection",
  groups.order = "decreasing")
p_dehumanization

p_dehumanization <- plot_means_over_time(
  dehumanization %>% filter(T1_Group == "Waitlist"),
  response = time,
  group = "Group",
  error_bars = TRUE,
  ytitle = "Humanization - Waitlist",
  groups.order = "decreasing")
p_dehumanization

Who Meditated

Let us explore which groups meditated. We can see below that even groups that were not supposed to meditate did meditate. Should we control for this in analyses, and if so, how? How should we discuss this in the paper?
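
One exploratory option (a sketch only, not preregistered; the sections below instead handle this by excluding LKM practitioners from the other groups) would be to covary self-reported practice in the Time 2 models:

# Hypothetical covariate adjustment (sketch): does controlling for
# self-reported meditation practice change the group effect?
lm(T2_CLS ~ T1_Group + T3_post.medipractice, data = data) %>%
  summary()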

Meditation Practice

data %>% 
  mutate(T3_post.medipractice = factor(T3_post.medipractice, 
                                       levels = c("Yes", "No", NA))) %>% 
  grouped_bar_chart("T3_post.medipractice", 
                    "Meditation Practice \nDuring Intervention")

Q: “Have you practiced meditation in the past 13 weeks?”

Meditation Technique

data <- data %>% 
  mutate(T3_medipractice.which_aggregate = case_when(
    grepl("LKM", T3_medipractice.which) ~ "LKM", 
    T3_medipractice.which == "NA" ~ "NA",
    .default = "Other"))

grouped_bar_chart(data, "T3_medipractice.which_aggregate",
                  "LKM Practice \nDuring Intervention",
                  proportion = TRUE)

Q: “What type of meditation have you practiced?”

report(data$T3_medipractice.which)
## x: 21 levels, namely chakras (n = 3, 1.52%), LKM (n = 14, 7.07%), LKM, chakras
## (n = 1, 0.51%), LKM, chakras, transcendental (n = 1, 0.51%), mindfulness (n =
## 41, 20.71%), mindfulness, chakras (n = 1, 0.51%), mindfulness, LKM (n = 16,
## 8.08%), mindfulness, LKM, chakras (n = 1, 0.51%), mindfulness, LKM, chakras,
## other, (amour-propre et healing) (n = 1, 0.51%), mindfulness, LKM, other, (en
## marchant) (n = 1, 0.51%), mindfulness, LKM, other, (J'ai fait les 42 jours de
## méditation par l'étude et de la méditation seul (avec bol)) (n = 1, 0.51%),
## mindfulness, LKM, vipassana (n = 2, 1.01%), mindfulness, other, (Respiration)
## (n = 1, 0.51%), mindfulness, transcendental (n = 1, 0.51%), NA (n = 106,
## 53.54%), other, (Générique, juste se coucher et se vider l'esprit) (n = 1,
## 0.51%), other, (Guidée) (n = 2, 1.01%), other, (Libre pensée /
## se-laisser-porter) (n = 1, 0.51%), other, (Relaxation) (n = 1, 0.51%), other,
## (Respiration-visualisation) (n = 1, 0.51%) and vipassana (n = 1, 0.51%)

Meditation Time

data %>% 
  mutate(T3_medipractice.time = as.factor(T3_medipractice.time),
         T3_medipractice.time = factor(
           T3_medipractice.time, levels = levels(
             .data$T3_medipractice.time)[c(1, 4:6, 3, 2)])) %>% 
  grouped_bar_chart("T3_medipractice.time",
                    "Weekly Meditation Time \nDuring Intervention")

Q: “How often did you meditate per week?”

Motivation to Receive Intervention

data %>% 
  grouped_bar_chart("T3_choice.medicomp", 
                    "Wants to receive \nMeditation Intervention")

Q: “Would you like to receive the meditation program for free?”

Time 2 Contrasts - Exploratory

What if we try excluding participants in the control and reflection groups who practiced LKM?

data_no_LKM <- data %>% 
  filter(T3_medipractice.which_aggregate != "LKM" |
           T1_Group == "Meditation")

# Make list of all models
models.list <- sapply(formulas, lm, data = data_no_LKM, simplify = FALSE, USE.NAMES = TRUE)

# Attempt with nice_lm_contrasts 
set.seed(100)
x <- nice_lm_contrasts(models.list, group = "T1_Group", data = data_no_LKM)

nice_table(x, highlight = TRUE)

| Dependent Variable         | Comparison              | df  | t     | p         | d     | 95% CI         |
|----------------------------|-------------------------|-----|-------|-----------|-------|----------------|
| T2_NOBAGS                  | Meditation - Reflection | 179 | -0.19 | .848      | -0.15 | [-0.56, 0.29]  |
|                            | Meditation - Waitlist   | 179 | -1.52 | .131      | -0.31 | [-0.66, 0.03]  |
|                            | Reflection - Waitlist   | 179 | -1.17 | .242      | -0.16 | [-0.53, 0.24]  |
| T2_attitude                | Meditation - Reflection | 179 | -0.79 | .429      | 0.16  | [-0.25, 0.53]  |
|                            | Meditation - Waitlist   | 179 | 1.61  | .109      | 0.34  | [-0.02, 0.69]  |
|                            | Reflection - Waitlist   | 179 | 2.32  | .022*     | 0.19  | [-0.17, 0.56]  |
| T2_dehumanization          | Meditation - Reflection | 179 | 0.36  | .720      | 0.20  | [-0.22, 0.66]  |
|                            | Meditation - Waitlist   | 179 | 0.61  | .544      | 0.12  | [-0.18, 0.45]  |
|                            | Reflection - Waitlist   | 179 | 0.16  | .871      | -0.08 | [-0.49, 0.31]  |
| T2_IAT                     | Meditation - Reflection | 179 | -0.95 | .345      | -0.40 | [-0.80, 0.01]  |
|                            | Meditation - Waitlist   | 179 | 0.33  | .743      | 0.05  | [-0.26, 0.38]  |
|                            | Reflection - Waitlist   | 179 | 1.32  | .189      | 0.45  | [0.02, 0.85]   |
| T2_SMS5                    | Meditation - Reflection | 182 | -1.07 | .284      | -0.22 | [-0.61, 0.17]  |
|                            | Meditation - Waitlist   | 182 | -1.90 | .059      | -0.33 | [-0.65, 0.02]  |
|                            | Reflection - Waitlist   | 182 | -0.56 | .574      | -0.11 | [-0.44, 0.27]  |
| T2_blastintensity          | Meditation - Reflection | 179 | 1.93  | .055      | 0.11  | [-0.27, 0.54]  |
|                            | Meditation - Waitlist   | 179 | -1.39 | .165      | -0.02 | [-0.33, 0.32]  |
|                            | Reflection - Waitlist   | 179 | -3.33 | .001**    | -0.12 | [-0.47, 0.29]  |
| T2_blastduration           | Meditation - Reflection | 179 | 2.10  | .037*     | 0.23  | [-0.17, 0.61]  |
|                            | Meditation - Waitlist   | 179 | -1.32 | .188      | -0.06 | [-0.38, 0.30]  |
|                            | Reflection - Waitlist   | 179 | -3.47 | .001***   | -0.29 | [-0.67, 0.11]  |
| T2_blastintensity.duration | Meditation - Reflection | 179 | 2.15  | .033*     | 0.17  | [-0.27, 0.57]  |
|                            | Meditation - Waitlist   | 179 | -1.41 | .160      | -0.03 | [-0.39, 0.32]  |
|                            | Reflection - Waitlist   | 179 | -3.59 | < .001*** | -0.20 | [-0.56, 0.20]  |
| T2_memory.altruistic       | Meditation - Reflection | 182 | 1.39  | .166      | 0.28  | [-0.12, 0.66]  |
|                            | Meditation - Waitlist   | 182 | -0.80 | .424      | -0.14 | [-0.47, 0.19]  |
|                            | Reflection - Waitlist   | 182 | -2.24 | .026*     | -0.42 | [-0.82, -0.08] |
| T2_memory.aggressive       | Meditation - Reflection | 182 | 0.45  | .653      | 0.09  | [-0.35, 0.51]  |
|                            | Meditation - Waitlist   | 182 | -0.08 | .937      | -0.01 | [-0.37, 0.30]  |
|                            | Reflection - Waitlist   | 182 | -0.56 | .575      | -0.11 | [-0.49, 0.28]  |
| T2_WHS                     | Meditation - Reflection | 179 | -0.61 | .541      | 0.01  | [-0.39, 0.40]  |
|                            | Meditation - Waitlist   | 179 | 0.40  | .686      | 0.08  | [-0.28, 0.42]  |
|                            | Reflection - Waitlist   | 179 | 1.03  | .303      | 0.07  | [-0.30, 0.43]  |
| T2_CLS                     | Meditation - Reflection | 179 | -2.03 | .043*     | -0.08 | [-0.51, 0.36]  |
|                            | Meditation - Waitlist   | 179 | 2.08  | .039*     | 0.44  | [0.11, 0.81]   |
|                            | Reflection - Waitlist   | 179 | 4.12  | < .001*** | 0.53  | [0.14, 0.91]   |
| T2_Charity                 | Meditation - Reflection | 179 | -0.76 | .451      | -0.15 | [-0.57, 0.28]  |
|                            | Meditation - Waitlist   | 179 | -1.59 | .114      | -0.22 | [-0.56, 0.13]  |
|                            | Reflection - Waitlist   | 179 | -0.62 | .537      | -0.07 | [-0.44, 0.31]  |
| T2_PANAS_pos               | Meditation - Reflection | 182 | 1.04  | .300      | 0.21  | [-0.19, 0.63]  |
|                            | Meditation - Waitlist   | 182 | 2.69  | .008**    | 0.46  | [0.14, 0.80]   |
|                            | Reflection - Waitlist   | 182 | 1.33  | .186      | 0.25  | [-0.17, 0.62]  |
| T2_PANAS_neg               | Meditation - Reflection | 182 | 1.14  | .256      | 0.23  | [-0.14, 0.65]  |
|                            | Meditation - Waitlist   | 182 | 1.41  | .160      | 0.24  | [-0.09, 0.59]  |
|                            | Reflection - Waitlist   | 182 | 0.05  | .962      | 0.01  | [-0.35, 0.38]  |

Time 3 Contrasts - Exploratory

What if we try excluding participants in the control and reflection groups who practiced LKM?

# Make list of all models
models.list2 <- sapply(formulas2, lm, data = data_no_LKM, simplify = FALSE, USE.NAMES = TRUE)

## Attempt with nice_lm_contrasts
set.seed(100)
x2 <- nice_lm_contrasts(models.list2, group = "T1_Group", data = data_no_LKM)

nice_table(x2, highlight = TRUE)

| Dependent Variable         | Comparison              | df  | t     | p       | d     | 95% CI        |
|----------------------------|-------------------------|-----|-------|---------|-------|---------------|
| T3_NOBAGS                  | Meditation - Reflection | 179 | -1.62 | .107    | -0.42 | [-0.85, 0.02] |
|                            | Meditation - Waitlist   | 179 | -1.41 | .160    | -0.30 | [-0.66, 0.01] |
|                            | Reflection - Waitlist   | 179 | 0.48  | .634    | 0.12  | [-0.28, 0.52] |
| T3_attitude                | Meditation - Reflection | 179 | 0.26  | .793    | 0.35  | [-0.05, 0.75] |
|                            | Meditation - Waitlist   | 179 | 1.90  | .059    | 0.37  | [0.03, 0.70]  |
|                            | Reflection - Waitlist   | 179 | 1.43  | .156    | 0.03  | [-0.35, 0.39] |
| T3_dehumanization          | Meditation - Reflection | 179 | 1.09  | .276    | 0.31  | [-0.09, 0.76] |
|                            | Meditation - Waitlist   | 179 | 0.82  | .413    | 0.13  | [-0.18, 0.46] |
|                            | Reflection - Waitlist   | 179 | -0.44 | .659    | -0.19 | [-0.59, 0.23] |
| T3_IAT                     | Meditation - Reflection | 179 | -0.50 | .615    | -0.29 | [-0.70, 0.16] |
|                            | Meditation - Waitlist   | 179 | 1.22  | .223    | 0.20  | [-0.13, 0.52] |
|                            | Reflection - Waitlist   | 179 | 1.63  | .104    | 0.49  | [0.10, 0.90]  |
| T3_blastintensity          | Meditation - Reflection | 179 | 0.83  | .408    | -0.01 | [-0.44, 0.39] |
|                            | Meditation - Waitlist   | 179 | -1.71 | .089    | -0.09 | [-0.42, 0.24] |
|                            | Reflection - Waitlist   | 179 | -2.42 | .017*   | -0.07 | [-0.46, 0.32] |
| T3_blastduration           | Meditation - Reflection | 179 | 1.03  | .306    | 0.10  | [-0.30, 0.52] |
|                            | Meditation - Waitlist   | 179 | -1.19 | .236    | -0.06 | [-0.38, 0.27] |
|                            | Reflection - Waitlist   | 179 | -2.19 | .030*   | -0.16 | [-0.53, 0.20] |
| T3_blastintensity.duration | Meditation - Reflection | 179 | 0.81  | .418    | 0.02  | [-0.39, 0.43] |
|                            | Meditation - Waitlist   | 179 | -1.44 | .151    | -0.07 | [-0.38, 0.28] |
|                            | Reflection - Waitlist   | 179 | -2.17 | .031*   | -0.09 | [-0.48, 0.30] |
| T3_WHS                     | Meditation - Reflection | 179 | -1.02 | .310    | -0.01 | [-0.45, 0.44] |
|                            | Meditation - Waitlist   | 179 | 1.88  | .062    | 0.25  | [-0.12, 0.57] |
|                            | Reflection - Waitlist   | 179 | 2.81  | .005**  | 0.26  | [-0.12, 0.65] |
| T3_CLS                     | Meditation - Reflection | 179 | -1.95 | .053    | -0.09 | [-0.51, 0.32] |
|                            | Meditation - Waitlist   | 179 | 1.54  | .125    | 0.37  | [0.01, 0.70]  |
|                            | Reflection - Waitlist   | 179 | 3.54  | .001*** | 0.47  | [0.06, 0.85]  |

Conclusion: Excluding participants in the other groups who meditated does not change the results. Let’s plot the data just to see if the trends look similar.

Means over time - Exploratory

What if we try excluding participants in the control and reflection groups who practiced LKM?

data_delta <- data_no_LKM %>% 
  group_by(T1_Group) %>% 
  mutate(
    # List each T1_* baseline column last in its across(): the columns
    # are updated in the order listed, so this guarantees the baseline
    # is subtracted from the T2/T3 scores before it is itself zeroed
    # (ends_with() would process T1 first and corrupt the change scores)
    across(c(T3_NOBAGS, T2_NOBAGS, T1_NOBAGS), \(x) {x - T1_NOBAGS}),
    across(c(T3_attitude, T2_attitude, T1_attitude), \(x) {x - T1_attitude}),
    across(c(T3_dehumanization, T2_dehumanization, T1_dehumanization),
           \(x) {x - T1_dehumanization}),
    across(c(T3_IAT, T2_IAT, T1_IAT), \(x) {x - T1_IAT}),
    across(c(T3_blastintensity, T2_blastintensity, T1_blastintensity),
           \(x) {x - T1_blastintensity}),
    across(c(T3_blastduration, T2_blastduration, T1_blastduration),
           \(x) {x - T1_blastduration}),
    across(c(T3_blastintensity.duration, T2_blastintensity.duration,
             T1_blastintensity.duration),
           \(x) {x - T1_blastintensity.duration}),
    across(c(T3_WHS, T2_WHS, T1_WHS), \(x) {x - T1_WHS}),
    across(c(T3_CLS, T2_CLS, T1_CLS), \(x) {x - T1_CLS})
    )
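
# Sanity check (sketch): after baseline correction, every Time 1 change
# score should be exactly 0
data_delta %>%
  ungroup() %>%
  summarize(across(c(T1_NOBAGS, T1_WHS, T1_CLS), \(x) all(x == 0)))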

time <- data_no_LKM %>% select(ends_with("NOBAGS")) %>% names()
p_NOBAGS <- plot_means_over_time(
  data_delta,
  response = time,
  group = "T1_Group",
  error_bars = TRUE,
  ytitle = "Aggression Attitude")

time <- data_no_LKM %>% select(ends_with("attitude")) %>% names()
p_attitude <- plot_means_over_time(
  data_delta,
  response = time,
  group = "T1_Group",
  error_bars = TRUE,
  ytitle = "Positive Intergroup Attitude")

time <- data_no_LKM %>% select(ends_with("dehumanization")) %>% names()
p_dehumanization <- plot_means_over_time(
  data_delta,
  response = time,
  group = "T1_Group",
  error_bars = TRUE,
  ytitle = "Humanization")

time <- data_no_LKM %>% select(ends_with("IAT")) %>% names()
p_IAT <- plot_means_over_time(
  data_delta,
  response = time,
  group = "T1_Group",
  error_bars = TRUE,
  ytitle = "Implicit Aggression (IAT)")

time <- data_no_LKM %>% select(ends_with("blastintensity")) %>% names()
p_blastintensity <- plot_means_over_time(
  data_delta,
  response = time,
  group = "T1_Group",
  error_bars = TRUE)

time <- data_no_LKM %>% select(ends_with("blastduration")) %>% names()
p_blastduration <- plot_means_over_time(
  data_delta,
  response = time,
  group = "T1_Group",
  error_bars = TRUE)

time <- data_no_LKM %>% select(ends_with("blastintensity.duration")) %>% names()
p_blastintensity.duration <- plot_means_over_time(
  data_delta,
  response = time,
  group = "T1_Group",
  error_bars = TRUE,
  ytitle = "Aggression")

time <- data_no_LKM %>% select(ends_with("WHS")) %>% names()
p_WHS <- plot_means_over_time(
  data_delta,
  response = time,
  group = "T1_Group",
  error_bars = TRUE,
  ytitle = "Willingness to help")

time <- data_no_LKM %>% select(ends_with("CLS")) %>% names()
p_CLS <- plot_means_over_time(
  data_delta,
  response = time,
  group = "T1_Group",
  error_bars = TRUE,
  ytitle = "Compassionate Love")

p_CLS <- p_CLS +
  theme(legend.position = "none")
p_blastintensity.duration <- p_blastintensity.duration +
  theme(legend.position = "none")
p_dehumanization <- p_dehumanization +
  theme(legend.position = "none")
p_IAT <- p_IAT +
  theme(legend.position = "none")

plots(p_CLS, p_WHS, p_blastintensity.duration, p_attitude, 
      p_dehumanization, p_NOBAGS, p_IAT, n_columns = 2)

Error bars represent 95% confidence intervals adjusted for within-subject variance using the method of Morey (2008).
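
For reference, here is a minimal sketch of that correction (assuming long-format data with subject, time, and value columns; plot_means_over_time applies its own implementation internally, so this is illustrative only):

morey_ci <- function(df, conf = 0.95) {
  # Cousineau normalization: remove each subject's mean, add back the
  # grand mean; Morey (2008) then inflates the variance of the
  # normalized scores by M / (M - 1), where M = number of time points
  M <- n_distinct(df$time)
  grand_mean <- mean(df$value)
  df %>%
    group_by(subject) %>%
    mutate(norm = value - mean(value) + grand_mean) %>%
    group_by(time) %>%
    summarize(
      mean = mean(value),
      n = n(),
      se = sd(norm) / sqrt(n) * sqrt(M / (M - 1)),
      ci = se * qt((1 + conf) / 2, n - 1),
      .groups = "drop"
    )
}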

Discussion

In this report, we aimed to compare two types of loving-kindness intervention, one more embodied, based on meditation, and one more cognitive, based on intellectual reflection, to a waitlist control group. We compared those groups on several variables relating to prosociality (i.e., affect, attitudes, and behaviour). Groups were measured three times, at Week 0 (T1), Week 6 (T2), and Week 13 (T3), so that we could not only correct for baseline but also see how robust the effects, if any, were over time. We were also interested in assessing whether these effects depend on other personality variables (i.e., moderators).

Group Differences at Time 2

Our contrast analyses first revealed group differences at Time 2. Both the meditation and reflection groups showed moderately more compassionate love than the waitlist group, but only the meditation group showed moderately more positive affect than the waitlist group. However, the reflection group showed slightly more positive explicit attitudes toward various social groups, as well as moderately shorter reaction times to remember an altruistic event, than the waitlist group (suggesting that altruism was more cognitively accessible to them). Furthermore, the reflection group showed slightly lower behavioural aggression (blast intensity, blast duration, and blast intensity * duration) than both the waitlist group and the meditation group.

Group Differences at Time 3

Our contrast analyses also revealed group differences at Time 3. However, only the reflection group showed lasting positive effects on attitudes (still a small effect), behavioural aggression (still a small effect), and compassion (still a moderate effect), suggesting these effects are durable over time. Furthermore, the reflection group showed a delayed-onset effect on willingness to help: at Time 3, they were slightly more willing to help in various hypothetical scenarios than the control group.

Moderations at Time 2

First, attitudes toward aggression (NOBAGS) moderated only one variable, blast duration. In short, while NOBAGS does not affect blast duration in the control condition, it relates to higher blast duration in the depletion condition. Although this result is theoretically consistent with the literature, it is likely a false positive, given our high number of tests, the fact that this is the only variable that NOBAGS moderates, and that the p value is relatively close to .05. Furthermore, that variable (blast duration alone) was not in the preregistration, so we might not report this finding.
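
To gauge the multiple-testing concern, one option (a sketch only, not part of the preregistered pipeline; the p values below are placeholders, not our results) would be to apply a false discovery rate correction across the full set of moderation tests:

# Hypothetical multiplicity check: FDR-adjust a vector of moderation
# p values (placeholder values shown, not actual results)
p_values <- c(0.04, 0.01, 0.03, 0.20, 0.60)
p.adjust(p_values, method = "fdr")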

Second, implicit aggression (IAT) seems to have moderated several variables. As with NOBAGS, it moderated blast duration, but this time in a three-way interaction. Surprisingly, for the meditation group, implicit aggression related to lower aggression, but only when depleted, whereas there was no such interaction in the waitlist group. However, as mentioned before, that variable (blast duration alone) was not in the preregistration, so we might not report this finding.

Third, implicit aggression also moderated compassionate love, again in a three-way interaction. For the waitlist group, the effect of implicit aggression clearly depends on depletion: implicit aggression relates to lower compassion in the control group (expected), but to higher compassion in the depletion group (unexpected). However, for the meditation group, the effect was absent or partly reversed.

Fourth, implicit aggression also moderated reaction time to remember an altruistic event, again in a three-way interaction. For the meditation group, higher implicit aggression relates to shorter reaction time (unexpected), unless they are depleted. However, for the waitlist group, the effect was absent or partly reversed.

Conclusion

In conclusion, there seem to be group differences at Times 2 and 3 between the experimental conditions and the control group. However, the effects in the reflection group appear not only stronger, but also more robust (i.e., they are the only ones lasting at Time 3). Furthermore, there are also several three-way interactions between implicit attitudes, ego depletion, and group, as expected. However, the nature of these interactions does not seem to perfectly align with our original predictions. A deeper exploration of the meaning of these interactions will be required.

Full Code & References

The package references and the full script of executable code contained in this document are reproduced in the tabs below.

Package References

sessionInfo() %>% report %>% summary

The analysis was done using the R Statistical language (v4.4.1; R Core Team, 2024) on Windows 11 x64, using the packages iterators (v1.0.14), GPArotation (v2024.3.1), datscience (v0.2.6), pwr (v1.3.0), doParallel (v1.0.17), flextable (v0.9.6), eefAnalytics (v1.0.6), ggpubr (v0.6.0), survminer (v0.4.9), emmeans (v1.10.4), interactions (v1.2.0), sjlabelled (v1.2.0), performance (v0.12.3), see (v0.9.0), modelbased (v0.8.8), report (v0.5.9), foreach (v1.5.2), datawizard (v0.12.3), patchwork (v1.2.0.9000), bestNormalize (v1.9.1), missForest (v1.5), rempsyc (v0.1.8.2), survival (v3.7.0), visdat (v0.6.0), naniar (v1.1.0), ggplot2 (v3.5.0), dplyr (v1.1.4), tidyr (v1.3.1) and psych (v2.4.6.26).

report::cite_packages(sessionInfo())

Full Code

library(dplyr)
library(interactions)
library(performance)
library(see)
library(report)
library(datawizard)
library(modelbased)
library(ggplot2)
library(bestNormalize)
library(psych)
library(GPArotation)
library(visdat)
library(missForest)
library(doParallel)
library(ggplot2)
library(emmeans)
library(sjlabelled)
library(tidyr)
library(tools)
library(flextable)
library(pwr)
library(patchwork)
library(ggpubr)
library(survival)
library(survminer)
if(packageVersion("rempsyc") < "0.1.7.6") stop("Please install 'rempsyc' package version > '1.1.7.6'")
library(rempsyc)
# remotes::install_github("Buedenbender/datscience")
library(datscience)
# remotes::install_version("eefAnalytics", version = "1.0.6", repos = "http://cran.us.r-project.org")
if(packageVersion("eefAnalytics") > "1.0.6") stop("Please install 'eefAnalytics' package version '1.0.6'")
library(eefAnalytics)

# Read data
# data <- readRDS("Data/finaldataset_n496.rds")
# inner.join <- FALSE
data <- readRDS("Data/finaldataset_n217.rds")
inner.join <- TRUE

report_participants(data, threshold = 1) %>% cat

# Allocation ratio
report(data$T1_Group)
report(data$T2_Condition)

sessionInfo() %>% report %>% summary

report::cite_packages(sessionInfo())
data <- data %>% 
  mutate(part.percent = convert_na_to(part.percent, 1))

data %>% 
  filter(part.percent < 2/3) %>% 
  count(T1_Group)

data2 <- data

data <- data %>% 
  filter(part.percent >= 2/3)

report_participants(data, threshold = 1) %>% cat

# Allocation ratio
report(data$T1_Group)
report(data$T2_Condition)

data <- data %>% 
    mutate(att_check = rowSums(
      select(., T1_attention1, T2_attention1, T3_attention1), na.rm = TRUE))

data %>% 
  count(att_check)

data %>% 
  group_by(T1_Group) %>% 
  count(att_check)

data <- data %>% 
  filter(att_check >= 2)

report_participants(data, threshold = 1) %>% cat

report_participants(data, threshold = 1, by = "T1_Group") %>% cat

# Allocation ratio
report(data$T1_Group)
report(data$T2_Condition)

get_label(data$T1_recruitment) %>% cat

report(data$T1_recruitment)

data %>% 
    count(T1_recruitment, sort = TRUE)

data %>% 
  nice_density("age", histogram = TRUE)

data %>% 
    count(gender, sort = TRUE)

data <- data %>% 
  var_labels(T1_psycho.class = "Have you already completed a psychology course?")

get_label(data$T1_psycho.class) %>% cat

data %>% 
    count(T1_psycho.class, sort = TRUE)

get_label(data$T1_virtual.reality) %>% cat

data %>% 
    count(T1_virtual.reality, sort = TRUE)

data <- data %>% 
  var_labels(T1_medi.experience = "What is your meditation experience?")

get_label(data$T1_medi.experience) %>% cat

data %>% 
    count(T1_medi.experience, sort = TRUE, .drop = FALSE)

get_label(data$T1_disorders) %>% cat

data %>% 
    count(T1_disorders, sort = TRUE)

get_label(data$T1_vision) %>% cat

data %>% 
    count(T1_vision, sort = TRUE)

get_label(data$T1_phone) %>% cat

data %>% 
    count(T1_phone, sort = TRUE)

get_label(data$T1_quebec) %>% cat

data %>% 
    count(T1_quebec, sort = TRUE)

data <- data %>% 
  var_labels(T1_student = "Are you a student?")

get_label(data$T1_student) %>% cat

data %>% 
    count(T1_student, sort = TRUE)

get_label(data$T1_student.program_cat) %>% cat

report(data$T1_student.program)

data %>% 
    count(T1_student.program_cat, sort = TRUE) %>% 
  filter(n > 3)

get_label(data$T1_workplace) %>% cat

report(data$T1_workplace)

data %>% 
    count(T1_workplace_cat, sort = TRUE) %>% 
  filter(n > 1)

get_label(data$T3_post.medipractice) %>% cat

data %>% 
    count(T3_post.medipractice, sort = TRUE)

get_label(data$T3_medipractice.which) %>% cat

report(data$T3_medipractice.which)

data %>% 
    count(T3_medipractice.which, sort = TRUE)

get_label(data$T3_medipractice.time) %>% cat

data %>% 
    count(T3_medipractice.time, sort = TRUE)

get_label(data$T3_choice.medicomp) %>% cat

data %>% 
    count(T3_choice.medicomp, sort = TRUE)

data <- data %>% 
  mutate(T3_medipractice.time = ifelse(
    is.na(T3_medipractice.time), "NA", T3_medipractice.time))

get_label(data$T3_followup) %>% cat

data %>% 
    count(T3_followup, sort = TRUE)

get_label(data$T3_consent) %>% cat

data %>% 
    count(T3_consent, sort = TRUE)

# Part1: setting up the data ----------------------------------------------
data_surv <- readRDS("Data/finaldataset_n496.rds")

# Add Time points completions
any_not_na <- \(x) any(!is.na(x))

data_surv <- data_surv %>% 
  rowwise() %>% 
  mutate(T1_completed = any_not_na(pick(contains("T1_"))),
         T2_completed = any_not_na(pick(contains("T2_"))),
         T3_completed = any_not_na(pick(contains("T3_"))),
         T_completed = rowSums(pick(T1_completed:T3_completed)))

## Step 1 ------------------------------------------------------------------

# Step 1 = calculate number of questionnaires completed...
# I think we can make the decision here as NOT MISSING, instead of
# completed more than 70%... it's not about how much they engaged in the
# treatment here... but rather about whether they opened the survey at all,
# that they are still checking their emails.

# For now we should only focus on the intervention groups and ignore waitlist...
# So we should have...
n1 <- data_surv %>% 
  filter(T1_Group != "Waitlist") %>% 
  count(T1_Group)
n1
sum(n1$n)

# 306 people
# But the participation file only has 285 rows... hmm
# But importantly, data_surv doesn't contain the raw exercise or activity data,
# just the questionnaires... so likely more people did the questionnaires but
# didn't do the exercises...
# And only 154 in the reflection group

# Also ChatGPT suggests excluding participants who failed the attention checks
# BEFORE doing the survival analysis for consistency with my inclusion
# criteria for my main analyses... so here we go

# data_surv <- data_surv %>% 
#   mutate(att_check = rowSums(
#     select(., T1_attention1, T2_attention1, T3_attention1), na.rm = TRUE)) %>% 
#   filter(att_check >= 2)
# But that wouldn't work because it would underestimate dropout since all the
# dropouts of course couldn't do the attention checks

# We would also have to deal with activities (weekly) vs exercises (daily)... but maybe ignore activities for now for the sake of simplicity...

# For the exercises for reflection group for instance, 
# maybe that makes sense that we have a small sample, because 
# med.ex2 gives us only 124 unique rows of all surveys
# > lapply(med.ex2, \(x) nrow(x)) %>% unlist %>% max()
# [1] 124
# And even med.ex gives us 137 (not necessarily unique) rows of all surveys
# > lapply(med.ex, \(x) nrow(x)) %>% unlist %>% max()
# [1] 137
# So I think we can go for the 124 reflection group exercises number...

D_duration_percentage <- paste0("D", 1:42, "_duration_percentage")
W_duration_percentage <- paste0("W", 1:6, "_duration_percentage")
duration_percentage <- c(D_duration_percentage, W_duration_percentage)
# Mark any completed exercise/activity as 1 (keeping NA as NA) so that
# rowSums(..., na.rm = TRUE) below counts completions
non_na_to_1 <- \(x) {ifelse(is.na(x), x, 1)}

data_surv <- data_surv %>% 
  mutate(across(all_of(duration_percentage), non_na_to_1),
  exercise_completed_discrete = rowSums(across(all_of(
    D_duration_percentage)), na.rm = TRUE),
  activity_completed_discrete = rowSums(across(all_of(
    W_duration_percentage)), na.rm = TRUE)) %>% 
  select(T1_Group, part.percent, 
         exercise_completed_discrete, 
         activity_completed_discrete,
         T_completed,
         age:T1_quebec)

## Step 2 ------------------------------------------------------------------

# Step 2 = determine binary dropout status: 0 = retained (censored),
# 1 = dropped out (event)

data_surv <- data_surv %>% 
  mutate(Status = ifelse(part.percent >= 2/3, 0, 1),
         Status = ifelse(is.na(Status), 1, Status),
         completed_discrete = exercise_completed_discrete + activity_completed_discrete,
         .after = "T1_Group") %>% 
  as.data.frame()
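
# Quick sanity check of the event coding (sketch): in Surv(time, status),
# status = 1 marks an observed dropout (event) and status = 0 a
# right-censored observation (retained through the study)
survival::Surv(c(3, 2), c(0, 1))  # prints "3+ 2": first censored, second an event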

# Part 2: Analysis proper -------------------------------------------------

## Time points only ------------------------------------------------------------------

# Next, we look at survival curves by treatment.
fit <- survfit(Surv(T_completed, Status) ~ T1_Group, data = data_surv)

# add method to grid.draw to avoid error in survminer when printing
grid.draw.ggsurvplot <- function(x){
  survminer:::print.ggsurvplot(x, newpage = FALSE)
}

# Enhanced plot using survminer
loss_to_followup <- ggsurvplot(
  fit, 
  data = data_surv,
  xlab = "Number of Surveys",
  ylab = "Dropout Probability (questionnaires only)",
  title = "Survival Curve Based on Kaplan-Meier Estimates (Thériault et al., 2024)",
  caption = "Error bands = 95% confidence bands. Dotted lines = median survival values.",
  font.caption = 13,
  conf.int = TRUE,
  pval = TRUE,
  pval.method = TRUE,
  risk.table = TRUE,
  legend.title = "",
  legend.labs = c("Meditation", "Reflection", "Waitlist"),
  font.main = c(14, "bold"),
  font.x = c(12, "plain"),
  font.y = c(12, "plain"),
  font.tickslab = c(10, "plain"),
  break.x.by = 1,
  surv.median.line = "hv"
)
loss_to_followup

# Save a high-res .png image file
ggsave("Figure X_survival_T1-T3.pdf", plot = loss_to_followup, 
       width = 9, height = 7, unit = "in", dpi = 300)
ggsave("Figure X_survival_T1-T3.png", plot = loss_to_followup, 
       width = 9, height = 7, unit = "in", dpi = 300)

# Perform the log-rank test
survdiff(Surv(T_completed, Status) ~ T1_Group, data = data_surv)

# Fit a Cox model
cox_model <- coxph(Surv(T_completed, Status) ~ T1_Group + age + gender +
                     #T1_already.participated + 
                     T1_psycho.class + 
                     T1_virtual.reality + T1_medi.experience #+ 
                   #T1_phone #+ 
                   #T1_quebec
                   , 
                   data = data_surv)

# View the summary
summary(cox_model)

# Visualize the Cox model
ggforest(cox_model, data = data_surv)
## By treatment ------------------------------------------------------------------

# We have to exclude waitlist for the time being
data_surv <- data_surv %>%
  filter(T1_Group != "Waitlist")
# Or instead recode it properly...
# data_surv <- data_surv %>%
#   mutate(completed_discrete = case_when(
#     T1_Group == "Waitlist" ~ T_completed * 20,
#     TRUE ~ completed_discrete))

# Next, we look at survival curves by treatment.
fit <- survfit(Surv(completed_discrete, Status) ~ T1_Group, data = data_surv)

# Enhanced plot using survminer
nonusage <- ggsurvplot(
  fit, 
  data = data_surv,
  xlab = "Number of Surveys",
  ylab = "Dropout Probability",
  title = "Survival Curve Based on Kaplan-Meier Estimates (Thériault et al., 2024)",
  caption = "Error bands = 95% confidence bands. Dotted lines = median survival values.",
  font.caption = 13,
  conf.int = TRUE,
  pval = TRUE,
  pval.method = TRUE,
  risk.table = TRUE,
  palette = c("#F8766D", "#53B400"),
  legend.title = "",
  legend.labs = c("Meditation", "Reflection"),
  # legend.labs = c("Meditation", "Reflection", "Waitlist"),
  font.main = c(14, "bold"),
  font.x = c(12, "plain"),
  font.y = c(12, "plain"),
  font.tickslab = c(10, "plain"),
  surv.median.line = "hv",
  break.time.by = 8
)
nonusage

# Save a high-res .png image file
ggsave("Figure X_survival.pdf", plot = nonusage, width = 9, height = 7, unit = "in", dpi = 300)
ggsave("Figure X_survival.png", plot = nonusage, width = 9, height = 7, unit = "in", dpi = 300)

# Perform the log-rank test
survdiff(Surv(completed_discrete, Status) ~ T1_Group, data = data_surv)

# Fit a Cox model
cox_model <- coxph(Surv(completed_discrete, Status) ~ T1_Group + age + gender +
                     #T1_already.participated + 
                     T1_psycho.class + 
                     T1_virtual.reality + T1_medi.experience #+ 
                   #T1_phone #+ 
                   #T1_quebec
                   , 
                   data = data_surv)

# View the summary
summary(cox_model)

# Visualize the Cox model
ggforest(cox_model, data = data_surv)

## Exercises only ------------------------------------------------------------------

# Or instead recode it properly...
# data_surv <- data_surv %>%
#   mutate(exercise_completed_discrete = case_when(
#     T1_Group == "Waitlist" ~ T_completed * 14,
#     TRUE ~ exercise_completed_discrete))

# Next, we look at survival curves by treatment.
fit <- survfit(Surv(exercise_completed_discrete, Status) ~ T1_Group, data = data_surv)

# Enhanced plot using survminer
survp <- ggsurvplot(
  fit, 
  data = data_surv,
  xlab = "Number of Surveys",
  ylab = "Dropout Probability (exercises only)",
  title = "Survival Curve Based on Kaplan-Meier Estimates (Thériault et al., 2024)",
  caption = "Error bands = 95% confidence bands. Dotted lines = median survival values.",
  font.caption = 13,
  conf.int = TRUE,
  pval = TRUE,
  pval.method = TRUE,
  risk.table = TRUE,
  palette = c("#F8766D", "#53B400"),
  legend.title = "",
  legend.labs = c("Meditation", "Reflection"),
  # legend.labs = c("Meditation", "Reflection", "Waitlist"),
  font.main = c(14, "bold"),
  font.x = c(12, "plain"),
  font.y = c(12, "plain"),
  font.tickslab = c(10, "plain"),
  surv.median.line = "hv",
  break.time.by = 7,
  xlim = c(0, 43)
)
survp

# Save a high-res .png image file
ggsave("Figure X_survival_exercise.pdf", plot = survp, 
       width = 9, height = 7, unit = "in", dpi = 300)
ggsave("Figure X_survival_exercise.png", plot = survp, 
       width = 9, height = 7, unit = "in", dpi = 300)

# Perform the log-rank test
survdiff(Surv(exercise_completed_discrete, Status) ~ T1_Group, data = data_surv)

# Fit a Cox model
cox_model <- coxph(Surv(exercise_completed_discrete, Status) ~ T1_Group + age + gender +
                     #T1_already.participated + 
                     T1_psycho.class + 
                     T1_virtual.reality + T1_medi.experience #+ 
                   #T1_phone #+ 
                   #T1_quebec
                   , 
                   data = data_surv)

# View the summary
summary(cox_model)

# Visualize the Cox model
ggforest(cox_model, data = data_surv)

## Activities only ------------------------------------------------------------------

# Next, we look at survival curves by treatment.
fit <- survfit(Surv(activity_completed_discrete, Status) ~ T1_Group, data = data_surv)

# Enhanced plot using survminer
survp <- ggsurvplot(
  fit, 
  data = data_surv,
  xlab = "Number of Surveys",
  ylab = "Dropout Probability (activities only)",
  title = "Survival Curve Based on Kaplan-Meier Estimates (Thériault et al., 2024)",
  caption = "Error bands = 95% confidence bands. Dotted lines = median survival values.",
  font.caption = 13,
  conf.int = TRUE,
  pval = TRUE,
  pval.method = TRUE,
  risk.table = TRUE,
  palette = c("#F8766D", "#53B400"),
  legend.title = "",
  legend.labs = c("Meditation", "Reflection"),
  font.main = c(14, "bold"),
  font.x = c(12, "plain"),
  font.y = c(12, "plain"),
  font.tickslab = c(10, "plain"),
  surv.median.line = "hv"
)
survp

# Save a high-res .png image file
ggsave("Figure X_survival_activity.pdf", plot = survp, 
       width = 9, height = 7, unit = "in", dpi = 300)
ggsave("Figure X_survival_activity.png", plot = survp, 
       width = 9, height = 7, unit = "in", dpi = 300)

# Perform the log-rank test
survdiff(Surv(activity_completed_discrete, Status) ~ T1_Group, data = data_surv)

# Fit a Cox model
cox_model <- coxph(Surv(activity_completed_discrete, Status) ~ T1_Group + age + gender +
                     #T1_already.participated + 
                     T1_psycho.class + 
                     T1_virtual.reality + T1_medi.experience #+ 
                   #T1_phone #+ 
                   #T1_quebec
                   , 
                   data = data_surv)

# View the summary
summary(cox_model)

# Visualize the Cox model
ggforest(cox_model, data = data_surv)

loss_to_followup
nonusage
str_formula <- "~ age + gender + T1_psycho.class + T1_medi.experience | T1_Group"
table_caption <- c("Table 1", "Sample demographics split by intervention group")

data <- data %>% 
  var_labels(
    age = "Age",
    gender = "Gender",
    T1_medi.experience = "Meditation experience",
    T3_post.medipractice = "Practiced meditation over last 13 weeks",
    T1_psycho.class = "Already completed a psychology course")

x <- flex_table1(str_formula, data = data, table_caption = table_caption)
x

save_flextable(x, "Results/table1.docx", overwrite = TRUE)

# Check for nice_na
nice_na(data, scales = c(
  "T1_BSCS", "T1_BAQ", "T1_NOBAGS", "T1_attitude", "T1_dehumanization", 
  "T1_WHS", "T1_CLS", "T2_NOBAGS", "T2_attitude", "T2_dehumanization", 
  "T2_SMS5", "T2_PANAS", "T2_WHS", "T2_CLS", "T2_charity", "T3_NOBAGS", 
  "T3_attitude", "T3_dehumanization", "T3_WHS", "T3_CLS"))


# Smaller subset of data for easier inspection
data %>%
  # select(manualworkerId:att_check2_raw, 
  #        condition:condition_dum) %>%
  vis_miss

# Let's use Little's MCAR test to confirm
# We have to proceed by "scale" because the function can only
# support 30 variables max at a time
library(naniar)

# We only check for the variable that had missing data, charity

# Have to divide this one in two because it is too large for the function
data %>% 
  select(T2_charity.moisson1_1:T2_charity.suzuki2_1) %>% 
  mcar_test

data %>% 
  select(T2_charity.conserv1_1:T2_charity.armee2_1) %>% 
  mcar_test

# Need logical and character variables as factors for missForest
# "Error: Can not handle categorical predictors with more than 53 categories."
new.data <- data %>% 
  select(-c(T1_student.program, # T1_student.program = Too many categories (> 53)
            contains("text.answer"),
            T1_already.participated, # T1_already.participated = lead to error
            T1_workplace, # T1_workplace = lead to error (only when n = 496)
            ID, # have to remove
            T3_consent)) %>% # T3_consent = lead to error
  mutate(across(c(where(is.character), where(is.logical)), as.factor)) %>% 
  as.data.frame()
# sapply(data %>% select(!where(is.numeric)), \(x) length(unique(x)))

# Parallel processing
registerDoParallel(cores = 11)

# Variables
set.seed(100)
time1 <- Sys.time()
data.imp <- missForest(new.data, verbose = TRUE, parallelize = "variables")
# Imputation timing depends on the number of cores (see notes below)
time2 <- Sys.time()
total_time <- time2-time1
total_time

# 2 cores = 2 min
# 4 cores = 41 sec
# 6 cores = 49 sec
# 8 cores = 1 min
# 9 cores = 57 sec
# 10 cores = 25 sec
# 11 cores = 18 sec
# 12 cores = 1 min
# 15 cores = 42 sec

# Add back ID
data.imp$ximp$ID <- data$ID

# Extract imputed dataset
data <- data.imp$ximp

# Copy all BSCS items to "{col}r" duplicates, then reverse code items
# 2, 4, 6, 7 (so rowMeans below can use the r columns throughout)
data <- data %>% 
  mutate(across(contains("BSCS"), .names = "{col}r"))

data <- data %>% 
  mutate(across(ends_with(paste("BSCS", c(2, 4, 6, 7), sep = "_")), 
                ~nice_reverse(.x, 5), .names = "{col}r"))

# Get mean BSCS
data <- data %>% 
  mutate(T1_BSCS = rowMeans(pick(T1_BSCS_1r:T1_BSCS_7r)))

# Get alpha & omega
data %>% 
  select(T1_BSCS_1r:T1_BSCS_7r) %>% 
  omega(nfactors = 1)

# Reverse code item 7
data <- data %>% 
  mutate(across(contains("BAQ"), .names = "{col}r"))

data <- data %>% 
  mutate(across(T1_BAQ_7, ~nice_reverse(.x, 7), .names = "{col}r"))

# Get mean BAQ
data <- data %>% 
  mutate(T1_BAQ = rowMeans(pick(T1_BAQ_1r:T1_BAQ_12r)))

# Get alpha & omega
data %>% 
  select(T1_BAQ_1r:T1_BAQ_12r) %>% 
  omega(nfactors = 1)

data <- data %>% 
  mutate(T1_attitude = rowMeans(pick(T1_attitude_1:T1_attitude_9)),
         T2_attitude = rowMeans(pick(T2_attitude_1:T2_attitude_9)),
         T3_attitude = rowMeans(pick(T3_attitude_1:T3_attitude_9)))

# Get alpha & omega
data %>% 
  select(T1_attitude_1:T1_attitude_9) %>% 
  omega(nfactors = 1)

data %>% 
  select(T2_attitude_1:T2_attitude_9) %>% 
  omega(nfactors = 1)

data %>% 
  select(T3_attitude_1:T3_attitude_9) %>% 
  omega(nfactors = 1)

data <- data %>% 
  mutate(T1_dehumanization = rowMeans(pick(T1_dehumanization_1:T1_dehumanization_9)),
         T2_dehumanization = rowMeans(pick(T2_dehumanization_1:T2_dehumanization_9)),
         T3_dehumanization = rowMeans(pick(T3_dehumanization_1:T3_dehumanization_9)))

# Get alpha & omega
data %>% 
  select(T1_dehumanization_1:T1_dehumanization_9) %>% 
  omega(nfactors = 1)

data %>% 
  select(T2_dehumanization_1:T2_dehumanization_9) %>% 
  omega(nfactors = 1)

data %>% 
  select(T3_dehumanization_1:T3_dehumanization_9) %>% 
  omega(nfactors = 1)

# Create "r"-suffixed copies of all NOBAGS items
data <- data %>% 
  mutate(across(contains("NOBAGS"), .names = "{col}r"))

# Reverse code NOBAGS (items 1:2, 5:6, 10, 12, 14:16, 20)
data <- data %>%
  mutate(across(ends_with(paste0("NOBAGS.", c(
    "1_1", "1_2", "3_1", "3_2", "6_1", "8_1", "10_1", "12_1", "16_1"))), 
    ~nice_reverse(.x, 4), .names = "{col}r"))

# Get mean NOBAGS
data <- data %>% 
  mutate(T1_NOBAGS = rowMeans(pick(T1_NOBAGS.1_1r:T1_NOBAGS.16_1r)),
         T2_NOBAGS = rowMeans(pick(T2_NOBAGS.1_1r:T2_NOBAGS.16_1r)),
         T3_NOBAGS = rowMeans(pick(T3_NOBAGS.1_1r:T3_NOBAGS.16_1r)))

# Get alpha & omega
data %>% 
  select(T1_NOBAGS.1_1r:T1_NOBAGS.16_1r) %>% 
  omega(nfactors = 1)

data %>% 
  select(T2_NOBAGS.1_1r:T2_NOBAGS.16_1r) %>% 
  omega(nfactors = 1)

data %>% 
  select(T3_NOBAGS.1_1r:T3_NOBAGS.16_1r) %>% 
  omega(nfactors = 1)

data <- data %>% 
  mutate(T1_WHS = rowMeans(pick(T1_WHS_1:T1_WHS_6)),
         T2_WHS = rowMeans(pick(T2_WHS_1:T2_WHS_6)),
         T3_WHS = rowMeans(pick(T3_WHS_1:T3_WHS_6)))

# Get alpha & omega
data %>% 
  select(T1_WHS_1:T1_WHS_6) %>% 
  omega(nfactors = 1)

data %>% 
  select(T2_WHS_1:T2_WHS_6) %>% 
  omega(nfactors = 1)

data %>% 
  select(T3_WHS_1:T3_WHS_6) %>% 
  omega(nfactors = 1)

data <- data %>% 
  mutate(T1_CLS = rowMeans(pick(T1_CLS_1:T1_CLS_21)),
         T2_CLS = rowMeans(pick(T2_CLS_1:T2_CLS_21)),
         T3_CLS = rowMeans(pick(T3_CLS_1:T3_CLS_21)))

# Get alpha & omega
data %>% 
  select(T1_CLS_1:T1_CLS_21) %>% 
  omega(nfactors = 1)

data %>% 
  select(T2_CLS_1:T2_CLS_21) %>% 
  omega(nfactors = 1)

data %>% 
  select(T3_CLS_1:T3_CLS_21) %>% 
  omega(nfactors = 1)

# Create "r"-suffixed copies of all SMS5 items
data <- data %>% 
  mutate(across(contains("SMS5"), .names = "{col}r"))

# Reverse code SMS5 (items 3 and 5)
data <- data %>%
  mutate(across(ends_with(paste0("SMS5_", c(3, 5))), 
                ~nice_reverse(.x, 5), .names = "{col}r"))

# Get mean SMS5
data <- data %>% 
  mutate(T2_SMS5 = rowMeans(pick(T2_SMS5_1r:T2_SMS5_6r)))

# Get alpha & omega
data %>% 
  select(T2_SMS5_1r:T2_SMS5_6r) %>% 
  omega(nfactors = 1)

# Get mean of PANAS
# Positive affect = 1, 3, 5, 7, 9
# Negative affect = 2, 4, 6, 8, 10
data <- data %>% mutate(
  T2_PANAS_pos = rowMeans(pick(paste0("T2_PANAS_", seq(1, 9, 2)))),
  T2_PANAS_neg = rowMeans(pick(paste0("T2_PANAS_", seq(2, 10, 2)))))

# Get alpha & omega
# Positive affect
data %>% 
  select(all_of(paste0("T2_PANAS_", seq(1, 9, 2)))) %>% 
  omega(nfactors = 1)

# Negative affect
data %>% 
  select(all_of(paste0("T2_PANAS_", seq(2, 10, 2)))) %>% 
  omega(nfactors = 1)

data <- data %>% mutate(
  T2_Charity = rowMeans(pick(contains("charity") & ends_with("1_1"))),
  T2_Familiarity = rowMeans(pick(contains("charity") & ends_with("2_1"))))

# Get alpha & omega
data %>% 
  select(contains("charity") & ends_with("1_1")) %>% 
  omega(nfactors = 1)

# Create new variable blastintensity.duration
data <- data %>% 
  mutate(T1_blastintensity.duration = T1_blastintensity * T1_blastduration,
         T2_blastintensity.duration = T2_blastintensity * T2_blastduration,
         T3_blastintensity.duration = T3_blastintensity * T3_blastduration)

# Keep an untransformed copy of the dataset (with the computed scale scores)
# in data2 for later analyses (e.g., the CACE models below)
data2 <- data
# Make list of DVs
col.list <- c("T1_blastintensity", "T1_blastduration", "T1_blastintensity.duration",
              "T2_blastintensity", "T2_blastduration", "T2_blastintensity.duration",
              "T3_blastintensity", "T3_blastduration", "T3_blastintensity.duration",
              "T1_BSCS", "T1_BAQ", 
              "T1_attitude", "T2_attitude", "T3_attitude",
              "T1_dehumanization", "T2_dehumanization", "T3_dehumanization", 
              "T1_NOBAGS", "T2_NOBAGS", "T3_NOBAGS", 
              "T1_WHS", "T2_WHS", "T3_WHS", 
              "T1_CLS", "T2_CLS", "T3_CLS", 
              "T2_SMS5", "T2_PANAS_pos", "T2_PANAS_neg", "T2_Charity", 
              "T2_Familiarity", "T2_memory.altruistic", "T2_memory.aggressive")

# According to Cohen:
# d = 0.2 == 'small'
# d = 0.5 == 'medium'
# d = 0.8 == 'large'

# Sensitivity analysis
pwr.t.test(n = 50, d = , sig.level = .05, power = .80, type = c("two.sample"))
# A sample size of 50 per group only allows detecting effect sizes greater than d = 0.57 (medium)

###########################
# Required sample size for large effect
pwr.t.test(n = , d = 0.80, sig.level = .05, power = .80, type = c("two.sample"))
# Required sample size: 26 PER GROUP.

# Required sample size for medium effect
pwr.t.test(n = , d = 0.50, sig.level = .05, power = .80, type = c("two.sample"))
# Required sample size: 64 PER GROUP.

# Required sample size for small effect
pwr.t.test(n = , d = 0.20, sig.level = .05, power = .80, type = c("two.sample"))
# Required sample size: 394 PER GROUP.

# Function to compute Cohen's d from means and SDs for Kang, Gray, & Dovidio (2014)
get.d <- function(m1, s1, m2, s2) {
  SDpooled <- sqrt((s1^2 + s2^2) / 2)
  (d <- (m2 - m1) / SDpooled)
}

### Race IAT ###
# Meditation vs (discussion + waitlist): d = 0.80 (large)
d = get.d(m1 = -0.163, s1 = 0.33, m2 = mean(c(0.050, 0.096)), s2 = mean(c(0.31, 0.20)))
(n = ceiling(pwr.t.test(n = , d = d, sig.level = .05, power = .80, type = c("two.sample"))$n))
n*3
# [n >= 26 per group, T=78]

# Meditation vs waitlist: d = 0.95 (large)
d = get.d(m1 = -0.163, s1 = 0.33, m2 = 0.096, s2 = 0.20)
(n = ceiling(pwr.t.test(n = , d = d, sig.level = .05, power = .80, type = c("two.sample"))$n))
n*3
# [n >= 19 per group, T=57]

# Meditation vs discussion: d = 0.67 (medium-large) 
d = get.d(m1 = -0.163, s1 = 0.33, m2 = 0.050, s2 = 0.31)
(n = ceiling(pwr.t.test(n = , d = d, sig.level = .05, power = .80, type = c("two.sample"))$n))
n*3
# [n >= 37 per group, T=111]

### Homeless IAT ###
# Meditation vs (discussion + waitlist): d = 0.42 (medium)
d = get.d(m1 = -0.021, s1 = 0.29, m2 = mean(c(0.056, 0.149)), s2 = mean(c(0.31, 0.31)))
(n = ceiling(pwr.t.test(n = , d = d, sig.level = .05, power = .80, type = c("two.sample"))$n))
n*3
# [n >= 94 per group, T=282]

# Meditation vs waitlist: d = 0.57 (medium)
d = get.d(m1 = -0.021, s1 = 0.29, m2 = 0.149, s2 = 0.31)
(n = ceiling(pwr.t.test(n = , d = d, sig.level = .05, power = .80, type = c("two.sample"))$n))
n*3
# [n >= 50 per group, T=150]

# Meditation vs discussion: d = 0.26 (small)
d = get.d(m1 = -0.021, s1 = 0.29, m2 = 0.056, s2 = 0.31)
(n = ceiling(pwr.t.test(n = , d = d, sig.level = .05, power = .80, type = c("two.sample"))$n))
n*3
# [n >= 240 per group, T=720]


###########################
### FOR REGRESSIONS #######
###########################

# According to Cohen:
# f2 = 0.02 == 'small'
# f2 = 0.15 == 'medium'
# f2 = 0.35 == 'large'

# Sensitivity analysis
# n = number of observations
n = 150
# p = the number of predictors
p = 7 # number of coefficients (one line/term per group (3) + one for T1)
# u = numerator degrees of freedom = number of predictors - 1
u = p - 1
# v = denominator degrees of freedom = sample size - number of predictors
v = n - u - 1
# f2 = effect size measure
round(pwr.f2.test(u = u, v = v, f2 = , sig.level = .05, power = .80)$f2,2)
# A sample size of 150 only allows detecting effect sizes greater than f2 = 0.09 (small-to-medium)

# Note, however, that Aguinis, Beaty, Boik, and Pierce (2005) have shown that the average effect size in tests of moderation is only f2 = 0.009.
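
For perspective, here is a quick sketch (same u and p as above) of what detecting f2 = 0.009 would require:

# Required sample size for the average moderation effect (f2 = 0.009)
v = pwr.f2.test(u = u, v = , f2 = 0.009, sig.level = .05, power = .80)$v
ceiling(v + p)
# Roughly 1,500 per group, far beyond what is feasible here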

###########################
# Required sample size for large effect (f2 = .25 used here; Cohen's benchmark is .35)
v = pwr.f2.test(u = u, v = , f2 = .25, sig.level = .05, power = .80)$v
# Required sample size: v + p
ceiling(v + p)
# 62 PER GROUP.

# Required sample size for medium effect
v = pwr.f2.test(u = u, v = , f2 = .15, sig.level = .05, power = .80)$v
# Required sample size: v + p
ceiling(v + p)
# 98 PER GROUP.

# Required sample size for small effect
v = pwr.f2.test(u = u, v = , f2 = .02, sig.level = .05, power = .80)$v
# Required sample size: v + p
ceiling(v + p)
# 688 PER GROUP.

###############################################
### Calculating f2 from Kang et al. (2014) ###
# Cohen's d to Cohen's f = d/2
f = 0.57/2

# OR calculate f with function rather than d/2
pwr.anova.test(k = 3, n = 34, f = , sig.level = .05, power = .80)$f

# According to G*Power
f = 0.06976247

f2 = f^2
v = pwr.f2.test(u = u, v = , f2 = f2, sig.level = .05, power = .80)$v
# Required sample size: v + p
ceiling(v + p)
# 175 PER GROUP.

### Calculating f2 ###
# Existing R2 for moderated regressions on ego depletion are:

# 1) R2 = .21 (Hofmann, Rauch, & Gawronski, 2007; food + depletion). Note: they got one R2 for both automatic and explicit attitudes because they included both in the same model...
# 2) R2 = .15 + .22 = .37 (Schmidt, Zimmermann, Banse, & Imhoff, 2015; aggression + depletion) Note: this is the R2 from the second step of the three-step hierarchical regression + the first step (because the R2 provided for step 2 is delta R2 so step 1 was already subtracted...)
# 3) R2 = .27 (Friese, Hofmann, & Wänke, 2008, study 1; food + cognitive load)
# 4) R2 = .30 (Friese, Hofmann, & Wänke, 2008, study 2; food + depletion)
# 5) R2 = .40 (Friese, Hofmann, & Wänke, 2008, study 3; food + depletion)
# 6) R2 = .12 (Ostafin, Marlatt, & Greenwald, 2008; alcohol + depletion)
# 7) R2 = .42 (Hofmann & Friese, 2008; food + alcohol). Note: they got one R2 for both automatic and explicit attitudes because they included both in the same model... they also included positive affect, negative affect, and alcohol-related problems as covariates.
# 8) R2 = .22 (Hofmann, Gschwendner, Friese, Wiers, & Schmitt, 2008, study 1; sex + WMC)
# 9) R2 = .14 (Hofmann, Gschwendner, Friese, Wiers, & Schmitt, 2008, study 2; food + WMC)
# 10) R2 = .27 (Hofmann, Gschwendner, Friese, Wiers, & Schmitt, 2008, study 3; anger behaviour + WMC)
# If we are evaluating the impact of a set of predictors on an outcome, then the f2 formula is:
# f2 = R2 / (1 - R2)

# Else, if we are evaluating the impact of one set of predictors above and beyond a second set of predictors (or covariates), then the f2 formula is:
# f2 = (R2AB - R2A) / (1 - R2AB)
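
For convenience, the two formulas can be wrapped as small helpers (a sketch; the blocks below compute them inline):

# f2 for a whole set of predictors
f2_from_R2 <- function(R2) R2 / (1 - R2)
# f2 for a set of predictors above and beyond covariates
f2_from_delta <- function(R2AB, R2A) (R2AB - R2A) / (1 - R2AB)
f2_from_R2(0.21) # e.g., Hofmann et al. (2007): ~0.27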
# p = the number of predictors
p = 3 # one per predictor (attitude, condition) + interaction (if we don't add explicit and implicit in the same model... else + 1)
# u = numerator degrees of freedom = number of predictors - 1
u = p - 1

# Required sample size for effect size of Hofmann, Rauch, & Gawronski (2007; food + depletion)
R2 = 0.21
f2 = R2/(1 - R2)
v = pwr.f2.test(u = u, v = , f2 = f2, sig.level = .05, power = .80)$v
# Required sample size: v + p
ceiling(v + p)
# 40 PER GROUP.

# Required sample size for effect size of Schmidt, Zimmermann, Banse, & Imhoff (2015; aggression + depletion)
R2 = .22 + 0.15
f2 = R2/(1 - R2)
v = pwr.f2.test(u = u, v = , f2 = f2, sig.level = .05, power = .80)$v
# Required sample size: v + p
ceiling(v + p)
# 20 PER GROUP.

# For this paper only (for the sake of time), we also compute it the "right" way, since they used a hierarchical regression (this is not the analysis we are doing, however, so it may not apply)
R2A = .22
R2AB = R2A + 0.15
f2 = (R2AB - R2A)/(1 - R2AB)
v = pwr.f2.test(u = u, v = , f2 = f2, sig.level = .05, power = .80)$v
# Required sample size: v + p
ceiling(v + p)
# 64 PER GROUP.

# Required sample size for effect size of Friese, Hofmann, & Wänke (2008, study 1; food + cognitive load)
R2 = .27
f2 = R2/(1 - R2)
v = pwr.f2.test(u = u, v = , f2 = f2, sig.level = .05, power = .80)$v
# Required sample size: v + p
ceiling(v + p)
# 30 PER GROUP.

# Required sample size for effect size of Friese, Hofmann, & Wänke (2008, study 2; food + depletion)
R2 = .30
f2 = R2/(1 - R2)
v = pwr.f2.test(u = u, v = , f2 = f2, sig.level = .05, power = .80)$v
# Required sample size: v + p
ceiling(v + p)
# 26 PER GROUP.

# Required sample size for effect size of Friese, Hofmann, & Wänke (2008, study 3; food + depletion)
R2 = .40
f2 = R2/(1 - R2)
v = pwr.f2.test(u = u, v = , f2 = f2, sig.level = .05, power = .80)$v
# Required sample size: v + p
ceiling(v + p)
# 18 PER GROUP.

# Required sample size for effect size of Ostafin, Marlatt, & Greenwald (2008; alcohol + depletion)
R2 = .12
f2 = R2/(1 - R2)
v = pwr.f2.test(u = u, v = , f2 = f2, sig.level = .05, power = .80)$v
# Required sample size: v + p
ceiling(v + p)
# 74 PER GROUP.

# Required sample size for effect size of Hofmann & Friese (2008; food + alcohol)
R2 = .42
f2 = R2/(1 - R2)
v = pwr.f2.test(u = u, v = , f2 = f2, sig.level = .05, power = .80)$v
# Required sample size: v + p
ceiling(v + p)
# 17 PER GROUP.

# Required sample size for effect size of Hofmann, Gschwendner, Friese, Wiers, & Schmitt (2008, study 1; sex + WMC)
R2 = .22
f2 = R2/(1 - R2)
v = pwr.f2.test(u = u, v = , f2 = f2, sig.level = .05, power = .80)$v
# Required sample size: v + p
ceiling(v + p)
# 38 PER GROUP.

# Required sample size for effect size of Hofmann, Gschwendner, Friese, Wiers, & Schmitt (2008, study 2; food + WMC)
R2 = .14
f2 = R2/(1 - R2)
v = pwr.f2.test(u = u, v = , f2 = f2, sig.level = .05, power = .80)$v
# Required sample size: v + p
ceiling(v + p)
# 63 PER GROUP.

# Required sample size for effect size of Hofmann, Gschwendner, Friese, Wiers, & Schmitt (2008, study 3; anger behaviour + WMC)
R2 = .27
f2 = R2/(1 - R2)
v = pwr.f2.test(u = u, v = , f2 = f2, sig.level = .05, power = .80)$v
# Required sample size: v + p
ceiling(v + p)
# 30 PER GROUP.

# Average sample size needed:
ceiling(mean(c(40, 20, 30, 26, 18, 74, 17, 38, 63, 30)))
# 36 per group

lapply(col.list, function(x) 
  nice_normality(data, 
                 variable = x, 
                 title = x,
                 group = "T1_Group",
                 shapiro = TRUE,
                 histogram = TRUE))

predict_bestNormalize <- function(var) {
  x <- bestNormalize(var, standardize = FALSE, allow_orderNorm = FALSE)
  print(cur_column()) # only works inside mutate(across())
  print(x$chosen_transform)
  cat("\n")
  y <- predict(x)
  # Tag the transformed vector with the name of the chosen transformation
  attr(y, "transform") <- attributes(x$chosen_transform)$class[1]
  y
}

set.seed(42)
data <- data %>% 
  mutate(across(all_of(col.list), 
                predict_bestNormalize,
                .names = "{.col}.t"))
col.list <- paste0(col.list, ".t")

# Group normality
named.col.list <- setNames(col.list, unlist(lapply(data, function(x) attributes(x)$transform)))
lapply(named.col.list, function(x) 
  nice_normality(data, 
                 x, 
                 "T1_Group",
                 shapiro = TRUE,
                 title = x,
                 histogram = TRUE))

# Plotting variance
plots(lapply(col.list, function(x) {
  nice_varplot(data, x, group = "T1_Group")
  }),
  n_columns = 2)

plots(lapply(col.list, function(x) {
  plot_outliers(data, x, group = "T1_Group", ytitle = x, binwidth = 0.3)
  }),
  n_columns = 2)

data %>% 
  as.data.frame %>% 
  filter(T1_Group == "Waitlist") %>% 
  find_mad(col.list, criteria = 3)

data %>% 
  as.data.frame %>% 
  filter(T1_Group == "Reflection") %>% 
  find_mad(col.list, criteria = 3)

data %>% 
  as.data.frame %>% 
  filter(T1_Group == "Meditation") %>% 
  find_mad(col.list, criteria = 3)

# We have to exclude two variables that make the covariance matrix
# computationally singular following the transformations
# Otherwise we get an error:
# Error in solve.default(cov, ...) : 
#  system is computationally singular: reciprocal condition number = 8.99059e-20
data.na <- na.omit(data[col.list])
x <- check_outliers(data.na[-c(15:16)], method = "mcd")
x

# Warning message:
# The sample size is too small in your data, relative to the number of variables, for MCD to be
# reliable. You may try to increase the `percentage_central` argument (must be between 0 and 1), 
# or choose another method.

# So we have to rely on Mahalanobis instead

x <- check_outliers(data.na[-c(15:16)], method = "mahalanobis")
x
# 5 outliers only! That's more reasonable!

# Remove the multivariate outliers (guarding against the no-outlier case,
# since data[-integer(0), ] would drop every row)
if (any(x)) data <- data[-which(x), ]

# Winsorize variables of interest with MAD
data <- data %>% 
  group_by(T1_Group) %>% 
  mutate(across(all_of(col.list), 
                winsorize_mad,
                .names = "{.col}.w")) %>% 
  ungroup()
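
For reference, winsorize_mad caps values lying beyond 3 absolute deviations around the median; a quick illustration:

winsorize_mad(c(1, 2, 3, 4, 100))
# the extreme value (100) is pulled back to the upper MAD bound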

# Update col.list
col.list <- paste0(col.list, ".w")

data <- data %>%
  mutate(across(all_of(col.list), standardize, .names = "{col}.s"))

# Update col.list
col.list <- paste0(col.list, ".s")

# Let's replace the original variables with the transformed variables
data[gsub(".t.w.s", "", col.list, fixed = TRUE)] <- data[col.list]

# If we decide to only center the variables rather than standardize them
# (as per the preregistration), we can do this instead
data <- data %>%
  mutate(across(all_of(gsub(".t.w.s", "", col.list, fixed = TRUE)), 
                \(x) standardize(x, center = TRUE, scale = FALSE), 
                .names = "{col}"))

# Temporarily switch to the untransformed dataset (data2) to define the
# short, untransformed variable list, then switch back
data_temp <- data
data <- data2
col.list_temp <- col.list
col.list <- c("T1_blastintensity", "T1_blastduration", "T1_blastintensity.duration",
              "T2_blastintensity", "T2_blastduration", "T2_blastintensity.duration",
              "T3_blastintensity", "T3_blastduration", "T3_blastintensity.duration",
              "T1_BSCS", "T1_BAQ", 
              "T1_attitude", "T2_attitude", "T3_attitude",
              "T1_dehumanization", "T2_dehumanization", "T3_dehumanization", 
              "T1_NOBAGS", "T2_NOBAGS", "T3_NOBAGS", 
              "T1_WHS", "T2_WHS", "T3_WHS", 
              "T1_CLS", "T2_CLS", "T3_CLS", 
              "T2_SMS5", "T2_PANAS_pos", "T2_PANAS_neg", "T2_Charity",
              "T2_memory.altruistic", "T2_memory.aggressive")
col.list_short <- col.list
data2 <- data
data <- data_temp
col.list <- col.list_temp
str_formula <- "~ T1_BSCS + T1_BAQ + T1_attitude + T1_dehumanization + T1_NOBAGS + T1_WHS + T1_CLS + T1_blastintensity.duration | T1_Group"
table_caption <- c("Table 2", "Baseline Differences by intervention group")

x <- flex_table1(str_formula, data = data, table_caption = table_caption)
x

save_flextable(x, "Results/table2.docx", overwrite = TRUE)

data_chi2 <- data.frame(
  group = c("Meditation", "Waitlist", "Reflection"),
  pre = c(152, 190, 154),
  post = c(104, 146, 78),
  follow = c(91, 124, 73)
)

data_chi2 <- data_chi2 %>% 
  mutate(dropout_pre_post = pre - post,
         dropout_post_follow = post - follow)
data_chi2

percentages <- data_chi2 %>% 
  mutate(t2_drop_p = round(dropout_pre_post / pre * 100, 2),
         t3_drop_p = round(dropout_post_follow / post * 100, 2))
percentages

# Retention vs. dropout between pre and post
x <- data_chi2[c(3, 5)]
x

chisq.test(x) %>% 
  report

# Retention vs. dropout between post and follow-up
x <- data_chi2[c(4, 6)]
x

chisq.test(x) %>% 
  report

data_496 <- readRDS("Data/finaldataset_n496.rds")

data_496 <- data_496 %>% 
  mutate(included = ID %in% data$ID,
         drop = as.numeric(!included))

x <- glm(drop ~ T1_Group + age + gender #+ T1_psycho.class
         #+ T1_virtual.reality + T1_medi.experience 
         #+ T1_quebec
        , data = data_496, family = binomial)

x %>% 
  report() %>% 
  nice_table(highlight = TRUE)

# Plot the logistic regression curve (dropout probability by age)
ggplot(data_496, aes(x=age, y=drop)) + 
  geom_point(alpha=.5) +
  stat_smooth(method="glm", se=FALSE, method.args = list(family=binomial))

# Are groups from the whole sample equivalent on demographics?
str_formula <- "~ age + gender + T1_psycho.class + T1_medi.experience | T1_Group"
table_caption <- c("Table 1", "Sample demographics split by intervention group")

data_496 <- data_496 %>% 
  var_labels(
    age = "Age",
    gender = "Gender",
    T1_medi.experience = "Meditation experience",
    T3_post.medipractice = "Practiced meditation over last 13 weeks",
    T1_psycho.class = "Already completed a psychology course")

x <- flex_table1(str_formula, data = data_496, table_caption = table_caption)
x

# data_496 %>% 
#   mutate(gender = as.factor(gender)) %>% 
#   group_by(gender) %>% 
#   summarize(drop = sum(drop) / n() * 100)

library(survival)
library(ranger)
library(ggplot2)
library(dplyr)
library(ggfortify)

means_comparisons <- data_496 %>% 
  mutate(included = ifelse(included, "included", "drop"),
         across(any_of(col.list_short), standardize)) %>% 
  filter(T1_Group == "Reflection") %>% 
  summarize(across(any_of(col.list_short), \(x) mean(x, na.rm = TRUE)),
            .by = "included") %>% 
  pivot_longer(-included)

# Put the means of retained vs. dropped participants side by side
means_comparisons2 <- cbind(
  filter(means_comparisons[1:3], included == "included"),
  filter(means_comparisons[c(1, 3)], included == "drop"))
means_comparisons2 <- means_comparisons2[-c(1, 4)]
names(means_comparisons2)[2:3] <- c("value_included", "value_drop")

means_comparisons2 %>% 
  mutate(diff = value_included - value_drop) %>% 
  nice_table()

col.list_496 <- data_496 %>% 
  select(any_of(col.list_short)) %>% 
  names

nice_t_test(
  data = data_496,
  response = col.list_496,
  group = "included"
) %>% 
  nice_table(highlight = TRUE)
# Regression to the mean: describe the Reflection group's distributions
data %>% 
  filter(T1_Group == "Reflection") %>% 
  describe_distribution(col.list) %>% 
  write.csv("reg_to_mean.csv")

# Covariance and correlation between T1 and T2 scores
cov(data$T1_CLS, data$T2_CLS)

cov(data$T1_blastintensity.duration, data$T2_blastintensity.duration)

cor(data$T1_blastintensity.duration, data$T2_blastintensity.duration)

# library(regtomean)
# 
# mee_chua <- replicate_data(50,60,"Before","After",data=language_test)
# ## sort mu ##
# mee_chua_sort <- mee_chua[with(mee_chua,order(mu)),]
# 
# meechua_reg(mee_chua_sort)

# Specify the order of factor levels for "Group". 
# Otherwise R will alphabetize them.
data$T1_Group <- factor(data$T1_Group, levels = c("Meditation", "Reflection", "Waitlist"))

# Define our dependent variables
DV <- data %>% select(T2_NOBAGS:T2_Charity) %>% names

# First column (which variable)
Variable <- rep(DV, each = 3)

# Second column (which comparison)
Comparison <- rep(c("MeditationvsCTR", 
                    "ReflectionvsCTR", 
                    "MeditationvsReflection"), 
                  length(DV))
# length(DV) == number of DVs (three comparisons each)

# Make list of all formulas
formulas <- c(
  "T2_NOBAGS ~ T1_Group * T1_NOBAGS",
  "T2_attitude ~ T1_Group * T1_attitude",
  "T2_dehumanization ~ T1_Group * T1_dehumanization",
  "T2_IAT ~ T1_Group * T1_IAT",
  "T2_SMS5 ~ T1_Group",
  "T2_blastintensity ~ T1_Group * T1_blastintensity",
  "T2_blastduration ~ T1_Group * T1_blastduration",
  "T2_blastintensity.duration ~ T1_Group * T1_blastintensity.duration",
  "T2_memory.altruistic ~ T1_Group",
  "T2_memory.aggressive ~ T1_Group",
  "T2_WHS ~ T1_Group * T1_WHS",
  "T2_CLS ~ T1_Group * T1_CLS",
  "T2_Charity ~ T1_Group * T2_Familiarity",
  "T2_PANAS_pos ~ T1_Group",
  "T2_PANAS_neg ~ T1_Group"
)

# Make list of all models
models.list <- sapply(formulas, lm, data = data, simplify = FALSE, USE.NAMES = TRUE)

# Planned contrasts with nice_lm_contrasts
set.seed(100)
x <- nice_lm_contrasts(models.list, group = "T1_Group", data = data)

table3 <- nice_table(x, highlight = TRUE)
table3

x %>% 
  mutate(`Dependent Variable` = text_remove(
    `Dependent Variable`, "T2_")) %>% 
  filter(!`Dependent Variable` %in% c(
    "blastintensity", "blastduration",
    "SMS5")) %>% 
  mutate(
    `Dependent Variable` = case_match(
      `Dependent Variable`,
      "PANAS_pos" ~ "PANAS (Positive)",
      "PANAS_neg" ~ "PANAS (Negative)",
      "blastintensity.duration" ~ "CRTT",
      "memory.altruistic" ~ "Memory (Altruistic)",
      "memory.aggressive" ~ "Memory (Aggressive)",
      .default = `Dependent Variable`),
  across(`Dependent Variable`, toTitleCase)) %>% 
  nice_table(highlight = TRUE,
    title = c("Table 3", "Planned Comparisons at Time 2"),
    note = c("d = adjusted Cohen’s d (adjusted for contrasts, but not covariates); CI = bootstrapped 95% confidence interval; NOBAGS = attitude toward aggression; IAT = implicit aggression; CRTT = Competitive Reaction Time Task (blast intensity × duration); WHS = Willingness to Help Scale; CLS = Compassionate Love Scale; PANAS = Positive and Negative Affect Schedule (this measure was exploratory).",
    "* p < .05, ** p < .01, *** p < .001. Rows with grey shading indicate statistical significance.")) %>%
  flextable::save_as_docx(path = "Results/table3.docx")

# Make list of all formulas
formulas2 <- c(
  "T3_NOBAGS ~ T1_Group * T1_NOBAGS",
  "T3_attitude ~ T1_Group * T1_attitude",
  "T3_dehumanization ~ T1_Group * T1_dehumanization",
  "T3_IAT ~ T1_Group * T1_IAT",
  "T3_blastintensity ~ T1_Group * T1_blastintensity",
  "T3_blastduration ~ T1_Group * T1_blastduration",
  "T3_blastintensity.duration ~ T1_Group * T1_blastintensity.duration",
  "T3_WHS ~ T1_Group * T1_WHS",
  "T3_CLS ~ T1_Group * T1_CLS"
)

# Make list of all models
models.list2 <- sapply(formulas2, lm, data = data, simplify = FALSE, USE.NAMES = TRUE)

# Planned contrasts with nice_lm_contrasts
set.seed(100)
x2 <- nice_lm_contrasts(models.list2, group = "T1_Group", data = data)

table4 <- nice_table(x2, highlight = TRUE)
table4
x2 %>% 
  mutate(`Dependent Variable` = text_remove(
    `Dependent Variable`, "T3_")) %>% 
  filter(!`Dependent Variable` %in% c(
    "blastintensity", "blastduration")) %>% 
  mutate(
    `Dependent Variable` = case_match(
      `Dependent Variable`,
      "PANAS_pos" ~ "PANAS (Positive)",
      "PANAS_neg" ~ "PANAS (Negative)",
      "blastintensity.duration" ~ "CRTT",
      "memory.altruistic" ~ "Memory (Altruistic)",
      "memory.aggressive" ~ "Memory (Aggressive)",
      .default = `Dependent Variable`),
  across(`Dependent Variable`, toTitleCase)) %>% 
  nice_table(highlight = TRUE, 
    title = c("Table 4", "Planned Comparisons at Time 3"),
    note = c("d = adjusted Cohen’s d (adjusted for contrasts, but not covariates); CI = bootstrapped 95% confidence interval; NOBAGS = attitude toward aggression; IAT = implicit aggression; CRTT = Competitive Reaction Time Task (blast intensity × duration); WHS = Willingness to Help Scale; CLS = Compassionate Love Scale.",
    "* p < .05, ** p < .01. Rows with grey shading indicate statistical significance.")) %>% 
  flextable::save_as_docx(path = "Results/table4.docx")

means <- estimate_means(models.list$`T2_attitude ~ T1_Group * T1_attitude`)

T2_attitude_means <- means

ggplot(means, aes(x = T1_Group, y = Mean)) +
  geom_line(aes(group = 1)) +
  geom_pointrange(aes(color = T1_Group, ymin = CI_low, ymax = CI_high)) +
  ylab(paste("Adjusted", "Attitude", "Mean")) +
  theme_modern()

contrasts <- estimate_contrasts(models.list$`T2_attitude ~ T1_Group * T1_attitude`)
plot(contrasts, estimate_means(models.list$`T2_attitude ~ T1_Group * T1_attitude`)) +
  ylab(paste("Adjusted", "Attitude", "Mean")) +
  theme_modern()

means <- estimate_means(models.list$`T2_PANAS_pos ~ T1_Group`)

ggplot(means, aes(x = T1_Group, y = Mean)) +
  geom_line(aes(group = 1)) +
  geom_pointrange(aes(color = T1_Group, ymin = CI_low, ymax = CI_high)) +
  ylab(paste("Adjusted", "Positive Affect", "Mean")) +
  theme_modern()

contrasts <- estimate_contrasts(models.list$`T2_PANAS_pos ~ T1_Group`)
plot(contrasts, estimate_means(models.list$`T2_PANAS_pos ~ T1_Group`)) +
  ylab(paste("Adjusted", "Positive Affect", "Mean")) +
  theme_modern()
violin_PANAS_pos <- nice_violin(data, 
            group = "T1_Group", 
            response = "T2_PANAS_pos",
            obs = TRUE,
            comp1 = 1,
            comp2 = 3,
            has.d = TRUE,
            d.x = 3.2,
            d.y = 2,
            ytitle = "Positive Affect")
violin_PANAS_pos

violin_PANAS_neg <- nice_violin(data, 
            group = "T1_Group", 
            response = "T2_PANAS_neg",
            obs = TRUE,
            ytitle = "Negative Affect")
violin_PANAS_neg

means <- estimate_means(models.list$`T2_blastintensity ~ T1_Group * T1_blastintensity`)

ggplot(means, aes(x = T1_Group, y = Mean)) +
  geom_line(aes(group = 1)) +
  geom_pointrange(aes(color = T1_Group, ymin = CI_low, ymax = CI_high)) +
  ylab(paste("Adjusted", "Blast Intensity", "Mean")) +
  theme_modern()

contrasts <- estimate_contrasts(
  models.list$`T2_blastintensity ~ T1_Group * T1_blastintensity`)
plot(contrasts, estimate_means(
  models.list$`T2_blastintensity ~ T1_Group * T1_blastintensity`)) +
  ylab(paste("Adjusted", "Blast Intensity", "Mean")) +
  theme_modern()

means <- estimate_means(models.list$`T2_blastduration ~ T1_Group * T1_blastduration`)

ggplot(means, aes(x = T1_Group, y = Mean)) +
  geom_line(aes(group = 1)) +
  geom_pointrange(aes(color = T1_Group, ymin = CI_low, ymax = CI_high)) +
  ylab(paste("Adjusted", "Blast Duration", "Mean")) +
  theme_modern()

contrasts <- estimate_contrasts(
  models.list$`T2_blastduration ~ T1_Group * T1_blastduration`)
plot(contrasts, estimate_means(
  models.list$`T2_blastduration ~ T1_Group * T1_blastduration`)) +
  ylab(paste("Adjusted", "Blast Duration", "Mean")) +
  theme_modern()

means <- estimate_means(
  models.list$`T2_blastintensity.duration ~ T1_Group * T1_blastintensity.duration`)

ggplot(means, aes(x = T1_Group, y = Mean)) +
  geom_line(aes(group = 1)) +
  geom_pointrange(aes(color = T1_Group, ymin = CI_low, ymax = CI_high)) +
  ylab(paste("Adjusted", "Blast Intensity * Duration", "Mean")) +
  theme_modern()

contrasts <- estimate_contrasts(
  models.list$`T2_blastintensity.duration ~ T1_Group * T1_blastintensity.duration`)
plot(contrasts, estimate_means(
  models.list$`T2_blastintensity.duration ~ T1_Group * T1_blastintensity.duration`)) +
  ylab(paste("Adjusted", "Blast Intensity * Duration", "Mean")) +
  theme_modern()

means <- estimate_means(
  models.list$`T2_memory.altruistic ~ T1_Group`)

ggplot(means, aes(x = T1_Group, y = Mean)) +
  geom_line(aes(group = 1)) +
  geom_pointrange(aes(color = T1_Group, ymin = CI_low, ymax = CI_high)) +
  ylab(paste("Adjusted", "ms to remember altruistic event", "Mean")) +
  theme_modern()

contrasts <- estimate_contrasts(
  models.list$`T2_memory.altruistic ~ T1_Group`)
plot(contrasts, estimate_means(
  models.list$`T2_memory.altruistic ~ T1_Group`)) +
  ylab(paste("Adjusted", "ms to remember altruistic event", "Mean")) +
  theme_modern()
violin_memory.altruistic <- nice_violin(data, 
            group = "T1_Group", 
            response = "T2_memory.altruistic",
            obs = TRUE,
            has.d = TRUE,
            d.x = 2.75,
            d.y = 2,
            signif_annotation = c("*"),
            signif_yposition = 3,
            signif_xmin = 2,
            signif_xmax = 3,
            ytitle = "ms to Remember Altruistic Event")
violin_memory.altruistic

violin_memory.aggressive <- nice_violin(data, 
            group = "T1_Group", 
            response = "T2_memory.aggressive",
            obs = TRUE,
            ytitle = "ms to Remember Aggressive Event")
violin_memory.aggressive

means <- estimate_means(
  models.list$`T2_Charity ~ T1_Group`)

ggplot(means, aes(x = T1_Group, y = Mean)) +
  geom_line(aes(group = 1)) +
  geom_pointrange(aes(color = T1_Group, ymin = CI_low, ymax = CI_high)) +
  ylab(paste("Adjusted", "Charity Donation", "Mean")) +
  theme_modern()

contrasts <- estimate_contrasts(
  models.list$`T2_Charity ~ T1_Group`)
plot(contrasts, estimate_means(
  models.list$`T2_Charity ~ T1_Group`)) +
  ylab(paste("Adjusted", "Charity Donation", "Mean")) +
  theme_modern()
violin_charity <- nice_violin(data, 
            group = "T1_Group", 
            response = "T2_Charity",
            obs = TRUE,
            ytitle = "Charity Donation")
violin_charity

means <- estimate_means(models.list$`T2_CLS ~ T1_Group * T1_CLS`)

ggplot(means, aes(x = T1_Group, y = Mean)) +
  geom_line(aes(group = 1)) +
  geom_pointrange(aes(color = T1_Group, ymin = CI_low, ymax = CI_high)) +
  ylab(paste("Adjusted", "Compassion", "Mean")) +
  theme_modern()

contrasts <- estimate_contrasts(models.list$`T2_CLS ~ T1_Group * T1_CLS`)
plot(contrasts, estimate_means(models.list$`T2_CLS ~ T1_Group * T1_CLS`)) +
  ylab(paste("Adjusted", "Compassion", "Mean")) +
  theme_modern()

plots(violin_PANAS_neg, violin_PANAS_pos,
      violin_memory.aggressive, violin_memory.altruistic, 
      violin_charity, n_columns = 2)

# Save high-res .pdf and .png image files
ggsave('Figure 4.pdf', width=16, height=21, unit='in', dpi=300)
ggsave('Figure 4.png', width=16, height=21, unit='in', dpi=300)
means <- estimate_means(models.list2$`T3_blastintensity ~ T1_Group * T1_blastintensity`)

ggplot(means, aes(x = T1_Group, y = Mean)) +
  geom_line(aes(group = 1)) +
  geom_pointrange(aes(color = T1_Group, ymin = CI_low, ymax = CI_high)) +
  ylab(paste("Adjusted", "Blast Intensity", "Mean")) +
  theme_modern()

contrasts <- estimate_contrasts(
  models.list2$`T3_blastintensity ~ T1_Group * T1_blastintensity`)
plot(contrasts, estimate_means(
  models.list2$`T3_blastintensity ~ T1_Group * T1_blastintensity`)) +
  ylab(paste("Adjusted", "Blast Intensity", "Mean")) +
  theme_modern()

means <- estimate_means(models.list2$`T3_blastduration ~ T1_Group * T1_blastduration`)

ggplot(means, aes(x = T1_Group, y = Mean)) +
  geom_line(aes(group = 1)) +
  geom_pointrange(aes(color = T1_Group, ymin = CI_low, ymax = CI_high)) +
  ylab(paste("Adjusted", "Blast Duration", "Mean")) +
  theme_modern()

contrasts <- estimate_contrasts(
  models.list2$`T3_blastduration ~ T1_Group * T1_blastduration`)
plot(contrasts, estimate_means(
  models.list2$`T3_blastduration ~ T1_Group * T1_blastduration`)) +
  ylab(paste("Adjusted", "Blast Duration", "Mean")) +
  theme_modern()

means <- estimate_means(
  models.list2$`T3_blastintensity.duration ~ T1_Group * T1_blastintensity.duration`)

ggplot(means, aes(x = T1_Group, y = Mean)) +
  geom_line(aes(group = 1)) +
  geom_pointrange(aes(color = T1_Group, ymin = CI_low, ymax = CI_high)) +
  ylab(paste("Adjusted", "Blast Intensity * Duration", "Mean")) +
  theme_modern()

contrasts <- estimate_contrasts(
  models.list2$`T3_blastintensity.duration ~ T1_Group * T1_blastintensity.duration`)
plot(contrasts, estimate_means(
  models.list2$`T3_blastintensity.duration ~ T1_Group * T1_blastintensity.duration`)) +
  ylab(paste("Adjusted", "Blast Intensity * Duration", "Mean")) +
  theme_modern()

means <- estimate_means(models.list2$`T3_WHS ~ T1_Group * T1_WHS`)

ggplot(means, aes(x = T1_Group, y = Mean)) +
  geom_line(aes(group = 1)) +
  geom_pointrange(aes(color = T1_Group, ymin = CI_low, ymax = CI_high)) +
  ylab(paste("Adjusted", "Willingness to Help", "Mean")) +
  theme_modern()

contrasts <- estimate_contrasts(
  models.list2$`T3_WHS ~ T1_Group * T1_WHS`)
plot(contrasts, estimate_means(
  models.list2$`T3_WHS ~ T1_Group * T1_WHS`)) +
  ylab(paste("Adjusted", "Willingness to Help", "Mean")) +
  theme_modern()

means <- estimate_means(models.list2$`T3_CLS ~ T1_Group * T1_CLS`)

ggplot(means, aes(x = T1_Group, y = Mean)) +
  geom_line(aes(group = 1)) +
  geom_pointrange(aes(color = T1_Group, ymin = CI_low, ymax = CI_high)) +
  ylab(paste("Adjusted", "Compassion", "Mean")) +
  theme_modern()

contrasts <- estimate_contrasts(models.list2$`T3_CLS ~ T1_Group * T1_CLS`)
plot(contrasts, estimate_means(models.list2$`T3_CLS ~ T1_Group * T1_CLS`)) +
  ylab(paste("Adjusted", "Compassion", "Mean")) +
  theme_modern()

time <- data %>% select(ends_with("NOBAGS")) %>% names()
p_NOBAGS <- plot_means_over_time(
  data,
  response = time,
  group = "T1_Group",
  ytitle = "Aggression Attitude",
  error_bars = TRUE)
p_NOBAGS
time <- data %>% select(ends_with("attitude")) %>% names()
p_attitude <- plot_means_over_time(
  data,
  response = time,
  group = "T1_Group",
  error_bars = TRUE,
  ytitle = "Intergroup Attitude (Positive)",
  significance_bars_x = c(2.15, 3.15),
  significance_stars = c("*", "*"),
  significance_stars_x = c(2.25, 3.25),
  significance_stars_y = list(c("Reflection", "Waitlist", time = 2),
                              c("Reflection", "Waitlist", time = 3)))
p_attitude
time <- data %>% select(ends_with("dehumanization")) %>% names()
p_dehumanization <- plot_means_over_time(
  data,
  response = time,
  group = "T1_Group",
  error_bars = TRUE,
  ytitle = "Humanization")
p_dehumanization
time <- data %>% select(ends_with("IAT")) %>% names()
p_IAT <- plot_means_over_time(
  data,
  response = time,
  group = "T1_Group",
  ytitle = "Implicit Aggression (IAT)",
  error_bars = TRUE)
p_IAT
time <- data %>% select(ends_with("blastintensity")) %>% names()
p_blastintensity <- plot_means_over_time(
  data,
  response = time,
  group = "T1_Group",
  error_bars = TRUE,
  significance_bars_x = c(1.82, 2.16, 3.17),
  significance_stars = c("**", "***", "**"),
  significance_stars_x = c(1.62, 2.45, 3.35),
  significance_stars_y = list(c("Meditation", "Reflection", time = 2),
                              c("Reflection", "Waitlist", time = 2),
                              c("Reflection", "Waitlist", time = 3)))
p_blastintensity
time <- data %>% select(ends_with("blastduration")) %>% names()
p_blastduration <- plot_means_over_time(
  data,
  response = time,
  group = "T1_Group",
  error_bars = TRUE,
  significance_bars_x = c(1.82, 2.16, 3.17),
  significance_stars = c("**", "***", "**"),
  significance_stars_x = c(1.62, 2.45, 3.35),
  significance_stars_y = list(c("Meditation", "Reflection", time = 2),
                              c("Reflection", "Waitlist", time = 2),
                              c("Reflection", "Waitlist", time = 3)))
p_blastduration
time <- data %>% select(ends_with("blastintensity.duration")) %>% names()
p_blastintensity.duration <- plot_means_over_time(
  data,
  response = time,
  group = "T1_Group",
  error_bars = TRUE,
  ytitle = "Behavioural Aggression",
  significance_bars_x = c(1.82, 2.16, 3.17),
  significance_stars = c("**", "***", "**"),
  significance_stars_x = c(1.66, 2.40, 3.33),
  significance_stars_y = list(c("Meditation", "Reflection", time = 2),
                              c("Reflection", "Waitlist", time = 2),
                              c("Reflection", "Waitlist", time = 3)))
p_blastintensity.duration

time <- data %>% select(ends_with("WHS")) %>% names()
p_WHS <- plot_means_over_time(
  data,
  response = time,
  group = "T1_Group",
  error_bars = TRUE,
  ytitle = "Willingness to Help",
  significance_bars_x = 3.2,
  significance_stars = "**",
  significance_stars_x = 3.35,
  significance_stars_y = list(c("Reflection", "Waitlist", time = 3)))
p_WHS
time <- data %>% select(ends_with("CLS")) %>% names()
p_CLS <- plot_means_over_time(
  data,
  response = time,
  group = "T1_Group",
  error_bars = TRUE,
  ytitle = "Compassionate Love",
  significance_bars_x = c(1.85, 2.15, 3.15),
  significance_stars = c("**", "***", "*"),
  significance_stars_x = c(1.68, 2.38, 3.25),
  significance_stars_y = list(c("Meditation", "Waitlist", time = 2),
                              c("Reflection", "Waitlist", time = 2),
                              c("Reflection", "Waitlist", time = 3)))
p_CLS

plots(p_CLS, p_WHS, p_blastintensity.duration, p_attitude, 
      p_dehumanization, p_NOBAGS, p_IAT, n_columns = 2,
      guides = "collect", tags = "a",
      tag_prefix = "(", tag_suffix = ")") & 
  theme(legend.position = 'bottom')

# Save high-res .pdf and .png image files
ggsave('Figure 3.pdf', width=13, height=17, unit='in', dpi=300)
ggsave('Figure 3.png', width=13, height=17, unit='in', dpi=300)
data_delta <- data %>% 
  group_by(T1_Group) %>% 
  mutate(
    across(ends_with("NOBAGS"), \(x) {x - T1_NOBAGS}),
    across(ends_with("attitude"), \(x) {x - T1_attitude}),
    across(ends_with("dehumanization"), \(x) {x - T1_dehumanization}),
    across(ends_with("IAT"), \(x) {x - T1_IAT}),
    across(ends_with("blastintensity"), \(x) {x - T1_blastintensity}),
    across(ends_with("blastduration"), \(x) {x - T1_blastduration}),
    across(ends_with("blastintensity.duration"), 
           \(x) {x - T1_blastintensity.duration}),
    across(ends_with("WHS"), \(x) {x - T1_WHS}),
    across(ends_with("CLS"), \(x) {x - T1_CLS})
    )

# Check it worked correctly
data_delta %>%
  summarize(m1 = mean(T1_NOBAGS),
            m2 = mean(T2_NOBAGS),
            m3 = mean(T3_NOBAGS))

data_delta %>%
  summarize(m1 = mean(T1_attitude),
            m2 = mean(T2_attitude),
            m3 = mean(T3_attitude))
# OK

time <- data %>% select(ends_with("NOBAGS")) %>% names()
p_NOBAGS <- plot_means_over_time(
  data_delta,
  response = time,
  group = "T1_Group",
  ytitle = "Aggression Attitude",
  error_bars = TRUE)
p_NOBAGS
time <- data %>% select(ends_with("attitude")) %>% names()
p_attitude <- plot_means_over_time(
  data_delta,
  response = time,
  group = "T1_Group",
  error_bars = TRUE,
  ytitle = "Intergroup Attitude (Positive)",
  significance_bars_x = c(2.15, 3.15),
  significance_stars = c("*", "*"),
  significance_stars_x = c(2.25, 3.25),
  significance_stars_y = list(c("Reflection", "Waitlist", time = 2),
                              c("Reflection", "Waitlist", time = 3)))
p_attitude
time <- data %>% select(ends_with("dehumanization")) %>% names()
p_dehumanization <- plot_means_over_time(
  data_delta,
  response = time,
  group = "T1_Group",
  error_bars = TRUE,
  ytitle = "Humanization")
p_dehumanization
time <- data %>% select(ends_with("IAT")) %>% names()
p_IAT <- plot_means_over_time(
  data_delta,
  response = time,
  group = "T1_Group",
  ytitle = "Implicit Aggression (IAT)",
  error_bars = TRUE)
p_IAT
time <- data %>% select(ends_with("blastintensity")) %>% names()
p_blastintensity <- plot_means_over_time(
  data_delta,
  response = time,
  group = "T1_Group",
  error_bars = TRUE,
  significance_bars_x = c(1.82, 2.16, 3.17),
  significance_stars = c("**", "***", "**"),
  significance_stars_x = c(1.62, 2.45, 3.35),
  significance_stars_y = list(c("Meditation", "Reflection", time = 2),
                              c("Reflection", "Waitlist", time = 2),
                              c("Reflection", "Waitlist", time = 3)))
p_blastintensity
time <- data %>% select(ends_with("blastduration")) %>% names()
p_blastduration <- plot_means_over_time(
  data_delta,
  response = time,
  group = "T1_Group",
  error_bars = TRUE,
  significance_bars_x = c(1.82, 2.16, 3.17),
  significance_stars = c("**", "***", "**"),
  significance_stars_x = c(1.62, 2.45, 3.35),
  significance_stars_y = list(c("Meditation", "Reflection", time = 2),
                              c("Reflection", "Waitlist", time = 2),
                              c("Reflection", "Waitlist", time = 3)))
p_blastduration
time <- data %>% select(ends_with("blastintensity.duration")) %>% names()
p_blastintensity.duration <- plot_means_over_time(
  data_delta,
  response = time,
  group = "T1_Group",
  error_bars = TRUE,
  ytitle = "Behavioural Aggression",
  significance_bars_x = c(1.82, 2.16, 3.17),
  significance_stars = c("**", "***", "**"),
  significance_stars_x = c(1.66, 2.40, 3.33),
  significance_stars_y = list(c("Meditation", "Reflection", time = 2),
                              c("Reflection", "Waitlist", time = 2),
                              c("Reflection", "Waitlist", time = 3)))
p_blastintensity.duration

time <- data %>% select(ends_with("WHS")) %>% names()
p_WHS <- plot_means_over_time(
  data_delta,
  response = time,
  group = "T1_Group",
  error_bars = TRUE,
  ytitle = "Willingness to Help",
  significance_bars_x = 3.2,
  significance_stars = "**",
  significance_stars_x = 3.35,
  significance_stars_y = list(c("Reflection", "Waitlist", time = 3)))
p_WHS
time <- data %>% select(ends_with("CLS")) %>% names()
p_CLS <- plot_means_over_time(
  data_delta,
  response = time,
  group = "T1_Group",
  error_bars = TRUE,
  ytitle = "Compassionate Love",
  significance_bars_x = c(1.85, 2.15, 3.15),
  significance_stars = c("**", "***", "*"),
  significance_stars_x = c(1.68, 2.38, 3.25),
  significance_stars_y = list(c("Meditation", "Waitlist", time = 2),
                              c("Reflection", "Waitlist", time = 2),
                              c("Reflection", "Waitlist", time = 3)))
p_CLS

# Long format of the CLS delta scores for plotting
data_delta_long <- data_delta %>% 
  select(T1_Group, all_of(time)) %>% 
  mutate(across(all_of(time), as.numeric)) %>% 
  pivot_longer(all_of(time), names_to = "Time", values_to = "CLS") %>% 
  mutate(Time = gsub("[^0-9]", "", Time))

ggline(data_delta_long, x = "Time", y = "CLS", group = "T1_Group",
       color = "T1_Group",
       shape = "T1_Group", 
       add = "mean",  # "mean_ci"
       size = 2,
       plot.type = "l"#,
       # point.color = "white",
       # stroke = 1.5
       )

plots(p_CLS, p_WHS, p_blastintensity.duration, p_attitude, 
      p_dehumanization, p_NOBAGS, p_IAT, n_columns = 2,
      guides = "collect", tags = "a",
      tag_prefix = "(", tag_suffix = ")") & 
  theme(legend.position = 'bottom')

# Save high-res .pdf and .png image files
ggsave('Figure 3_delta.pdf', width=13, height=17, unit='in', dpi=300)
ggsave('Figure 3_delta.png', width=13, height=17, unit='in', dpi=300)
# Prepare data for the CACE analyses: give the waitlist 0% compliance,
# express participation as a percentage, create group dummies, and flip the
# sign of the memory and blast variables so effects point in the same direction
data3 <- data2 %>% 
  mutate(part.percent = ifelse(T1_Group == "Waitlist", 0, part.percent),
         part.percent = part.percent * 100,
         T2_memory.altruistic = T2_memory.altruistic * -1,
         T1_GroupReflection = as.numeric(T1_Group == "Reflection"),
         T1_GroupMeditation = as.numeric(T1_Group == "Meditation"),
         across(contains("blast"), \(x) x * -1)) %>% 
  as.data.frame()
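
The CACE models below all share the same structure; a small wrapper along these lines (a sketch, not used below; run_cace is our own name) would avoid the repetition:

run_cace <- function(formula, intervention, main, data = data3) {
  # Fit the bootstrapped CACE model with participation as compliance
  out <- caceSRTBoot(formula, intervention = intervention,
                     compliance = "part.percent", nBoot = 2000, data = data)
  plot(out)
  title(main = main)
  invisible(out)
}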

caceOutput <- caceSRTBoot(
  T2_attitude ~ T1_attitude + T1_GroupReflection,
  intervention = "T1_GroupReflection",
  compliance = "part.percent",
  nBoot = 2000,
  data = data3)
plot(caceOutput)
title(main = "Attitude (Reflection VS Waitlist)")

caceOutput <- caceSRTBoot(
  T2_PANAS_pos ~ T1_GroupMeditation,
  intervention = "T1_GroupMeditation",
  compliance = "part.percent",
  nBoot = 2000,
  data = data3)
plot(caceOutput)
title(main = "Positive Affect (Meditation VS Waitlist)")

caceOutput <- caceSRTBoot(
  T2_blastintensity ~ T1_blastintensity + T1_GroupReflection,
  intervention = "T1_GroupReflection",
  compliance = "part.percent",
  nBoot = 2000,
  data = data3)
plot(caceOutput)
title(main = "Blast Intensity (Reflection VS Waitlist)")

caceOutput <- caceSRTBoot(
  T2_blastduration ~ T1_blastduration + T1_GroupReflection,
  intervention = "T1_GroupReflection",
  compliance = "part.percent",
  nBoot = 2000,
  data = data3)
plot(caceOutput)
title(main = "Blast Duration (Reflection VS Waitlist)")

caceOutput <- caceSRTBoot(
  T2_blastintensity.duration ~ T1_blastintensity.duration + T1_GroupReflection,
  intervention = "T1_GroupReflection",
  compliance = "part.percent",
  nBoot = 2000,
  data = data3)
plot(caceOutput)
title(main = "Blast Intensity * Duration (Reflection VS Waitlist)")

caceOutput <- caceSRTBoot(
  T2_memory.altruistic ~ T1_GroupReflection,
  intervention = "T1_GroupReflection",
  compliance = "part.percent",
  nBoot = 2000,
  data = data3)
plot(caceOutput)
title(main = "ms to remember altruistic event (Reflection VS Waitlist)")

caceOutput <- caceSRTBoot(
  T2_CLS ~ T1_CLS + T1_GroupReflection,
  intervention = "T1_GroupReflection",
  compliance = "part.percent",
  nBoot = 2000,
  data = data3)
plot(caceOutput)
title(main = "Compassionate Love (Reflection VS Waitlist)")

caceOutput <- caceSRTBoot(
  T2_CLS ~ T1_CLS + T1_GroupMeditation,
  intervention = "T1_GroupMeditation",
  compliance = "part.percent",
  nBoot = 2000,
  data = data3)
plot(caceOutput)
title(main = "Compassionate Love (Meditation VS Waitlist)")

caceOutput <- caceSRTBoot(
  T3_attitude ~ T1_attitude + T1_GroupReflection,
  intervention = "T1_GroupReflection",
  compliance = "part.percent",
  nBoot = 2000,
  data = data3)
plot(caceOutput)
title(main = "Attitude (Reflection VS Waitlist)")

caceOutput <- caceSRTBoot(
  T3_blastintensity ~ T1_blastintensity + T1_GroupReflection,
  intervention = "T1_GroupReflection",
  compliance = "part.percent",
  nBoot = 2000,
  data = data3)
plot(caceOutput)
title(main = "Blast Intensity (Reflection VS Waitlist)")

caceOutput <- caceSRTBoot(
  T3_blastduration ~ T1_blastduration + T1_GroupReflection,
  intervention = "T1_GroupReflection",
  compliance = "part.percent",
  nBoot = 2000,
  data = data3)
plot(caceOutput)
title(main = "Blast Duration (Reflection VS Waitlist)")

caceOutput <- caceSRTBoot(
  T3_blastintensity.duration ~ T1_blastintensity.duration + T1_GroupReflection,
  intervention = "T1_GroupReflection",
  compliance = "part.percent",
  nBoot = 2000,
  data = data3)
plot(caceOutput)
title(main = "Blast Intensity * Duration (Reflection VS Waitlist)")

caceOutput <- caceSRTBoot(
  T3_WHS ~ T1_WHS + T1_GroupReflection,
  intervention = "T1_GroupReflection",
  compliance = "part.percent",
  nBoot = 2000,
  data = data3)
plot(caceOutput)
title(main = "Willingness to Help (Reflection VS Waitlist)")

caceOutput <- caceSRTBoot(
  T3_CLS ~ T1_CLS + T1_GroupReflection,
  intervention = "T1_GroupReflection",
  compliance = "part.percent",
  nBoot = 2000,
  data = data3)
plot(caceOutput)
title(main = "Compassionate Love (Reflection VS Waitlist)")

caceOutput <- caceSRTBoot(
  T3_CLS ~ T1_CLS + T1_GroupMeditation,
  intervention = "T1_GroupMeditation",
  compliance = "part.percent",
  nBoot = 2000,
  data = data3)
plot(caceOutput)
title(main = "Compassionate Love (Meditation VS Waitlist)")

# Create dummy variables for T1_Group

data$T1_GroupReflection <- as.numeric(data$T1_Group == "Reflection")
data$T1_GroupMeditation <- as.numeric(data$T1_Group == "Meditation")

################################################
################################################

# T2_blastintensity.duration - IAT
T2_blastintensity.duration.IAT <- lm(
  T2_blastintensity.duration ~ T1_GroupReflection + T1_GroupMeditation +
    T2_IAT + T2_Condition + T1_GroupReflection:T2_IAT + T1_GroupMeditation:T2_IAT +
    T1_GroupReflection:T2_Condition + T1_GroupMeditation:T2_Condition + T2_IAT:T2_Condition +
    T1_GroupReflection:T2_IAT:T2_Condition + T1_GroupMeditation:T2_IAT:T2_Condition,
  data = data
)

check_model(T2_blastintensity.duration.IAT)

T2_blastintensity.duration.IAT %>%
  nice_lm() %>%
  nice_table(highlight = TRUE)
# Make interaction plot
interact_plot(T2_blastintensity.duration.IAT,
  pred = "T2_IAT", modx = "T2_Condition",
  interval = TRUE, mod2 = "T1_GroupMeditation",
  x.label = "Implicit Aggression (IAT)",
  y.label = "Blast Intensity * Duration (Taylor)",
  mod2.labels = c("Waitlist Group", "Meditation Group"),
  legend.main = "Condition"
) +
  theme(text = element_text(size = 25))
# T2_blastintensity - IAT
T2_blastintensity.IAT <- lm(
  T2_blastintensity ~ T1_GroupReflection + T1_GroupMeditation +
    T2_IAT + T2_Condition + T1_GroupReflection:T2_IAT + T1_GroupMeditation:T2_IAT +
    T1_GroupReflection:T2_Condition + T1_GroupMeditation:T2_Condition +
    T2_IAT:T2_Condition + T1_GroupReflection:T2_IAT:T2_Condition +
    T1_GroupMeditation:T2_IAT:T2_Condition,
  data = data
)

check_model(T2_blastintensity.IAT)

T2_blastintensity.IAT %>%
  nice_lm() %>%
  nice_table(highlight = TRUE)
# T2_blastduration - IAT
T2_blastduration.IAT <- lm(
  T2_blastduration ~ T1_GroupReflection + T1_GroupMeditation +
    T2_IAT + T2_Condition + T1_GroupReflection:T2_IAT + T1_GroupMeditation:T2_IAT +
    T1_GroupReflection:T2_Condition + T1_GroupMeditation:T2_Condition +
    T2_IAT:T2_Condition + T1_GroupReflection:T2_IAT:T2_Condition +
    T1_GroupMeditation:T2_IAT:T2_Condition,
  data = data
)

check_model(T2_blastduration.IAT)

T2_blastduration.IAT %>%
  nice_lm() %>%
  nice_table(highlight = TRUE)
# Make interaction plot
interact_plot(T2_blastduration.IAT,
  pred = "T2_IAT", modx = "T2_Condition",
  interval = TRUE, mod2 = "T1_GroupMeditation",
  x.label = "Implicit Aggression (IAT)",
  y.label = "Blast Duration (Taylor)",
  mod2.labels = c("Waitlist Group", "Meditation Group"),
  legend.main = "Condition"
) +
  theme(text = element_text(size = 25))
# T2_Charity - IAT
T2_Charity.IAT <- lm(
  T2_Charity ~ T1_GroupReflection + T1_GroupMeditation + T2_IAT +
    T2_Condition + T1_GroupReflection:T2_IAT + T1_GroupMeditation:T2_IAT +
    T1_GroupReflection:T2_Condition + T1_GroupMeditation:T2_Condition +
    T2_IAT:T2_Condition + T1_GroupReflection:T2_IAT:T2_Condition +
    T1_GroupMeditation:T2_IAT:T2_Condition + T2_Familiarity,
  data = data
)

check_model(T2_Charity.IAT)

T2_Charity.IAT %>%
  nice_lm() %>%
  nice_table(highlight = TRUE)
# Compassion - IAT
T2_CLS.IAT <- lm(
  T2_CLS ~ T1_GroupReflection + T1_GroupMeditation + T2_IAT +
    T2_Condition + T1_GroupReflection:T2_IAT + T1_GroupMeditation:T2_IAT +
    T1_GroupReflection:T2_Condition + T1_GroupMeditation:T2_Condition +
    T2_IAT:T2_Condition + T1_GroupReflection:T2_IAT:T2_Condition +
    T1_GroupMeditation:T2_IAT:T2_Condition,
  data = data
)

check_model(T2_CLS.IAT)

table5 <- T2_CLS.IAT %>%
  nice_lm() %>%
  nice_table(highlight = TRUE)
table5
x <- T2_CLS.IAT %>%
  nice_lm() %>%
  mutate(
    `Dependent Variable` = text_remove(
      `Dependent Variable`, "T2_"
    ),
    `Predictor` = text_remove(
      `Predictor`, "T2_|T1_"
    )
  ) %>%
  mutate(
    `Dependent Variable` = case_match(
      `Dependent Variable`,
      "CLS" ~ "Compassionate Love",
      .default = `Dependent Variable`
    ),
    across(`Dependent Variable`, toTitleCase)
  ) %>%
  nice_table(
    highlight = TRUE,
    title = c("Table 5", "Moderation Analyses for Three-Way Interaction on Compassionate Love"),
    note = c(
      "IAT = implicit aggression (Implicit Association Test). The three-way interaction between group, implicit aggression, and depletion is significant, but only for the meditation group relative to the waitlist, not for the reflection group.",
      "* p < .05, ** p < .01, *** p < .001. Rows with grey shading indicate statistical significance."
    )
  )
x

flextable::save_as_docx(x, path = "Results/table5.docx")
# Make interaction plot
p_CLS <- interact_plot(T2_CLS.IAT,
  pred = "T2_IAT", modx = "T2_Condition",
  interval = TRUE, mod2 = "T1_GroupMeditation",
  x.label = "Implicit Aggression (IAT)",
  y.label = "Compassionate Love",
  mod2.labels = c("Waitlist Group", "Meditation Group"),
  legend.main = "Condition"
) +
  theme(text = element_text(size = 25))
p_CLS
# T2_WHS - IAT
T2_WHS.IAT <- lm(
  T2_WHS ~ T1_GroupReflection + T1_GroupMeditation + T2_IAT + T2_Condition +
    T1_GroupReflection:T2_IAT + T1_GroupMeditation:T2_IAT + T1_GroupReflection:T2_Condition +
    T1_GroupMeditation:T2_Condition + T2_IAT:T2_Condition + T1_GroupReflection:T2_IAT:T2_Condition +
    T1_GroupMeditation:T2_IAT:T2_Condition,
  data = data
)

check_model(T2_WHS.IAT)

T2_WHS.IAT %>%
  nice_lm() %>%
  nice_table(highlight = TRUE)
# T2_memory.altruistic - IAT
T2_memory.altruistic.IAT <- lm(
  T2_memory.altruistic ~ T1_GroupReflection + T1_GroupMeditation + T2_IAT +
    T2_Condition + T1_GroupReflection:T2_IAT + T1_GroupMeditation:T2_IAT +
    T1_GroupReflection:T2_Condition + T1_GroupMeditation:T2_Condition +
    T2_IAT:T2_Condition + T1_GroupReflection:T2_IAT:T2_Condition +
    T1_GroupMeditation:T2_IAT:T2_Condition,
  data = data
)

check_model(T2_memory.altruistic.IAT)

table6 <- T2_memory.altruistic.IAT %>%
  nice_lm() %>%
  nice_table(highlight = TRUE)
table6
x <- T2_memory.altruistic.IAT %>%
  nice_lm() %>%
  mutate(
    `Dependent Variable` = text_remove(
      `Dependent Variable`, "T2_"
    ),
    `Predictor` = text_remove(
      `Predictor`, "T2_|T1_"
    )
  ) %>%
  mutate(
    `Dependent Variable` = case_match(
      `Dependent Variable`,
      "memory.altruistic" ~ "Accessibility of Altruistic Memory",
      .default = `Dependent Variable`
    ),
    across(`Dependent Variable`, toTitleCase)
  ) %>%
  nice_table(
    highlight = TRUE,
    title = c("Table 6", "Moderation Analyses for Three-Way Interaction on the Accessibility of Altruistic Memory"),
    note = c(
      "IAT = implicit aggression (Implicit Association Test). The three-way interaction between group, implicit aggression, and depletion is significant, but only for the meditation group relative to the waitlist, not for the reflection group.",
      "We report the squared semi-partial correlation (sr2), also known as the delta R squared (ΔR2), as an index of effect size. The sr2 allows us to quantify the unique contribution (proportion of variance explained) of an independent variable on the dependent variable, beyond the other variables in the model. The sr2 is often considered a better indicator of the practical relevance of a variable.",
      "* p < .05, ** p < .01. Rows with grey shading indicate statistical significance."
    )
  )
x

flextable::save_as_docx(x, path = "Results/table6.docx")
# Make interaction plot
p_memalt <- interact_plot(T2_memory.altruistic.IAT,
  pred = "T2_IAT", modx = "T2_Condition",
  interval = TRUE, mod2 = "T1_GroupMeditation",
  x.label = "Implicit Aggression (IAT)",
  y.label = "Reaction Time (Altruistic Memory)",
  mod2.labels = c("Waitlist Group", "Meditation Group"),
  legend.main = "Condition"
) +
  theme(text = element_text(size = 25))
p_memalt
plots(p_CLS, p_memalt, n_rows = 2, tags = "A")
# Save high-resolution .pdf and .png image files
ggsave("Figure 5.pdf", width = 12, height = 12, unit = "in", dpi = 300)
ggsave("Figure 5.png", width = 12, height = 12, unit = "in", dpi = 300)
# T2_memory.aggressive - IAT
T2_memory.aggressive.IAT <- lm(
  T2_memory.aggressive ~ T1_GroupReflection + T1_GroupMeditation + T2_IAT +
    T2_Condition + T1_GroupReflection:T2_IAT + T1_GroupMeditation:T2_IAT +
    T1_GroupReflection:T2_Condition + T1_GroupMeditation:T2_Condition +
    T2_IAT:T2_Condition + T1_GroupReflection:T2_IAT:T2_Condition +
    T1_GroupMeditation:T2_IAT:T2_Condition,
  data = data
)

check_model(T2_memory.aggressive.IAT)

T2_memory.aggressive.IAT %>%
  nice_lm() %>%
  nice_table(highlight = TRUE)
# T2_blastintensity.duration - NOBAGS
T2_blastintensity.duration.NOBAGS <- lm(
  T2_blastintensity.duration ~ T1_GroupReflection + T1_GroupMeditation +
    T2_NOBAGS + T2_Condition + T1_GroupReflection:T2_NOBAGS + T1_GroupMeditation:T2_NOBAGS +
    T1_GroupReflection:T2_Condition + T1_GroupMeditation:T2_Condition +
    T2_NOBAGS:T2_Condition + T1_GroupReflection:T2_NOBAGS:T2_Condition +
    T1_GroupMeditation:T2_NOBAGS:T2_Condition,
  data = data
)

check_model(T2_blastintensity.duration.NOBAGS)

T2_blastintensity.duration.NOBAGS %>%
  nice_lm() %>%
  nice_table(highlight = TRUE)
# T2_blastintensity - NOBAGS
T2_blastintensity.NOBAGS <- lm(
  T2_blastintensity ~ T1_GroupReflection + T1_GroupMeditation +
    T2_NOBAGS + T2_Condition + T1_GroupReflection:T2_NOBAGS +
    T1_GroupMeditation:T2_NOBAGS + T1_GroupReflection:T2_Condition +
    T1_GroupMeditation:T2_Condition + T2_NOBAGS:T2_Condition +
    T1_GroupReflection:T2_NOBAGS:T2_Condition +
    T1_GroupMeditation:T2_NOBAGS:T2_Condition,
  data = data
)

check_model(T2_blastintensity.NOBAGS)

T2_blastintensity.NOBAGS %>%
  nice_lm() %>%
  nice_table(highlight = TRUE)
# T2_blastduration - NOBAGS
T2_blastduration.NOBAGS <- lm(
  T2_blastduration ~ T1_GroupReflection + T1_GroupMeditation +
    T2_NOBAGS + T2_Condition + T1_GroupReflection:T2_NOBAGS +
    T1_GroupMeditation:T2_NOBAGS + T1_GroupReflection:T2_Condition +
    T1_GroupMeditation:T2_Condition + T2_NOBAGS:T2_Condition +
    T1_GroupReflection:T2_NOBAGS:T2_Condition +
    T1_GroupMeditation:T2_NOBAGS:T2_Condition,
  data = data
)

check_model(T2_blastduration.NOBAGS)

T2_blastduration.NOBAGS %>%
  nice_lm() %>%
  nice_table(highlight = TRUE)
# Make interaction plot
interact_plot(T2_blastduration.NOBAGS,
  pred = "T2_NOBAGS", modx = "T2_Condition",
  interval = TRUE,
  x.label = "Normative beliefs about aggression (NOBAGS)",
  y.label = "Blast Duration (Taylor)",
  legend.main = "Condition"
) +
  theme(text = element_text(size = 25))
# T2_blastduration.NOBAGS <- lm(
#   T2_blastduration ~ T1_GroupReflection + T1_GroupMeditation +
#     T2_NOBAGS + T2_Condition_dum + T1_GroupReflection:T2_NOBAGS +
#     T1_GroupMeditation:T2_NOBAGS +  T1_GroupReflection:T2_Condition_dum +
#     T1_GroupMeditation:T2_Condition_dum + T2_NOBAGS:T2_Condition_dum +
#     T1_GroupReflection:T2_NOBAGS:T2_Condition_dum +
#     T1_GroupMeditation:T2_NOBAGS:T2_Condition_dum, data = data)

# T2_blastduration.NOBAGS %>%
#   nice_lm_slopes(predictor = "T2_Condition_dum",
#                  moderator = "T2_NOBAGS") %>%
#   nice_table(highlight = TRUE)
#
# ?nice_slopes
#
# probe_interaction(T2_blastintensity.duration.NOBAGS, pred = T2_NOBAGS,
#                   modx = T2_Condition, digits = 3)
#
# emtrends(T2_blastintensity.duration.NOBAGS, pairwise ~ T2_Condition, var = "T2_NOBAGS")
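For reference, probing such a moderator-by-condition interaction with simple slopes could look like the sketch below (it follows the commented `emtrends` attempt above, here applied to the blast-duration model, since that is the variable NOBAGS moderated).

# Sketch: compare the simple slope of NOBAGS on blast duration between
# depletion conditions (follows the commented emtrends attempt above)
emtrends(T2_blastduration.NOBAGS, pairwise ~ T2_Condition, var = "T2_NOBAGS")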
# T2_Charity - NOBAGS
T2_Charity.NOBAGS <- lm(
  T2_Charity ~ T1_GroupReflection + T1_GroupMeditation + T2_NOBAGS +
    T2_Condition + T1_GroupReflection:T2_NOBAGS + T1_GroupMeditation:T2_NOBAGS +
    T1_GroupReflection:T2_Condition + T1_GroupMeditation:T2_Condition +
    T2_NOBAGS:T2_Condition + T1_GroupReflection:T2_NOBAGS:T2_Condition +
    T1_GroupMeditation:T2_NOBAGS:T2_Condition + T2_Familiarity,
  data = data
)

check_model(T2_Charity.NOBAGS)

T2_Charity.NOBAGS %>%
  nice_lm() %>%
  nice_table(highlight = TRUE)
# Compassion - NOBAGS
T2_CLS.NOBAGS <- lm(
  T2_CLS ~ T1_GroupReflection + T1_GroupMeditation + T2_NOBAGS +
    T2_Condition + T1_GroupReflection:T2_NOBAGS + T1_GroupMeditation:T2_NOBAGS +
    T1_GroupReflection:T2_Condition + T1_GroupMeditation:T2_Condition +
    T2_NOBAGS:T2_Condition + T1_GroupReflection:T2_NOBAGS:T2_Condition +
    T1_GroupMeditation:T2_NOBAGS:T2_Condition,
  data = data
)

check_model(T2_CLS.NOBAGS)

T2_CLS.NOBAGS %>%
  nice_lm() %>%
  nice_table(highlight = TRUE)
# T2_WHS - NOBAGS
T2_WHS.NOBAGS <- lm(
  T2_WHS ~ T1_GroupReflection + T1_GroupMeditation + T2_NOBAGS + T2_Condition +
    T1_GroupReflection:T2_NOBAGS + T1_GroupMeditation:T2_NOBAGS +
    T1_GroupReflection:T2_Condition + T1_GroupMeditation:T2_Condition +
    T2_NOBAGS:T2_Condition + T1_GroupReflection:T2_NOBAGS:T2_Condition +
    T1_GroupMeditation:T2_NOBAGS:T2_Condition,
  data = data
)

check_model(T2_WHS.NOBAGS)

T2_WHS.NOBAGS %>%
  nice_lm() %>%
  nice_table(highlight = TRUE)
# T2_memory.altruistic - NOBAGS
T2_memory.altruistic.NOBAGS <- lm(
  T2_memory.altruistic ~ T1_GroupReflection + T1_GroupMeditation +
    T2_NOBAGS + T2_Condition + T1_GroupReflection:T2_NOBAGS +
    T1_GroupMeditation:T2_NOBAGS + T1_GroupReflection:T2_Condition +
    T1_GroupMeditation:T2_Condition + T2_NOBAGS:T2_Condition +
    T1_GroupReflection:T2_NOBAGS:T2_Condition +
    T1_GroupMeditation:T2_NOBAGS:T2_Condition,
  data = data
)

check_model(T2_memory.altruistic.NOBAGS)

T2_memory.altruistic.NOBAGS %>%
  nice_lm() %>%
  nice_table(highlight = TRUE)
# T2_memory.aggressive - NOBAGS
T2_memory.aggressive.NOBAGS <- lm(
  T2_memory.aggressive ~ T1_GroupReflection + T1_GroupMeditation + T2_NOBAGS +
    T2_Condition + T1_GroupReflection:T2_NOBAGS + T1_GroupMeditation:T2_NOBAGS +
    T1_GroupReflection:T2_Condition + T1_GroupMeditation:T2_Condition +
    T2_NOBAGS:T2_Condition + T1_GroupReflection:T2_NOBAGS:T2_Condition +
    T1_GroupMeditation:T2_NOBAGS:T2_Condition,
  data = data
)

check_model(T2_memory.aggressive.NOBAGS)

T2_memory.aggressive.NOBAGS %>%
  nice_lm() %>%
  nice_table(highlight = TRUE)
# Plot mean change scores over time by group (control condition), overlaying
# the depleted condition's Time-2 means as dotted segments anchored at 0
plot_means_over_time_depletion <- function(variable, 
                                           ytitle, 
                                           error_bars = FALSE, 
                                           data_delta,
                                           T1_Group = "T1_Group",
                                           T2_Condition = "T2_Condition") {
  time <- data_delta %>%
    ungroup() %>%
    select(ends_with(variable)) %>%
    names()
  data_delta[time] <- lapply(data_delta[time], as.numeric)

  data_control <- data_delta %>%
    filter(T2_Condition == "Control")

  data_depleted <- data_delta %>%
    filter(T2_Condition == "Depleted") %>%
    mutate(subject_ID = seq(n())) %>%
    select(subject_ID, T1_Group, all_of(time)) %>%
    pivot_longer(
      cols = all_of(time), names_to = "Time",
      names_ptypes = factor()
    )

  data_depleted_summary <- data_depleted %>%
    group_by(Time, T1_Group) %>%
    filter(Time == time[2]) %>%
    summarize(value = mean(value), .groups = "keep") %>%
    mutate(Time = 2)

  # Duplicate the summary rows to draw segments from an artificial baseline
  # of 0 (change scores) at Time 1 to the depleted means at Time 2
  # (assumes exactly three groups, i.e., one summary row per group)
  data_depleted_summary <-
    bind_rows(data_depleted_summary, data_depleted_summary)

  data_depleted_summary$value[1:3] <- 0
  data_depleted_summary$Time[1:3] <- 1

  pd <- ggplot2::position_dodge(0.2) # dodge points 0.2 to the left and right
  plot_means_over_time(
    data_control,
    response = time,
    group = "T1_Group",
    error_bars = error_bars,
    ytitle = ytitle
  ) +
    ggplot2::geom_line(
      data = data_depleted_summary,
      ggplot2::aes(color = .data$T1_Group),
      linewidth = 3,
      linetype = "dotted",
      position = pd
    ) +
    ggplot2::geom_point(
      data = data_depleted_summary,
      size = 4,
      fill = "white",
      stroke = 1.5,
      position = pd
    ) #+
  # annotate(geom = "text",
  #          label = c("Depletion"),
  #          x = c(2.25),
  #          y = c(-0.1, 0.2, 0.57),
  #          size = 6)
}

p_CLS <- plot_means_over_time_depletion(
  data_delta = data_delta,
  variable = "CLS",
  ytitle = "Compassionate Love"
)
p_CLS
p_NOBAGS <- plot_means_over_time_depletion(
  variable = "NOBAGS",
  ytitle = "Aggression Attitude",
  data_delta = data_delta
)
p_NOBAGS
p_attitude <- plot_means_over_time_depletion(
  variable = "attitude",
  ytitle = "Intergroup Attitude (Positive)",
  data_delta = data_delta
)
p_attitude
p_dehumanization <- plot_means_over_time_depletion(
  variable = "dehumanization",
  ytitle = "Humanization",
  data_delta = data_delta
)
p_dehumanization
p_IAT <- plot_means_over_time_depletion(
  variable = "IAT",
  ytitle = "Implicit Aggression (IAT)",
  data_delta = data_delta
)
p_IAT
p_WHS <- plot_means_over_time_depletion(
  variable = "WHS",
  ytitle = "Willingness to Help",
  data_delta = data_delta
)
p_WHS
p_blastintensity.duration <- plot_means_over_time_depletion(
  variable = "blastintensity.duration",
  ytitle = "Behavioural Aggression",
  data_delta = data_delta
)
p_blastintensity.duration
# Simulated data
toy_data <- data.frame(
  Time = factor(c(1, 2, 3)),
  Prosociality = c(10, 20, 20, 10, 15, 15, 10, 10, 10),
  Group = factor(
    c(
      rep("Meditation", 3),
      rep("Reflection", 3),
      rep("Waitlist", 3)
    ),
    levels = c("Meditation", "Reflection", "Waitlist")
  )
)

# Depletion
deplete <- data.frame(
  Time = factor(c(1, 2)),
  Prosociality = c(10, 19.3, 10, 12.5, 10, 7.5),
  Group = factor(
    c(
      rep("Meditation", 2),
      rep("Reflection", 2),
      rep("Waitlist", 2)
    ),
    levels = c("Meditation", "Reflection", "Waitlist")
  )
)

data_deplete <- bind_rows("Non-Depleted" = toy_data, "Depleted" = deplete, .id = "Depletion")
data_deplete$Depletion <- factor(data_deplete$Depletion, levels = c("Non-Depleted", "Depleted"))

data_deplete2 <- data_deplete %>%
  rename(
    T1_Group = "Group",
    T2_Condition = "Depletion",
    mean = "Prosociality"
  ) %>%
  select(T1_Group, T2_Condition, Time, mean) %>%
  arrange(Time)

plot_means_over_time_depletion_summary <- function(data_summary,
                                                   T1_Group = "T1_Group",
                                                   T2_Condition = "T2_Condition",
                                                   mean = "mean",
                                                   ytitle = "") {
  ggplot(data_summary, aes(
    x = Time, y = .data[[mean]], color = .data[[T1_Group]],
    shape = .data[[T1_Group]], linetype = .data[[T2_Condition]],
    group = interaction(.data[[T1_Group]], .data[[T2_Condition]])
  )) +
    geom_line(linewidth = 3) +
    geom_point(size = 4, fill = "white", stroke = 1.5) +
    ggplot2::discrete_scale("shape",
      palette = function(n) {
        c(21:25, 0:20)[1:n]
      }
    ) +
    theme_bw(base_size = 24) +
    theme(
      axis.text.x = element_text(colour = "black"),
      axis.text.y = element_text(colour = "black"),
      panel.grid.major = element_blank(),
      panel.grid.minor = element_blank(),
      panel.border = element_blank(),
      axis.line = element_line(colour = "black"),
      axis.ticks = element_line(colour = "black")
    ) +
    scale_linetype_manual(values = c("solid", "dotted"), name = "Depletion") +
    # Fix the order of the legends
    guides(
      linetype = guide_legend(order = 2), # "Depletion" second
      color = guide_legend(order = 1), # "Group" first
      shape = guide_legend(order = 1) # "Group" legend for shape, same order
    ) +
    theme(legend.title = element_blank()) +
    labs(y = ytitle)
}

hypothesis <- plot_means_over_time_depletion_summary(
  data_summary = data_deplete2,
  ytitle = "Prosociality (Hypothesis)"
) +
  scale_y_continuous(limits = c(7.5, 20), breaks = seq(7.5, 20, by = 2.5)) +
  geom_segment(x = 2, y = 14.3, xend = 2, yend = 13.2, linewidth = 1, arrow = arrow(
    length = unit(0.2, "cm"), type = "closed"
  ), colour = "black", show.legend = FALSE) +
  geom_segment(x = 2, y = 9.3, xend = 2, yend = 8.2, linewidth = 1, arrow = arrow(
    length = unit(0.2, "cm"), type = "closed"
  ), colour = "black", show.legend = FALSE) +
  annotate(geom = "text", label = c("Depletion"), x = c(2.35), y = c(19.3, 12.5, 7.5), size = 6)
hypothesis
plots(hypothesis,
  p_CLS + guides(colour = "none", shape = "none"),
  p_WHS + guides(colour = "none", shape = "none"),
  p_blastintensity.duration + guides(colour = "none", shape = "none"),
  p_attitude + guides(colour = "none", shape = "none"),
  p_dehumanization + guides(colour = "none", shape = "none"),
  p_NOBAGS + guides(colour = "none", shape = "none"),
  p_IAT + guides(colour = "none", shape = "none"),
  n_columns = 2, guides = "collect", tags = "a",
  tag_prefix = "(", tag_suffix = ")"
) &
  theme(legend.position = "bottom")

# Save high-resolution .pdf and .png image files
ggsave("Figure X_depletion.pdf", width = 17, height = 25, unit = "in", dpi = 300)
ggsave("Figure X_depletion.png", width = 17, height = 25, unit = "in", dpi = 300)
# Same depletion plot, but computed from the observed (change-score) data
plot_means_over_time_depletion_real <- function(data_delta,
                                                variable,
                                                T1_Group = "T1_Group",
                                                T2_Condition = "T2_Condition",
                                                mean = "mean",
                                                ytitle = "") {
  time <- data_delta %>%
    ungroup() %>%
    select(ends_with(variable)) %>%
    names()
  data_long <- data_delta %>%
    mutate(subject_ID = row_number(),
           across(all_of(time), as.numeric)) %>%
    pivot_longer(
      cols = all_of(time),
      names_to = "Time",
      values_to = "mean"
    ) %>%
    mutate(Time = factor(gsub("[^0-9]", "", .data$Time)))

  data_summary <- data_long %>% 
    group_by(T1_Group, T2_Condition, Time) %>% 
    summarize(mean = mean(mean), .groups = "keep")
  
  plot_means_over_time_depletion_summary(
    data_summary = data_summary,
    T1_Group = T1_Group,
    T2_Condition = T2_Condition,
    ytitle = ytitle
  )
}

# Plot
p_CLS <- plot_means_over_time_depletion_real(
  data_delta = data_delta,
  variable = "CLS",
  ytitle = "Compassionate Love"
)
p_CLS
p_NOBAGS <- plot_means_over_time_depletion_real(
  data_delta = data_delta,
  variable = "NOBAGS",
  ytitle = "Aggression Attitude"
)
p_NOBAGS
p_attitude <- plot_means_over_time_depletion_real(
  data_delta = data_delta,
  variable = "attitude",
  ytitle = "Intergroup Attitude (Positive)"
)
p_attitude
p_dehumanization <- plot_means_over_time_depletion_real(
  data_delta = data_delta,
  variable = "dehumanization",
  ytitle = "Humanization"
)
p_dehumanization
p_IAT <- plot_means_over_time_depletion_real(
  data_delta = data_delta,
  variable = "IAT",
  ytitle = "Implicit Aggression (IAT)"
)
p_IAT
p_WHS <- plot_means_over_time_depletion_real(
  data_delta = data_delta,
  variable = "WHS",
  ytitle = "Willingness to Help"
)
p_WHS
p_blastintensity.duration <- plot_means_over_time_depletion_real(
  data_delta = data_delta,
  variable = "blastintensity.duration",
  ytitle = "Behavioural Aggression"
)
p_blastintensity.duration
plots(hypothesis,
  p_CLS + guides(colour = "none", shape = "none", linetype = "none"),
  p_WHS + guides(colour = "none", shape = "none", linetype = "none"),
  p_blastintensity.duration + guides(colour = "none", shape = "none", linetype = "none"),
  p_attitude + guides(colour = "none", shape = "none", linetype = "none"),
  p_dehumanization + guides(colour = "none", shape = "none", linetype = "none"),
  p_NOBAGS + guides(colour = "none", shape = "none", linetype = "none"),
  p_IAT + guides(colour = "none", shape = "none", linetype = "none"),
  n_columns = 2, guides = "collect", tags = "a",
  tag_prefix = "(", tag_suffix = ")"
) &
  theme(legend.position = "bottom")

# Save high-resolution .pdf and .png image files
ggsave("Figure X_depletion2.pdf", width = 17, height = 25, unit = "in", dpi = 300)
ggsave("Figure X_depletion2.png", width = 17, height = 25, unit = "in", dpi = 300)
# Same as above, but at Time 3 both condition lines converge to the overall
# group means (the depletion manipulation took place at Time 2)
plot_means_over_time_depletion_real_final <- function(data_delta,
                                                      variable,
                                                      T1_Group = "T1_Group",
                                                      T2_Condition = "T2_Condition",
                                                      mean = "mean",
                                                      ytitle = "") {
  time <- data_delta %>%
    ungroup() %>%
    select(ends_with(variable)) %>%
    names()
  data_long <- data_delta %>%
    mutate(subject_ID = row_number(),
           across(all_of(time), as.numeric)) %>%
    pivot_longer(
      cols = all_of(time),
      names_to = "Time",
      values_to = "mean"
    ) %>%
    mutate(Time = factor(gsub("[^0-9]", "", .data$Time)))

  data_summary <- data_long %>% 
    group_by(T1_Group, T2_Condition, Time) %>% 
    summarize(mean = mean(mean), .groups = "keep")
  
  # Overall group means, collapsed across depletion conditions
  data_summary2 <- data_long %>% 
    group_by(T1_Group, Time) %>% 
    summarize(mean = mean(mean), .groups = "keep")
  
  # At Time 3, replace the condition-specific means with the overall group
  # means, so that both condition lines converge at follow-up
  data_summary <- data_summary %>% 
    mutate(mean = case_when(
      T1_Group == "Meditation" & Time == 3 ~
        data_summary2 %>% 
        filter(T1_Group == "Meditation" & Time == 3) %>% 
        pull(mean),
      T1_Group == "Reflection" & Time == 3 ~
        data_summary2 %>% 
        filter(T1_Group == "Reflection" & Time == 3) %>% 
        pull(mean),
      T1_Group == "Waitlist" & Time == 3 ~
        data_summary2 %>% 
        filter(T1_Group == "Waitlist" & Time == 3) %>% 
        pull(mean),
      .default = mean
    ))
  
  # Plot
  plot_means_over_time_depletion_summary(
    data_summary = data_summary,
    T1_Group = T1_Group,
    T2_Condition = T2_Condition,
    ytitle = ytitle
  )
}

p_CLS <- plot_means_over_time_depletion_real_final(
  data_delta = data_delta,
  variable = "CLS",
  ytitle = "Compassionate Love"
)
p_CLS

p_NOBAGS <- plot_means_over_time_depletion_real_final(
  data_delta = data_delta,
  variable = "NOBAGS",
  ytitle = "Aggression Attitude"
)
p_NOBAGS
p_attitude <- plot_means_over_time_depletion_real_final(
  data_delta = data_delta,
  variable = "attitude",
  ytitle = "Intergroup Attitude (Positive)"
)
p_attitude
p_dehumanization <- plot_means_over_time_depletion_real_final(
  data_delta = data_delta,
  variable = "dehumanization",
  ytitle = "Humanization"
)
p_dehumanization
p_IAT <- plot_means_over_time_depletion_real_final(
  data_delta = data_delta,
  variable = "IAT",
  ytitle = "Implicit Aggression (IAT)"
)
p_IAT
p_WHS <- plot_means_over_time_depletion_real_final(
  data_delta = data_delta,
  variable = "WHS",
  ytitle = "Willingness to Help"
)
p_WHS
p_blastintensity.duration <- plot_means_over_time_depletion_real_final(
  data_delta = data_delta,
  variable = "blastintensity.duration",
  ytitle = "Behavioural Aggression"
)
p_blastintensity.duration
plots(hypothesis,
  p_CLS + guides(colour = "none", shape = "none", linetype = "none"),
  p_WHS + guides(colour = "none", shape = "none", linetype = "none"),
  p_blastintensity.duration + guides(colour = "none", shape = "none", linetype = "none"),
  p_attitude + guides(colour = "none", shape = "none", linetype = "none"),
  p_dehumanization + guides(colour = "none", shape = "none", linetype = "none"),
  p_NOBAGS + guides(colour = "none", shape = "none", linetype = "none"),
  p_IAT + guides(colour = "none", shape = "none", linetype = "none"),
  n_columns = 2, guides = "collect", tags = "a",
  tag_prefix = "(", tag_suffix = ")"
) &
  theme(legend.position = "bottom")

# Save high-resolution .pdf and .png image files
ggsave("Figure X_depletion3.pdf", width = 17, height = 25, unit = "in", dpi = 300)
ggsave("Figure X_depletion3.png", width = 17, height = 25, unit = "in", dpi = 300)
# Visualization
library(ggpubr)
data_delta %>%
  # group_by(T1_Group) %>%
  ungroup() %>%
  # mutate(T2_CLS = T2_CLS - min(T2_CLS)) %>%
  mutate(
    T2_CLS = as.numeric(T2_CLS),
    T2_NOBAGS = as.numeric(T2_NOBAGS)
  ) %>%
  ggbarplot(
    x = "T1_Group", y = c(
      "T2_CLS", "T2_WHS", "T2_blastintensity.duration", "T2_attitude",
      "T2_dehumanization", "T2_NOBAGS", "T2_IAT"
    ),
    fill = "T2_Condition", palette = "grey",
    add = "mean_ci", add.params = list(group = "T2_Condition"),
    position = position_dodge(0.8)
  )

# Inspect the item labels to recover the social group names
data %>%
  select(starts_with("T1_attitude")) %>% 
  get_label() %>% 
  lapply(function(x) gsub(".*- ", "", x)) %>% 
  unlist() %>% 
  unname()

social.groups <- c("Blacks", "Homeless", "Native", "Muslims", "Refugees", "Women",
                   "Animals", "Elderly", "Whites")

charities <- data %>%
  select(ends_with("1_1") & contains("charity")) %>% 
  get_label %>% 
  lapply(function(x) gsub(".*- ", "", x)) %>% 
  unlist() %>% unname
charities

regions <- c("Montreal", "Quebec", "Canada", "International")
regions

data$ID <- seq_len(nrow(data))

T1_attitude <- data %>%
  select(ID, T1_Group, T1_attitude_1:T1_attitude_9) %>% 
  pivot_longer(cols = T1_attitude_1:T1_attitude_9, 
               names_to = "Group",
               values_to = "attitude") %>% 
  mutate(Group = factor(Group, labels = social.groups),
         Time = 1)

nice_violin(T1_attitude,
            group = "Group",
            response = "attitude",
            ytitle = "Positive Explicit Attitude",
            CIcap.width = 0.3,
            obs = "jitter",
            border.size = 1,
            alpha = 0.7,
            groups.order = "increasing")

T1_dehumanization <- data %>%
  select(ID, T1_Group, T1_dehumanization_1:T1_dehumanization_9) %>% 
  pivot_longer(cols = T1_dehumanization_1:T1_dehumanization_9, 
               names_to = "Group",
               values_to = "attitude") %>% 
  mutate(Group = factor(Group, labels = social.groups),
         Time = 1)

nice_violin(T1_dehumanization,
            group = "Group",
            response = "attitude",
            ytitle = "Humanization",
            CIcap.width = 0.3,
            obs = "jitter",
            border.size = 1,
            alpha = 0.7,
            groups.order = "increasing")

T2_attitude <- data %>%
  select(ID, T1_Group, T2_attitude_1:T2_attitude_9) %>% 
  pivot_longer(cols = T2_attitude_1:T2_attitude_9, 
               names_to = "Group",
               values_to = "attitude") %>% 
  mutate(Group = factor(Group, labels = social.groups),
         Time = 2)

nice_violin(T2_attitude,
            group = "Group",
            response = "attitude",
            ytitle = "Positive Explicit Attitude",
            CIcap.width = 0.3,
            obs = "jitter",
            border.size = 1,
            alpha = 0.7,
            groups.order = "increasing")

T2_dehumanization <- data %>%
  select(ID, T1_Group, T2_dehumanization_1:T2_dehumanization_9) %>% 
  pivot_longer(cols = T2_dehumanization_1:T2_dehumanization_9, 
               names_to = "Group",
               values_to = "attitude") %>% 
  mutate(Group = factor(Group, labels = social.groups),
         Time = 2)

nice_violin(T2_dehumanization,
            group = "Group",
            response = "attitude",
            ytitle = "Humanization",
            CIcap.width = 0.3,
            obs = "jitter",
            border.size = 1,
            alpha = 0.7,
            groups.order = "increasing")

data %>%
  select(contains("charity") & ends_with("1_1")) %>% 
  pivot_longer(cols = everything(), 
               names_to = "charity",
               values_to = "donation") %>% 
  mutate(charity = factor(charity, labels = charities)) %>% 
  nice_violin(group = "charity",
              response = "donation",
              ytitle = "Amount Donated",
              CIcap.width = 0.5,
              obs = "jitter",
              border.size = 1,
              alpha = 0.7,
              groups.order = "increasing",
              xlabels.angle = 75)

data %>%
  select(contains("charity") & ends_with("2_1")) %>% 
  pivot_longer(cols = everything(), 
               names_to = "charity",
               values_to = "familiarity") %>% 
  mutate(charity = factor(charity, labels = charities)) %>% 
  nice_violin(group = "charity",
              response = "familiarity",
              ytitle = "Familiarity with Charity",
              CIcap.width = 0.5,
              obs = "jitter",
              border.size = 1,
              alpha = 0.7,
              groups.order = "increasing",
              xlabels.angle = 75)

data %>%
  nice_scatter(predictor = "T2_Familiarity",
               response = "T2_Charity",
               ytitle = "Donation Amount",
               xtitle = "Familiarity with Charity",
               has.jitter = TRUE,
               has.legend = TRUE,
               has.r = TRUE,
               has.p = TRUE)

data %>%
  select(contains("charity") & ends_with("1_1")) %>% 
  pivot_longer(cols = everything(), 
               names_to = "charity",
               values_to = "donation") %>% 
  mutate(charity = factor(charity, labels = charities),
         region = case_match(
           charity,
           charities[1:6] ~ regions[1],
           charities[7:12] ~ regions[2],
           charities[13:18] ~ regions[3],
           charities[19:24] ~ regions[4]
         )) %>% 
  nice_violin(group = "region",
              response = "donation",
              ytitle = "Amount Donated",
              obs = "jitter",
              groups.order = "increasing")

data %>%
  select(contains("charity") & ends_with("2_1")) %>% 
  pivot_longer(cols = everything(), 
               names_to = "charity",
               values_to = "familiarity") %>% 
  mutate(charity = factor(charity, labels = charities),
         region = case_match(
           charity,
           charities[1:6] ~ regions[1],
           charities[7:12] ~ regions[2],
           charities[13:18] ~ regions[3],
           charities[19:24] ~ regions[4]
         )) %>% 
  nice_violin(group = "region",
              response = "familiarity",
              ytitle = "Familiarity with Charity",
              obs = "jitter",
              groups.order = "increasing")

T3_attitude <- data %>%
  select(ID, T1_Group, T3_attitude_1:T3_attitude_9) %>% 
  pivot_longer(cols = T3_attitude_1:T3_attitude_9, 
               names_to = "Group",
               values_to = "attitude") %>% 
  mutate(Group = factor(Group, labels = social.groups),
         Time = 3)

nice_violin(T3_attitude,
            group = "Group",
            response = "attitude",
            ytitle = "Positive Explicit Attitude",
            CIcap.width = 0.3,
            obs = "jitter",
            border.size = 1,
            alpha = 0.7,
            groups.order = "increasing")

T3_dehumanization <- data %>%
  select(ID, T1_Group, T3_dehumanization_1:T3_dehumanization_9) %>% 
  pivot_longer(cols = T3_dehumanization_1:T3_dehumanization_9, 
               names_to = "Group",
               values_to = "attitude") %>% 
  mutate(Group = factor(Group, labels = social.groups),
         Time = 3)

nice_violin(T3_dehumanization,
            group = "Group",
            response = "attitude",
            ytitle = "Humanization",
            CIcap.width = 0.3,
            obs = "jitter",
            border.size = 1,
            alpha = 0.7,
            groups.order = "increasing")

attitudes <- rbind(T1_attitude, T2_attitude, T3_attitude) %>% 
  pivot_wider(values_from = attitude,
              names_from = Time,
              names_prefix = "attitude_T")

# Correct scores for baseline (change scores)
# attitudes <- attitudes %>% 
#   mutate(across(starts_with("attitude"), \(x) {x - attitude_T1}))

time <- attitudes %>% select(starts_with("attitude")) %>% names()
p_attitudes <- plot_means_over_time(
  attitudes,
  response = time,
  group = "Group",
  error_bars = TRUE,
  ytitle = "Intergroup Attitude (Positive)",
  groups.order = "decreasing")
p_attitudes

p_attitudes <- plot_means_over_time(
  attitudes %>% filter(T1_Group == "Meditation"),
  response = time,
  group = "Group",
  error_bars = TRUE,
  ytitle = "Attitude (Meditation)",
  groups.order = "decreasing")
p_attitudes

p_attitudes <- plot_means_over_time(
  attitudes %>% filter(T1_Group == "Reflection"),
  response = time,
  group = "Group",
  error_bars = TRUE,
  ytitle = "Attitude (Reflection)",
  groups.order = "decreasing")
p_attitudes

p_attitudes <- plot_means_over_time(
  attitudes %>% filter(T1_Group == "Waitlist"),
  response = time,
  group = "Group",
  error_bars = TRUE,
  ytitle = "Attitude (Waitlist)",
  groups.order = "decreasing")
p_attitudes

dehumanization <- rbind(T1_dehumanization, T2_dehumanization, T3_dehumanization) %>% 
  pivot_wider(values_from = attitude,
              names_from = Time,
              names_prefix = "attitude_T")

# Correct scores for baseline (change scores); capture the baseline first
# so that each subtraction uses the original T1 scores
dehumanization <- dehumanization %>%
  mutate(baseline = attitude_T1,
         across(starts_with("attitude"), \(x) {x - baseline})) %>%
  select(-baseline)

time <- dehumanization %>% select(starts_with("attitude")) %>% names()
p_dehumanization <- plot_means_over_time(
  dehumanization,
  response = time,
  group = "Group",
  error_bars = TRUE,
  ytitle = "Humanization",
  groups.order = "decreasing")
p_dehumanization

p_dehumanization <- plot_means_over_time(
  dehumanization %>% filter(T1_Group == "Meditation"),
  response = time,
  group = "Group",
  error_bars = TRUE,
  ytitle = "Humanization - Meditation",
  groups.order = "decreasing")
p_dehumanization

p_dehumanization <- plot_means_over_time(
  dehumanization %>% filter(T1_Group == "Reflection"),
  response = time,
  group = "Group",
  error_bars = TRUE,
  ytitle = "Humanization - Reflection",
  groups.order = "decreasing")
p_dehumanization

p_dehumanization <- plot_means_over_time(
  dehumanization %>% filter(T1_Group == "Waitlist"),
  response = time,
  group = "Group",
  error_bars = TRUE,
  ytitle = "Humanization - Waitlist",
  groups.order = "decreasing")
p_dehumanization

data %>% 
  # Note: factor() silently ignores NA in `levels`, so we only order Yes/No
  mutate(T3_post.medipractice = factor(T3_post.medipractice, 
                                       levels = c("Yes", "No"))) %>% 
  grouped_bar_chart("T3_post.medipractice", 
                    "Meditation Practice \nDuring Intervention")
data <- data %>% 
  mutate(T3_medipractice.which_aggregate = case_when(
    grepl("LKM", T3_medipractice.which) ~ "LKM", 
    T3_medipractice.which == "NA" ~ "NA",
    .default = "Other"))

grouped_bar_chart(data, "T3_medipractice.which_aggregate",
                  "LKM Practice \nDuring Intervention",
                  proportion = TRUE)
report(data$T3_medipractice.which)
data %>% 
  mutate(T3_medipractice.time = as.factor(T3_medipractice.time),
         # Reorder the (alphabetical) factor levels into a meaningful order
         T3_medipractice.time = factor(
           T3_medipractice.time, levels = levels(
             .data$T3_medipractice.time)[c(1, 4:6, 3, 2)])) %>% 
  grouped_bar_chart("T3_medipractice.time",
                    "Weekly Meditation Time \nDuring Intervention")
data %>% 
  grouped_bar_chart("T3_choice.medicomp", 
                    "Wants to receive \nMeditation Intervention")
# Keep participants who did not practice LKM on their own during the
# intervention, unless they were in the meditation group
data_no_LKM <- data %>% 
  filter(T3_medipractice.which_aggregate != "LKM" |
           T1_Group == "Meditation")

# Make list of all models
models.list <- sapply(formulas, lm, data = data_no_LKM, simplify = FALSE, USE.NAMES = TRUE)

# Attempt with nice_lm_contrasts 
set.seed(100)
x <- nice_lm_contrasts(models.list, group = "T1_Group", data = data_no_LKM)

nice_table(x, highlight = TRUE)
# Make list of all models
models.list2 <- sapply(formulas2, lm, data = data_no_LKM, simplify = FALSE, USE.NAMES = TRUE)

## Attempt with nice_lm_contrasts
set.seed(100)
x2 <- nice_lm_contrasts(models.list2, group = "T1_Group", data = data_no_LKM)

nice_table(x2, highlight = TRUE)
data_delta <- data_no_LKM %>% 
  group_by(T1_Group) %>% 
  mutate(
    # Capture the T1 baselines first, so that every subtraction below uses
    # the original T1 scores even though across() also overwrites the T1
    # columns themselves (which become 0, as intended for change scores)
    NOBAGS.bl = T1_NOBAGS, attitude.bl = T1_attitude,
    dehumanization.bl = T1_dehumanization, IAT.bl = T1_IAT,
    blastintensity.bl = T1_blastintensity, blastduration.bl = T1_blastduration,
    blastintensity.duration.bl = T1_blastintensity.duration,
    WHS.bl = T1_WHS, CLS.bl = T1_CLS,
    across(ends_with("NOBAGS"), \(x) {x - NOBAGS.bl}),
    across(ends_with("attitude"), \(x) {x - attitude.bl}),
    across(ends_with("dehumanization"), \(x) {x - dehumanization.bl}),
    across(ends_with("IAT"), \(x) {x - IAT.bl}),
    across(ends_with("blastintensity"), \(x) {x - blastintensity.bl}),
    across(ends_with("blastduration"), \(x) {x - blastduration.bl}),
    across(ends_with("blastintensity.duration"), 
           \(x) {x - blastintensity.duration.bl}),
    across(ends_with("WHS"), \(x) {x - WHS.bl}),
    across(ends_with("CLS"), \(x) {x - CLS.bl})
    ) %>% 
  select(-ends_with(".bl"))
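
As a toy check of the change-score logic (made-up values): T1 becomes 0 and the later time points become differences from baseline.

# Toy check of the baseline-correction logic with made-up values
tibble(T1_NOBAGS = c(2, 3), T2_NOBAGS = c(4, 3), T3_NOBAGS = c(5, 6)) %>%
  mutate(NOBAGS.bl = T1_NOBAGS,
         across(ends_with("NOBAGS"), \(x) {x - NOBAGS.bl})) %>%
  select(-NOBAGS.bl)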

time <- data_no_LKM %>% select(ends_with("NOBAGS")) %>% names()
p_NOBAGS <- plot_means_over_time(
  data_delta,
  response = time,
  group = "T1_Group",
  error_bars = TRUE)

time <- data_no_LKM %>% select(ends_with("attitude")) %>% names()
p_attitude <- plot_means_over_time(
  data_delta,
  response = time,
  group = "T1_Group",
  error_bars = TRUE,
  ytitle = "Positive Intergroup Attitude")

time <- data_no_LKM %>% select(ends_with("dehumanization")) %>% names()
p_dehumanization <- plot_means_over_time(
  data_delta,
  response = time,
  group = "T1_Group",
  error_bars = TRUE,
  ytitle = "Humanization")

time <- data_no_LKM %>% select(ends_with("IAT")) %>% names()
p_IAT <- plot_means_over_time(
  data_delta,
  response = time,
  group = "T1_Group",
  error_bars = TRUE)

time <- data_no_LKM %>% select(ends_with("blastintensity")) %>% names()
p_blastintensity <- plot_means_over_time(
  data_delta,
  response = time,
  group = "T1_Group",
  error_bars = TRUE)

time <- data_no_LKM %>% select(ends_with("blastduration")) %>% names()
p_blastduration <- plot_means_over_time(
  data_delta,
  response = time,
  group = "T1_Group",
  error_bars = TRUE)

time <- data_no_LKM %>% select(ends_with("blastintensity.duration")) %>% names()
p_blastintensity.duration <- plot_means_over_time(
  data_delta,
  response = time,
  group = "T1_Group",
  error_bars = TRUE,
  ytitle = "Aggression")

time <- data_no_LKM %>% select(ends_with("WHS")) %>% names()
p_WHS <- plot_means_over_time(
  data_delta,
  response = time,
  group = "T1_Group",
  error_bars = TRUE,
  ytitle = "Willingness to help")

time <- data_no_LKM %>% select(ends_with("CLS")) %>% names()
p_CLS <- plot_means_over_time(
  data_delta,
  response = time,
  group = "T1_Group",
  error_bars = TRUE,
  ytitle = "Compassionate Love")

p_CLS <- p_CLS +
  theme(legend.position = "none")
p_blastintensity.duration <- p_blastintensity.duration +
  theme(legend.position = "none")
p_dehumanization <- p_dehumanization +
  theme(legend.position = "none")
p_IAT <- p_IAT +
  theme(legend.position = "none")

plots(p_CLS, p_WHS, p_blastintensity.duration, p_attitude, 
      p_dehumanization, p_NOBAGS, p_IAT, n_columns = 2)
---
title: '**Loving-Kindness Meditation, Attitudes, Prosocial Behaviour, and Ego Depletion**'
subtitle: "Is the Mind More Powerful Than the Heart? A Look at Two Loving-Kindness Interventions"
author: "Rémi Thériault"
date: "`r format(Sys.Date())`"
output:
  rmarkdown::html_document:
        theme: cerulean
        highlight: pygments
        toc: yes
        toc_depth: 2
        toc_float: yes
        number_sections: no
        df_print: kable
        code_folding: hide # or: show
        code_download: yes
        anchor_sections:
          style: symbol
      
---

```{r setup, warning=FALSE, message=TRUE, echo=FALSE}
fast <- FALSE # Set to TRUE to skip the time-consuming chunks

transform <- TRUE # Set to FALSE to skip the transformation chunks

knitr::opts_chunk$set(dpi = 100)
```

```{r klippy, echo=FALSE, include=TRUE}
# remotes::install_github("rlesur/klippy")
klippy::klippy(position = c('top', 'right'))
```

# Introduction

This report describes the results of a preregistered study available at: https://osf.io/gkd8s.

---
Note also that this data has been cleaned beforehand. Several datasets over three measurement times were merged (joined) through an inner join so as to keep only participants who at least participated at each step of the study. Missing data will be imputed later on. Duplicates were addressed with the `rempsyc::best_duplicate` function, which keeps the duplicate with the least amount of missing values, and in case of ties, takes the first occurrence. However, for duplicate participation in the activities and exercises, rather than the first occurrence, the occurrence with the higher completion percentage (% of total activity time) was taken instead.

# Packages & Data {.tabset}

## ...

## Packages

```{r packages, warning=FALSE, message=FALSE, results='asis'}
library(dplyr)
library(interactions)
library(performance)
library(see)
library(report)
library(datawizard)
library(modelbased)
library(ggplot2)
library(bestNormalize)
library(psych)
library(GPArotation)
library(visdat)
library(missForest)
library(doParallel)
library(emmeans)
library(sjlabelled)
library(tidyr)
library(tools)
library(flextable)
library(pwr)
library(patchwork)
library(ggpubr)
library(survival)
library(survminer)
if(packageVersion("rempsyc") < "0.1.7.6") stop("Please install 'rempsyc' package version > '1.1.7.6'")
library(rempsyc)
# remotes::install_github("Buedenbender/datscience")
library(datscience)
# remotes::install_version("eefAnalytics", version = "1.0.6", repos = "http://cran.us.r-project.org")
if(packageVersion("eefAnalytics") > "1.0.6") stop("Please install 'eefAnalytics' package version '1.0.6'")
library(eefAnalytics)

```

## Data

```{r data, warning=FALSE, message=TRUE, results='asis'}
# Read data
# data <- readRDS("Data/finaldataset_n496.rds")
# inner.join <- FALSE
data <- readRDS("Data/finaldataset_n217.rds")
inner.join <- TRUE

report_participants(data, threshold = 1) %>% cat

# Allocation ratio
report(data$T1_Group)
report(data$T2_Condition)

```

# Data Cleaning

```{r Data Cleaning, child=if (fast == FALSE) '0_cleaning.Rmd', eval = TRUE}
```

# Assumptions {.tabset}

```{r Assumptions, child=if (fast == FALSE) '1_assumptions.Rmd', eval = TRUE}
```

# Contrasts (Group Differences) {.tabset}

```{r Contrasts, child=if (fast == FALSE) '2_contrasts.Rmd', eval = TRUE}
```

# Moderations (Ego Depletion) {.tabset}

## ...

```{r Moderations, child=if (fast == FALSE) '3_moderations.Rmd', eval = TRUE}
```

# Attitudes & Charity {.tabset}

## ...

```{r Charity, child=if (fast == FALSE) '4_charity.Rmd', eval = TRUE}
```

# Who Meditated {.tabset}

## ...

```{r meditated, child=if (fast == FALSE) '5_meditated.Rmd', eval = TRUE}
```

# Discussion

In this report, we aimed to compare two types of loving-kindness interventions, one more embodied, based on meditation, and one more cognitive, based on intellectual reflection, to a waitlist control group. We compared these groups on several variables relating to prosociality (i.e., affect, attitudes, and behaviour). Groups were measured three times, at Week 0 (T1), Week 6 (T2), and Week 13 (T3), so we were able to correct for baseline and also assess how robust the effects, if any, were over time. We were also interested in assessing whether these effects depend on other personality variables (i.e., moderators).
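
As a reminder, the moderation analyses referenced below all used the same schematic model, with `outcome` and `moderator` as placeholders (a sketch, not run here):

```{r, eval=FALSE}
# Schematic of the moderation models (placeholders: outcome, moderator);
# the two dummy-coded group contrasts, the moderator, and the depletion
# condition are fully crossed, yielding the three-way interactions
lm(
  outcome ~ (T1_GroupReflection + T1_GroupMeditation) * moderator * T2_Condition,
  data = data
)
```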

#### Group Differences at Time 2

Our contrast analyses first revealed group differences at Time 2. Both the meditation and reflection groups showed moderately more compassionate love than the waitlist group, but only the meditation group showed moderately more positive affect than the waitlist group. However, the reflection group showed slightly more positive explicit attitudes toward various social groups, as well as moderately shorter reaction times to remember an altruistic event than the waitlist group (suggesting that altruism was more cognitively accessible to them). Furthermore, the reflection group showed slightly lower behavioural aggression (blast intensity, blast duration, and blast intensity * duration) than both the waitlist group and the meditation group.

#### Group Differences at Time 3

Our contrast analyses also revealed group differences at Time 3. However, only the reflection group showed lasting positive effects on attitudes (still a small effect), behavioural aggression (still a small effect), and compassion (still a moderate effect), suggesting that these effects are durable over time. Furthermore, the reflection group showed a delayed-onset effect on willingness to help, whereby they were slightly more willing to help in various hypothetical scenarios than the control group.

#### Moderations at Time 2

First, attitudes toward aggression (NOBAGS) moderated only one variable, blast duration. In short, while NOBAGS does not affect blast duration in the control condition, it relates to longer blast duration in the depletion condition. Although this result is theoretically consistent with the literature, it is likely a false positive given our high number of tests, the fact that this is the only variable that NOBAGS moderates, and that the *p* value is relatively close to .05. Furthermore, that variable (blast duration alone) was not in the preregistration, so we might not report this finding.
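
One way to gauge this multiple-testing concern would be to adjust the moderation *p* values for multiplicity, as in the sketch below (the values shown are hypothetical placeholders, to be replaced by the actual *p* values from the moderation tables):

```{r, eval=FALSE}
# Sketch: Holm correction across a family of moderation p values
# (hypothetical placeholder values, not the actual results)
p_values <- c(blastduration = .03, blastintensity = .25, charity = .48)
p.adjust(p_values, method = "holm")
```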

Second, implicit aggression (IAT) moderated several variables. As with NOBAGS, it moderated blast duration, but this time in a three-way interaction. Surprisingly, for the meditation group, implicit aggression related to lower aggression, but only when depleted, whereas there was no such interaction in the waitlist group. However, as mentioned above, that variable (blast duration alone) was not in the preregistration, so we might not report this finding.

Third, implicit aggression also moderated compassionate love, again in a three-way interaction. For the waitlist group, the effect of implicit aggression clearly depends on depletion: implicit aggression relates to lower compassion in the control condition (expected), but to higher compassion in the depletion condition (unexpected). For the meditation group, however, the effect was absent or partly reversed.
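
To unpack this three-way interaction further, the simple slopes of implicit aggression could be probed at each combination of condition and group, for example with `interactions::sim_slopes` (a sketch; the argument style follows the `interact_plot` calls used earlier):

```{r, eval=FALSE}
# Sketch: simple slopes of implicit aggression on compassionate love at
# each level of depletion condition and group (meditation vs. waitlist)
sim_slopes(T2_CLS.IAT,
  pred = "T2_IAT", modx = "T2_Condition", mod2 = "T1_GroupMeditation"
)
```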

Fourth, implicit aggression also moderated reaction times to remember an altruistic event, again in a three-way interaction. For the meditation group, higher implicit aggression related to shorter reaction times (unexpected), unless participants were depleted. For the waitlist group, however, the effect was absent or partly reversed.

#### Conclusion

In conclusion, there seem to be group differences at Time 2 and Time 3 between the experimental conditions and the control group. However, the effects in the reflection group appear not only stronger but also more robust (i.e., they are the only ones lasting at Time 3). Furthermore, there are also several three-way interactions between implicit attitudes, ego depletion, and group, as expected. The nature of these interactions does not, however, seem to align perfectly with our original predictions. A deeper exploration of the meaning of these interactions will be required.

# Full Code & References {.tabset}

The package references and the full script of executable code contained in this document are reproduced in the tabs below.

## ...

## Package References

```{r references, warning=FALSE, message=FALSE, results='asis'}
sessionInfo() %>% report %>% summary

report::cite_packages(sessionInfo())
```

## Full Code

```{r full_code, ref.label=knitr::all_labels()[!knitr::all_labels() %in% knitr::all_labels(echo == FALSE)], eval=FALSE}
```