I am studying the impact of various characteristics on court decisions for specific offences. The dataset is fairly large (28,928 observations across 86 level-2 units). The outcome is the binary decision whether or not to incarcerate someone, and I use level-1 and level-2 characteristics as controls (level-1 variables are in capitals).
This is my code:
MLmodel196a_2 <- glmer(NEPO_ANO_NE ~  # binary outcome: incarcerated or not
## level-1 (case-level) controls, in capitals:
OZNACENY_RECIDIVISTA_REG + POCET_DRIV_ODSOUZENI_REG +
ROK_ODSOUZENI_REG + OMEZENI_A_POVINNOST_REG +
POCET_HLAVNICH_LICENI + DRUH_ZAHAJENI_RIZENI_REG +
NOVELA_REG + ODSTAVEC_REG +
EU_OBCANSTVI + POHLAVI_REG + VEK_SPACHANI_REG +
## level-2 controls, all continuous:
objasnenost_procenta + kriminalita_relativni_REG +
venkov_mesto + socialni + nezamestani_celkem +
vzdelani_zakladni_procenta +
prumerny_vek + podil_15az24_muzu_procenta +
zenati_vsichni_procenta +
verici_procenta + volby_ucast +
## random intercept per court:
(1 | Nazev_soudu), family = binomial, data = vyber196)
When I run this, I receive this error:
Error: (maxstephalfit) PIRLS step-halvings failed to reduce deviance in pwrssUpdate
If I run the same model on a different dataset (a different offence), it produces results with several warnings. If I fit this dataset with only the level-1 control variables, it also produces results with several warnings.
The majority of the level-1 variables are categorical; the level-2 variables are all continuous (and not scaled).
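For completeness, scaling and centering the level-2 predictors would be straightforward (a minimal sketch; the column list is copied from the formula above and assumes all of these are numeric columns of vyber196):

## scale and center the continuous level-2 predictors
lvl2 <- c("objasnenost_procenta", "kriminalita_relativni_REG",
          "venkov_mesto", "socialni", "nezamestani_celkem",
          "vzdelani_zakladni_procenta", "prumerny_vek",
          "podil_15az24_muzu_procenta", "zenati_vsichni_procenta",
          "verici_procenta", "volby_ucast")
vyber196[lvl2] <- lapply(vyber196[lvl2], function(x) as.numeric(scale(x)))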
Unfortunately I cannot share any data, since they were provided by the government on the condition that they not be shared.
I do not understand why this happens only for this offence and not for the other offences. Is there a way around it?
(lme4 version 1.1-12, R version 3.3.1)
Update: after removing one of the continuous variables, it worked. The variable was the number of main hearings in a case, which was zero in the majority of cases. Since it is not possible to incarcerate someone without a hearing, this quasi-separated the outcome and apparently derailed the fitting process. Most of the remaining warnings were eventually resolved by scaling the predictors and restarting the fit from the previous parameter values (nos. 1 and 4 in the examples in ?convergence - thanks for that!).
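For anyone who runs into the same error, this is roughly what the diagnosis and the restart looked like (a minimal sketch: the cross-tabulation only illustrates the quasi-separation, MLmodel196b is a hypothetical name for the refit without the hearings variable, and the restart idiom follows the examples in ?convergence):

## the quasi-separation: (almost) no case without a main hearing ends in
## incarceration, so one cell of this table is (nearly) empty
with(vyber196, table(no_hearing = POCET_HLAVNICH_LICENI == 0, NEPO_ANO_NE))

## restart the refitted model from its own estimates, as in ?convergence
ss <- getME(MLmodel196b, c("theta", "fixef"))
MLmodel196b2 <- update(MLmodel196b, start = ss,
                       control = glmerControl(optCtrl = list(maxfun = 2e4)))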