Retrieve MCMC convergence diagnostics for a rater fit

mcmc_diagnostics(fit, pars = c("pi", "theta"))

Arguments

fit

An rater mcmc_fit object.

pars

A character vector of parameter names for which to compute diagnostics. The default is c("pi", "theta").

Value

A matrix whose rows correspond to parameters and whose columns correspond to diagnostics. Currently the first column contains the Rhat statistic and the second the bulk effective sample size (ess_bulk). The row names contain the parameter names.
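The returned matrix can be manipulated like any other R matrix. As a minimal sketch, assuming an existing fit object, the parameters with a large Rhat could be flagged like this (the 1.01 cut-off is a common rule of thumb, not something enforced by the package):

diagnostics <- mcmc_diagnostics(fit)
# Names of parameters whose Rhat exceeds the informal 1.01 threshold.
rownames(diagnostics)[diagnostics[, "Rhat"] > 1.01]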

Details

MCMC diagnostics cannot be calculated for the latent classes z because of the marginalisation used to fit the models.

These MCMC diagnostics are intended as a basic sanity check of the quality of the returned MCMC samples. Users who want more in-depth diagnostics should consider using as_mcmc.list() to convert the samples to a coda::mcmc.list() object, or get_stanfit() to extract the underlying stanfit object.
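As a minimal sketch of both routes, assuming the coda and rstan packages are installed (gelman.diag() and check_hmc_diagnostics() are standard functions from those packages, not part of rater):

# Convert the posterior samples to a coda::mcmc.list and use coda's diagnostics.
samples <- as_mcmc.list(fit)
coda::gelman.diag(samples)

# Or extract the underlying stanfit object and use rstan's HMC checks.
stan_fit <- get_stanfit(fit)
rstan::check_hmc_diagnostics(stan_fit)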

References

Aki Vehtari, Andrew Gelman, Daniel Simpson, Bob Carpenter, and Paul-Christian Bürkner (2019). Rank-normalization, folding, and localization: An improved R-hat for assessing convergence of MCMC. arXiv preprint arXiv:1903.08008.

Examples

# \donttest{

fit <- rater(anesthesia, "dawid_skene")
#> 
#> SAMPLING FOR MODEL 'dawid_skene' NOW (CHAIN 1).
#> Chain 1: 
#> Chain 1: Gradient evaluation took 0.000111 seconds
#> Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 1.11 seconds.
#> Chain 1: Adjust your expectations accordingly!
#> Chain 1: 
#> Chain 1: 
#> Chain 1: Iteration:    1 / 2000 [  0%]  (Warmup)
#> Chain 1: Iteration:  200 / 2000 [ 10%]  (Warmup)
#> Chain 1: Iteration:  400 / 2000 [ 20%]  (Warmup)
#> Chain 1: Iteration:  600 / 2000 [ 30%]  (Warmup)
#> Chain 1: Iteration:  800 / 2000 [ 40%]  (Warmup)
#> Chain 1: Iteration: 1000 / 2000 [ 50%]  (Warmup)
#> Chain 1: Iteration: 1001 / 2000 [ 50%]  (Sampling)
#> Chain 1: Iteration: 1200 / 2000 [ 60%]  (Sampling)
#> Chain 1: Iteration: 1400 / 2000 [ 70%]  (Sampling)
#> Chain 1: Iteration: 1600 / 2000 [ 80%]  (Sampling)
#> Chain 1: Iteration: 1800 / 2000 [ 90%]  (Sampling)
#> Chain 1: Iteration: 2000 / 2000 [100%]  (Sampling)
#> Chain 1: 
#> Chain 1:  Elapsed Time: 1.426 seconds (Warm-up)
#> Chain 1:                1.495 seconds (Sampling)
#> Chain 1:                2.921 seconds (Total)
#> Chain 1: 
#> 
#> SAMPLING FOR MODEL 'dawid_skene' NOW (CHAIN 2).
#> Chain 2: 
#> Chain 2: Gradient evaluation took 9.3e-05 seconds
#> Chain 2: 1000 transitions using 10 leapfrog steps per transition would take 0.93 seconds.
#> Chain 2: Adjust your expectations accordingly!
#> Chain 2: 
#> Chain 2: 
#> Chain 2: Iteration:    1 / 2000 [  0%]  (Warmup)
#> Chain 2: Iteration:  200 / 2000 [ 10%]  (Warmup)
#> Chain 2: Iteration:  400 / 2000 [ 20%]  (Warmup)
#> Chain 2: Iteration:  600 / 2000 [ 30%]  (Warmup)
#> Chain 2: Iteration:  800 / 2000 [ 40%]  (Warmup)
#> Chain 2: Iteration: 1000 / 2000 [ 50%]  (Warmup)
#> Chain 2: Iteration: 1001 / 2000 [ 50%]  (Sampling)
#> Chain 2: Iteration: 1200 / 2000 [ 60%]  (Sampling)
#> Chain 2: Iteration: 1400 / 2000 [ 70%]  (Sampling)
#> Chain 2: Iteration: 1600 / 2000 [ 80%]  (Sampling)
#> Chain 2: Iteration: 1800 / 2000 [ 90%]  (Sampling)
#> Chain 2: Iteration: 2000 / 2000 [100%]  (Sampling)
#> Chain 2: 
#> Chain 2:  Elapsed Time: 1.423 seconds (Warm-up)
#> Chain 2:                1.579 seconds (Sampling)
#> Chain 2:                3.002 seconds (Total)
#> Chain 2: 
#> 
#> SAMPLING FOR MODEL 'dawid_skene' NOW (CHAIN 3).
#> Chain 3: 
#> Chain 3: Gradient evaluation took 9.3e-05 seconds
#> Chain 3: 1000 transitions using 10 leapfrog steps per transition would take 0.93 seconds.
#> Chain 3: Adjust your expectations accordingly!
#> Chain 3: 
#> Chain 3: 
#> Chain 3: Iteration:    1 / 2000 [  0%]  (Warmup)
#> Chain 3: Iteration:  200 / 2000 [ 10%]  (Warmup)
#> Chain 3: Iteration:  400 / 2000 [ 20%]  (Warmup)
#> Chain 3: Iteration:  600 / 2000 [ 30%]  (Warmup)
#> Chain 3: Iteration:  800 / 2000 [ 40%]  (Warmup)
#> Chain 3: Iteration: 1000 / 2000 [ 50%]  (Warmup)
#> Chain 3: Iteration: 1001 / 2000 [ 50%]  (Sampling)
#> Chain 3: Iteration: 1200 / 2000 [ 60%]  (Sampling)
#> Chain 3: Iteration: 1400 / 2000 [ 70%]  (Sampling)
#> Chain 3: Iteration: 1600 / 2000 [ 80%]  (Sampling)
#> Chain 3: Iteration: 1800 / 2000 [ 90%]  (Sampling)
#> Chain 3: Iteration: 2000 / 2000 [100%]  (Sampling)
#> Chain 3: 
#> Chain 3:  Elapsed Time: 1.455 seconds (Warm-up)
#> Chain 3:                1.599 seconds (Sampling)
#> Chain 3:                3.054 seconds (Total)
#> Chain 3: 
#> 
#> SAMPLING FOR MODEL 'dawid_skene' NOW (CHAIN 4).
#> Chain 4: 
#> Chain 4: Gradient evaluation took 9.4e-05 seconds
#> Chain 4: 1000 transitions using 10 leapfrog steps per transition would take 0.94 seconds.
#> Chain 4: Adjust your expectations accordingly!
#> Chain 4: 
#> Chain 4: 
#> Chain 4: Iteration:    1 / 2000 [  0%]  (Warmup)
#> Chain 4: Iteration:  200 / 2000 [ 10%]  (Warmup)
#> Chain 4: Iteration:  400 / 2000 [ 20%]  (Warmup)
#> Chain 4: Iteration:  600 / 2000 [ 30%]  (Warmup)
#> Chain 4: Iteration:  800 / 2000 [ 40%]  (Warmup)
#> Chain 4: Iteration: 1000 / 2000 [ 50%]  (Warmup)
#> Chain 4: Iteration: 1001 / 2000 [ 50%]  (Sampling)
#> Chain 4: Iteration: 1200 / 2000 [ 60%]  (Sampling)
#> Chain 4: Iteration: 1400 / 2000 [ 70%]  (Sampling)
#> Chain 4: Iteration: 1600 / 2000 [ 80%]  (Sampling)
#> Chain 4: Iteration: 1800 / 2000 [ 90%]  (Sampling)
#> Chain 4: Iteration: 2000 / 2000 [100%]  (Sampling)
#> Chain 4: 
#> Chain 4:  Elapsed Time: 1.489 seconds (Warm-up)
#> Chain 4:                1.567 seconds (Sampling)
#> Chain 4:                3.056 seconds (Total)
#> Chain 4: 

# Calculate the diagnostics for all parameters.
mcmc_diagnostics(fit)
#>                     Rhat ess_bulk
#> pi[1]          1.0000411 9322.139
#> pi[2]          1.0007257 7104.109
#> pi[3]          1.0019066 6602.998
#> pi[4]          1.0021143 7223.241
#> theta[1, 1, 1] 0.9996569 8242.263
#> theta[1, 1, 2] 0.9996258 8078.522
#> theta[1, 1, 3] 0.9998915 5720.639
#> theta[1, 1, 4] 1.0000860 6146.707
#> theta[1, 2, 1] 0.9995099 5541.439
#> theta[1, 2, 2] 1.0035749 6280.526
#> theta[1, 2, 3] 1.0016579 7359.919
#> theta[1, 2, 4] 1.0007972 4115.575
#> theta[1, 3, 1] 1.0008028 5208.579
#> theta[1, 3, 2] 1.0003983 4198.876
#> theta[1, 3, 3] 1.0009412 5043.970
#> theta[1, 3, 4] 1.0009513 4332.149
#> theta[1, 4, 1] 0.9994513 5390.639
#> theta[1, 4, 2] 1.0018778 5814.222
#> theta[1, 4, 3] 1.0012371 3415.511
#> theta[1, 4, 4] 1.0011763 5000.877
#> theta[2, 1, 1] 1.0011433 7378.930
#> theta[2, 1, 2] 1.0011614 6361.640
#> theta[2, 1, 3] 1.0008169 5689.322
#> theta[2, 1, 4] 1.0000055 5911.166
#> theta[2, 2, 1] 1.0000328 7303.744
#> theta[2, 2, 2] 1.0018605 8264.359
#> theta[2, 2, 3] 1.0025368 8092.917
#> theta[2, 2, 4] 1.0012950 4990.504
#> theta[2, 3, 1] 1.0015800 6255.578
#> theta[2, 3, 2] 1.0002677 5018.330
#> theta[2, 3, 3] 1.0010362 6691.658
#> theta[2, 3, 4] 0.9996149 6226.415
#> theta[2, 4, 1] 1.0042108 5840.323
#> theta[2, 4, 2] 0.9998438 6131.651
#> theta[2, 4, 3] 1.0020604 5867.068
#> theta[2, 4, 4] 1.0005945 7765.954
#> theta[3, 1, 1] 1.0019053 6786.027
#> theta[3, 1, 2] 1.0009387 6009.502
#> theta[3, 1, 3] 0.9996640 5562.968
#> theta[3, 1, 4] 1.0001541 6304.421
#> theta[3, 2, 1] 1.0014941 6196.460
#> theta[3, 2, 2] 1.0009919 7685.069
#> theta[3, 2, 3] 0.9999155 7382.332
#> theta[3, 2, 4] 1.0017519 5639.320
#> theta[3, 3, 1] 0.9997880 4852.304
#> theta[3, 3, 2] 1.0004004 6303.678
#> theta[3, 3, 3] 1.0016287 8044.488
#> theta[3, 3, 4] 1.0009986 7475.118
#> theta[3, 4, 1] 1.0001093 5744.578
#> theta[3, 4, 2] 1.0002596 5855.632
#> theta[3, 4, 3] 1.0001916 4613.585
#> theta[3, 4, 4] 1.0012419 5312.299
#> theta[4, 1, 1] 1.0025308 6925.654
#> theta[4, 1, 2] 1.0020376 6094.537
#> theta[4, 1, 3] 1.0001623 5134.521
#> theta[4, 1, 4] 1.0000678 4606.893
#> theta[4, 2, 1] 1.0019133 5991.980
#> theta[4, 2, 2] 1.0017730 7767.973
#> theta[4, 2, 3] 1.0021365 6741.018
#> theta[4, 2, 4] 1.0012606 5299.967
#> theta[4, 3, 1] 0.9997970 5522.610
#> theta[4, 3, 2] 1.0003265 5999.997
#> theta[4, 3, 3] 0.9995117 6576.641
#> theta[4, 3, 4] 1.0006957 4799.500
#> theta[4, 4, 1] 1.0002419 6366.926
#> theta[4, 4, 2] 1.0013254 5437.112
#> theta[4, 4, 3] 1.0004651 5188.611
#> theta[4, 4, 4] 0.9993335 6662.486
#> theta[5, 1, 1] 1.0000230 6038.277
#> theta[5, 1, 2] 0.9996132 5060.106
#> theta[5, 1, 3] 1.0008109 6030.416
#> theta[5, 1, 4] 1.0001200 5328.386
#> theta[5, 2, 1] 0.9998511 7576.136
#> theta[5, 2, 2] 1.0002398 7999.554
#> theta[5, 2, 3] 1.0034221 6120.410
#> theta[5, 2, 4] 1.0001041 6263.509
#> theta[5, 3, 1] 0.9995260 5240.229
#> theta[5, 3, 2] 1.0016653 7896.889
#> theta[5, 3, 3] 0.9995560 5784.396
#> theta[5, 3, 4] 1.0007534 7285.070
#> theta[5, 4, 1] 1.0002834 5784.630
#> theta[5, 4, 2] 1.0013967 4936.616
#> theta[5, 4, 3] 1.0010311 3971.771
#> theta[5, 4, 4] 1.0010470 5682.226

# Calculate the diagnostics just for the pi parameter.
mcmc_diagnostics(fit, pars = "pi")
#>           Rhat ess_bulk
#> pi[1] 1.000041 9322.139
#> pi[2] 1.000726 7104.109
#> pi[3] 1.001907 6602.998
#> pi[4] 1.002114 7223.241

# }