| Model | AGIEval | GPT4All | TruthfulQA | Bigbench | Average |
|---|---|---|---|---|---|
| neuronovo-7B-v0.2 | 44.95 | 76.49 | 71.57 | 47.48 | 60.12 |
| Task | Version | Metric | Value | | Stderr |
|---|---|---|---|---|---|
| agieval_aqua_rat | 0 | acc | 25.98 | ± | 2.76 |
| | | acc_norm | 25.59 | ± | 2.74 |
| agieval_logiqa_en | 0 | acc | 37.48 | ± | 1.90 |
| | | acc_norm | 38.71 | ± | 1.91 |
| agieval_lsat_ar | 0 | acc | 24.78 | ± | 2.85 |
| | | acc_norm | 24.78 | ± | 2.85 |
| agieval_lsat_lr | 0 | acc | 49.61 | ± | 2.22 |
| | | acc_norm | 51.76 | ± | 2.21 |
| agieval_lsat_rc | 0 | acc | 65.43 | ± | 2.91 |
| | | acc_norm | 63.20 | ± | 2.95 |
| agieval_sat_en | 0 | acc | 79.61 | ± | 2.81 |
| | | acc_norm | 78.64 | ± | 2.86 |
| agieval_sat_en_without_passage | 0 | acc | 46.12 | ± | 3.48 |
| | | acc_norm | 44.66 | ± | 3.47 |
| agieval_sat_math | 0 | acc | 34.09 | ± | 3.20 |
| | | acc_norm | 32.27 | ± | 3.16 |

AGIEval average: 44.95%
| Task | Version | Metric | Value | | Stderr |
|---|---|---|---|---|---|
| arc_challenge | 0 | acc | 67.83 | ± | 1.37 |
| | | acc_norm | 67.92 | ± | 1.36 |
| arc_easy | 0 | acc | 87.12 | ± | 0.69 |
| | | acc_norm | 81.44 | ± | 0.80 |
| boolq | 1 | acc | 87.43 | ± | 0.58 |
| hellaswag | 0 | acc | 70.14 | ± | 0.46 |
| | | acc_norm | 86.43 | ± | 0.34 |
| openbookqa | 0 | acc | 38.40 | ± | 2.18 |
| | | acc_norm | 49.00 | ± | 2.24 |
| piqa | 0 | acc | 83.84 | ± | 0.86 |
| | | acc_norm | 84.82 | ± | 0.84 |
| winogrande | 0 | acc | 78.37 | ± | 1.16 |

GPT4All average: 76.49%
| Task | Version | Metric | Value | | Stderr |
|---|---|---|---|---|---|
| truthfulqa_mc | 1 | mc1 | 57.89 | ± | 1.73 |
| | | mc2 | 71.57 | ± | 1.49 |

TruthfulQA average: 71.57%
| Task | Version | Metric | Value | | Stderr |
|---|---|---|---|---|---|
| bigbench_causal_judgement | 0 | multiple_choice_grade | 57.89 | ± | 3.59 |
| bigbench_date_understanding | 0 | multiple_choice_grade | 62.60 | ± | 2.52 |
| bigbench_disambiguation_qa | 0 | multiple_choice_grade | 42.25 | ± | 3.08 |
| bigbench_geometric_shapes | 0 | multiple_choice_grade | 24.51 | ± | 2.27 |
| | | exact_str_match | 0.00 | ± | 0.00 |
| bigbench_logical_deduction_five_objects | 0 | multiple_choice_grade | 32.40 | ± | 2.10 |
| bigbench_logical_deduction_seven_objects | 0 | multiple_choice_grade | 22.57 | ± | 1.58 |
| bigbench_logical_deduction_three_objects | 0 | multiple_choice_grade | 57.00 | ± | 2.86 |
| bigbench_movie_recommendation | 0 | multiple_choice_grade | 54.20 | ± | 2.23 |
| bigbench_navigate | 0 | multiple_choice_grade | 54.90 | ± | 1.57 |
| bigbench_reasoning_about_colored_objects | 0 | multiple_choice_grade | 67.05 | ± | 1.05 |
| bigbench_ruin_names | 0 | multiple_choice_grade | 50.89 | ± | 2.36 |
| bigbench_salient_translation_error_detection | 0 | multiple_choice_grade | 40.98 | ± | 1.56 |
| bigbench_snarks | 0 | multiple_choice_grade | 69.06 | ± | 3.45 |
| bigbench_sports_understanding | 0 | multiple_choice_grade | 73.73 | ± | 1.40 |
| bigbench_temporal_sequences | 0 | multiple_choice_grade | 48.10 | ± | 1.58 |
| bigbench_tracking_shuffled_objects_five_objects | 0 | multiple_choice_grade | 22.40 | ± | 1.18 |
| bigbench_tracking_shuffled_objects_seven_objects | 0 | multiple_choice_grade | 17.03 | ± | 0.90 |
| bigbench_tracking_shuffled_objects_three_objects | 0 | multiple_choice_grade | 57.00 | ± | 2.86 |

Bigbench average: 47.48%
Average score: 60.12%
Elapsed time: 05:32:08
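
The suite and overall averages above can be reproduced as plain means of the per-task scores. Below is a minimal sketch, assuming acc_norm is used where reported (otherwise acc), mc2 for TruthfulQA, and multiple_choice_grade for Bigbench; this weighting is inferred from the numbers, not stated by the evaluation log.

```python
# Sketch: reproduce the suite averages from the per-task scores above.
# Assumption: acc_norm where reported (otherwise acc), mc2 for TruthfulQA,
# multiple_choice_grade for Bigbench; each suite score is an unweighted mean.
agieval = [25.59, 38.71, 24.78, 51.76, 63.20, 78.64, 44.66, 32.27]
gpt4all = [67.92, 81.44, 87.43, 86.43, 49.00, 84.82, 78.37]
truthfulqa = [71.57]  # mc2
bigbench = [57.89, 62.60, 42.25, 24.51, 32.40, 22.57, 57.00, 54.20, 54.90,
            67.05, 50.89, 40.98, 69.06, 73.73, 48.10, 22.40, 17.03, 57.00]

def mean(xs):
    return sum(xs) / len(xs)

suites = {"AGIEval": agieval, "GPT4All": gpt4all,
          "TruthfulQA": truthfulqa, "Bigbench": bigbench}
for name, scores in suites.items():
    print(f"{name}: {mean(scores):.2f}")    # 44.95, 76.49, 71.57, 47.48
print(f"Average: {mean([mean(s) for s in suites.values()]):.2f}")  # 60.12
```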