
# BigWeave v16 103b

The BigWeave models aim to experimentally identify merge settings that increase model performance. The version number merely tracks the various attempts; it is not a quality indicator. Only merges that demonstrate good performance are retained and shared.

## Prompting Format

Mistral, Vicuna, and Alpaca formats are supported.
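For reference, the standard Mistral instruction template (the native format of the Miqu base model) looks as follows; this is the commonly documented form, assumed rather than confirmed for this particular merge. The Vicuna and Alpaca templates follow their usual conventions.

```
<s>[INST] {prompt} [/INST]
```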

## Merge process

This is a self-merge of 152334H/miqu-1-70b-sf. The most relevant layers are identified through exl2 measurements, then duplicated such that each group consists of consecutive layers with a two-layer overlap between neighbouring slices (i.e. larger groups than in v15). For example, the slices [9,13] and [11,15] below (end-exclusive ranges) both contain layers 11 and 12. A sanity-check sketch follows the configuration.

Merge configuration:

```yaml
slices:
  - sources:
    - model: 152334H/miqu-1-70b-sf
      layer_range: [0,11]
  - sources:
    - model: 152334H/miqu-1-70b-sf
      layer_range: [9,13]
  - sources:
    - model: 152334H/miqu-1-70b-sf
      layer_range: [11,15]
  - sources:
    - model: 152334H/miqu-1-70b-sf
      layer_range: [13,17]
  - sources:
    - model: 152334H/miqu-1-70b-sf
      layer_range: [15,23]
  - sources:
    - model: 152334H/miqu-1-70b-sf
      layer_range: [21,25]
  - sources:
    - model: 152334H/miqu-1-70b-sf
      layer_range: [23,49]
  - sources:
    - model: 152334H/miqu-1-70b-sf
      layer_range: [47,51]
  - sources:
    - model: 152334H/miqu-1-70b-sf
      layer_range: [49,53]
  - sources:
    - model: 152334H/miqu-1-70b-sf
      layer_range: [51,55]
  - sources:
    - model: 152334H/miqu-1-70b-sf
      layer_range: [53,57]
  - sources:
    - model: 152334H/miqu-1-70b-sf
      layer_range: [55,59]
  - sources:
    - model: 152334H/miqu-1-70b-sf
      layer_range: [57,61]
  - sources:
    - model: 152334H/miqu-1-70b-sf
      layer_range: [59,63]
  - sources:
    - model: 152334H/miqu-1-70b-sf
      layer_range: [61,65]
  - sources:
    - model: 152334H/miqu-1-70b-sf
      layer_range: [63,67]
  - sources:
    - model: 152334H/miqu-1-70b-sf
      layer_range: [65,69]
  - sources:
    - model: 152334H/miqu-1-70b-sf
      layer_range: [67,71]
  - sources:
    - model: 152334H/miqu-1-70b-sf
      layer_range: [69,73]
  - sources:
    - model: 152334H/miqu-1-70b-sf
      layer_range: [71,75]
  - sources:
    - model: 152334H/miqu-1-70b-sf
      layer_range: [73,80]
merge_method: passthrough
dtype: float16
```
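As a sanity check, the slice list can be expanded in plain Python to confirm the two-layer overlaps and the resulting depth; the ranges are copied verbatim from the configuration above (mergekit's passthrough method simply concatenates the slices):

```python
# Layer ranges copied from the merge configuration (end-exclusive).
ranges = [
    (0, 11), (9, 13), (11, 15), (13, 17), (15, 23), (21, 25), (23, 49),
    (47, 51), (49, 53), (51, 55), (53, 57), (55, 59), (57, 61), (59, 63),
    (61, 65), (63, 67), (65, 69), (67, 71), (69, 73), (71, 75), (73, 80),
]

# Every consecutive pair of slices shares exactly two layers.
for (_, prev_end), (next_start, _) in zip(ranges, ranges[1:]):
    assert prev_end - next_start == 2

# Passthrough concatenation: 120 layers, up from the 80 layers of the
# 70b base model, which yields the ~103B parameter count.
print(sum(end - start for start, end in ranges))  # 120
```

Assuming a current mergekit install, a configuration like this is typically applied with the `mergekit-yaml` entry point, e.g. `mergekit-yaml config.yaml ./BigWeave-v16-103b`; the output path here is illustrative.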

## Open LLM Leaderboard Evaluation Results

Detailed results can be found here

| Metric                            | Value |
|-----------------------------------|------:|
| Avg.                              | 72.02 |
| AI2 Reasoning Challenge (25-Shot) | 65.87 |
| HellaSwag (10-Shot)               | 87.61 |
| MMLU (5-Shot)                     | 73.22 |
| TruthfulQA (0-shot)               | 63.81 |
| Winogrande (5-shot)               | 80.43 |
| GSM8k (5-shot)                    | 61.18 |
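The reported average is the unweighted mean of the six benchmark scores, which checks out directly:

```python
scores = [65.87, 87.61, 73.22, 63.81, 80.43, 61.18]
print(round(sum(scores) / len(scores), 2))  # 72.02
```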