# Converting TinyStories-33M to ONNX

## Quick way (recommended)

### Alternative: via the universal converter (`scripts/universal_onnx_converter.py`)

If you prefer a single one-shot tool over the dedicated scripts below, the universal converter handles download, conversion, and quantization in one command:

```bash
# Install dependencies (if not already installed)
pip install torch transformers onnxruntime onnxruntime-tools huggingface_hub

# Basic conversion of TinyStories-33M from Hugging Face
python3 ../scripts/universal_onnx_converter.py roneneldan/TinyStories-33M -o tinystories-33m.onnx --opset 14 -l 256

# Quantize to INT8 and save the quantized model
python3 ../scripts/universal_onnx_converter.py roneneldan/TinyStories-33M -o tinystories-33m.onnx --opset 14 -l 256 -q

# Search for models on Hugging Face (optional)
python3 ../scripts/universal_onnx_converter.py --search tinystories
```
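
Here `-o` sets the output path and `--opset` the ONNX opset version; `-q` enables INT8 quantization. The `-l 256` flag presumably caps the sequence length used during export; run the script with `--help` to confirm the exact semantics of each flag.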

### 1. Install dependencies
```bash
pip install torch transformers onnxruntime
```

### 2. Run the conversion
```bash
python convert_tinystories_optimized.py
```

This will create:
- `tinystories-33m-quantized.onnx` (~50–80 MB) — quantized model
- `tokenizer/` — tokenizer files
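
Under the hood, a script like this typically exports the Hugging Face model with `torch.onnx.export` and then applies dynamic INT8 quantization. The sketch below is a minimal illustration of that flow, not the actual contents of `convert_tinystories_optimized.py`; only the model ID and output file names are taken from this README:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from onnxruntime.quantization import QuantType, quantize_dynamic

model_id = "roneneldan/TinyStories-33M"
model = AutoModelForCausalLM.from_pretrained(model_id)
model.config.use_cache = False    # export a single-output (logits) graph
model.config.return_dict = False  # tuple outputs are friendlier to tracing
model.eval()

tokenizer = AutoTokenizer.from_pretrained(model_id)
dummy = tokenizer("Once upon a time", return_tensors="pt")

# Export the FP32 model with dynamic batch and sequence axes
torch.onnx.export(
    model,
    (dummy["input_ids"],),
    "tinystories-33m.onnx",
    input_names=["input_ids"],
    output_names=["logits"],
    dynamic_axes={
        "input_ids": {0: "batch", 1: "sequence"},
        "logits": {0: "batch", 1: "sequence"},
    },
    opset_version=14,
)

# Post-training dynamic quantization: weights are stored as INT8
quantize_dynamic(
    "tinystories-33m.onnx",
    "tinystories-33m-quantized.onnx",
    weight_type=QuantType.QInt8,
)

# Save the tokenizer files alongside the model
tokenizer.save_pretrained("tokenizer")
```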

### 3. Copy the model into the app
```bash
mkdir -p ~/Documents/models
cp tinystories-33m-quantized.onnx ~/Documents/models/
```

## Detailed way

### Step 1: Prepare the environment
```bash
# Create a virtual environment (optional)
python -m venv onnx_env
source onnx_env/bin/activate  # Linux/Mac
# or
onnx_env\Scripts\activate  # Windows

# Install dependencies
pip install -r requirements_convert.txt
```
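
If `requirements_convert.txt` is missing from your checkout, a minimal equivalent mirroring the quick-way install above would be:

```
torch
transformers
onnxruntime
```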

### Step 2: Convert the model
```bash
# Basic conversion
python convert_tinystories.py

# Or optimized with quantization
python convert_tinystories_optimized.py
```

### Step 3: Validate the result
```bash
# Check file sizes
ls -lh *.onnx

# Test the model
python -c "
import onnxruntime as ort
session = ort.InferenceSession('tinystories-33m-quantized.onnx')
print('Model loaded successfully!')
print('Inputs:', [inp.name for inp in session.get_inputs()])
print('Outputs:', [out.name for out in session.get_outputs()])
"
```
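
To go one step beyond a load check, the snippet below runs a single greedy decoding step. It is a sketch that assumes the exported graph exposes an `input_ids` input and a `logits` output (as in the conversion sketch above); if your export differs, adapt the names reported by `session.get_inputs()`:

```python
import numpy as np
import onnxruntime as ort
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("tokenizer")  # saved during conversion
session = ort.InferenceSession("tinystories-33m-quantized.onnx")

# Tokenize a prompt and run one forward pass
ids = tokenizer("Once upon a time", return_tensors="np")["input_ids"].astype(np.int64)
(logits,) = session.run(["logits"], {"input_ids": ids})

# Greedily pick the next token from the last position's logits
next_id = int(np.argmax(logits[0, -1]))
print("Next token:", tokenizer.decode([next_id]))
```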

## Model sizes

| Model | Original | Quantized | Reduction |
|-------|----------|-----------|-----------|
| TinyStories-33M | ~291 MB | ~50–80 MB | ~70–80% |
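
The reduction comes from storing weights as 8-bit integers instead of 32-bit floats, roughly a factor-of-four cut in weight storage, which is consistent with the ~70–80% figure above.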