🔨 Model Forge

Fine-tune your own models and integrate them seamlessly into the unlearning pipeline.

⚙️ Fine-Tuning Pipeline

1. Upload Dataset: training data
2. Load Base Model: distilgpt2
3. Fine-Tune: custom parameters
4. Save Model: ./forged_model
5. Auto-Available: for unlearning
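
The steps above map naturally onto the Hugging Face transformers and datasets APIs. Below is a minimal sketch of the flow under that assumption; the file name train.csv and the hyperparameter values are illustrative, not the Forge's exact implementation.

```python
# Sketch: fine-tune distilgpt2 on an uploaded dataset and save it to ./forged_model.
# Assumes Hugging Face transformers + datasets; "train.csv" is an illustrative file name.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

# 1. Upload Dataset: a CSV with a 'text' column
dataset = load_dataset("csv", data_files="train.csv")["train"]

# 2. Load Base Model
tokenizer = AutoTokenizer.from_pretrained("distilgpt2")
tokenizer.pad_token = tokenizer.eos_token      # distilgpt2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained("distilgpt2")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)

# 3. Fine-Tune with custom parameters
args = TrainingArguments(
    output_dir="./forged_model",
    num_train_epochs=2,
    per_device_train_batch_size=4,
    fp16=True,                                 # requires a CUDA GPU; drop on CPU
    logging_steps=10,                          # emit training loss for tracking
)
trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()

# 4. Save Model so it becomes available for unlearning
trainer.save_model("./forged_model")
tokenizer.save_pretrained("./forged_model")
```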

Seamless Integration

Models forged here are automatically detected by the Unlearning Engine. No manual configuration needed - just forge and unlearn!
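
How the Unlearning Engine performs this detection is not spelled out here; one plausible, purely hypothetical approach is to check for a saved Hugging Face model at ./forged_model, since save_pretrained always writes a config.json into that directory.

```python
# Hypothetical sketch: detect a forged model by checking for a saved
# Hugging Face model directory. Not the actual Unlearning Engine code.
from pathlib import Path
from typing import Optional

def find_forged_model(path: str = "./forged_model") -> Optional[str]:
    """Return the model path if a saved model exists there, else None."""
    if (Path(path) / "config.json").exists():   # written by save_pretrained()
        return path
    return None

forged = find_forged_model()
if forged is not None:
    print(f"Forged model detected at {forged}; registering it for unlearning.")
```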

Base Model: distilgpt2
Output Path: ./forged_model
Optimization: FP16, batch size 4
Tracking: loss + metadata
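
These defaults translate naturally into Hugging Face TrainingArguments; the sketch below assumes that mapping, and the forge_metadata.json file name is illustrative rather than the Forge's actual output.

```python
# Sketch: the Forge defaults expressed as TrainingArguments, plus a simple
# loss/metadata record saved next to the model. File name is illustrative.
import json
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="./forged_model",      # Output Path
    per_device_train_batch_size=4,    # batch size 4
    fp16=True,                        # FP16 (requires a CUDA GPU)
    logging_steps=10,                 # emit training loss for tracking
)

def save_metadata(trainer, path="./forged_model/forge_metadata.json"):
    """Record the loss history and key settings alongside the saved model."""
    metadata = {
        "base_model": "distilgpt2",
        "batch_size": 4,
        "fp16": True,
        "loss_history": [log["loss"] for log in trainer.state.log_history if "loss" in log],
    }
    with open(path, "w") as f:
        json.dump(metadata, f, indent=2)
```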
Fine-Tuning Workbench

Upload training data to customize the model for your domain: click to upload or drag & drop a CSV or JSON file with a 'text' column (required).
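
A short sketch of how such an upload could be validated before training, assuming pandas; the file name and error messages are illustrative.

```python
# Sketch: validate that an uploaded CSV/JSON file contains the required 'text' column.
import pandas as pd

def load_training_data(path: str) -> pd.DataFrame:
    if path.endswith(".csv"):
        df = pd.read_csv(path)
    elif path.endswith(".json"):
        df = pd.read_json(path)   # expects record- or column-oriented JSON
    else:
        raise ValueError("Expected a .csv or .json file")
    if "text" not in df.columns:
        raise ValueError("Training data must contain a 'text' column")
    return df

df = load_training_data("my_domain_data.csv")   # hypothetical file name
print(f"Loaded {len(df)} training examples")
```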

⚙️ Training Parameters

Epochs (default 2): more epochs give better adaptation but take longer to train.

Learning rate: a higher rate speeds up learning but risks instability.
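
Both knobs correspond directly to Hugging Face TrainingArguments fields; the helper below is a hypothetical illustration of how the slider values could be wired in, with example defaults.

```python
# Hypothetical helper: translate the two workbench knobs into training arguments.
from transformers import TrainingArguments

def build_training_args(epochs: int = 2, learning_rate: float = 5e-5) -> TrainingArguments:
    return TrainingArguments(
        output_dir="./forged_model",
        num_train_epochs=epochs,          # more epochs: better adaptation, longer run
        learning_rate=learning_rate,      # higher: faster learning, less stable
        per_device_train_batch_size=4,
        fp16=True,
    )
```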