configure-model

Configure tokens and layer access for unsupported or custom models. This command creates the configuration Wisent needs to work with models that are not in its default supported list, including special tokens and layer naming conventions.

Basic Usage
python -m wisent configure-model MODEL [OPTIONS]

Examples

Configure New Model
python -m wisent configure-model my-org/custom-llama-7b
Force Reconfiguration
python -m wisent configure-model meta-llama/Llama-3.1-8B-Instruct --force
Configure Local Model
python -m wisent configure-model ./my_local_model/

Arguments

Argument    Default     Description
model       required    Model name or path to configure
--force     false       Force reconfiguration even if the model already has a config

What Gets Configured

The configure-model command sets up:

  • Special tokens - BOS, EOS, pad tokens for the model
  • Layer naming - How to access transformer layers for activation extraction
  • Chat template - Format for chat-style interactions
  • Model architecture - Layer count and hidden dimensions
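
For illustration only, the recorded values might resemble the sketch below. The field names here are hypothetical, not Wisent's actual schema, and the sample values are typical of a Llama-style model.

# Hypothetical sketch of what configure-model records.
# Field names are invented for this example; Wisent's real schema may differ.
model_config = {
    "special_tokens": {
        "bos_token": "<|begin_of_text|>",
        "eos_token": "<|eot_id|>",
        "pad_token": "<|eot_id|>",
    },
    "layer_access": "model.layers.{i}",   # how transformer blocks are addressed
    "chat_template": "...",               # format for chat-style interactions
    "architecture": {
        "num_hidden_layers": 32,
        "hidden_size": 4096,
    },
}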

When to Use

Use this command when:

  • Working with a custom fine-tuned model
  • Using a newly released model not yet in Wisent's default configs
  • Encountering token or layer access errors with a model (see the inspection sketch after this list)
  • You need to override default configurations for a supported model
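
If you hit token or layer access errors, it can help to inspect the model's properties yourself before configuring. The sketch below uses the Hugging Face transformers library directly, not Wisent's API, and reuses the hypothetical model name from the examples above:

from transformers import AutoConfig, AutoModelForCausalLM, AutoTokenizer

model_name = "my-org/custom-llama-7b"  # hypothetical name from the examples above

# Special tokens that the configuration needs
tokenizer = AutoTokenizer.from_pretrained(model_name)
print("BOS:", tokenizer.bos_token)
print("EOS:", tokenizer.eos_token)
print("PAD:", tokenizer.pad_token)
print("Has chat template:", tokenizer.chat_template is not None)

# Architecture details: layer count and hidden size
config = AutoConfig.from_pretrained(model_name)
print("Layers:", config.num_hidden_layers)
print("Hidden size:", config.hidden_size)

# Layer naming: list module paths to see how transformer blocks are addressed.
# Loading the full weights can be slow for large models.
model = AutoModelForCausalLM.from_pretrained(model_name)
for name, _ in model.named_modules():
    if name.endswith(".0"):      # first block of each repeated layer stack
        print(name)              # e.g. "model.layers.0" on Llama-style models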
