# Wisent Guard CLI Reference

## Basic Usage

```bash
python -m wisent_guard tasks <task_name> [OPTIONS]
```

The CLI follows a simple pattern: specify the task(s) to run, followed by configuration options.

## Quick Start Commands

### Steering Mode (HellaSwag)

```bash
python -m wisent_guard tasks hellaswag --model meta-llama/Llama-3.1-8B-Instruct --layer 15 --limit 5 --steering-mode --steering-strength 1.0 --verbose
```

### Classification Mode (MMLU)

```bash
python -m wisent_guard tasks mmlu --model meta-llama/Llama-3.1-8B-Instruct --layer 15 --limit 10 --classifier-type logistic --verbose
```


## Classification Mode

Classification mode trains classifiers to detect harmful/incorrect content in model activations.

### Classifier Configuration

| Argument | Type | Default | Description |
|---|---|---|---|
| `--classifier-type` | str | `logistic` | Type of classifier (`logistic`, `mlp`) |
| `--detection-threshold` | float | `0.6` | Classification threshold (higher = stricter) |
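Conceptually, the classifier is trained on activation vectors from contrastive (correct vs. incorrect) examples, and `--detection-threshold` is then applied to its output probability. The sketch below illustrates the idea with a hand-rolled logistic classifier on synthetic activations; it is an assumption-laden illustration, not wisent_guard's actual training code:

```python
import numpy as np

rng = np.random.default_rng(42)
dim = 8  # toy activation dimensionality (real hidden states are much larger)

# Synthetic "activations": harmful examples cluster around +1, harmless around -1.
harmful = rng.normal(loc=1.0, scale=1.0, size=(100, dim))
harmless = rng.normal(loc=-1.0, scale=1.0, size=(100, dim))
X = np.vstack([harmful, harmless])
y = np.concatenate([np.ones(100), np.zeros(100)])

# Train a logistic-regression classifier by plain gradient descent.
w, b = np.zeros(dim), 0.0
for _ in range(300):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid of the linear score
    w -= 0.1 * (X.T @ (p - y) / len(y))
    b -= 0.1 * float(np.mean(p - y))

# Apply the detection threshold: a higher threshold flags fewer examples (stricter).
threshold = 0.6
probs = 1.0 / (1.0 + np.exp(-(X @ w + b)))
flagged = probs >= threshold
accuracy = float(np.mean((probs >= 0.5) == y))
```

Because `0.6` is stricter than the default decision boundary of `0.5`, raising the threshold can only reduce the number of flagged examples.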

## Steering Mode

Steering mode uses Contrastive Activation Addition (CAA) to influence model behavior during generation.

### Steering Configuration

| Argument | Type | Default | Description |
|---|---|---|---|
| `--steering-mode` | flag | `False` | Enable steering mode |
| `--steering-strength` | float | `1.0` | Steering vector strength multiplier |
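In CAA, the steering vector is the difference between mean activations over positive and negative contrastive examples; during generation it is added to the hidden state at the chosen layer, scaled by `--steering-strength`. A minimal numpy sketch of that mechanism (synthetic data; not wisent_guard's actual implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
hidden_dim = 16  # toy hidden size

# Layer activations collected from contrastive prompt pairs.
positive_acts = rng.normal(0.5, 1.0, size=(32, hidden_dim))   # desired behavior
negative_acts = rng.normal(-0.5, 1.0, size=(32, hidden_dim))  # undesired behavior

# Contrastive Activation Addition: the steering vector is a difference of means.
steering_vector = positive_acts.mean(axis=0) - negative_acts.mean(axis=0)

def apply_steering(hidden_state, strength=1.0):
    """Add the scaled steering vector to a hidden state during generation."""
    return hidden_state + strength * steering_vector

h = rng.normal(size=hidden_dim)
steered = apply_steering(h, strength=2.0)
```

The strength multiplier scales the whole vector, which is why large values (5.0+) can push hidden states far off the model's usual activation manifold and produce incoherent output.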

### Steering Strength Guidelines

| Strength | Effect | Recommendation |
|---|---|---|
| 0.5-1.0 | Subtle behavioral changes | Recommended for production |
| 1.0-3.0 | Noticeable but coherent changes | Good for experimentation |
| 5.0+ | Risk of incoherent outputs | Not recommended |

## Examples

### Basic Classification

```bash
python -m wisent_guard tasks mmlu --model meta-llama/Llama-3.1-8B-Instruct --layer 15 --limit 10 --classifier-type logistic
```

### Steering with Custom Strength

```bash
python -m wisent_guard tasks hellaswag --model meta-llama/Llama-3.1-8B-Instruct --layer 15 --steering-mode --steering-strength 2.0 --verbose
```

### Multi-Task Evaluation

```bash
python -m wisent_guard tasks hellaswag,mmlu,truthfulqa --layer 15 --limit 20 --model meta-llama/Llama-3.1-8B
```

## Core Arguments

### Required Arguments

| Argument | Description | Example |
|---|---|---|
| `command` | Command to run (always `tasks`) | `tasks` |
| `task_names` | Task name(s) or file path | `hellaswag`, `truthfulqa,mmlu`, `data.csv` |

### Basic Configuration

| Argument | Type | Default | Description |
|---|---|---|---|
| `--model` | str | `meta-llama/Llama-3.1-8B-Instruct` | Model name or path |
| `--layer` | str | `15` | Layer(s) to extract activations from |
| `--shots` | int | `0` | Number of few-shot examples |
| `--limit` | int | `None` | Limit number of documents per task |
| `--seed` | int | `42` | Random seed for reproducibility |
| `--device` | str | `None` | Device to run on (auto-detected if `None`) |
| `--verbose` | flag | `False` | Enable verbose logging |

## Model and Layer Configuration

### Model Selection

```bash
--model meta-llama/Llama-3.1-8B-Instruct  # HuggingFace model
--model /path/to/local/model              # Local model path
```

### Layer Specification

The `--layer` argument supports multiple formats:

| Format | Description | Example |
|---|---|---|
| Single layer | Extract from one layer | `--layer 15` |
| Range | Extract from a layer range | `--layer 14-16` |
| List | Extract from specific layers | `--layer 14,15,16` |
| Auto-optimize | Find the optimal layer | `--layer -1` |
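The formats above can be parsed along these lines. `parse_layers` is a hypothetical helper written for illustration, not the CLI's actual parser:

```python
def parse_layers(spec: str) -> list[int]:
    """Parse a --layer value: single layer, range, comma list, or -1 (auto)."""
    if spec == "-1":
        return [-1]  # sentinel: ask the tool to auto-optimize the layer choice
    if "," in spec:
        return [int(part) for part in spec.split(",")]   # e.g. "14,15,16"
    if "-" in spec:
        start, end = spec.split("-")                     # e.g. "14-16"
        return list(range(int(start), int(end) + 1))     # inclusive range
    return [int(spec)]                                   # e.g. "15"
```

For example, `parse_layers("14-16")` yields `[14, 15, 16]`, the same layers as the explicit list form.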

## Generation Settings

| Argument | Type | Default | Description |
|---|---|---|---|
| `--max-new-tokens` | int | `300` | Maximum new tokens for generation |
| `--split-ratio` | float | `0.8` | Train/test split ratio |
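With the default `--split-ratio` of `0.8`, 100 examples would yield 80 for training and 20 for evaluation. A sketch of that split (assumed behavior; the tool's exact shuffling may differ):

```python
import random

def train_test_split(examples, split_ratio=0.8, seed=42):
    """Shuffle with a fixed seed, then cut at split_ratio."""
    shuffled = examples[:]                  # avoid mutating the caller's list
    random.Random(seed).shuffle(shuffled)
    cut = int(len(shuffled) * split_ratio)
    return shuffled[:cut], shuffled[cut:]

train, test = train_test_split(list(range(100)))
```

Seeding the shuffle (compare `--seed`) keeps the split reproducible across runs.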