meta/llama-4-scout-instruct

A 17-billion-parameter model with 16 experts.

Input

Configure the inputs for the AI model.

- Prompt
- Top-p (nucleus) sampling (range 0-100)
- Max tokens (range 2-20480): the maximum number of tokens the model should generate as output.
- Temperature (range 0-100): the value used to modulate the next token probabilities.
- Presence penalty (range 0-100)
- Frequency penalty (range 0-100)
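The page does not document how ikalos.ai implements these controls, but the parameter names follow common generation-sampling conventions. Below is a minimal sketch, assuming the standard definitions: temperature divides the logits, top-p restricts sampling to the smallest set of tokens whose cumulative probability reaches the threshold, and presence/frequency penalties subtract from the logits of tokens that have already been generated (the `apply_penalties` formula mirrors the OpenAI-style convention; all function names here are illustrative, not the site's API).

```python
import math

def apply_temperature(logits, temperature):
    # Divide logits by temperature: values < 1 sharpen the distribution,
    # values > 1 flatten it toward uniform.
    return [l / temperature for l in logits]

def softmax(logits):
    # Numerically stable softmax over a list of logits.
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    s = sum(exps)
    return [e / s for e in exps]

def top_p_filter(probs, top_p):
    # Keep the smallest set of tokens whose cumulative probability
    # reaches top_p, then renormalize; sampling is restricted to
    # this "nucleus" of tokens.
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept, cum = [], 0.0
    for i in order:
        kept.append(i)
        cum += probs[i]
        if cum >= top_p:
            break
    total = sum(probs[i] for i in kept)
    return {i: probs[i] / total for i in kept}

def apply_penalties(logits, generated_counts, presence_penalty, frequency_penalty):
    # Common convention: for each token already generated, subtract a flat
    # presence penalty once, plus a frequency penalty per occurrence.
    out = list(logits)
    for tok, count in generated_counts.items():
        if count > 0:
            out[tok] -= presence_penalty + frequency_penalty * count
    return out

# Toy next-token distribution over a 4-token vocabulary.
logits = [2.0, 1.0, 0.5, -1.0]
probs = softmax(apply_temperature(logits, 0.7))
nucleus = top_p_filter(probs, 0.9)  # only the most likely tokens survive
```

With temperature 0.7 and top-p 0.9, the two most likely tokens carry enough probability mass that the other two are excluded from sampling, which is why lower top-p values make output more focused.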

Output
The generated output will appear here.


llama-4-scout-instruct - ikalos.ai