
cerebras/deepseek-r1-distill-llama-70b

DeepSeek R1 Distill Llama 70B model optimized for fast inference on Cerebras hardware. Supports up to 65,536 tokens context length.


Provider: cerebras

Model type: chat

Location: us

Context Window: 65,536 tokens


Pricing

Input tokens: $0.00 per million

Output tokens: $0.00 per million
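Since the model type above is chat, a request to it would follow the usual chat-completions shape. The sketch below only builds the request payload; the field names follow the common OpenAI-compatible convention, which is an assumption on my part and not confirmed by this page — consult the provider's API documentation for the actual endpoint and schema.

```python
import json

def build_chat_request(prompt: str, max_tokens: int = 1024) -> dict:
    """Build a chat-completions payload for deepseek-r1-distill-llama-70b.

    Field names assume an OpenAI-compatible chat schema (an assumption,
    not something this page documents).
    """
    # The page states a 65,536-token context window, so cap the request.
    assert max_tokens <= 65_536, "exceeds the model's context window"
    return {
        "model": "deepseek-r1-distill-llama-70b",
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

payload = build_chat_request("Summarize speculative decoding in one paragraph.")
print(json.dumps(payload, indent=2))
```

Sending the payload would additionally require the provider's base URL and an API key, neither of which appears on this page.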


Create an account and start building today.
