OmniAI::Llama

An implementation of the OmniAI interface for api.llama.com.

Installation

gem install omniai-llama

Usage

Client

A client is set up as follows if ENV['LLAMA_API_KEY'] exists:

client = OmniAI::Llama::Client.new

A client may also be passed the following options:

  • api_key (required - default is ENV['LLAMA_API_KEY'])
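
For example, the key may be passed explicitly rather than read from the environment:

client = OmniAI::Llama::Client.new(api_key: 'LLM|...')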

Configuration

Global configuration is supported for the following options:

OmniAI::Llama.configure do |config|
  config.api_key = 'LLM|...' # default: ENV['LLAMA_API_KEY']
end

Chat

A chat completion is generated by passing in a simple text prompt:

completion = client.chat('Tell me a joke!')
completion.content # 'Why did the chicken cross the road? To get to the other side.'

A chat completion may also be generated by using a prompt builder:

completion = client.chat do |prompt|
  prompt.system('You are an expert in geography.')
  prompt.user('What is the capital of Canada?')
end
completion.content # 'The capital of Canada is Ottawa.'

Model

model takes an optional string (default is Llama-4-Scout-17B-16E-Instruct-FP8):

completion = client.chat('How fast is a cheetah?', model: OmniAI::Llama::Chat::Model::LLAMA_4_SCOUT)
completion.content # 'A cheetah can reach speeds over 100 km/h.'

Temperature

temperature takes an optional float between 0.0 and 2.0 (default is 0.7):

completion = client.chat('Pick a number between 1 and 5', temperature: 2.0)
completion.content # '3'

Stream

stream takes an optional proc to stream responses in real-time chunks instead of waiting for a complete response:

stream = proc do |chunk|
  print(chunk.content) # 'Better', 'three', 'hours', ...
end
client.chat('Be poetic.', stream:)

Format

format takes an optional symbol (:json) that sets the response_format to json_object:

completion = client.chat(format: :json) do |prompt|
  prompt.system(OmniAI::Chat::JSON_PROMPT)
  prompt.user('What is the name of the drummer for the Beatles?')
end
JSON.parse(completion.content) # { "name": "Ringo" }

When using JSON mode, you must also instruct the model to produce JSON yourself via a system or user message.