Ash Framework makes it straightforward to integrate machine learning into your Elixir applications. With AshAi
and PostgreSQL’s vector
extension, you can embed text fields and make them ready for semantic search, recommendations, or similarity matching. In this post, we’ll walk through how to set up OpenAI embeddings inside an Ash resource.
We start by wrapping the OpenAI embeddings API in an Ash-compatible module:
# lib/my_app/open_ai_embedding_model.ex
defmodule MyApp.OpenAiEmbeddingModel do
  use AshAi.EmbeddingModel

  @impl true
  def dimensions(_opts), do: 3072

  @impl true
  def generate(texts, _opts) do
    api_key = System.fetch_env!("OPENAI_API_KEY")

    headers = [
      {"Authorization", "Bearer #{api_key}"},
      {"Content-Type", "application/json"}
    ]

    body = %{
      "input" => texts,
      "model" => "text-embedding-3-large"
    }

    response =
      Req.post!("https://api.openai.com/v1/embeddings",
        json: body,
        headers: headers
      )

    case response.status do
      200 ->
        response.body["data"]
        |> Enum.map(fn %{"embedding" => embedding} -> embedding end)
        |> then(&{:ok, &1})

      _ ->
        {:error, response.body}
    end
  end
end
This module tells AshAi how to produce embeddings for a list of texts. We're using OpenAI's text-embedding-3-large
model, which returns 3072-dimensional vectors; that's why dimensions/1 returns 3072.
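Before wiring this into a resource, you can sanity-check the module directly in IEx (assuming OPENAI_API_KEY is set; the sample text is just a placeholder):

{:ok, [embedding]} =
  MyApp.OpenAiEmbeddingModel.generate(["A lightweight waterproof hiking boot"], [])

# The vector length should match dimensions/1 above
length(embedding)
# => 3072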
PostgreSQL needs the vector
extension to store embeddings efficiently:
# lib/my_app/repo.ex
defmodule MyApp.Repo do
  use AshPostgres.Repo,
    otp_app: :my_app

  @impl true
  def installed_extensions do
    # "vector" enables pgvector support for embedding columns
    ["ash-functions", "citext", "vector"]
  end

  @impl true
  def prefer_transaction?, do: false

  @impl true
  def min_pg_version, do: %Version{major: 16, minor: 0, patch: 0}
end
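Once the migrations further down have run, you can double-check that the extension really made it into the database. A quick sketch using Ecto's raw SQL helper:

MyApp.Repo.query!("SELECT extname FROM pg_extension WHERE extname = 'vector'")
# => %Postgrex.Result{rows: [["vector"]], ...}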
Let’s embed product descriptions:
# lib/my_app/shopping/product.ex
defmodule MyApp.Shopping.Product do
  use Ash.Resource,
    otp_app: :my_app,
    domain: MyApp.Shopping,
    extensions: [AshAi, AshOban],
    data_layer: AshPostgres.DataLayer

  vectorize do
    full_text do
      text fn product ->
        """
        Name: #{product.name}
        Description: #{product.description}
        """
      end
    end

    attributes description: :vectorized_description

    strategy :ash_oban
    ash_oban_trigger_name :my_vectorize_trigger
    embedding_model MyApp.OpenAiEmbeddingModel
  end

  oban do
    triggers do
      trigger :my_vectorize_trigger do
        action :ash_ai_update_embeddings
        worker_read_action :read
        worker_module_name __MODULE__.AshOban.Worker.UpdateEmbeddings
        scheduler_module_name __MODULE__.AshOban.Scheduler.UpdateEmbeddings
        queue :default
      end
    end
  end

  postgres do
    table "products"
    repo MyApp.Repo
  end

  actions do
    defaults [:read, :create, :update, :destroy]
    default_accept [:name, :description]
  end

  attributes do
    uuid_v7_primary_key :id

    attribute :name, :string, public?: true
    attribute :description, :string, public?: true

    timestamps()
  end
end
This setup ensures that whenever a product is created or updated, its description is vectorized in the background using Oban.
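As a rough IEx sketch of what that looks like in practice, you can create a product and, during development, force the trigger to run instead of waiting for Oban's scheduler. The product data here is made up:

product =
  MyApp.Shopping.Product
  |> Ash.Changeset.for_create(:create, %{
    name: "Trail Runner 2000",
    description: "A lightweight, waterproof running shoe."
  })
  |> Ash.create!()

# The embedding is written asynchronously, so :vectorized_description is nil
# at first. Forcing the trigger is handy while developing:
AshOban.run_trigger(product, :my_vectorize_trigger)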
Ash will generate the necessary migrations to support vector columns:
mix ash_postgres.generate_migrations add_vector_support
mix ash_postgres.migrate
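The exact output depends on your AshPostgres version (the extension usually lands in its own migration), but the generated code boils down to something like the sketch below; the module name and column options are illustrative, not a verbatim copy:

# Illustrative sketch of what the generated migrations do
defmodule MyApp.Repo.Migrations.AddVectorSupport do
  use Ecto.Migration

  def up do
    # Installed because "vector" was added to installed_extensions/0 in the repo
    execute(~s(CREATE EXTENSION IF NOT EXISTS "vector"))

    alter table(:products) do
      # 3072 matches dimensions/1 in MyApp.OpenAiEmbeddingModel
      add :vectorized_description, :vector, size: 3072
    end
  end

  def down do
    alter table(:products) do
      remove :vectorized_description
    end

    execute(~s(DROP EXTENSION IF EXISTS "vector"))
  end
end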
With just a few modules and configuration changes, you now have OpenAI embeddings stored directly in PostgreSQL, ready for similarity queries. Combined with Ash’s vector
features, you can build powerful semantic search and recommendation systems on top of your data with minimal effort.
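To give a taste of those similarity queries, here is a hedged sketch that embeds a search phrase and filters products by cosine distance; the search text and the 0.6 threshold are arbitrary:

require Ash.Query

{:ok, [query_vector]} =
  MyApp.OpenAiEmbeddingModel.generate(["waterproof hiking boots"], [])

MyApp.Shopping.Product
|> Ash.Query.filter(vector_cosine_distance(vectorized_description, ^query_vector) < 0.6)
|> Ash.read!()

Lower distances mean closer matches, so you can also sort on the same expression to rank results.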
If this post was enjoyable or useful for you, please share it! If you have comments, questions, or feedback, you can send them to my personal email. To get new posts, subscribe via the RSS feed.