SF Ruby Conference Videos Are Live. Watch This One First.
The SF Ruby Conference 2025 videos just dropped. All 58 recordings are now available at sfruby.com/talks. Keynotes, workshops, startup demos—the works.
If you only watch one, make it Carmine Paolino's keynote, "RubyLLM: One API, One Person, One Machine for AI."
Here's why.
The Merchants of Complexity
The AI world has been sold a lie. You need frameworks. SDKs. Enterprise architectures. Bullshit.
AI today is just API calls. That's it.
And when the game becomes building products instead of training models, complexity is death and simplicity is everything. Rails proved this a decade ago. Now RubyLLM is proving it again.
One API for Everything
RubyLLM gives you one interface for every model, every vendor. OpenAI, Anthropic, Gemini, Bedrock, DeepSeek, Mistral, Ollama—doesn't matter. Same beautiful Ruby code.
```ruby
chat = RubyLLM.chat
chat.ask "What's the weather in Berlin?"

# Switch providers mid-conversation
chat.with_model('claude-sonnet-4-20250514')
chat.ask "Now explain quantum computing"
```
That's it. No adapter patterns. No provider-specific SDKs. No dependency hell.
While Python developers debug their 14-line "Hello World," we're shipping.
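Switching providers mid-conversation assumes you've pointed RubyLLM at your credentials once, up front. A minimal configuration sketch (the config accessors follow RubyLLM's configuration API; the specific environment variable names are my assumption):

```ruby
require 'ruby_llm'

RubyLLM.configure do |config|
  # One key per provider you want available; any configured provider
  # can then be selected per-chat with with_model.
  config.openai_api_key    = ENV['OPENAI_API_KEY']
  config.anthropic_api_key = ENV['ANTHROPIC_API_KEY']
  config.gemini_api_key    = ENV['GEMINI_API_KEY']
end
```

Set this once in an initializer and every chat in the app can hop between vendors.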
Rails Integration That Actually Works
RubyLLM has first-class Rails support. Run the generator, and you get Chat and Message models with automatic persistence. Your conversations save to the database without extra code.
```ruby
class Chat < ApplicationRecord
  acts_as_chat
end

chat = Chat.create!(model_id: 'gpt-4o')
chat.ask "Summarize this document", with: { pdf: "contract.pdf" }
```
Vision. PDFs. Audio. Streaming. Tools. It's all there.
Ruby's Time in AI
Carmine built his product, Chat with Work, on RubyLLM. One developer. One machine. Serving thousands of users.
This is what "botscaling" looks like in practice. Small teams. Big output. No infrastructure complexity.
Ruby's time in AI isn't coming. It's here.
Watch the full keynote at sfruby.com/talks and check out RubyLLM yourself. You'll have AI running in your Rails app in five minutes.
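The five-minute claim breaks down roughly like this (a sketch based on RubyLLM's Rails setup docs; treat the exact generator name as an assumption to verify against the README):

```shell
bundle add ruby_llm

# Rails only: generates the Chat and Message models
# plus the migrations behind acts_as_chat.
bin/rails generate ruby_llm:install
bin/rails db:migrate
```

Add an API key to your credentials and you're talking to a model.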