Turbo & RubyLLM: Build a Streaming AI Chat in Rails
Build a ChatGPT-style streaming chat interface using RubyLLM and Turbo Streams. Real-time AI responses, automatic persistence, zero JavaScript API calls.
I’m Jonathan, the CTO of Loyal, with over two decades of experience in software development for startups. This space is where I share my experiences, insights, and opinions—both popular and unconventional.
My ramblings cover mobile and web applications, startup launches, favorite books, user acquisition, user experience (maybe parenting), and, most importantly, driving revenue.
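Before getting into the full walkthrough, here is the shape of the idea in the subtitle: RubyLLM makes the model call and yields streamed chunks, while Turbo Streams pushes each chunk to the browser over Action Cable, so no hand-written JavaScript API calls are needed. The sketch below is a minimal, hypothetical version of that wiring, not the article's final code; it assumes a `Chat` model with `has_many :messages`, a `messages/message` partial wrapped in `dom_id(message)`, and the `ruby_llm` and `turbo-rails` gems.

```ruby
# app/jobs/ai_response_job.rb
class AiResponseJob < ApplicationJob
  queue_as :default

  # chat is an ActiveRecord Chat with has_many :messages (hypothetical schema).
  def perform(chat, prompt)
    # Persist an empty assistant message up front so the page has a target
    # element to replace as chunks arrive.
    message = chat.messages.create!(role: "assistant", content: "")

    # RubyLLM yields streamed chunks to the block; append each one and push
    # the updated partial to every browser subscribed to this chat's stream.
    RubyLLM.chat.ask(prompt) do |chunk|
      message.content += chunk.content.to_s
      Turbo::StreamsChannel.broadcast_replace_to(
        chat,
        target: ActionView::RecordIdentifier.dom_id(message),
        partial: "messages/message",
        locals: { message: message }
      )
    end

    # Persist the fully assembled response once streaming finishes.
    message.save!
  end
end
```

Subscribing is a single line in the chat's show view, `<%= turbo_stream_from @chat %>`; Action Cable delivers the broadcasts, and the final `save!` persists the completed response.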