As models get better at reasoning, you shouldn’t need to manually draw structured paths. It should feel more like onboarding a new teammate - you give high-level goals and context, and they figure out the details. We don’t give flowcharts to teammates because it’s too much overhead to specify everything upfront. We think agentic systems are heading the same way. Flowcharts are helpful in some cases, but they're not how we’ll build long-lived assistants.
Our take is that trust in agent systems has to be empirical. You start with manual testing and then layer on AI-based simulations (we’re adding this to Rowboat soon) to test more scenarios at scale. Splitting work into multiple agents also makes it easier to isolate and test parts separately.
The second part (generating and posting it) is done, but finding the news is the hardest part, even if I share some RSS feeds. Would this help with my use case, or is this something completely different?
Rowboat has tools to search the web, find HN posts, browse Reddit, etc., and you can ask the copilot to build an agent that filters posts by topic - at the granularity you want. We also have time-based triggers, so the agent can be invoked every x hours.
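As a rough sketch of what a time-based trigger does (plain Python; the function names and interval handling here are illustrative placeholders, not Rowboat's actual trigger API):

```python
import time

# Hedged sketch: invoke an agent callable every `interval_hours`,
# up to `max_runs` times. The sleep function is injectable so the
# example runs instantly instead of actually waiting hours.
def run_on_schedule(agent, interval_hours, max_runs, sleep=time.sleep):
    results = []
    for _ in range(max_runs):
        results.append(agent())
        sleep(interval_hours * 3600)
    return results

# Example: a stub "news filter" agent, with a no-op sleep injected.
runs = run_on_schedule(lambda: "checked feeds", 6, 3, sleep=lambda s: None)
print(runs)  # -> ['checked feeds', 'checked feeds', 'checked feeds']
```

A real trigger would run in a scheduler or worker process rather than a blocking loop, but the contract is the same: the agent is invoked on a fixed cadence with no user in the loop.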
We have a similar prebuilt template you could check out: https://app.rowboatlabs.com/projects?shared=N2pJTzyTdh-NdwMi....
Even for support, it's more flexible: companies are shifting from narrow "customer support" to broader customer experience - not just resolving tickets, but handling onboarding, account health, proactive updates, and escalations across teams. With Rowboat, you can compose cross-functional agents across support, product, and ops. The same system that answers tickets can also trigger workflows, update dashboards, or prep reports.
Does this make sense?
What is the plan if, as JetBrains recently experienced, customer usage exceeds the $20?
Power users treat Rowboat as their daily go-to assistant for a range of different tasks, customizing assistants for themselves and expanding to cover more use cases.
Regarding pricing: if usage exceeds the $20 (Starter) plan, we have a $200 (Pro) plan that users can upgrade to. We will also soon launch pay-as-you-go pricing.
Those are much harder and more time-consuming to express and maintain in a flowchart model. Our goal with Rowboat was to make it simple and quick to build and maintain multi-agent assistants. Hence, the copilot is equipped with tools and state-of-the-art orchestration patterns [1], which allow it to build ready-to-go assistants in minutes from high-level requirements.
[1] https://cdn.openai.com/business-guides-and-resources/a-pract...
You are a business-facing startup, act like one.
The current UI [1] made me feel like the target market is my elementary school kids.
Rowboat is designed especially for agentic patterns (e.g., manager-worker) that lend more autonomy to agents. Rowboat's copilot is empowered to organize and orchestrate agents flexibly, based on the nature of the assistant.
Here is some personal experience: we previously built Coinbase's automated chatbot, and we used a flowchart-type builder to do that. It was an intent-entity based system that used deep learning models. It started out great, but it quickly became a nightmare to manage. To account for the fact that users could ask things out of turn or move across topics every other turn, we added a concept called jumps - where control could go from one path to another, unrelated path of the workflow in one hop - which again introduced a lot of maintenance complexity.
The way we see it: when we assign a task to a human teammate, we don't give them a flowchart - we just give them high-level instructions. Maybe that should be the standard for building systems with LLMs, too?
Is this making sense?
No, the instructions are not compiled into a flowchart under the hood. We use OpenAI’s Agents SDK and use handoffs as the mechanism to transfer control between agents.
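To illustrate the handoff pattern itself (plain Python only - this is not the Agents SDK or Rowboat internals, and the agent names and keyword routing are hypothetical; a real agent would use an LLM to decide):

```python
# Minimal sketch of handoff-based control transfer: each agent either
# produces a reply or names a peer agent to hand control to.

def triage_agent(message):
    # Stand-in for an LLM decision: route refund questions onward.
    if "refund" in message:
        return {"handoff": "billing_agent"}
    return {"reply": "How can I help?"}

def billing_agent(message):
    return {"reply": "Your refund is being processed."}

AGENTS = {"triage_agent": triage_agent, "billing_agent": billing_agent}

def run(message, agent_name="triage_agent"):
    # Follow handoffs until some agent produces a reply.
    while True:
        result = AGENTS[agent_name](message)
        if "handoff" in result:
            agent_name = result["handoff"]
        else:
            return result["reply"]

print(run("I want a refund"))  # -> "Your refund is being processed."
```

The key property, unlike a flowchart edge, is that the handoff target is chosen at runtime by the agent itself rather than being a pre-drawn path.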
There are three types of agents in Rowboat: 1. Conversational agents can talk to the user. They can call tools and can choose to hand off control to another agent if needed. 2. Task agents can’t talk to users but can otherwise call tools and do things in a loop - they are internal agents. 3. Pipeline agents are sequences of task agents (here the transfer of control is deterministic).
For instance, if we build a system for airline customer support, there might be a set of conversational agents, one for each high-level topic like ticketing, baggage, etc., and internally they can use task and pipeline agents as needed.
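A minimal sketch of how the three agent types might compose in that airline example (plain Python with hypothetical function names and a hard-coded policy string - not Rowboat's implementation):

```python
# Task agents: internal workers with no user interaction; they read
# and update a shared context (in reality, via tool calls / LLM steps).
def lookup_baggage_policy(context):
    context["policy"] = "1 checked bag included"  # stubbed tool result
    return context

def format_answer(context):
    context["answer"] = f"Policy: {context['policy']}"
    return context

# Pipeline agent: a deterministic sequence of task agents.
def baggage_pipeline(context):
    for step in (lookup_baggage_policy, format_answer):
        context = step(context)
    return context

# Conversational agent: the only one that talks to the user; it
# delegates the actual work to the pipeline internally.
def baggage_agent(user_message):
    context = baggage_pipeline({"question": user_message})
    return context["answer"]

print(baggage_agent("How many bags can I check?"))
# -> "Policy: 1 checked bag included"
```

Note the split: control flow inside the pipeline is deterministic, while the choice of which conversational agent handles a user message (ticketing vs. baggage) is where handoffs come in.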
Does this make sense?