9 comments

  • frio 25 minutes ago
    Thanks, I've been tooling away in my spare time on my own version of this -- both to get a deeper understanding of agents (everyone suggests writing your own) and to help learn Rust. I'd like to retain `pi`'s configurability though; the ability to self-mutate and generate new tools is incredibly useful, particularly because I don't think any of these things should have access to arbitrary code execution through `bash` (of course, if they have access to, say, `edit` and `cargo run`, they still have arbitrary code exec, but...), so I tend to generate tools on the fly when I encounter something the no-bash agent needs to do.
    • gidellav 18 minutes ago
      I actually thought about this issue, but while Pi can offer that script-like environment thanks to being built on an interpreted language (TypeScript), Rust has its own limitations as a compiled language.

      I decided to allow for customization in a different way:

      1. The prompt library (~/.config/hypernova/prompts/) acts as a simpler alternative to Skills, with built-in prompts meant to replace superpowers + Claude's frontend-design

      2. Compile-time features; anything that might bloat the agent can be disabled when you compile zerostack yourself

      3. Clean code; the codebase is short and easy to read, so you can just point zerostack at its own source code to build a custom fork if your needs can't be met otherwise. Good features could also be adopted back into the main version.

      4. Permission modes; as you can see in the README, there was a lot of concern around the permission model, and I landed on a 4-mode system that goes from "Restrictive" (no commands) to "YOLO" (whatever the agent wants to do), plus custom regex patterns for allow/ask/deny permission on `bash` calls (rough sketch of that matching below). In your case, you just need to run `zerostack -R` to force all tools to ask for permission.
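
      Roughly, the kind of matching I mean (simplified sketch, not the actual zerostack code; deny wins over ask, ask wins over allow):

        use regex::Regex;

        enum Decision { Allow, Ask, Deny }

        fn decide(cmd: &str, allow: &[Regex], ask: &[Regex], deny: &[Regex]) -> Decision {
            if deny.iter().any(|re| re.is_match(cmd)) { return Decision::Deny; }
            if ask.iter().any(|re| re.is_match(cmd)) { return Decision::Ask; }
            if allow.iter().any(|re| re.is_match(cmd)) { return Decision::Allow; }
            // Unmatched commands fall back to asking the user.
            Decision::Ask
        }

        fn main() {
            let allow = vec![Regex::new(r"^cargo (build|check|test)\b").unwrap()];
            let ask = vec![Regex::new(r"^git push\b").unwrap()];
            let deny = vec![Regex::new(r"^rm -rf\b").unwrap()];
            assert!(matches!(decide("cargo test", &allow, &ask, &deny), Decision::Allow));
        }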

      (Also, there's a work-in-progress feature for programmable agents, but that's yet to be announced.)

      • frio 15 minutes ago
        I've been trying to use `Deno` underneath `Rust` so that the tools can still be written in TypeScript and thus self-mutated without a compilation step (but I can still try to do clever things with V8 Isolates or similar). It's been an ugly experiment so far; I'm vaguely thinking a simpler model would be to just define a binary "API" and run tools by exec-ing binaries.
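
        Something in that vein (very rough sketch; the JSON-over-stdio shape and the tool path are made up for illustration):

          use std::io::Write;
          use std::process::{Command, Stdio};

          // Each tool is a standalone binary: JSON request on stdin, JSON result on stdout.
          fn run_tool(tool: &str, request_json: &str) -> std::io::Result<String> {
              let mut child = Command::new(tool)
                  .stdin(Stdio::piped())
                  .stdout(Stdio::piped())
                  .spawn()?;
              child.stdin.as_mut().unwrap().write_all(request_json.as_bytes())?;
              let out = child.wait_with_output()?; // closes stdin, then waits
              Ok(String::from_utf8_lossy(&out.stdout).into_owned())
          }

          fn main() -> std::io::Result<()> {
              let reply = run_tool("./tools/read_file", r#"{"path":"Cargo.toml"}"#)?;
              println!("{reply}");
              Ok(())
          }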
        • gidellav 10 minutes ago
          I have to be honest and tell you that trying to load such a heavy runtime as a scripting layer is not a great idea. At the same time, I'm working on another Rust project where I also needed scripting, and after three attempts I landed on rhai (https://rhai.rs/) (https://rhai.rs/book).

          You might find it nice for pretty much every use case except high-performance scripting (so, as long as you're not trying to build the entire logic in rhai, you're going to be fine).
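
          Embedding it is only a few lines; a minimal sketch in the spirit of the rhai book (nothing zerostack-specific):

            use rhai::Engine;

            fn main() -> Result<(), Box<rhai::EvalAltResult>> {
                let mut engine = Engine::new();

                // Expose a host function that scripts can call.
                engine.register_fn("greet", |name: &str| format!("hello, {name}"));

                // Evaluate a script and get a typed value back.
                let out = engine.eval::<String>(r#"greet("tool")"#)?;
                println!("{out}");
                Ok(())
            }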

          • frio 7 minutes ago
            Yeah, it's been a bit of a dead end. I didn't want the heavy runtime, but felt it was worth disproving through experiment rather than ruling it out off the bat. Even before getting it running, the dependency list alone was pretty discouraging, especially given the storm of supply-chain attacks these days.

            Rhai looks nice, I'll take a look, thanks! And good luck with Zerostack.

  • throwa356262 1 hour ago
    "RAM footprint: ~8MB on an empty session, ~12MB when working"

    I like this. Claude Code uses multiple gigabytes, which is really annoying on low-end laptops.

    • messh 23 minutes ago
      The memory footprint is great; it finally allows running these coding agents on extra-small instances -- say, an x1 on shellbox.dev
    • tecoholic 1 hour ago
      Yes. Just this fact is going to make a lot of people try it out.
    • marknutter 1 hour ago
      Isn't that because of the context window size?
      • gidellav 52 minutes ago
        Hi, I'm the developer of zerostack! No, the memory footprint is not because of the context window size: in my benchmarks, loading a 128k context only took it from 8MB (without any chat/context loaded) to 11MB.

        The reasons why zerostack's memory footprint is so small:

        - Rust, and not JS/Python, so no interpreters/VMs on top

        - Load-as-needed, so we only allocate things like LLM connectors when needed

        - `smallvec` used for most of the array usage in the tool (up to N items are stored on the stack)

        - `compactstring` used for most of the string usage in the tool (up to N chars are stored on the stack); there's a rough sketch of both after this list

        - `opt-level=z` to force LLVM to optimize for binary size rather than performance (even though we still beat opencode in both TTFT and tool-use time)

        - heavy usage of [LTO](https://en.wikipedia.org/wiki/Interprocedural_optimization#W...)
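
        To give an idea of the data-structure side (illustrative sketch only, assuming the `smallvec` and `compact_str` crates; the actual zerostack types differ):

          use compact_str::CompactString; // short strings stored inline instead of on the heap
          use smallvec::SmallVec; // small vectors kept on the stack up to a fixed capacity

          // Made-up message type, just to show where the crates help.
          struct Message {
              role: CompactString,            // "user"/"assistant" fit inline (<= 24 bytes on 64-bit)
              tool_calls: SmallVec<[u32; 4]>, // up to 4 ids on the stack, spills to the heap beyond
          }

          fn main() {
              let msg = Message {
                  role: CompactString::new("assistant"),
                  tool_calls: SmallVec::from_slice(&[1, 2]),
              };
              // opt-level = "z" and lto = true live in Cargo.toml's [profile.release] section.
              println!("{} ({} tool calls)", msg.role, msg.tool_calls.len());
          }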

      • SwellJoe 48 minutes ago
        The context window is not on your system. It's on the server with the model. There may be some local prompt caching, of some sort, but you're not locally hosting the context unless you're also locally hosting the model.
      • SatvikBeri 1 hour ago
        The context window has nothing to do with RAM usage, and even if it did, a million tokens of context is maybe 5 MB of text (a token averages roughly 4 characters).
  • 360MustangScope 43 minutes ago
    Funny this comes out today. I was just about to start writing one in Rust. It's amazing having opencode slowly leak memory, balloon to 6 GB on a large project, and then get slower and slower.

    Will check this out! Seems cool!

    • gidellav 17 minutes ago
      Yes! This project grew out of an OOM-killer activation on my old laptop, because I had more than 2 opencode instances open together with Firefox...
  • khimaros 33 minutes ago
    I built something with a similar philosophy here: https://github.com/khimaros/airun -- it is intended to be piped and redirected. It discovers skills, AGENTS and prompt templates from Claude Code, Pi.dev, OpenCode and others. No TUI, but it does have a basic tool-calling loop:

    $ airun -q -p 'output a shell command for linux to display the current time. output only the command with no other code fencing or prose' | airun -q -s 'review the provided shell command, determine if it is safe, run it only if it is safe, and then summarize the output from the command' --permissions-allow='bash:date *'

    • gidellav 25 minutes ago
      While I think the core philosophy is the same, I'd like to ask: why add features like Skills and prompt templates?

      I personally decided not to implement Skills and instead use a prompt library approach, where certain .md files fully replace the system prompt; it allows for something similar to Skills with ~100 LoC dedicated to the whole system.
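
      The mechanism is barely more than reading a file; a hypothetical sketch (the paths, the prompt name, and the use of the `dirs` crate are all made up for illustration):

        use std::{fs, path::PathBuf};

        // Pick a prompt file from the library and use it as the system prompt.
        fn load_prompt(name: &str) -> std::io::Result<String> {
            let dir = dirs::config_dir() // e.g. ~/.config on Linux
                .unwrap_or_else(|| PathBuf::from("."))
                .join("zerostack")
                .join("prompts");
            fs::read_to_string(dir.join(format!("{name}.md")))
        }

        fn main() -> std::io::Result<()> {
            // The selected .md fully replaces the system prompt for the session.
            let system_prompt = load_prompt("frontend-design")?;
            println!("{} bytes of system prompt loaded", system_prompt.len());
            Ok(())
        }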

  • hiAndrewQuinn 57 minutes ago
    The codebase was small enough that I handed it over to DeepSeek v4 Flash in Pi to skim through for any risky business, and I didn't find anything concerning. Nice work.
    • koito17 13 minutes ago
      Since the OP stated they used DeepSeek V4 Flash for generating a lot of the code, I decided to check whether there were any outdated dependencies. In my experience, with Rust projects, if you do not instruct models (even Claude 4.7 Opus) to use `cargo add` instead of manually editing the Cargo.toml, you will almost certainly get out-of-date dependencies added to your project.

      Manually checking the dependencies used by this project, I was pleased to see they are all the latest version. That doesn't mean there are no issues lurking in transitive dependencies, of course.

      As for getting an LLM to review the code, I think we can get all opinionated very fast. For instance, when I was eyeballing the code, some of the enum methods converting to/from strings made me think "this could've been a single #[derive] with strum." That would make the code in provider.rs a lot more concise, at the cost of importing one crate (with no dependencies!)
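
      Roughly what I mean (sketch only; the variant names are made up, not the actual provider.rs):

        use strum::{Display, EnumString};

        // One derive replaces the hand-written to/from string conversions.
        #[derive(Debug, PartialEq, Display, EnumString)]
        #[strum(serialize_all = "lowercase")]
        enum Provider {
            Anthropic,
            OpenAi,
            Ollama,
        }

        fn main() {
            let p: Provider = "ollama".parse().unwrap(); // EnumString provides FromStr
            assert_eq!(p, Provider::Ollama);
            println!("{p}"); // Display prints "ollama"
        }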

      Lastly, for fun, I decided to get DeepSeek V4 Pro (with Max thinking) to "audit" the codebase. The output mentioned no obvious signs of hidden telemetry, but it did note that the project sets the panic strategy to "abort", which I have strong opinions on... Presumably the OP wanted to avoid linking against libunwind to save a few kilobytes of binary size, but now you have a binary that immediately aborts and doesn't even give the user a stack trace of what just crashed. I would rather have a ~50 KiB larger binary if it means getting useful debug info during a panic. Additionally, if there are async tasks that panic, they can't be recovered to display a generic error message; instead the whole process just aborts.
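
      To make that last point concrete, a sketch assuming tokio: with panic = "abort" the process dies the moment the task panics, while the default "unwind" strategy lets the harness catch it at the task boundary.

        use tokio::task;

        #[tokio::main]
        async fn main() {
            // A tool-call task that panics mid-flight.
            let handle = task::spawn(async {
                panic!("tool blew up");
            });

            // Only reachable with panic = "unwind": the panic is caught at the
            // task boundary and the TUI can show an error instead of dying.
            match handle.await {
                Err(e) if e.is_panic() => eprintln!("tool failed, session still alive"),
                _ => {}
            }
        }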

    • gidellav 49 minutes ago
      Thanks! Funnily enough, a good chunk of the coding was done by DeepSeek v4 Flash, while I hand-wrote parts of the TUI logic (DeepSeek kept failing on certain cursor-moving logic), and I fully managed the memory optimization process (as you can read in another comment I left, it's both a set of compiler optimizations and the use of certain Rust crates that provide more efficient data structures).
      • hiAndrewQuinn 40 minutes ago
        Taking notes and comparing this against my own (non-coding-agent) Rust TUI project, thank you! I'm new to Rust, so this is a helpful baseline.
        • gidellav 28 minutes ago
          No problem, happy to help!
    • kadoban 44 minutes ago
      > I handed it over to DeepSeek v4 Flash in Pi to skim through for any risky business

      Doesn't prompt injection make that a rather flimsy investigation?

  • sergiotapia 1 hour ago
    Given agent harnesses affect so much of the performance of models, it would be great to see some kind of benchmark on how this tool performs compared to claude/codex/opencode/pi etc.
    • gidellav 42 minutes ago
      Hi! While I didn't run any agent benchmark, I already thought of this possible issue, and I tried to approach it on two different levels:

      1. The tools given to the agent are almost the same as the ones defined in OpenCode, except for Skills and Subagents (neither of which is implemented in zerostack)

      2. Zerostack is prompt-based: it ships with a set of .md files, stored in ~/.config/zerostack/prompt, that can be selected from the TUI to activate different 'agents'. As you can see from the README, it is designed to cover the most important features of superpowers + Claude's front-end design, plus git worktree support and Ralph Wiggum loops (both as integrated features)

  • hparadiz 1 hour ago
    This is what I've been waiting for.

    A low-level language. Please, no more scripting-language TUIs!

    • nine_k 36 minutes ago
      Rust, a language with affine types, generics, lifetimes, deep static analysis, hygienic macros, etc., is not low-level. It's nearly as high-level as Haskell (without HKTs, though).

      It just does not rely on a GC and lets you manage resources efficiently. That efficiency is partly due to its being so high-level.

      • gidellav 31 minutes ago
        While I agree that it lets you manage resources efficiently, I don't agree that the efficiency derives from it being high-level; from a purely technical standpoint, I could shave 2-3MB off the memory footprint by writing the code in pure C, as there are some unused parts of Rust's std that cannot be removed without recompiling std.

        This is obviously just a technical aside, as writing an AI TUI in pure C would be rather... ehhh

    • schaefer 56 minutes ago
      There has been no reason to wait... Codex is written in Rust.

      -- So is deepseek-tui.

      • hparadiz 53 minutes ago
        Forgot to add an open source qualifier. I use codex lol
        • andxor 50 minutes ago
          Codex is also open source.
          • hparadiz 35 minutes ago
            I don't really want something owned by a company for my local stuff. I'd prefer it be small and minimalistic. Maybe in the future I'll change my mind and it will be more like a browser but for now I wanna keep it small and local.
            • gidellav 14 minutes ago
              Thanks! I don't think the only advantages are being open and lightweight; you also get some more interesting features, such as Ollama support, integrated Prompts (to compete with superpowers), git worktree integration, and so on.
    • iknowstuff 1 hour ago
      Isn’t codex in rust?