5 comments

  • julius 32 minutes ago
    Click coordinates. Agentic GUI use is really annoying when the multi-modal agent cannot click on x,y coordinates.

    I tested Qwen3.6, Gemma4, Nemotron3-nano-omni. They fully hallucinate x,y coords. (did not try GLM-5V yet)

    GPT-5.5 can easily do it. But also Vocaela, a tiny 500M model, is quite good at it. Hope they improve the training for x,y clicking soon on the smallish multi-modals.

    Recently slopped an HTTP service together just so my local models can click, instead of relying on all the wild ways agents currently hack into the browser (browser-use, browser-harness, agent-browser, dev-browser, etc.): https://github.com/julius/vocaela-click-coords-http
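
    For anyone curious what the glue looks like, here is a minimal sketch of such a click service (endpoint shape, port, and helper names are my own assumptions; the actual vocaela-click-coords-http API may differ). The agent POSTs JSON coordinates, the host validates them against the screen bounds, and only then hands them to a click backend, which in a real setup might be pyautogui.click.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def parse_click(body, width, height):
    """Parse a JSON click request and reject out-of-bounds coordinates."""
    data = json.loads(body)
    x, y = int(data["x"]), int(data["y"])
    if not (0 <= x < width and 0 <= y < height):
        raise ValueError(f"({x}, {y}) is outside the {width}x{height} screen")
    return x, y

class ClickHandler(BaseHTTPRequestHandler):
    # Swappable backend so a dry run needs no display; a real setup
    # might call pyautogui.click(x, y) here instead.
    click_backend = staticmethod(lambda x, y: print(f"click at ({x}, {y})"))
    screen = (1920, 1080)

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        try:
            x, y = parse_click(self.rfile.read(length), *self.screen)
        except (ValueError, KeyError):  # bad JSON, missing keys, out of bounds
            self.send_response(400)
            self.end_headers()
            return
        self.click_backend(x, y)
        self.send_response(204)
        self.end_headers()

# To serve locally:
# HTTPServer(("127.0.0.1", 8931), ClickHandler).serve_forever()
```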

    • cyanydeez 25 minutes ago
      This sounds a lot like another Hacker News post from the last few days. It's the same problem image generators have with a prompt like "produce the numbers 1-50 in a spiral pattern": they can't count properly. But if you break it into a raster/vector two-step, where the model first produces the visual content and then an SVG overlay, it's completely capable.

      Have you tried doing a two step: review the image, then render a vector?
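
      To make the two-step concrete, here is a toy sketch of the vector half (my own example, not from the post I'm remembering): the counting and placement are done in code, emitting the numbers 1-50 as SVG text elements along an Archimedean spiral, so the model only has to produce or review the raster.

```python
import math

def spiral_svg(n=50, size=400, spacing=3.5):
    """Emit an SVG string with the numbers 1..n placed along an
    Archimedean spiral (radius r = spacing * theta)."""
    cx = cy = size / 2
    parts = [f'<svg xmlns="http://www.w3.org/2000/svg" width="{size}" height="{size}">']
    for i in range(1, n + 1):
        theta = 0.6 * i      # fixed angular step between consecutive numbers
        r = spacing * theta  # radius grows linearly with the angle
        x = cx + r * math.cos(theta)
        y = cy + r * math.sin(theta)
        parts.append(f'<text x="{x:.1f}" y="{y:.1f}" font-size="10">{i}</text>')
    parts.append("</svg>")
    return "\n".join(parts)
```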

      • julius 15 minutes ago
        Maybe there is a smart trick to get them to do the right thing, but the things I tried did not work.

        At one point I had a smaller model draw bounding boxes around everything that looked interactable, with labels like "e3", and then asked the model to tell me "click on e3". It did not work; in my tests it was pretty much as bad as raw x,y.
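
        For reference, the mechanics of that labeling approach look roughly like this (helper names are hypothetical, not from my actual code): assign labels to detected boxes, then map the model's "click on e3" reply back to a box center. The detection and reply quality are where it falls down, not this plumbing.

```python
import re

def label_boxes(boxes):
    """Assign labels e1, e2, ... to detected boxes given as (x0, y0, x1, y1)."""
    return {f"e{i}": box for i, box in enumerate(boxes, start=1)}

def resolve_click(reply, labeled):
    """Map a model reply like 'click on e3' back to the center of that box."""
    m = re.search(r"\be(\d+)\b", reply)
    if m is None or f"e{m.group(1)}" not in labeled:
        raise ValueError(f"no known element label in reply: {reply!r}")
    x0, y0, x1, y1 = labeled[f"e{m.group(1)}"]
    return ((x0 + x1) // 2, (y0 + y1) // 2)
```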

  • gertlabs 2 hours ago
    GLM-5V-Turbo is a model I wanted to like for its speed and API reliability, but it didn't perform well in our coding and reasoning testing. More recent open-source models have made it obsolete. GLM 5.1 is so many light years ahead of it on everything except speed that I'm not sure why it's still being served.

    Comprehensive evaluation results at https://gertlabs.com/rankings

    • gruez 41 minutes ago
      >but it didn't perform well in our coding and reasoning testing

      >Comprehensive evaluation results at https://gertlabs.com/rankings

      But if you go to the linked site, it seems like the only thing that's part of the evaluation is how well the models play various games? I suppose that counts as "reasoning", but I don't see how coding ability is tested.

      • gertlabs 22 minutes ago
        "Games" is loosely defined here, as we run the bench across hundreds of unique environments. For some, the models write code to play a game, either one-shot or via a harness where they can iterate and use tools. Some they play directly, making a decision on each game tick. Some are real-time, giving the models a harness where they can write code handlers or submit decisions to interact with environments directly.

        Coding is what we test for most heavily. Testing this via a game format (instead of correct/incorrect answers) allows us to score code objectively, scale to smarter models, and directly compare performance to other models. When we built the first iteration last year, I was surprised by how well it mapped to subjective experience with using models for coding. Games really are great for measuring intelligence.
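
        A minimal sketch of what a tick-based harness like that could look like (the toy environment and names are my own assumptions, not our actual code): the model-backed policy is called once per tick, and the episode's score is whatever the environment hands back, so scoring stays fully objective.

```python
class ToyEnv:
    """Trivial stand-in environment: one point per 'advance' until a target."""
    def __init__(self, target=3):
        self.target = target
        self.state = 0

    def reset(self):
        self.state = 0
        return self.state

    def step(self, action):
        reward = 1 if action == "advance" else 0
        self.state += reward
        return self.state, reward, self.state >= self.target

def run_episode(env, policy, max_ticks=1000):
    """Run one scored episode: each tick the policy (e.g. a model call)
    sees the current observation and returns an action."""
    obs = env.reset()
    total = 0
    for _ in range(max_ticks):
        obs, reward, done = env.step(policy(obs))
        total += reward
        if done:
            break
    return total
```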

    • BugsJustFindMe 43 minutes ago
      This may be a strange request, but is it at all possible to include Cursor's Composer models in your tests?
      • gertlabs 0 minutes ago
        I am curious about the model, but for the most part, we have access to the same models that you do and only test models with standalone API releases.
    • XYen0n 1 hour ago
      GLM-5.1 does not support image input.
    • scotty79 1 hour ago
      I think the point is to use them both, with GLM 5.1 delegating vision tasks to GLM-5V-Turbo.
  • _pdp_ 6 minutes ago
    We just migrated an AI agent from Kimi to GLM and frankly I am surprised by the results. It feels premium.

    However, both Kimi and GLM can end up in doom loops, so be careful how you use them. Without a proper harness the agent can easily get into tricky situations with no escape.

    We had to develop new heuristics in our cloud harness just because of this, but I am really grateful that we did, as the platform now feels more robust.

    A small price to pay for model plug & play!
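
    One common heuristic of this kind, sketched under my own assumptions (not our actual harness code): a sliding-window circuit breaker that flags the agent when it keeps repeating the same tool call, which is the classic doom-loop signature.

```python
from collections import deque

class LoopGuard:
    """Heuristic circuit breaker: flag the agent when the same action
    (e.g. tool name plus arguments) repeats too often in a sliding window."""
    def __init__(self, window=10, max_repeats=3):
        self.history = deque(maxlen=window)
        self.max_repeats = max_repeats

    def record(self, action):
        """Record an action; return True when the loop threshold is exceeded."""
        self.history.append(action)
        return self.history.count(action) > self.max_repeats
```

    When `record` returns True, the harness can interrupt the agent, inject a fresh hint, or abort the run instead of burning tokens.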

  • desireco42 3 minutes ago
    I've been using GLM pretty much exclusively for the last 6-8 months. I have access to Anthropic and OpenAI models and others, but I always keep returning to GLM. It isn't the best, and sometimes I'll go to Codex to help it, but overall, especially with Turbo, it's a good everyday model.

    Turbo makes a huge difference in everyday use because it saves you time, and you're not always in the mood to wait endlessly.

  • muddi900 1 hour ago
    z.ai will use quantized models in off hours. Buyer beware
    • _aavaa_ 1 hour ago
      Do you have proof for this?
    • yogthos 1 hour ago
      I have a subscription and I have not seen any difference in performance during on/off hours. What exactly are you basing this on?