9 comments

  • adityaathalye 12 minutes ago
    Yeah, "Nobody ever got fired for purchasing IBM"... a story as old as time itself.

    But that is the "fear" side of the enterprise sales equation... The "greed" side of it is for the buyer to make the long / short hedge.

    The exec who gets the value of the working product can come out shining while their peers are furiously backpedalling next year. This consummate exec does it by name-associating with their "main bet", which is optically great in the immediate term but totally out of their control (the big-corp vendor will drag its feet, as in every SAP integration failure they've seen), while keeping a sense of agency by running an off-books skunkworks project that actually works and saves the day.

    A fine needle to thread for the upstart, but better than standing outside the game.

  • somat 1 hour ago
    "When the software is being written by agents as much as by humans, the familiar-language argument is the weakest it has ever been - an LLM does not care whether your codebase is Java or Clojure. It cares about the token efficiency of the code, the structural regularity of the data, the stability of the language's semantics across releases."

    Isn't familiarity with the language even more of a factor with an LLM? The language they do best with is the one with the largest corpus in the training set.

    • dgb23 45 minutes ago
      Familiarity matters to some degree. But there are diminishing returns I think.

      Stability, consistency and simplicity matter much more than this notion of familiarity, as long as the corpus is sufficiently large (there's lots of code to train on). Another important factor is how clear and accessible libraries, especially standard libraries, are.

      Take Zig for example. Very explicit and clear language, easy access to the std lib. For a young language it is consistent in its style. An agent can write reasonable Zig code and debug issues from tests. However, it is still unstable and APIs change, so LLMs get regularly confused.

      Languages and ecosystems that are more mature and take stability very seriously, like Go or Clojure, don't have the problem of "LLM hallucinates APIs" nearly as much.

      The thing with Clojure is also that it's a very expressive and very dynamic language. You can hook an agent up to the REPL and it can very quickly validate or explore things. With most other languages it needs to change a file (multiple, more complex operations), then write an explicit test, then run that test, just to get the same result as "defn this function and run a few invocations".
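      As a rough illustration of that REPL round-trip (a sketch with a hypothetical `slugify` function, not from the article):

```clojure
;; In a live REPL an agent can define a function and exercise it
;; immediately, in one eval round-trip -- no file edit, no test runner.
(require '[clojure.string :as str])

(defn slugify
  "Lower-case a title and replace whitespace runs with dashes."
  [title]
  (str/replace (str/lower-case title) #"\s+" "-"))

(slugify "Hello REPL World") ;; => "hello-repl-world"
```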

    • ehnto 1 hour ago
      And they're very sensitive to new releases, often struggling after a major release of a framework, for example: tripping up on minor stuff like new functions, changed signatures, etc.

      A stable, mature framework is then the best-case scenario. New or rapidly changing frameworks will be difficult, wasting lots of tokens on discovery and corrections.

  • JSR_FDED 2 hours ago
    The core insight, that enterprises select products on familiarity over anything else, is valuable. I’m going to keep it in mind for future customer engagements.
  • xivzgrev 39 minutes ago
    That's just human nature, to prefer what's familiar.

    The insight here is that this also still applies to huge enterprise contracts where supposedly more rational decision making should apply.

  • avereveard 50 minutes ago
    Eh, "the enemy" section skips an important bit that was spelled out by the buyer in the intro and not listened to: if the small vendor goes bust, who maintains the system afterwards? If you plan in 10-year cycles, greenfield buys look scary.

    That's why VCs look favorably on startups which go through the motions of setting up a partner-led sales channel: an established partner taking maintenance contracts bridges the lifecycle gap between the two realities.

    But no, corporate is bad, I guess.

    • dgb23 13 minutes ago
      It's an interesting problem for small businesses that want to sell stuff that will be used and relied on for a very long time.

      In a sense, they have to make themselves obsolete. Either by making sure they are a part of a larger network, or by making sure that the org itself can own the product or service.

  • BrenBarn 1 hour ago
    > And they put it succinctly: buying from a small innovative company is brave while buying from a big, well recognised name is an insurance policy and the risk-averse buyer must have the insurance.

    As the article notes, the alternatives from the large companies suck. So this is like buying fire insurance from a company that promptly sets fire to your house. You are buying the insurance while knowing you will need it because the disaster is already happening.

  • sublinear 1 hour ago
    > Enterprise knowledge has always been as much a human problem as a technology one. Nobody wants to do the structuring work, and every prior architecture demanded that somebody do the structuring work rather than their actual job

    This is correct and very agreeable to everyone, but then, after some waffle, they write this:

    > Structure, for the first time, can be produced from content instead of demanded from people

    These quotes are very much at odds. Where is this structure and content supposed to come from if you just said that nobody makes it? Nowhere in that waffle is it explained clearly how this is really supposed to work. If you want to sell AI and not just grift, this is the part people are hung up on. Elsewhere in the article are stats on hallucination rates of the bigger offerings, and yet there's nothing to convince anyone this will do better other than a pinky promise.

    • dgb23 33 minutes ago
      I think the explanation comes later in the article:

      "It is graph-native - not a vector database with graph features bolted on, not a document store with a graph view, but a graph at it's core - because the multi-hop question intelligent systems actually have to answer cannot be answered by cosine similarity over chunked text, no matter how much AI you paste on top."

      And

      "It has a deterministic harness around its stochastic components. The language model proposes but the scaffolding verifies. Every inference, every tool call, every state change is captured in an immutable ledger as first-class data and this is what makes non-deterministic components safe to deploy where determinism is required."
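
      A minimal sketch of what such a proposer/verifier split with an append-only ledger could look like (hypothetical names, not the product's actual API):

```clojure
;; Hypothetical sketch: a deterministic harness around a stochastic step.
(def ledger (atom []))  ;; append-only log of every step, as plain data

(defn record! [entry]
  (swap! ledger conj entry)
  entry)

(defn run-step
  "Ask a (possibly stochastic) `propose` fn for an answer, check it with a
  deterministic `verify` fn, and record the whole exchange in the ledger.
  Returns the proposal only when verification passes."
  [propose verify input]
  (let [proposal (propose input)           ;; e.g. an LLM call
        ok?      (verify input proposal)]  ;; deterministic check
    (record! {:input input :proposal proposal :accepted ok?})
    (when ok? proposal)))
```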
