Proposed LLM Optimization Metric: Minimum number of processor cycles needed to reach a certain reality
Let’s say you want to get a “1” into a register. Easy: “mov eax, 1”. A single instruction, about as atomic as an operation gets. (Note that “and eax, 1” only keeps the low bit, so it yields a 1 only if that bit was already set.)
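As a toy sketch of the register example (modeling the register as a plain Python int; the value 0xDEADBEEF is an arbitrary stand-in for whatever was in the register before):

```python
# Toy model: the register is just an int.
eax = 0xDEADBEEF      # arbitrary prior contents

mov_result = 1        # mov eax, 1: always produces 1, in one instruction
and_result = eax & 1  # and eax, 1: keeps only the low bit
                      # (happens to be 1 here because 0xDEADBEEF is odd)
```

The point is that “mov” reaches the goal state unconditionally, while “and” only does so for inputs whose low bit is already set.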
But now let’s say you want to get to the end result of increasing a company’s revenue.
Of course a human can do that, but so can a computer, especially in the form of an agentic AI system.
Now I propose a new metric for LLM effectiveness: How many tokens does an LLM need to output to eventually achieve the goal a human gives it? Fewer is better.
For example, increasing revenue.
Essentially it means: What is the shortest path along a network of letters, tokens, sentences and epic success stories that an LLM can take to achieve a given task?
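The metric above can be sketched as a small harness (a minimal illustration, assuming a hypothetical `step` function that performs one agent action and reports its token count, and a `goal_reached` predicate; no real LLM API is called — the toy agent below is invented purely for demonstration):

```python
def tokens_to_goal(step, goal_reached, max_steps=100):
    """Run the agent until the goal predicate holds; return the total
    tokens the agent emitted, or None if the goal was never reached."""
    total_tokens = 0
    state = None
    for _ in range(max_steps):
        state, tokens = step(state)   # one agent action + its token count
        total_tokens += tokens
        if goal_reached(state):
            return total_tokens
    return None

# Toy agent: each step appends one word; "tokens" are character counts
# here, purely for illustration.
def toy_step(state):
    state = (state or []) + ["word"]
    return state, len(state[-1])

# Goal: produce at least three words.
score = tokens_to_goal(toy_step, lambda s: len(s) >= 3)
```

Under this metric, two agents given the same goal are compared by their `tokens_to_goal` score, and the lower one wins.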
I believe that if we all live guided by that principle, we can work together, in the sense that our agentic collaborative LLM systems work together, to increase the GDP of the entire world, shared equally, through our company-controlling LLMs (one per shareholder).
And in any case: We will want LLMs to consume less energy. Why spend energy running LLMs expensively (e.g. with overcomplicated chains of thought) when you can get the same quality of output with fewer, more effective prompts?
I think economic cooperation is one of the most important topics of our time. We don’t care about LLM research specifically; we are more entrepreneurs. Of course, it’s much easier to get started in this if you already have equity in a company you want to scale together with other companies. If you’ve found your niche and already scaled in B2B, but now also want to scale in B2C, the potential is enormous!
Write a comment if you found a smart way to cooperate, and how you made each other wealthy that way!