For those who remember, in the early days of the PC x86 market, you needed a Maths Co-Processor to handle the heavy number-crunching in large-scale data processing. In those days, I was working in Oil and Gas, and it was the early days of GPS. What has become increasingly apparent to me is that the story of that combination, a data-intensive system paired with an external "helper", has some clear and informative parallels with our situation today.
To get accurate readings from the systems available back then, we needed either to have two locations set up or to sit for five days using the Transit satellite network. One location was a known point, which had been painstakingly surveyed onto a first-, second- or third-order trig pillar (triangulation station). The second was a new point whose location we wanted to record to a high degree of accuracy.
However, in those early days, the US military injected noise into the C/A (Coarse/Acquisition) code used by civilian receivers to degrade the signal and make positioning less accurate. To overcome this degradation, we used something called Differential GPS. This involved recording the positions of two receivers, one on the known point and one at the new point, and calculating the error introduced by the noise. By continually adjusting the data so the known receiver on the trig point always read correctly, we could apply those same adjustments (i.e. the differential) to the remote receiver.
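To make the differential idea concrete, here is a minimal sketch in Python. The coordinates and variable names are invented for illustration, not taken from any real survey; the point is simply that the error measured at the known base station is used to correct the remote receiver's reading.

```python
# Minimal sketch of a differential GPS correction (illustrative values only).
# The base receiver sits on a precisely surveyed trig point, so any difference
# between its known and observed coordinates is the error injected into the
# signal. Applying that same difference to the rover removes most of the noise.

KNOWN_BASE = (51.50000, -0.12000)      # surveyed position of the trig point
observed_base = (51.50031, -0.11972)   # what the receiver on the trig point reports
observed_rover = (51.52148, -0.10355)  # what the remote receiver reports

# The differential: how far the base observation is from the truth.
correction = (KNOWN_BASE[0] - observed_base[0],
              KNOWN_BASE[1] - observed_base[1])

# Apply the same correction to the rover's observation.
corrected_rover = (observed_rover[0] + correction[0],
                   observed_rover[1] + correction[1])

print(f"Correction applied: {correction}")
print(f"Corrected rover position: {corrected_rover}")
```

In practice the corrections were computed continuously across many satellite observations, but the principle is exactly this subtraction, and it had to run constantly on hardware that was never designed for it.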
However, this consumed a lot of processing power, and the CPUs of the day were not designed for that kind of floating-point work, so we needed to install a Maths Co-Processor.
In my last blog, I spoke about how we created the contract analytics space with Seal Software, and how that was only possible because consumer hardware had moved from 32-bit to 64-bit processing.
Fast forward to today, and the situation resembles those early GPS days: external processors (GPUs, LPUs, TPUs, NPUs, etc.) are needed to handle the vast number of calculations used by LLMs. In time, these will go the way of the maths co-processor; today’s NPUs and GPUs will be absorbed into a single system and find their way into our mobile phones.
This repeating cycle is important because, whilst it presents a technical challenge, we have identified a way of resolving it that opens up a significant opportunity for users, enterprises and software developers.
Soon, we will have on-chip LLMs for edge computing, much as we have GPS on every phone. This will allow users to interact with and utilise an LLM that can be tailored to each individual. It will be able to provide information and help in all kinds of situations, so the user never becomes lost in the data wilderness, just as GPS keeps us from getting lost in the physical world.
However, this advance brings us to the technical challenge – let’s call it the start point problem.
I’ll take a simple example. My friend Dave, down at the local pub, has run a small company for years. I can ask him how to run my company and get advice on the various tasks involved. He can probably give me a set of answers that fulfils my needs, and he will charge me very little.
At the other end of the spectrum, I could engage one of the top business consulting firms - KPMG, PwC or Deloitte. I would ask the same questions, but their answers are going to be more expensive and presumably of higher quality. Which starting point is better for me? I have no idea!
The new local LLMs face the same challenge: how to select the best starting point from which to traverse the network and reach the best possible outcome.
Just as triangulation depends on high-quality reference points, LLMs rely on the quality of their inputs and starting conditions. Poor inputs can lead to hallucinations or unreliable outputs, and users will lose their way. Ensuring the starting points are precise and well-calibrated is essential for high-quality outcomes.
However, as LLMs migrate to the edge, they must be reduced in size. There are several ways to accomplish this, such as quantisation, pruning and distillation, but whichever method is used, some information will be lost as the model shrinks. I will not go into the differences and benefits of each method; what matters is that they all share the opportunity to reduce this loss whilst preserving as much accuracy as possible.
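As a rough illustration of that trade-off (a sketch only, not how any particular vendor does it), the snippet below quantises a set of synthetic weights to 8-bit integers and measures how much information is lost when they are mapped back to floats.

```python
import numpy as np

# Rough illustration of lossy model shrinking via uniform 8-bit quantisation.
# The weights and scale here are invented for the example; real schemes
# (per-channel scales, pruning, distillation) are far more sophisticated.

rng = np.random.default_rng(0)
weights = rng.normal(0.0, 0.05, size=10_000).astype(np.float32)

# Map float weights onto 255 integer levels (int8), then back to floats.
scale = np.abs(weights).max() / 127.0
quantised = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
restored = quantised.astype(np.float32) * scale

# The gap between the original and restored weights is the information lost.
error = np.abs(weights - restored)
print(f"Storage: {weights.nbytes} bytes -> {quantised.nbytes} bytes")
print(f"Mean absolute error: {error.mean():.6f}  (max {error.max():.6f})")
```

The model takes a quarter of the space, but every weight now sits slightly off its original value. Reducing that gap while keeping the smaller footprint is exactly the opportunity described above.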
It is this challenge that we at DM are working to solve. We are creating a new market space that addresses an unsolved business problem, with a solution that can truly fly thanks to recent advances in everything from the maths to the hardware. And, much like the creation of the contract analytics space, we will enable our customers to take full advantage and fly with us.