Technology

Systems Engine

Our Applications are built on a solid bedrock of AI technology that works on today’s hardware, scales to the future and is as reliable as regular software.

TNE’s solution enables the use of AI Experts to correlate disparate data sources without the huge expense of re-engineering and “integrating” IT systems.

Using AI Experts we can build usable and useful solutions in a fraction of the time and at a much lower cost.

We have a wide-ranging set of technologies that are patented and unique, built on a foundation of open source technology, so you get the best of both worlds: the choice and speed of open source, plus the unique enterprise features that we know will give you a competitive advantage.

Write it Fast for the Future

A simple visual tool combined with an advanced static description language means you are not writing Python or code in any other language, only simple descriptions that are easy to debug and maintain. We include full revision control and testing tools, so you know that what you are developing will work reliably.

We use an agent-based system that lets you connect dozens, if not hundreds, of specialized AI models. You can view how they work graphically, and because the models are small and lightweight, you can manage them effectively and trust them.
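As a rough illustration only (the structure and model names below are hypothetical, not TNE’s actual API), such a network of agents can be described as a simple directed graph that is easy to inspect or render:

```python
# Minimal sketch (hypothetical, not TNE's actual API): a pipeline of small,
# specialized agents described as a directed graph that can be inspected
# or rendered before anything is executed.

agents = {
    "extract_invoices":  {"model": "sllm-finance-3b", "feeds": ["reconcile"]},
    "extract_contracts": {"model": "sllm-legal-3b",   "feeds": ["reconcile"]},
    "reconcile":         {"model": "sllm-reason-7b",  "feeds": ["summarize"]},
    "summarize":         {"model": "sllm-writer-1b",  "feeds": []},
}

def print_graph(graph: dict) -> None:
    """Print each agent and the agents it feeds, a text stand-in for the visual view."""
    for name, spec in graph.items():
        targets = ", ".join(spec["feeds"]) or "(output)"
        print(f"{name} [{spec['model']}] -> {targets}")

print_graph(agents)
```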

Moreover, rather than generating Python or code in some other language that constantly needs to be updated for the latest version of software, we use a static description file. This frees you from the many dependencies and idiosyncrasies of today’s systems and prepares you for the future.
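A minimal sketch of that idea, assuming a JSON description and a hypothetical run_agent helper: the description is plain data that lives under revision control, and a small generic runner interprets it, so nothing has to be regenerated when libraries change.

```python
import json

# Hypothetical static description: plain data, versioned like any other file.
PIPELINE_JSON = """
{
  "version": "1.0",
  "steps": [
    {"agent": "classify_ticket", "input": "raw_text"},
    {"agent": "draft_reply",     "input": "classify_ticket"}
  ]
}
"""

def run_agent(name: str, payload: str) -> str:
    # Placeholder for the real agent call; assumed, not TNE's API.
    return f"<{name} output for: {payload[:30]}>"

def run_pipeline(description: str, raw_text: str) -> dict:
    """Interpret the static description instead of executing generated code."""
    results = {"raw_text": raw_text}
    for step in json.loads(description)["steps"]:
        results[step["agent"]] = run_agent(step["agent"], results[step["input"]])
    return results

print(run_pipeline(PIPELINE_JSON, "My order arrived damaged, what can I do?"))
```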

Get the Perfect Model

With over one million open source AI models available, there is no need to develop from scratch. Furthermore, there are countless commercial models available. We give you access to all of these models and let you evaluate which ones work best for you.
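One simple way to run such an evaluation (a sketch only; the ask helper, model names and sample questions are placeholders, not part of our product) is to score each candidate on a small labeled sample:

```python
# Sketch of a model bake-off: score each candidate on a few labeled
# examples and keep the best. Replace `ask` with your own inference call.

CANDIDATES = ["small-model-a", "small-model-b", "commercial-model-x"]

SAMPLES = [
    ("Is 'refund requested' a billing issue? Answer yes or no.", "yes"),
    ("Is 'password reset' a billing issue? Answer yes or no.", "no"),
]

def ask(model_name: str, prompt: str) -> str:
    # Stand-in for a real inference call; swap in your chosen runtime here.
    return "yes"

def accuracy(model_name: str) -> float:
    hits = sum(ask(model_name, q).strip().lower() == a for q, a in SAMPLES)
    return hits / len(SAMPLES)

scores = {m: accuracy(m) for m in CANDIDATES}
best = max(scores, key=scores.get)
print(scores, "->", best)
```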

And our growing community of partners is developing proprietary models just for your industry. They can also build custom models just for you, models that become the heart of your new competitive advantage.

Keep Your Data and Models Private

While we can run any proprietary model, our focus is on open models that you can control and manage. That means your data stays private.

Even better, we can run very small models that stay on your PC or phone, ensuring even more privacy and security.
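For example (a sketch under our own assumptions, using the open source Hugging Face transformers library and a placeholder model name rather than anything TNE ships), a sub-8B model can answer questions without the prompt or the response ever leaving your machine:

```python
# One common way to run a small open model entirely on your own machine,
# using the Hugging Face `transformers` library. Replace the placeholder
# model name with any small (<8B parameter) open model you are licensed to use.

from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "your-org/your-small-model"  # placeholder

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)

prompt = "Summarize our Q3 expense policy in one sentence."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)

# The prompt and the answer never leave this machine.
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```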

With us, your data is your data.

Run it Fast Locally

It’s a challenge to have all this running so that it scales and you can manage it with ease. Our Systems kernel combines the latest advances in web-scale distributed computing with the special needs of generative AI models. We transparently run models across PCs, mobile devices, department servers, corporate machines and the cloud, giving you the best tradeoff between cost and performance.
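As a toy illustration of that tradeoff (purely hypothetical numbers and logic, not the kernel itself), a placement rule might pick the cheapest tier whose memory can hold a model’s weights:

```python
# Toy placement rule (illustrative only): pick the cheapest tier whose
# memory can hold the model's weights.

TIERS = [  # (name, available memory in GB, relative cost per hour)
    ("phone",        6,   0.0),
    ("pc",          16,   0.0),
    ("dept_server", 64,   1.0),
    ("cloud_gpu",  160,   8.0),
]

def place(model_gb: float) -> str:
    """Return the lowest-cost tier that can fit the model."""
    fitting = [t for t in TIERS if t[1] >= model_gb]
    return min(fitting, key=lambda t: t[2])[0] if fitting else "no tier fits"

print(place(4))    # a ~4 GB quantized small model -> phone
print(place(40))   # a mid-size model -> dept_server
```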

The technology has matured significantly in the last year; the critical piece is the Small Agent Framework for the Enterprise (SAFE™ AI). Instead of large, uncontrolled models, the heart of the new approach takes small Large Language Models (sLLMs), programs each one individually and puts them into a framework that checks the input and output of each. Such a system is controllable and easy to deploy. Here is how it works:

Small means really small: we work with models that are 100-1,000x smaller than the likes of ChatGPT. Small models (with fewer than 8B parameters) have some terrific advantages. They run fast because the number of computations is significantly reduced. They require less memory, so they can fit on smaller machines. All this combines into models that run nearly instantly on PCs, phones or inexpensive commodity servers. This reduces cost and enables a whole generation of open source models that become your proprietary models and can’t leak data out of your enterprise.
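A quick back-of-envelope calculation shows why (the bytes-per-parameter figures are common rules of thumb, not TNE measurements): weight memory is roughly the parameter count times the bytes used per parameter.

```python
# Back-of-envelope weight-memory estimate (rule-of-thumb figures):
# bytes needed ~= parameters * bytes per parameter.

def weight_gb(params_billion: float, bytes_per_param: float) -> float:
    return params_billion * 1e9 * bytes_per_param / 1e9

print(weight_gb(8,   2.0))   # 8B model at 16-bit  -> ~16 GB
print(weight_gb(8,   0.5))   # 8B model at 4-bit   -> ~4 GB, fits a laptop or phone
print(weight_gb(800, 2.0))   # a 100x larger model -> ~1,600 GB, data-center territory
```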

Small models know less than bigger ones, so we solve that problem by deploying many agents working together. Each agent is specialized and highly tuned. This makes it harder for them to hallucinate and easier for us to control them. By deploying a whole series of agents, we get the same power as a large model, but with the ability to tune and control each one.
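A sketch of that control, in the spirit of the SAFE framework described above (the checks and the small_model stand-in are our assumptions, not the real implementation): each small agent sits behind simple input and output checks.

```python
# Sketch of a guarded agent: check each small model's input and output
# rather than trusting it blindly.

def small_model(prompt: str) -> str:
    # Stand-in for a call to one specialized, tuned sLLM.
    return "Invoice 1042 matches purchase order 88."

def check_input(prompt: str) -> None:
    if not prompt.strip():
        raise ValueError("empty prompt")
    if len(prompt) > 4000:
        raise ValueError("prompt too long for this agent")

def check_output(answer: str) -> None:
    if not answer.strip() or "I don't know" in answer:
        raise ValueError("agent could not answer reliably")

def guarded_agent(prompt: str) -> str:
    check_input(prompt)
    answer = small_model(prompt)
    check_output(answer)
    return answer

print(guarded_agent("Does invoice 1042 match a purchase order?"))
```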

While most systems require that you write a custom computer program to put these agents into a workflow or system, we use the latest frameworks to make it possible to execute dozens, if not hundreds, of agents in a coordinated way.
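In plain Python terms (an illustrative sketch, not our orchestration layer), coordinating many fast, independent agents looks like running them concurrently and gathering their results:

```python
import asyncio

# Sketch of coordinated execution: run many independent agents
# concurrently, then hand their results to a downstream step.

async def agent(name: str, task: str) -> str:
    await asyncio.sleep(0.01)  # stands in for a fast sLLM call
    return f"{name}: done ({task})"

async def main() -> None:
    tasks = [agent(f"agent-{i}", "extract one field") for i in range(100)]
    results = await asyncio.gather(*tasks)  # dozens or hundreds, coordinated
    print(len(results), "agents finished")

asyncio.run(main())
```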

And we do this without generating more hard-to-maintain code, but by creating static manifests that are easy to read, maintain and deploy.

And most important, we use all these components to provide enterprise features. We have special agents that just explain the methodology used for a recommendation. We use other agents as “skeptics”: they analyze a memo and tell you all the reasons it might be inaccurate (so you can fix that and create an even more air-tight argument). Finally, we have logging agents that record events and determine what is happening in the system.
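A miniature version of the skeptic and logging pattern (hypothetical prompts and helper; only the shape of the idea is real):

```python
import logging

# Sketch of the "skeptic" pattern: one agent drafts, a second agent lists
# reasons the draft might be wrong, and a logging step records both for audit.

logging.basicConfig(level=logging.INFO)

def call_agent(role: str, text: str) -> str:
    # Stand-in for a real sLLM call with a role-specific system prompt.
    if role == "skeptic":
        return "Claim 2 cites last year's figures; no source given for claim 3."
    return "Draft memo: we should renew the vendor contract."

memo = call_agent("writer", "Should we renew the vendor contract?")
critique = call_agent("skeptic", memo)

logging.info("memo=%r", memo)        # the logging agent's job, in miniature
logging.info("critique=%r", critique)
```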

As Reliable as Regular Software

All the power in the world doesn’t mean anything if the answers are not correct. So we use a combination of the most advanced AI techniques along with our enterprise experience to build AI systems that are as reliable as regular software.

We deploy the latest insights to keep these systems working properly. We constantly monitor the output and look for inconsistencies, and we spot problems with the data that feeds our systems. And we run multiple models simultaneously to catch problems in reasoning and logic.
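For instance (an illustrative sketch with a placeholder ask helper and model names), running several models on the same question and flagging any disagreement is one simple form of that cross-checking:

```python
from collections import Counter

# Sketch of cross-checking: ask several models the same question and
# flag the answer for review if they disagree.

def ask(model: str, question: str) -> str:
    # Stand-in for real inference; placeholder model names and answers.
    return {"model-a": "yes", "model-b": "yes", "model-c": "no"}[model]

def cross_check(question: str, models: list[str]) -> str:
    votes = Counter(ask(m, question) for m in models)
    answer, count = votes.most_common(1)[0]
    if count < len(models):
        return f"{answer} (flagged for review: models disagreed {dict(votes)})"
    return answer

print(cross_check("Is this invoice a duplicate?", ["model-a", "model-b", "model-c"]))
```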