UK AI chip company Fractile has today exited stealth and announced $15 million in Seed funding.
Founded in 2022 by 28-year-old artificial intelligence PhD Walter Goodwin, Fractile is building its first AI chip, capable of running state-of-the-art AI models up to 100x faster and 10x cheaper than existing hardware.
Today’s AI chips are the biggest constraint on better AI performance. Every large AI company currently relies on fundamentally similar chips, which are well suited to training LLMs but not to inference (the process of running live data through a trained model).
This means AI models are very expensive to run, their performance is inhibited, and their potential future capabilities are restricted. It also makes it hard for AI model builders to deliver meaningful differentiation.
There are two paths available to a company attempting to build better hardware for AI inference. The first is specialisation: homing in on very specific workloads and building chips that are uniquely suited to those requirements.
Because model architectures evolve rapidly in the world of AI, whilst designing, verifying, fabricating and testing chips takes considerable time, companies pursuing this approach face the problem of shooting for a moving target whose exact direction is uncertain.
The second path is to fundamentally change the way that computational operations themselves are performed, create entirely different chips from these new building blocks, and build massively scalable systems on top of these. This is Fractile’s approach, which will unlock breakthrough performance across a range of AI models both present and future.
A Fractile system will achieve astonishing performance on AI model inference – initial targets are 100x faster and 10x cheaper – by using novel circuits to execute 99.99 per cent of the operations needed to run model inference.
A key aspect is a shift to in-memory compute, which removes the need to shuttle model parameters to and from processor chips, instead baking computational operations into memory directly.
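To see why data movement matters, the toy cost model below compares the energy spent shuttling weights in from off-chip memory against the energy of the arithmetic itself. The per-operation figures are rough, order-of-magnitude values of the kind cited in the computer-architecture literature, not Fractile’s numbers, and the model is purely illustrative.

```python
# Illustrative back-of-envelope model: why fetching weights from off-chip
# memory dominates the energy cost of inference on conventional hardware.
# The per-operation energies below are rough, order-of-magnitude figures,
# NOT measurements of any specific chip.

PARAMS = 70e9          # assumed model size: 70B parameters
BYTES_PER_PARAM = 2    # fp16 weights

ENERGY_DRAM_PER_BYTE = 10e-12   # ~10 pJ per byte moved from DRAM (rough)
ENERGY_MAC = 1e-12              # ~1 pJ per multiply-accumulate (rough)

# Generating one token touches every weight roughly once (one MAC per parameter).
move_energy = PARAMS * BYTES_PER_PARAM * ENERGY_DRAM_PER_BYTE
compute_energy = PARAMS * ENERGY_MAC

print(f"energy moving weights : {move_energy:.3f} J per token")
print(f"energy doing the maths: {compute_energy:.3f} J per token")
print(f"data movement is ~{move_energy / compute_energy:.0f}x the compute cost")
```

In-memory compute attacks the first line of that budget: if the multiply-accumulates happen where the weights already live, the per-token movement term largely disappears.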
Fractile technology is fully compatible with the unmodified silicon foundry processes that all leading AI chips are built on.
Not only will Fractile provide vast speed and cost advantages, it will do so at substantially reduced power. Power efficiency – measured in Tera Operations Per Second per Watt (TOPS/W) – is the biggest fundamental limitation when it comes to scaling up AI compute performance (see notes below for more detail).
Fractile’s system is targeting 20x the TOPS/W of any other system known to the company today. This allows more users to be served in parallel per inference system, with – in the case of LLMs, for example – more words per second returned to those users, making it possible to serve many more users for the same cost.
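As a rough illustration of how power efficiency caps throughput, the sketch below works out how many LLM tokens per second a fixed power budget can support at different TOPS/W figures. The model size, power budget, and the two-operations-per-parameter-per-token rule of thumb are assumptions for the example, not figures from Fractile.

```python
# Rough sketch: tokens/second available from a fixed power budget,
# assuming a dense LLM needs ~2 operations per parameter per generated token.
# All numbers here are illustrative assumptions, not Fractile's targets.

PARAMS = 70e9            # assumed 70B-parameter model
OPS_PER_TOKEN = 2 * PARAMS
POWER_BUDGET_W = 10_000  # assumed 10 kW power budget

def tokens_per_second(tops_per_watt: float) -> float:
    """Throughput supported by the power budget at a given efficiency."""
    ops_per_second = tops_per_watt * 1e12 * POWER_BUDGET_W
    return ops_per_second / OPS_PER_TOKEN

baseline = tokens_per_second(1.0)    # hypothetical 1 TOPS/W system
improved = tokens_per_second(20.0)   # a system with 20x the TOPS/W

print(f"baseline  : {baseline:,.0f} tokens/s across all users")
print(f"20x TOPS/W: {improved:,.0f} tokens/s across all users")
```

At a fixed power budget, every multiple of TOPS/W translates directly into more tokens per second, which can be spent either on faster responses per user or on more users served in parallel, which is the trade-off described above.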
Fractile’s performance leap on inference will accelerate AI’s ability to solve the biggest scientific and computationally heavy problems, from drug discovery to climate modelling to video generation.
Kindred Capital, NATO Innovation Fund, and Oxford Science Enterprises led the round, with participation from Cocoa and Inovia Capital, together with angel investors including Hermann Hauser (co-founder, Acorn, Amadeus Capital), Stan Boland (ex-Icera, NVIDIA, Element 14 and Five AI), and Amar Shah (co-founder, Wayve). To date, Fractile has raised $17.5 million in total funding.
According to Dr Walter Goodwin, CEO and Founder of Fractile:
“In today’s AI race, the limitations of existing hardware — nearly all of which is provided by a single company — represent the biggest barrier to better performance, reduced cost, and wider adoption.
Fractile’s approach supercharges inference, delivering astonishing improvements in terms of speed and cost. This is more than just a speed-up – changing the performance point for inference allows us to explore completely new ways to use today’s leading AI models to solve the world’s most complex problems.”
John Cassidy, Partner at Kindred Capital, said:
“AI is evolving so rapidly that building hardware for it is akin to shooting at a moving target in the dark.
Because Fractile’s team has a deep background in AI, the company has the depth of knowledge to understand how AI models are likely to evolve, and how to build hardware for the requirements of not just the next two years, but 5-10 years into the future. We’re excited to partner with Walter and the team on this journey.”
According to Stan Boland, angel investor:
“There’s no question that, in Fractile, Walter is building one of the world’s future superstar companies. He’s a brilliant AI practitioner but he’s also listening intently to the market so he can be certain of building truly compelling products that other experts will want to use at scale.
To achieve this, he’s already starting to build one of the world’s best teams of semiconductor, software and tools experts with track records of flawless execution. I’ve no doubt Fractile will become the most trusted partner to major AI model providers in short order.”
Fractile has already built a world-class team with senior hires from NVIDIA, ARM and Imagination, and has filed patents protecting key circuits and its unique approach to in-memory compute.
The company is already in discussions with potential partners and expects to sign partnerships ahead of production of the company’s first commercial AI accelerator hardware.
Fractile will use the funding to continue to grow its team and accelerate progress towards the company’s first product.