AI Playing Its Part
Partly ticks many familiar start-up boxes: it targets a breathtakingly large market opportunity, its initial investor funding broke records and made it start-up famous, and its founders are intellectually bold, hyper-driven young men. But what I find interesting about the Partly story are the things that make it unusual. Partly’s product uses AI, but not because they want it to do everything that humans do – they want it to do something that only AI can do. It could be fantastically transformative, but not because it will “solve poverty” or possess superintelligence. Its focus is something that affects people every day, what co-founder Levi Fawcett calls a “hidden unsexy problem that no one really cares about”. Indeed, the Partly story is about finding replacement car parts, a subject I didn’t previously know I cared about but now find fascinating. That’s also due to Fawcett’s highly unusual backstory. To be sure, unconventional-founder-of-exciting-start-up stories are a dime a dozen, but this one is something else.
Fawcett was born in 1995 in Nelson, New Zealand. He grew up in the beautiful Waipara Valley in a family of 11, and most of his childhood unfolded during the dying days of an isolationist, anti-tech cult. The success of his start-up, and the fact that it exists at all, is to a large extent informed by the experiences he had there.
Fawcett’s parents belonged to the Full Gospel Mission, known locally as the God Squad or Camp David. In its 1970s and 1980s heyday, it was led by a man called Douglas Metcalf, or “Lord Yahshuah”, and its followers obeyed strict social rules and performed exhausting physical labour on a farm-based commune, where buildings were connected by hidden tunnels. Photographs from the time show an aesthetic that was part medieval pageant and part Middle Eastern culture by way of the Christmas nativity play. There was a walled compound with decorative battlements, heraldry, swords and men in long robes. Everyone had their own wooden staff. But not long before Fawcett was born, the cult’s fortunes changed. In 1989, Metcalf suddenly died and, from that point, the group’s hold weakened. Female members spoke out in the mid-1990s about Metcalf’s sexual abuse, after which the farm-commune dispersed and the group split into two sects. One sect joined another cult; the members of the other continued to meet in each other’s houses. By the time Fawcett was born, his parents were members of the second sect. Their group no longer lived in a commune, but they maintained the same Bible-literalism and extreme control of information from the outside world.
Fawcett, the eldest of nine, was homeschooled and from a very young age he spent a lot of time pulling things apart and reassembling them. He started fixing cars and tractors for his community and, by the age of 11, he was fully reconditioning engines. We caught up online, with Fawcett at Partly in Auckland one week and working in the United Kingdom the next. He tells me that he “felt very capable” as a child. “I thought it was normal … to do relatively complex things.” One of Fawcett’s childhood friends tells me he was known in the homeschool community as a “practical child-genius figure”. He entered a homemade go-kart with a complete steering system into the annual homeschool competition when he was eight. When he was 11, his go-kart was fully motorised and it sprayed flames out the back when it accelerated. Fawcett learnt a lot about risk and failure firsthand, he says, such as staring directly at a welding torch and being blinded for a day.
Mostly though, Fawcett remembers a feeling that deep religious beliefs separated his family from the rest of the world, and a growing suspicion that those beliefs did not stand up. He began to hate what he saw as hypocrisy and lack of proof: I can't see this, I can't experience it, I can't test it – why would I assume that God exists? He tells me that he bought an interlinear Bible, taught himself to read its Ancient Greek and Hebrew, and realised that the different translations meant even the Bible was not absolute. He was stubborn and argumentative, and his parents became worried that he would influence his siblings. As a result, he moved out of the family house at 13 to live by himself in an unheated shipping container on the property. The experience was “very brutal mentally”, he says.
Around the same time, says Fawcett, he got hold of a broken laptop and reconditioned it. He worked out how to get onto the internet and spent many secret hours reading circuit diagrams and learning about technology, eventually buying and selling electronic parts on TradeMe, one of New Zealand’s first online start-ups. He dug a tunnel beneath his shipping container and, when he wasn’t using it to hang out with friends, he hid his computer down there.
“It had its pros and its cons,” Fawcett says of his childhood. “Probably more cons.” Still, he credits some of the qualities he was forced to develop as a child for his start-up success today. He was a lone, powerless sceptic in an already isolated group, continually challenging his elders to make things make sense. It was very stressful, but it also gave him an “extreme bias towards reasoning from first principles”, he says, and “taking every problem to its limits”.
It also helped that Fawcett was used to being an outsider and taking risks. When he was 16, he went to high school for the first time, attending for one year. It was a tough experience. He had what he calls “big deficiencies” and he was “socially very weird”. But in 2012, he was accepted to the University of Canterbury. He believed he needed “to work 100-hour weeks for a very, very long time to be at the same level as the average person in the real world”.
Indeed, Fawcett started three different businesses while he was a full-time student, and, in his third year, he got a job at the space-technology juggernaut Rocket Lab. The experience taught him how to be ambitious on a global scale. Then, in 2020, he and three friends launched Partly. Each founder has played a critical role in the company, but Nathan Taylor, a co-founder who has known Fawcett since childhood (he was the one who told me about the go-karts), says Fawcett is its driving force. The problem that Partly now seeks to solve had been obvious to Fawcett since he was a teenager. “Why is it that I can recondition an engine in six hours,” he says, “but it takes me two weeks to find the parts?”
At first the issue doesn’t seem entirely surprising. There are lots of cars and spare parts in the world, with lots of overlapping, intersecting car-parts supply chains; it is probably reasonable, if annoying, that it takes time to locate and ship them. And it doesn’t help that the search is usually conducted almost entirely by phone. “I would call 18 different companies and wait for a week for them to call back,” says Fawcett. But taken to its limits, what Partly seeks to do is breathtakingly more complicated and requires staring down astronomically vast permutations, similar to those you see in the game Go, but, in this case, the permutations are of cars and parts.
There are 1.4 billion passenger vehicles on the road today, says Fawcett. They were built by 1000 different manufacturers over the past 60 years and each one of them has up to 30,000 replaceable parts – approximately 18 trillion parts in total. Some car parts work in different car models; some work in just one. “The number of parts and vehicle combinations is in the hundreds of trillions,” Fawcett says. When any one person anywhere in the world today is looking for car parts, they are pulling on strands of this incredibly chaotic web of possibilities.
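For the numerically curious, those figures can be sanity-checked with a quick back-of-envelope calculation. A sketch in Python: the vehicle count, the 30,000-part ceiling and the ~18 trillion total come from Fawcett's numbers above; the "implied average" is my own arithmetic, not a Partly figure.

```python
# Back-of-envelope check of the scale of the car-parts problem.
vehicles = 1.4e9        # passenger vehicles on the road today
max_parts = 30_000      # upper bound of replaceable parts per vehicle
total_quoted = 18e12    # the quoted "approximately 18 trillion parts"

upper_bound = vehicles * max_parts       # if every vehicle hit the 30,000-part ceiling
implied_avg = total_quoted / vehicles    # average parts per vehicle the 18tn figure implies

print(f"upper bound: {upper_bound:.1e} parts")             # 4.2e+13
print(f"implied average per vehicle: {implied_avg:,.0f}")  # 12,857
```

In other words, the 18 trillion total sits comfortably inside the 42 trillion theoretical maximum, and implies an average of roughly 13,000 replaceable parts per vehicle.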
Partly’s mission is to replace a messy, uneven process with a completely new infrastructure that connects all the world’s cars and parts and people in a single, automated, global supply chain. The ultimate goal is to convert a search that could take weeks into one that takes seconds. At the heart of its solution is a foundation model, a large-scale AI system such as the one that underlies ChatGPT but trained on car parts. To build it, the co-founders had to feed it every piece of car-part information they could find, “98 per cent of which has never been on the internet”.
Partly spent years negotiating with manufacturers to buy or license data, they hunted down old paper catalogues, they read car-enthusiast blogs, and they purchased and disassembled cars piece by piece, noting and labelling each part. Now the Partly model cross-translates thousands of terms. “You might call it a hood,” Fawcett explains, “I might call it a bonnet.” Partly maps the different part anatomies and connections. “The Toyota fender is a single part; the Audi guard has three components.” And it interprets and cross-references tens of thousands of arcane diagrams designed by thousands of different designers over decades of change.
To find a part using Partly’s foundation model, users submit a photograph, written description or voice memo to its part finder, which identifies what’s needed and orders it from a supplier. In the long run, if Partly can convince enough businesses across the world to license their model, it could become a major player in an industry that currently represents between 2 and 4 per cent of the world’s GDP. That would make them one of the most valuable companies in the world.
I know – there is no such thing as an AI start-up without an audacious, multibillion-dollar plan. But there is a satisfying logic to the Partly story that makes its ambition – and the promise of its product – feel quite different to the “everything will change” premise of other AI.
Most people’s experience of AI has been of products, such as ChatGPT, which are complicated systems based on Large Language Models (LLMs), and which include the capacity to recognise images and audio, and increasingly to carry out specialised tasks such as writing code. Their fundamental promise is to be an eager assistant, willing to do anything for a user. And while plenty of individual users can authentically say their load has been lightened by these products, there is growing doubt about their true value, especially in the workplace. A clutch of recent reports underlines the large amount of training users require (the products are supposed to be intuitive), the additional labour the models create (from the overproduction of what’s being called “workslop”) and, for all their party-trick possibilities, the ultimately idiosyncratic nature of their usage. I wrote a few weeks back about an MIT study that found most companies had seen zero return on their AI investment. Likewise, a recent survey of 2000 chief executives by IBM found that only 25 per cent of AI initiatives had delivered the expected return. Curiously, the survey also suggested that many AI investments are made out of tech/profit FOMO – which is to say, 64 per cent of chief executives implemented AI-based products not because of a specific problem that affected their business’s bottom line, but because they were worried that if they didn’t, their company would somehow fall behind in an ill-defined race to our AI future.
By contrast, Partly’s foundation model has been specifically trained to carry out a set of tasks in the automotive domain. Similar domain-specific models can be found in healthcare and scientific research, including the famous AlphaFold2, which advanced protein science and won its creators a Nobel prize. So, on the one hand, we have this amorphous social-technological project to use AI to replace broad human capacity – really, to replace humans. And on the other, AI products, such as Partly’s, are built to solve a specific problem for humans.
To be clear, products like Partly’s may involve job losses, but the fundamental intent is different. There are other interesting differences, too. Products such as ChatGPT regularly produce plausible-sounding nonsense. The Partly model’s output is highly accurate. LLMs such as the one ChatGPT uses are built on a huge amount of intense, specialised human labour, which their creators try to keep invisible, and, truly, they don’t seem quite as magical when you realise how human they are. The Partly foundation model is more interesting because you see the amount of work that went into hunting down decades of esoteric, far-flung sources of information to create it.
Let me tell you, I’m waving my hands over some pretty significant complexity here. I am interested in the different promises made by these AI-based products and the ways that people experience them, but there are tangled technical interdependencies between models that complicate comparisons. For example, the Partly foundation model inherits some language capability from pre-existing general foundation models, which may have themselves been trained on the unlicensed use of data and the unacknowledged labour of a human workforce. The part finder is also used with LLMs for querying, which raises the same issues.
Still, for most people’s day-to-day purposes, there is a striking difference between pseudo, all-purpose everything machines and AI that has a job. After all, do we need chatbots? Or do we need to get our car fixed asap? Maybe this is where the real AI revolution lies: in products that replace messy human processes and do things that only AI can do, rather than products that try to replace humans – sometimes, admittedly, in spectacular ways – while creating a considerable amount of mess along the way.
Fawcett says Partly finally got its model working at the start of this year. They made a lot of mistakes along the way, some of which slowed them down by months, but they will probably go for a second round of funding in 2026. In the meantime, like most start-ups, no one in the company works less than 40 hours a week and everyone has stock. “Cults and startups are kind of similar,” Fawcett says. “There’s an element of going out and saying, ‘We can be a trillion-dollar company … and we have a mission, and you can join this mission, and together we are going to have this huge impact on the world!’”

