September 23, 2025
Tech industry veteran Robert Scoble sits down with Michael Fester, co-founder of 14.ai, to explore our AI-native approach to transforming customer support.
Robert Scoble: I'm in a San Francisco house with a bunch of people with weird glasses on. What's going on here? I need these for Coachella. That'll get you some attention at Coachella for sure. Or Burning Man. You guys can just win Burning Man with these things on. Where are we and who are you guys with?
Michael: 14.ai headquarters. I'm Michael, co-founder of 14.ai. I have a background in AI — I've been in this space for about 12 years now. I started off building my first company back in Paris, specializing in voice recognition. That work is actually sitting inside Sonos speakers now: my team led the development of their whole on-device voice recognition.
Robert: I love my Sonos system. I have the top-of-the-line Arc soundbar with two subwoofers and two rear speakers. It's something like 29 little speakers around me, all computer controlled, and a microphone that has your software. That's awesome. So introduce your company — what are you guys doing?
Michael: 14.ai is building a full Zendesk replacement on AI-native principles. Meaning you start to think about how much of the customer support operation you can actually streamline using AI agents. So a lot of the work goes into capturing all the information of a company — their knowledge, their products, their APIs — and giving agents access to all of it so they can start working on behalf of humans.
Robert: This is really important. I visited a bank down in Brazil called Nubank, and they built their own customer relationship management system, so when a customer called in, they got routed through the phone system to an employee. It totally changed how customer service was done. Now they're a public company and doing really well — Warren Buffett invested in them after I did. What's nice about visiting startups is that you see the companies before the big financial players figure out there's something going on here. What's the challenge of working with companies and trying to get them to see this new world of using AI to do everything?
Michael: There's lots of challenges. Obviously capturing the specifics of each company is tricky. You really want these agents to be grounded in all the company's processes, knowledge and so on. So getting the company to a place where these things actually work really well in a way where you start really trusting them is a lot of work. This is not a one-size-fits-all product that fits every single company in the world. Every company is kind of different in various ways. So there's some work in implementation.
We're working a lot on making this as streamlined and smooth as possible, but the models still struggle with certain edge cases. So that's one thing. The second is the actual change management within a company, because you're fundamentally shifting the nature of the work of running a support operation. You go from answering support tickets day in, day out to taking a step back and, instead of being in the trenches, having more of an AI operator role.
So the nature of the work is really changing, and that can obviously be disconcerting. And how you actually become productive in orchestrating agents and managing them in production is something that, frankly, we're all learning right now. We try to be very helpful with onboarding — we do a lot of white-glove work with our customers to make sure they understand how these systems work and can take over the management themselves once it's in production.
Robert: Back when I worked at Microsoft 20 years ago, I worked at support for a day, so I understood a little bit about what those people do. Most of that was answering phone calls. People would call in and have a problem with Microsoft Windows or whatever, and they would get routed and on my screen I would see who the next caller is and a little bit about what they were struggling with. The new world is you get a lot more of that. As somebody working in support, you really understand your customer a lot better. And a lot of the questions I got were the same kind of question over and over again. Because back then we didn't have AI to answer the phones or answer tickets. But very quickly I saw that the same pattern kept coming up. You work with companies to identify those patterns and have systems that answer those patterns automatically. So you take care of customers. Because if you have to talk to a human, that costs 100 bucks minimum. So the less human I have to have in my system, the cheaper it gets. The better the customer satisfaction numbers get because they get their question answered real quick.
Michael: The starting point is the end customer. Do they get a better support experience out of this? That's the end goal for everyone here. Obviously the repetitive questions are very easy to settle because you basically know the answers. So that's quite easy for the agents to figure out. And we typically start off with a company trying to figure out what are these repetitive questions. And then we take off a decent chunk of volume overnight.
But then when it gets really interesting is having these agents actually perform more sophisticated actions on behalf of the customer. So there's obviously the customers who have a one-off issue that's kind of tricky. Oftentimes it actually requires a human. But then there's these agentic workflows where you can actually process a refund and speak between systems and figure out how can we orchestrate all these external systems in order to provide a full end-to-end resolution for our customer.
This is where agents become very powerful because they have this sort of reasoning step where they can assess, plan, try a few things and then get back to the customer, either with a positive resolution or maybe a suggestion that this is actually something that a human needs to look at.
Robert: One thing at Rackspace we figured out is that somebody who is about to churn, in other words go to another company, they have lots of little signals before they do that. They're getting disappointed with the service. They're asking very specific questions. Does your system help identify, hey, this customer is really struggling and is looking at changing, so let's put some extra effort on this customer?
Michael: You're hinting at something very important here on the sort of insights we can start capturing, like churn risk for instance. Now that we have every single customer support conversation going through our system, we're basically sitting on some very important data sets. And just stopping at solving customer issues sort of misses capturing something very important. And having agents also analyze these conversations can give a lot of insight, not just for the support operation, but for the entire company. So we have companies who are deploying agents that can run essentially deep research work on any sort of topics that are important to the product team, the engineering team, the leadership team. And so everyone starts to get involved.
Churn risk isn't something you necessarily see in a single conversation. But when you start looking at the data set more holistically, you can have the agent reason about what might constitute a churn risk. And this can be very different from one company to another. Until now, these sorts of reports have been very quantitative: metrics you would compile, then compute some average or sum to figure out whether this is a customer you need to look into. But the factors can be much more subtle to detect.
It could be like the customer has regularly been complaining about a feature that still hasn't been launched or they have been mentioning known competitors or they've experienced a critical outage that put them out of business for three days. These are the things that are very hard to quantify.
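Those qualitative signals can be sketched in code. This is a minimal, hypothetical illustration — keyword heuristics stand in for the LLM's reasoning step Michael describes, and the names (`ChurnSignal`, `churn_signals`, the competitor list) are invented for the example, not 14.ai's API:

```python
from dataclasses import dataclass

@dataclass
class ChurnSignal:
    customer: str
    reason: str

# Assumed per-company list of competitor names to watch for.
COMPETITORS = {"rivaldesk", "acmesupport"}

def churn_signals(conversations):
    """Flag customers whose support history shows qualitative churn risk.

    conversations: dict mapping customer name -> list of message strings.
    """
    signals = []
    for customer, messages in conversations.items():
        text = " ".join(messages).lower()
        if any(c in text for c in COMPETITORS):
            signals.append(ChurnSignal(customer, "mentioned a competitor"))
        if "still hasn't" in text or "still not launched" in text:
            signals.append(ChurnSignal(customer, "repeated unmet feature request"))
        if "outage" in text:
            signals.append(ChurnSignal(customer, "experienced a critical outage"))
    return signals
```

In a real deployment, each `if` would be replaced by a model judgment over the full history, which is what makes these factors detectable at all.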
Robert: Does your system watch social media, by the way, because a lot of times those customers are off on X or LinkedIn and complaining.
Michael: It doesn't yet, but it certainly will at some point. The goal obviously is to connect as many signals as possible to be able to give the most relevant reporting.
We have a company we started working with who came to us saying they were spending 100-plus hours every quarter looking through their Zendesk tickets and compiling Excel spreadsheets with all sorts of rubrics. We told them, let's work together here, because we can have agents build these spreadsheets very efficiently. They started doing this and it worked immediately. That's 100 hours given back to them so they could focus on more interesting things. But importantly, because these agents aren't a fixed format, they can be tailored to any sort of reporting. And because we built these as self-serve features, they've started using them for all sorts of other reports.
Robert: Give me some ideas.
Michael: Things like topic cluster analysis. What are some common topics that customers have been struggling with and can we actually be prescriptive in what the company should do in order to alleviate that? Sometimes it's as simple as: well, there was this issue that the agent was not able to solve because we haven't documented it. It's a new feature. So proactively suggest a new knowledge base article.
Robert: So your support system might notice new feature requests from all your customers and might actually go and build that.
I mean, it's the AI world, right? We can build software with AI. So why doesn't the customer support system go and build a feature and then deliver it in a few minutes and call back the customer, hey, we fixed your problem. Here's the new software.
Michael: Robert, we're still a small team here.
Robert: Yeah, but you have Cursor and you have Codex. I keep reading on X that people are shipping code in a couple minutes or a few minutes, 20 minutes to build a whole new system.
Michael: We are getting there. But there's another important point to make. Yes, there are lots of ideas, lots of things we could potentially do. When AI agents start being involved in the work, one key thing is that you don't want it to feel overwhelming. The agent can be very trigger-happy — it could come up with lots of good ideas, but for humans that becomes overwhelming.
To give you an example: when we run these analyses, we could run them on every single support ticket and derive what you should do with each one. You would just be bombarded with AI-generated tasks you don't know how to deal with. So one key thing you need to balance is the right dose of insight — ideally only the very important pieces that you want the agent to surface.
Same with other kinds of reports, like churn-risk analysis. You don't want a list of a thousand customers who might show some signs of churn risk — then what do you do about it? You want to be more strategic and drop the really important pieces of information every now and then, so you can actually incorporate them meaningfully into your processes.
So I think there's lots of ideas, lots of potential for automation. But the quality of the work and sort of the human scale of what we can do right now is still very important. At least for now.
Robert: A lot of companies use Net Promoter Score to track how good their products are and their services. In other words, you would get this after a service call: Hey, how likely are you to suggest a company to somebody else? On a scale of 1 to 10, answer this question. AI lets you really come up with a better metric than that because that metric is not that great. How can your system really help a company change to become a much more customer-centric company and track whether they're doing a good job or not?
Michael: Well, first of all, when you're compiling these metrics, there's a certain bias. Customers might be more incentivized to give a negative rating when they want to sort of make a statement that they were not happy, rather than when everything worked smoothly. So are you really capturing the right metrics here?
The agents obviously can be helpful here as well. And so we run every single conversation through agents and we compile obviously things like sentiment or urgency, but we also compile other metrics which are more important to certain companies. And they have certain things that they're looking for within conversations to assess these metrics. So things like customer effort score. Can we actually have a sort of sense of: did the customer struggle a lot here? Same thing for the human agent. Did the human agent struggle a lot here for solving the support ticket?
And again, what does it mean to struggle? This is per company — it's not one-size-fits-all, not a single metric. For certain companies, certain kinds of issues look easy on the surface of the conversation, but they know they actually need to do a bunch of things in the back office to investigate the case and come back with an answer.
So we allow our customers — and prompting is key here — to build their own tailored support setup via prompts: natural-language instructions where you tell the agent how it should assess the complexity or overall effort of a case. You want to build that sort of malleability into the software, and the malleability is really provided by prompts. I think that's the key new thing in the whole UX — we're building these systems using natural-language instructions — and then we can compile meaningful metrics at scale.
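A rough sketch of what a prompt-defined metric could look like, assuming each company supplies a natural-language rubric that an LLM scores conversations against. The `llm()` function here is a stub so the example runs offline; a real system would send the same prompt to a model API. All names are illustrative, not 14.ai's actual implementation:

```python
def llm(prompt: str) -> str:
    # Stub standing in for a model call; always answers "4" so
    # the sketch runs deterministically without an API key.
    return "4"

def effort_score(conversation: str, rubric: str) -> int:
    """Score customer effort 1-5 against a company-specific rubric."""
    prompt = (
        "Rate the customer's effort from 1 (effortless) to 5 (high effort).\n"
        f"Company-specific rubric: {rubric}\n"
        f"Conversation transcript:\n{conversation}\n"
        "Reply with a single digit."
    )
    return int(llm(prompt))
```

The point of the design is that the rubric string, not the code, carries the company-specific definition of "struggle" — changing the metric means editing a prompt, not shipping software.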
Robert: Where does the human fit into customer service? And where I'm going with this is the Ritz Carlton tells every employee you're allowed to spend, I think, $500 to make a customer happy. To do something special for the customer that goes above and beyond. At Rackspace, we have the same kind of policy. Somebody answering customer service could do anything, including buying some pizza for the customer. And they didn't need to ask a manager to do that. They just needed to see that there was an opportunity to make the customer happy. Where does a human fit into this thing? And where does a human get encouraged maybe to do something like that to make the customer service better?
Michael: The pizza idea is good, actually. Since we can connect MCP servers, we might have a Pizza Hut MCP server that you can connect to our copilot and have it send pizza. Yeah, it's gonna come for sure.
Well, you see, when we deploy these agents, we take 30% of the work off your desk overnight — all the places where you were answering the same questions over and over, essentially doing a computer's work yourself as a human. That's gone now. Then you have more bandwidth, and you start working on the more interesting cases: the customers who really need a human to solve their problem, but also for the human connection.
So the goal really here is to allow support teams to take a much more strategic role within the company and do more orchestrator work: figuring out how these AIs can be improved over time, capturing what works and what doesn't with a customer, and spending more time on the qualitative work, including delighting customers. That's what it's all about. Sometimes delighting them means giving them an answer immediately — whether it was a human or an AI doesn't matter. Actually, if an AI can solve my problem immediately, I'd be happier than getting an email back in three hours.
So I think what we've seen at least deploying this with companies is that it really changes the work of the support team and they gradually take a more and more central role within product teams, engineering teams and so on. Because their work is now shifted towards this more qualitative assessment of the company's operation as a whole.
Robert: At Rackspace, we learned about when systems go down — a truck hit our data center one time and took it down for a few hours. That cost millions of dollars in fees, because we guaranteed service levels. But it also caused huge numbers of customers to call and email, wondering what the hell was going on. Does your system hook into all those systems and understand — if Amazon AWS goes down in a region, say, and all of a sudden customers are in pain — does it know that, so it can answer people immediately: here's what's going on right now, here's what we know, here's what the AI knows? Because this is part of building a modern company now. We're all using Azure, AWS, or services like that to host various things, and sometimes those things go down — and that's when support matters so much.
Michael: Absolutely. You can connect things like your status page to our agents. If there's an ongoing outage, the agent will know about it and incorporate it into the answer when a customer inquires about something related to that outage.
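The mechanism could look something like this minimal sketch. The feed format — a list of component/status records — is an assumption modeled on common status-page JSON APIs, not 14.ai's actual integration:

```python
def incident_note(status_feed, topic):
    """Return a note to ground the agent's answer in an ongoing outage.

    status_feed: list of {"component": str, "status": str} dicts,
    as a typical status-page API might return. Returns None when no
    non-operational component matches the customer's topic.
    """
    for item in status_feed:
        if item["status"] != "operational" and topic.lower() in item["component"].lower():
            return (f"{item['component']} is currently {item['status']}; "
                    f"this likely explains the issue.")
    return None
```

The agent would prepend such a note to its reply only when the customer's question matches an affected component, which is what keeps outage-related tickets from being answered as if nothing were wrong.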
Robert: Why would I hire you instead of building my own agents? I keep hearing that build versus buy.
Michael: Oh yeah, we get that all the time for very good reasons. You want to have control. And these things are kind of accessible nowadays. You just get an OpenAI key and then you hit the API and then you get your answer.
The reality with all these hackathon demos, the "I built this in a weekend" kind of demos — we also built our first version in a weekend, and we're still here today with an ever-growing list of features. It's not just getting the agent in place and getting it to answer questions. There's so much tooling and plumbing needed to keep improving these systems once they're in production.
We always tell our customers: we're not just going to help you get to go-live and get your team onboarded. We're going to help you throughout the partnership, because there's so much work in understanding which features you actually need to evolve the operation. This is not just a general-purpose LLM playground.
You want to start building these data flywheels. As I mentioned, there are certain kinds of reports you want to build. You want an AI-first knowledge base that can improve itself and assess its own performance and accuracy over time. You want a really good QA system in place so you can efficiently queue and review conversations, evaluate them, look at the metrics, and so on. That's where we fit in. We take this off our customers' shoulders — because otherwise they would need to build all of that themselves.
Robert: Your staff is standing around taking pictures of me and quite entertained. Tell me a little bit about your team and what skills your team has to bring to this.
Michael: We're all engineers. And we're basically shipping code every day. That's the key. And spending a lot of time with our customers really helping them and understanding their needs. Because the reality is that we're all very new to this space. We don't know how things are going to look a year from now. But our way to get an edge is to be very close with our customers and really understand how we can evolve the product as we learn more and more things, even things like agents were kind of new a year and a half ago. And now we understand that this is a very powerful way of having the systems do more sophisticated things. So we're all engineers and we have our coding agents as well that help us a lot. But we can ship really fast, and that's how we compete.
Robert: What kinds of businesses right now would be successful buying you or using you? So I have a feeling Coca Cola is not - you're not ready for a Coca Cola. Coca Cola is not a good example for customer support. But they have hundreds of thousands of employees all over the world, and they all have different support needs. It's a big company.
Michael: Every single company in the world has some sort of support going on. When we speak with customers and try to understand where they're at, the most important thing for us is to figure out: is there a pain point? And usually a pain point is a proxy for volume. Do you have a lot of support inquiries? What's their complexity? If each inquiry is completely specific to the customer, there's probably not a lot of room for agents today.
But can we see some patterns? Do we know that these are the kinds of topics where, if we connect certain APIs and certain knowledge, we can start deflecting? These are the sorts of things we try to assess early on. Another thing we look at is the actual appetite for giving this a try — are they open to starting to trust an AI, with our help?
Up until now, we've basically been telling our customers: keep your existing ticketing system — typically Salesforce, Zendesk, Intercom, and so on — and we'll build the AI infrastructure on top: the dashboard to create your agents, connect them, and have them serve customers, plus copilots inside Zendesk, inside Salesforce, and so on.
But ultimately we realized this was very limiting, and we didn't want to keep sitting inside an existing host — especially a legacy host with an interface that wasn't built on AI-native principles. As I mentioned, the whole prompting part — allowing every single agent to write their own prompts to automate their day-to-day work as efficiently as possible — is so important. And sitting in a sidebar inside an existing, fully featured UI was just slowing us down.
And so this is where we decided to eject and say: we're actually building the entire thing, the full replacement of Zendesk, because we can move much faster and provide an end-to-end experience where we control every single micro-interaction and make sure it's the way we envision it and the way our customers actually use it meaningfully. It's a big ask for customers — you've had your Zendesk setup for seven years, are you ready to make the switch? But we can definitely see the appetite.
Robert: So important. Like I said with Nubank, they innovated in customer service and completely changed banking down in Brazil and around the world now. So it's such a strategic thing. Companies that really want to change themselves should start in this area. That's why I'm coming to visit you, because it's such an important thing. I had a tour of Nubank years ago and saw it.
Michael: The impact is very real. This is not hypothetical. We're going to see it - it is real and it's happening right now. And given the importance and the impact of this, it's a relatively small ask actually. The impact can be felt very, very quickly.
Robert: What else do companies need to know when considering this kind of move, or when evaluating you versus competitors? Because there are others out there trying to do the same kind of thing. Marc Benioff would be sitting right here saying, ah, don't give up on me yet — I've got agents. If he's smart, he'll buy you before you get there.
Michael: Where we fit in is really the end-to-end experience. The thing where we're really trying to figure out how does an AI native full support environment fit in. Obviously there's a lot of other companies who are doing the same thing. We started off with a clean slate and our background is in AI and so every team member has been working in this space and really understands how these AI systems work. We are agonizing over these topics day in and day out.
So even some topics that seem fairly commoditized nowadays — proper knowledge retrieval, embedding models, a good RAG pipeline — even there, the performance of the system can be drastically improved by building really good retrieval systems, or really good agent-planner systems capable of correctly choosing the sub-agent needed to solve a task with a fairly complicated SOP. These are the things we put all our effort into, and we have no legacy, no other topics to cater to. We can be fully focused on that. So where we usually differentiate is that the performance of the system is simply better.
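The planner's routing step can be sketched roughly like this. In production that choice would come from an LLM reasoning over each sub-agent's SOP; here a keyword table stands in just to show the shape of the dispatch, and all the agent names are hypothetical:

```python
# Hypothetical routing table: which specialized sub-agent owns which topic.
SUB_AGENTS = {
    "refund": "billing_agent",
    "invoice": "billing_agent",
    "password": "account_agent",
    "api key": "technical_agent",
}

def route(ticket: str) -> str:
    """Pick the sub-agent for a ticket, falling back to a human."""
    text = ticket.lower()
    for keyword, agent in SUB_AGENTS.items():
        if keyword in text:
            return agent
    # No sub-agent's SOP covers this ticket: escalate rather than guess.
    return "human_escalation"
```

The interesting engineering is in the fallback: a planner that escalates when no SOP clearly applies is what keeps the system trustworthy.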
Robert: Are you going to grow into doing voice and answering the phones for a company and answering the emails and answering the social media?
Michael: So the voice part, we have some prototypes of this ready. The other ones we already do - emails and social channels and so on. So we've been really focusing on the text-based experience for now, but obviously this is going to grow to voice as well.
Robert: This will change your company very deeply. How does a company, like a Coca Cola, have to change to use this new way of doing customer service? What kinds of skills do they need to work on? What kind of cultural change initiatives do they need to do to get their humans on board with this kind of thing coming in and changing their company?
Michael: Well, I think one thing that remains is you need to have a really good understanding of your own company and your own operation. So the support team is so key here. Their work is going to change a lot, but their knowledge and insight and empathy for customers is still key here. But what needs to change is first of all accepting the fact that we're dealing with non-deterministic software.
Robert: Is that a nerd's way of saying AI can hallucinate and make some bullshit up once in a while?
Michael: Yeah, that's the nature of it, and there's nothing wrong with it — it's not going to be perfect. One thing we've learned the hard way: when the agent comes up with a hallucinated, wrong answer, it's obviously a problem. But for us as a company, it's different from a bug report. It looks like a bug, but it's so much harder to investigate because it can be so subtle — it's not "yep, it's a bug, we'll get it fixed tomorrow" or "it's on our roadmap." This is really, really hard, especially as the company grows, there's more and more knowledge, and at some point there can be ambiguity in your instructions.
You mention one procedure here, but a knowledge article from two years ago has another tweak to it — it can be confusing. So the thing is: a deployed system will have a measurable impact — on raw numbers, it's going to have an impact. But there will be situations where it doesn't perform well, customers who don't get a good outcome, and we'll work with you on figuring those out. It's not trivial.
And so being comfortable with the fact that you're deploying something which will have a positive ROI for your company, but still knowing that it might not work exactly as you want all the time. I think it's a fundamental change in the way that you use software. You cannot expect it to work deterministically. When you click that button, it triggers this action. No, this is very different now. So I think this is one thing that everyone needs to be comfortable with because this is the future.
This is how software is going to be — more and more non-deterministic, and obviously better and better. We always try to get as close to a deterministic outcome as possible, meaning that when we can, we go into code land and don't rely on the LLM to generate a procedure or piece of code and run it. But there are still many situations where there's ambiguity by the nature of things — humans themselves are ambiguous. And this is where company knowledge and understanding of your customers is so key.
One of our customers put this in a way that I think makes a lot of sense: it's like journalism — you're either a writer, an editor, or an editor-in-chief. And everyone becomes editor-in-chief now. Less writing of the customer answers, the knowledge-base articles, even the prompts — the AI is increasingly going to do that fully autonomously. Your work will be to make sure everything runs smoothly, and to figure out what actions to take to fix an issue or improve a certain kind of scenario.
Robert: Very cool. Well, thanks for spending a little time with me and your team. Thank you very much. We all have weird glasses on today. I have my Apple Vision Pro on. Any last things? Where should people find you?
Michael: 14.ai, the URL is embedded in the logo, so it's always convenient.
Robert: Well, I'll let you get back to work and help people out. This was really interesting, and really important for the future of business. Changing support is so fundamental to so many companies — to their future profitability, their customer churn, their customer satisfaction. Having a better system is so important. Like I said with Nubank, it helped make them a public company — it took them from a little startup like this to a public company because they built their own. In this case, we want companies to buy yours.
Michael: Absolutely.
Thank you, Robert, for the wonderful questions and insights!