
Uplink: AI, Data Center, and Cloud Innovation Podcast
Uplink explores the future of connectivity, cloud, and AI with the people shaping it. Hosted by Michael Reid, we explore cutting-edge trends with top industry experts.
GPU Powerhouse: Scaling an AI Cloud in the Heart of Europe
From gaming rigs to high-performance AI infrastructure, Julien Gauthier’s journey with Arkane Cloud mirrors the meteoric rise of GPU demand. In this episode of Uplink, the CEO and founder of Arkane Cloud shares how a pivot from 3D rendering to AI compute positioned his Paris-based company at the forefront of the AI infrastructure wave.
Julien walks us through the evolution of NVIDIA’s architecture, the challenges of deploying high-density GPU clusters, and why American AI companies are flocking to France for inference workloads. Operating a 1,000-GPU cluster with plans to scale to 6,000, Arkane Cloud is building the backbone of the AI era, one cabinet at a time.
From financing hurdles to liquid cooling breakthroughs, this is a deep dive into what it really takes to deliver GPU-as-a-service at scale.
🚀 Uplink explores the future of connectivity, cloud, and AI - with the people shaping it. Hosted by Michael Reid, we dive into cutting-edge trends with top industry experts.
👍 Follow the show, leave a rating, and tell us what you think.
🎧 Listen on Spotify, Apple Podcasts, or wherever you get your podcasts: https://www.uplinkpod.com/
📺 Watch video episodes on YouTube: https://youtu.be/mmAMAnlSGFI
🔗 Learn more about Megaport: https://www.megaport.com/
🔗 Learn more about Arkane Cloud: https://arkanecloud.com/
Welcome to Uplink, where we explore the world of digital infrastructure, uncovering the technology fueling AI and cloud innovation, with the leaders making it happen. Welcome to Uplink. This is a podcast that we bring to the world. We cover all sorts of things, and it's really exciting to have you here, because AI GPU as a service is really hot at the moment. So welcome. You're the CEO and founder. Absolutely. So why don't you just quickly open: how on earth did you start this company? What's happening? Give us some perspective. I'm really curious how things are playing out in this AI space: what you're servicing, what's growing, what's happening that you didn't expect. Yeah, so let's just chat through it. So give me a little bit about your company.
Speaker 2:Yeah, sure, everything began in 2020.
Speaker 1:2020, pandemic, when everything should begin Exactly.
Speaker 2:So I decided to build a few servers for customers, because I resold a few components on Amazon, because I'm passionate about technology in general.
Speaker 1:So you signed some compute deals in 2020, during COVID, with some customers? Absolutely. Because everyone wanted to be on streaming, be on crypto as well. Yes.
Speaker 2:And I saw a good opportunity to do that. Okay, so we were a GPU cloud provider at the beginning. So we did... In 2020? Yeah, before AI. Absolutely. Because we did cloud gaming, HPC, 3D rendering, yes, and more and more AI, because it boomed in 2023, 2024, of course, with the release of ChatGPT. So it was a good way to go more and more into this area. Yes, so at the moment we get, I don't know, something between 80 and 90% of our demand from AI rather than HPC or 3D rendering, which is what we got at the beginning.
Speaker 1:Okay, so you started with the 3D rendering, which is more around gaming, and that's NVIDIA as well, is it?
Speaker 2:Yeah, more or less, because we decided to aggregate compute for 3D rendering, like you can do with CAO, conception assistée par ordinateur, that is, computer-aided design.
Speaker 1:And what's that? Who uses that? Like, what's an example of a customer?
Speaker 2:You have users like companies who need to build video games.
Speaker 1:So actually building the game, not running the game. So they're actually designing the content. Absolutely, it's designing the games, rendering everything from characters to environments, et cetera.
Speaker 2:You need a lot of compute to do that.
Speaker 1:Okay, and Is that GPU or is that compute? No, it's GPUs, yeah.
Speaker 2:Compute and GPUs could be more or less the same, but here I'm talking about GPUs.
Speaker 1:Okay, so GPUs are more parallelized compute, so it's very efficient for that type of workloads. Yes, the rendering side as in the yeah, okay.
Speaker 2:Yeah, so at the beginning, NVIDIA was involved in this area.
Speaker 1:Okay, so you were involved with NVIDIA prior to NVIDIA going through the roof? Yeah, Jensen is my king right now.
Speaker 2:Based on that, they didn't pivot, but they saw very, very interesting demand for that. And they built a new architecture and design called Volta, so they released something called the V100. Okay, before that, you got the P100.
Speaker 1:So a full lineup of new products for AI. This is from NVIDIA. Yeah, absolutely so. P100 before this. Okay.
Speaker 2:Yeah, so you got V100, A100, H100, and right now we are coming with B200. Yes, of course, yeah, and it's the cluster we are building in Paris. Yes, we have 1,000 GPUs.
Speaker 1:Oh, just a few. Yeah, absolutely.
Speaker 2:So it's more or less a large cluster, close to a mega cluster. Wow. Because when you need 1,000 GPUs, it's to build a foundation model.
Speaker 1:So you're actually training a model when you're building that? Yeah. So before we get to that: you're bouncing along, providing NVIDIA GPUs to some rendering companies. AI appears 2022, 2023, somewhere around there. Is it '23? I can't remember now. Yeah, I think it's something like that, with GPT-3. Yes.
Speaker 1:Because of course they'd got GPT-1, GPT-2, but it wasn't spread like a massive deployment. So the first, yeah, no, customers were really figuring out what to do with it. So a year later, probably GPT-3 or whatever comes out, then you start to see demand for GPUs, and then GPU as a service. Absolutely. So you're sitting there ready to roll, because you've actually been building this for a while, you've got your relationships in place, and so all of a sudden people turn up and they're like: time to scale. Is that it? Absolutely.
Speaker 2:Okay, and we decided to add new features for that. Yes, something we call AI as a service, with API endpoints and the capability to deploy any AI model from Hugging Face, for example. So every open-source model is hosted on Hugging Face.
Speaker 1:And you would enable that to be loaded onto your.
Speaker 2:GPUs.
Speaker 1:And is it only NVIDIA that you run, or do you have options?
Speaker 2:No, you have other options like AMD, Intel and others. And you provide all of those options? No, we prefer to be dedicated to NVIDIA. Got you?
Speaker 1:So NVIDIA, your GPU as a service, NVIDIA, and anyone can go to Hugging Face and download or provision that on your GPUs and run whatever model they choose. Is that right? Absolutely. And are you allowing companies to train on your platforms, or are they using them for post-trained models? They're actually running that through Hugging Face. How does it play out?
Speaker 2:Well, they can do both, because on our web app you can obviously deploy any model. So you select the model you want to deploy, you click on a few parameters, and you can run it very efficiently. We have a few very interesting features like auto-scaling. So, for example, if you have demand from one or two users and it's growing significantly, because, I don't know, you got a post on Le Monde or Fox or something like that and you have a lot of demand on your own platform or applications, our servers analyze that demand in real time and increase the number of GPUs automatically. You'll start provisioning more GPUs in the platform as it starts to push what it requires, and so you just start scaling. Yeah, and of course you can downscale if necessary, because you don't need to keep that amount of GPUs.
Speaker 2:Because if for any reason, for example on weekends, you don't have any traction, because you developed something for your company, you don't have the necessity to keep that running over the weekends and what have you. So you can upscale, but also downscale, for any reason.
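The auto-scaling behavior described above, growing the GPU pool as request volume climbs and shrinking it back on a quiet weekend, can be sketched as a simple target-count function. This is a hypothetical illustration only; the function name, per-GPU capacity, and pool limits are assumptions, not Arkane Cloud's actual API.

```python
import math

def target_gpu_count(requests_per_sec: float,
                     capacity_per_gpu: float = 50.0,
                     min_gpus: int = 1,
                     max_gpus: int = 100) -> int:
    """Return how many GPUs the observed demand calls for, clamped to pool limits."""
    needed = math.ceil(requests_per_sec / capacity_per_gpu)
    return max(min_gpus, min(max_gpus, needed))

# Quiet weekend: demand collapses, the pool scales down to the floor.
assert target_gpu_count(10) == 1
# A post on Le Monde goes viral: demand spikes, the pool scales up.
assert target_gpu_count(2600) == 52
```

A production autoscaler would also smooth the demand signal and add hysteresis so the pool does not thrash between sizes on every fluctuation.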
Speaker 1:So that's the benefit for the customer. How does that work for you? How do you manage your side? Are you then having other companies take the GPUs when they're not being used, or is it okay to not run for a period, because of the compute and power requirements, I guess? Or how do you structure that on your side?
Speaker 2:Yeah, we have different strategies. So, working with brokers or partners who need specific models, we can provide them. For example, we are trying to provide the latest generation with Blackwell. So Blackwell released in Q1 2025. Yes. And we have an appetite for this one, because previously you got Hopper with H100 and H200, and because Blackwell is more efficient you can get a better efficiency-price ratio. You have three times more performance with Blackwell for something like 50% of the cost?
Speaker 1:Yes, so is the 50% of the cost the power and cooling, or 50% of the cost of the chip itself?
Speaker 2:In comparison, in terms of rental. Okay. In efficiency-price ratio, you have something like two times more performance. Okay. So it's easy to justify that, to go to customers and say: okay, now we have Blackwell running for inference, so deploy these types of models and try it. If it works correctly, you will get the productivity and efficiency you are looking for in your applications.
Speaker 1:What happens to the older chips then? Now they're not even that old. But the H100s, H200s, when Blackwell is there, do you keep them? Other customers use them for a different requirement? We can get other customers using them.
Speaker 2:For example, with an older generation, we got the A100, and it was something more versatile. So it could be more interesting for HPC, for example. HPC? Yeah, HPC, High Performance Computing. Absolutely.
Speaker 1:What is that servicing, as in?
Speaker 2:For simulation. Okay. So scientific simulation: it could be for weather, fluid mechanics, et cetera.
Speaker 1:So you keep those running, the A100s, and the use case that's specific to them continues to use them, so it's not like you throw them away or they're useless to you. One of the worries, I think, when a lot of investors look at this: I assume you need to raise capital, or at least get access to capital, to buy these chips.
Speaker 1:And so you buy them, and then a year later Jensen brings out new chips. Does that mean the old chips continue to work? And what you're saying is that you keep the older chips; the use cases, or maybe the cost, or the way you charge for them, differ from the new chips, and it continues to flow through. Is that the theory as to how it all works, or how does it play out in practice?
Speaker 2:No, it's totally accurate. Because you have a release of a new generation every 16 to 20 months, you have to get a yield and a return on your investment as soon as possible. Yes, so usually on a new generation you prefer to get a long-term contract, and it reassures your financial partners. Of course, yes, like a three-year contract to run on this. Absolutely. And is it a three-year contract?
Speaker 1:Is it truly GPU as a service, where I just scale up and scale down? Or are you saying, I'm going to sell you 100 NVIDIA chips and then you're contracted for that for three years? Which way is it? Is it on demand, or is it contract 100 now?
Speaker 2:We can do both, and we have strong demand for on-demand rather than long commitments. Yes. But you charge a lot more, presumably, for that? Absolutely, so we optimize our revenue.
Speaker 1:Of course.
Speaker 2:And for our customers it's more flexible and even more interesting, because they don't have to invest in it and run it all the time. Sometimes they just need to run it 25, 50, or 75 percent of the time, so it doesn't necessarily need to be a commitment to a full dedicated deployment.
Speaker 1:And so where are you based? Where are these chips? Are they only in France?
Speaker 2:Yeah, we have two clusters: one in Lyon, a smaller one with a range of versatile cards, from A5000 and A6000 to H100, and our new cluster built in Paris, in the east of Paris, with 1,000 GPUs, extendable to 6,000. So we have provisioning of 10 megawatts of capacity.
Speaker 1:A lot of power? Yeah, absolutely. Entire data centers used to be about two or three megawatts, and you're consuming 10.
Speaker 2:Yeah, 10, in two rooms. The big advantage of this data center is that it has large rooms to host this facility.
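As a sanity check on the figures in the conversation, a back-of-envelope calculation using the 10 MW of provisioned capacity, the 6,000-GPU ceiling, and the 135 kW cabinets discussed later. The breakdown is purely illustrative; real power budgets also split out cooling, networking, and storage overhead.

```python
site_kw = 10_000   # 10 MW provisioned for the Paris site
gpus = 6_000       # maximum planned GPU count
cabinet_kw = 135   # power the facility can deliver per cabinet

kw_per_gpu = site_kw / gpus           # all-in power per GPU
cabinets = -(-site_kw // cabinet_kw)  # ceiling division: cabinets needed at full load

print(round(kw_per_gpu, 2))  # ~1.67 kW per GPU, plausible for dense AI racks
print(cabinets)              # 75 cabinets to place the full 10 MW
```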
Speaker 1:So we are... You've got to cool them, obviously, in these rooms. Yeah, absolutely, I wanted to speak about it.
Speaker 2:Yeah, so we decided to go with direct liquid cooling. Okay, so we have a facility with direct-to-chip liquid cooling.
Speaker 1:Absolutely so it's. You have to explain it.
Speaker 2:Yeah, so it's moving through pipes.
Speaker 1:So chilled water in a pipe or something it's chilled.
Speaker 2:So you have two loops of water: cold water going to the chips, cooling the chip and going back warmer, of course, to a CDU, a cooling distribution unit. And it's, of course, a closed loop, so we can optimize it.
Speaker 1:Is it your cooling or it's the data center that provides the cooling? It's all cooling Yours. Yeah, absolutely, oh wow so you actually had to go and install your own cooling platform.
Speaker 2:Yeah, so we are working with Supermicro. They have a branch, of course, for servers, but they are working on a new business unit for cooling, because it's important right now to get servers but also cooling distribution units to cool everything.
Speaker 1:And it seems to be playing out that direct-to-chip is the way most of these data centers are retrofitting, or at least offering. Could you build the cooling in an existing data center, or was it a brand-new data center built for you? No, we retrofit this one.
Speaker 2:Yeah, so with this data center partner, we decided to tell them: okay, we need DLC, direct liquid cooling. Are you ready to deliver something like 135 kilowatts per cabinet? They told us yes.
Speaker 1:They can deliver 135 kilowatts per cabinet.
Speaker 2:Yeah, absolutely so.
Speaker 1:they need to be able to deliver that power and then you need to be able to deal with the cooling. Okay, fascinating, yeah, not easy.
Speaker 2:Actually no, because it's closer to industrial than to technology, compared to what I did at the beginning. Yes, but it's challenging and I like it.
Speaker 1:It's going to keep you interested. But it sounds like it's also Supermicro that's actually building it; you procure that from them and they deliver that cooling to the chips.
Speaker 2:Yeah, of course we can use their expertise to do everything, because they have to validate everything. Yes, get approval in terms of what type of, whatever they can put on it, et cetera. So a close three-way between Supermicro as OEM, us, and the data center, to know...
Speaker 1:okay what we have to do, fascinating To get this and obviously NVIDIA is also approved.
Speaker 2:Of course, of course. They visited that data center, yes, two times with Supermicro, to validate that it could support that type of deployment, and future deployments as well.
Speaker 1:Okay, you said up to 10,000. Is that what it was? 6,000.
Speaker 2:6,000, sorry. So we designed for 1,000 at the beginning. Yes, we could get one or two customers who need everything at the beginning, and they can scale up with their project. So we are speaking with large companies who need that type of cluster.
Speaker 1:So are you struggling to access the chips and is there a supply chain shortage for you, or you are small enough to actually slot between and get access to what you need?
Speaker 2:I think it's a chance to be smaller than others. Yes, because we are more flexible, we can improve our processes and, if you want, you can talk directly to me and we can do everything possible to get it done. So, from NVIDIA, we talk directly with the EMEA director to manage, if that's possible. Of course we have ups and downs, and to validate everything most of the time. Yeah, as I told you, our financial partners want something between a two- and three-year commitment contract. Fair enough. And we are more or less in a chicken-and-egg situation. The customer wants everything deployed as soon as possible, or even already deployed. Yes, of course. And our financial partners say: okay, you need a contract to finance it, and then we can deploy it.
Speaker 2:So it's complicated to move that way, but our customers understand that, and we can leverage our proximity and closer relationship with NVIDIA and Supermicro to move quickly, because right now it's already set up in San Jose at their HQ and we can deploy everything as soon as possible. That's fascinating.
Speaker 1:And so who are your customers? Are they mainly French companies at the moment, trying to keep their data in like sovereign data? I assume that's important for them, or how does it play out what?
Speaker 2:do you see? We have 0% of customers in France. 0%? Yeah. 60% to 70% are coming from the US. That's amazing.
Speaker 1:So why do you think they're coming then?
Speaker 2:if that's the case, To expand their deployment from US to Europe, for example? For what purpose? For inference mainly Okay.
Speaker 1:So for example so inference is playing out for you. That is a big use case for your chips.
Speaker 2:Yeah, and for the next years as well, because I think training is in a decreasing position. We saw a lot of trained models. Of course we could still increase their efficiency and many other parameters, et cetera. But in one or two years we will see inference in many use cases, in every layer of business, from chatbots to optimizing productivity, and we want to be more deeply a partner in that.
Speaker 1:So if companies are servicing French customers, there might be US companies that are requiring very low latency, fast inference. Right here You're a perfect fit.
Speaker 2:It's exactly what they are looking for, even if we have European customers and more and more Middle East customers coming, because the French market is very attractive for them, with decarbonized energy and a lot of investment. So is it lower-cost energy in France?
Speaker 1:What do you mean by that?
Speaker 2:In terms of Western Europe, yes. Of course we are not competitive with the Nordics, but we are in the center, latency-wise, between London, Amsterdam, Frankfurt, and Madrid or Barcelona. Good spot. Okay, it's a good spot to deploy anything for inference.
Speaker 1:Yeah, true, you're right in the center of it all.
Speaker 2:Yeah, okay, so our strategy is to keep everything in France, in Paris, so we can say: okay, you can train the model and, in the same way, you can deploy everything after that in the same location. Okay, that's fascinating. So you don't specifically have to restudy if you need to deploy in another location; you can keep your data. It's the value of your company, your data.
Speaker 1:Yes, of course. Yeah, because it's not moving. It's preferable: you protect it, you control it, and encrypt everything. And so you've got the GPU component. Presumably, you also need some storage and other compute elements right next to it. How have you designed that? How do you figure out what you're investing in, from storage to compute to GPU? How do you make those decisions?
Speaker 2:Well, we are partnering with VAST Data. So VAST Data is a, yeah, storage solution provider. Yes, so they are deeply invested in AI and they bring new features for that.
Speaker 1:And you procure the storage and then they run that software across it, or is it? How does that work?
Speaker 2:We put something like API access in place to deploy compute, and it stores everything on this storage solution. Fascinating.
Speaker 1:And so do you charge. How do you charge? Is it per second, per minute, per token?
Speaker 2:So we have two types of billing. As you mentioned, we can charge per AI generation, so images, seconds of video, or tokens, and we can charge per second as well. It's very interesting in terms of auto-scaling.
Speaker 1:That's hard to build.
Speaker 2:More or less, but we have good connectivity with Stripe, and you can more or less connect everything with the API. It collects all the data, so we know exactly what you are using and we can charge for that.
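The two billing modes mentioned here, per AI generation (images, seconds of video, tokens) and per second of GPU time, come down to simple rate calculations on metered usage. The rates and function names below are made-up examples for illustration, not Arkane Cloud's actual pricing.

```python
def bill_per_generation(tokens: int, rate_per_1k_tokens: float) -> float:
    """Charge by output produced, e.g. tokens generated."""
    return tokens / 1000 * rate_per_1k_tokens

def bill_per_second(gpu_seconds: float, rate_per_gpu_hour: float) -> float:
    """Charge by GPU time consumed; pairs naturally with auto-scaling."""
    return gpu_seconds / 3600 * rate_per_gpu_hour

# 50k tokens at a hypothetical $0.40 per 1k tokens
assert round(bill_per_generation(50_000, 0.40), 2) == 20.0
# Half an hour of one GPU at a hypothetical $2.50/hour
assert round(bill_per_second(1800, 2.50), 2) == 1.25
```

In a Stripe-style setup, these usage quantities would be reported as metered events and the platform applies the rate at invoice time.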
Speaker 1:So you got some smart people working for you. Is that how you built this thing, or did you build this thing?
Speaker 2:I built a part of it, but because I wanted to get something better for production, I preferred to let the team validate the last part, with more full-stack developers and DevOps engineers. Very cool what you've built.
Speaker 1:It's super exciting. You've landed at a really good time. I mean, 2020 would have been interesting, and then it exploded in 2023. So, what's your limiting factor? What slows you down?
Speaker 2:Well...
Speaker 1:Why aren't you doing what CoreWeave did?
Speaker 2:CoreWeave is more an attractive provider for enterprise-level companies, so they do a lot of work in terms of research, providing the best way to do AI deployment, machine learning deployment, etc. So they set the standard as high as possible. Yes, and we aim to go in that direction, because they provide GPUs not as a commodity but more as: okay, we provide managed services, so you don't get any headache in service management. I speak like a CoreWeave manager. They manage everything, so you can deploy your own clusters very efficiently. They know exactly what they are doing and they are backed by NVIDIA, of course. Of course, yeah. But this strategy and vision to provide services can keep them above everyone in this market.
Speaker 1:Well, they just doubled their market cap last week. Yeah, so they're doing well, so this is good for you, presumably.
Speaker 2:Yeah, absolutely. It attracts new investors, new actors on this market, and probably they want to go deeply on this area.
Speaker 1:Yeah, and so for you are you just looking for more capital to buy more chips, or are you okay on that side? Do you just need to find more customers to consume the chips?
Speaker 2:It's a mix of that, because we have partnerships with funds or family offices: we can get the cash if we need it.
Speaker 1:You need to get contracts in place. Chicken and egg scenario.
Speaker 2:You know everything right now Gotcha.
Speaker 1:So you get the contract, you can get the capital, you can buy the chips from NVIDIA. You can get access to the chips quickly enough. Yeah, and it's probably because you're not asking for, like, a billion of them. And I think Jensen is keen to help companies like yours build differentiated GPU clouds or AI clouds, from what I can see, which is good. This is a helpful thing. Otherwise the hyperscalers take all of the demand, all the supply, sorry, and then it's very hard to get access. But I don't think this is happening.
Speaker 2:No.
Speaker 1:You're actually getting access to the chips when you need them. Yeah absolutely, it's perfect.
Speaker 2:Yeah, and it's helpful to provide good momentum, for example when a new model is coming, and say: okay, we are more agile. We are not like the other hyperscalers, because, for example, AWS doesn't want to deploy that in just one data center, but probably in something like 20 or 30 different data centers. So it's way more complicated for them, and we can be more agile. Okay, so you're constantly being agile.
Speaker 1:So, you're going to deploy Blackwell? Yeah. What's next, Rubin? Yes, you've got orders in place.
Speaker 2:What do you do there? In mid-2026. 2026, yeah. Already discussing with different OEMs and NVIDIA. Of course, it's not released, so it's not possible to order anything yet. Yes, but in a close possibility it could be B300. Yes, they don't know exactly when it will be released, but it could be something between Q4 2025 and Q1 2026.
Speaker 1:And so there's a lot of... We're a publicly traded company, so I get asked a lot of questions by different investors about this space as well. What happens if they fail? So you procure the NVIDIA chip. Are they under maintenance, so that if there's a failure with the chip... Yeah, you have something like 1% failure every year.
Speaker 2:So of course we have site reliability engineers maintaining everything, with support from Supermicro to help us replace every part going to failure.
Speaker 1:How long do they last? How long are you modeling these chips to last? Is it similar to compute, or do they burn a bit hotter?
Speaker 2:They go to failure sooner, because it's very intensive, but on the other hand you can optimize replacement and be sure everything is working correctly, because if you have one GPU going down and failing, it can impact the whole cluster.
Speaker 1:Yes, so they all need to be running.
Speaker 2:Every time? Yes, as much of the time as possible. It's not easy. Or you can say: okay, we provide something like one, two, or 3% of our total inventory as spare parts, and if something is going down, we can not just replug, but automatically reprovision a whole server.
Speaker 1:One goes down, you just reprovision another one for them?
Speaker 2:Yeah, and it's automatically.
Speaker 1:They don't even notice it.
Speaker 2:Well, it would stop the training at that point, but then they would kick it off again. Yeah, but because you have a technology called checkpointing, it can go back very efficiently and very quickly based on that.
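The checkpointing idea mentioned here works like this: training state is saved periodically, so after a GPU failure and a server reprovision, the job resumes from the last saved step instead of restarting from scratch. A toy sketch of the mechanism; real training stacks use framework-level checkpointing such as torch.save/torch.load, and the file layout below is invented for illustration.

```python
import json
import os
import tempfile

def save_checkpoint(path: str, step: int, state: dict) -> None:
    """Persist the current training step and state to disk."""
    with open(path, "w") as f:
        json.dump({"step": step, "state": state}, f)

def resume(path: str) -> int:
    """Return the step to continue from: after the last checkpoint, or 0 if none."""
    if os.path.exists(path):
        with open(path) as f:
            return json.load(f)["step"] + 1
    return 0

ckpt = os.path.join(tempfile.mkdtemp(), "ckpt.json")
assert resume(ckpt) == 0                        # fresh start: no checkpoint yet
save_checkpoint(ckpt, step=41, state={"loss": 0.1})
assert resume(ckpt) == 42                       # after a failure, pick up at step 42
```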
Speaker 1:Sounds like you've built something pretty impressive and it's scaling and it's not going to slow down. It's really exciting.
Speaker 2:Yeah, and we are looking at partners who can assist us with that. For example, Megaport can help us get a pod in other locations where we don't specifically want to go. Yes, but by getting something in Frankfurt, something in the Nordics, et cetera, you can get a full web of sub-30-millisecond latency, for example for inference, and you can spread that across all the regions you want to address.
Speaker 1:Yeah, we launched our AI exchange, which is what we've been sharing with you as well, trying to think, maybe a year ago. We have something like 35 different GPU-as-a-service-style providers. They're all different; everyone's offering a different style of solution or service. We've been on-ramping clouds for the last 11 years, and so the whole vision for that was to create an on-ramp to your platform, so that any customer in 930 data centers around the world can just get 100-gig connectivity to you from anywhere, actually in 60 seconds, but only be charged for how long they use it.
Speaker 1:So if they're going to move a whole heap of data to you, 100 gig for an hour, and then turn it back down, or whatever it may be, or spin it up and down. And so what we're finding is lots of companies are on-ramping with us so that, instead of having to build lots and lots of data centers to try and get closer to their customers, we use the network to take the customer to your data centers, to your environments.
Speaker 1:We've seen it explode recently. Last year it was slow, because I think people were still trying to figure out how to leverage the GPU platforms. It was interesting for you to say zero French companies, a lot of the US. So it's probably not yet fully adopted here in France, or people are slowly figuring out what they want to do with it. The US is, I think, very advanced now; a year and a half ago I think they were still trying to figure out what the use case is and how to leverage the platforms, and there's just been so much change. We're seeing inference really start to take off. We're seeing some really interesting companies appear as well, not just NVIDIA but other chip providers like Groq, and there are a few others that are really interesting, sort of popping out for this inference story, which is really changing things. Absolutely. Yeah, hottest space to be.
Speaker 2:It's exciting, yeah.
Speaker 1:So what next? What do we do next?
Speaker 2:Well, providing a full-stack AI solution for customers.
Speaker 1:Yes, I hope so Okay.
Speaker 2:So, yeah, maybe getting managed services by design.
Speaker 1:So, rather than providing a solution, you deliver the full service for customers.
Speaker 2:Yeah.
Speaker 1:So how do customers get a hold of you? How do they find you? The website? Can they spin it up?
Speaker 2:A website, LinkedIn, those are the main ways to speak with us, and we are trying to create a new community on Discord or Slack.
Speaker 1:Do you publish the costs? Is that on some platform other people can look at? How do they find you right now from the US? How are these customers finding you as a GPU-as-a-service provider? Just through a website, or is it?
Speaker 2:Just yeah, just through our website, because our SEO is in English, so I think it's natively going like this. Yeah, yeah, thank you.
Speaker 1:Yeah, okay, so it's just the website. They're finding you and you're growing. Yeah, ah, it's awesome. Congratulations. Well, thank you for the partnership and thank you for what you're doing with us as well. We would love to see you continue to grow, and we'd love to take our 2,800 customers and get them access to you. We're particularly strong in the US, with 60 or 70 percent of our revenue coming from the United States, and so we're scaling out through Europe. But it's interesting to see that you're building here with 100% of your customers outside of here, so that's really interesting.
Speaker 2:Yeah, I think it will change a little bit. It will, yeah. To get something more balanced between Europe and the US, et cetera, because we always see stronger execution in the US. So they are already prepared to move in that way. I went to San Francisco; every ad running there is based on AI. So we saw a large boom there, and it's spreading to Europe more and more.
Speaker 1:Yeah, okay, fascinating. There you go. Well, the future is hot. We appreciate you coming on the pod. Thank you very much. And yeah, for any of our customers, check them out. It's there you go. Well, the future's hot. We appreciate you coming on the pod. Thank you very much, and yeah, for any of our customers, check them out, it's very cool, it was a pleasure yeah, awesome man, thank you, thank you appreciate it.