
Uplink: AI, Data Center, and Cloud Innovation Podcast
Uplink explores the future of connectivity, cloud, and AI with the people shaping it. Hosted by Michael Reid, we explore cutting-edge trends with top industry experts. Catch a new episode every two weeks.
🔐 From the Vault: Gui Soubihe, CEO of Latitude.sh, on Cloud AI Infrastructure and GPUaaS
Before Uplink had a name or a theme song, this conversation laid the groundwork for what the podcast would become—real, unfiltered conversations with the people building the digital world.
In this special "From the Vault" episode, Megaport CEO Michael Reid talks with Gui Soubihe, CEO and founder of Latitude.sh, about the explosion of AI workloads, the demand for GPU resources, and how Latitude.sh is delivering Bare Metal and GPU as a Service globally.
Gui breaks down the intersection of infrastructure, performance, and developer experience—and how Megaport's private network helps power it all.
--
🚀 Uplink explores the future of connectivity, cloud, and AI—with the people shaping it. Hosted by Michael Reid, we dive into cutting-edge trends with top industry experts.
👍 Follow the show, leave a rating, and let us know what you think. New episodes every two weeks.
📺 Watch video episodes on YouTube: https://mp1.tech/uplink-on-youtube
🔗 Learn more:
Uplink Podcast – https://www.uplinkpod.com/
Latitude.sh – https://mp1.tech/Latitude
Megaport – https://www.megaport.com/
#AIInfrastructure #GPUaaS #BareMetal #CloudInfrastructure #EdgeCompute #Connectivity #UplinkPodcast
Alrighty, I think we're live. So, Gui, thank you for joining. This is the inaugural Megapod, and we've just given it that name because it's our very first session, so you could be episode one.
Speaker 1:Hey, the reason I wanted you to jump on this call and have a conversation was I came across this incredible company called Latitude, and I came across that company because they were pushing the boundaries of what Megaport was actually offering. We just opened up 100 gig connectivity across one of our backbones and then, out of nowhere, we saw this connection appear, and then we started to see huge amounts of traffic pumping across it, and so that sort of came across my desk, and I was like I've got to get my head around who this is and what this company is doing. And that's when you and I first connected I think it was 11 pm my time, because I was in Australia and the sun was coming up in Sao Paulo, where you are and so we jumped on that call. I was like what are you doing? What are you up to? And it turns out that there's this company that's got access to a whole range of NVIDIA chips that you've been rolling out. I've written down 22 locations across five different countries delivering this GPU as a service model, and I know you've been around for four years, so you haven't just been doing this the whole time.
Speaker 1:But, as I mentioned, I had our investor call yesterday and had a chance to chat to all our shareholders and investors, and I can tell you right now, if you haven't figured it out, AI is a very hot space and everyone's trying to work out what's happening. I think you're probably the most interesting example, right at the tip of the spear, for what's occurring globally right now from an AI perspective, and so I just wanted to chat to you because I thought it would be helpful not only for me to learn, but also for our listeners and anyone else around the planet who's trying to get their head around what this AI movement is. So I think the first thing is maybe a quick introduction. CEO and founder of Latitude, did you just want to say hi quickly and maybe a little bit about yourself?
Speaker 2:Yeah, thanks a lot, Michael, for having me in this first episode. Really excited about that and the partnership that is coming together between Latitude and Megaport. We see Megaport as a strategic partner for us in multiple ways: globally for our compute locations, and with MegaIX, which is something else that we are putting together on top of connecting multiple of our locations. But yeah, I have been in this space for a little longer than four years. I founded my first hosting company in 2001, when I was 16 years old. It was called Maxihost, and we actually rebranded to Latitude only a year and a half ago. But it was founded in 2001, when I was 16 and I rented a server.
Speaker 1:Were you in high school? Were you literally in high school?
Speaker 2:Yeah, I was in high school. I always loved computers and technology, and I tried to do many other types of companies; many of those failed. I actually started my first business when I was 13 years old.
Speaker 2:The internet was still pretty young, so I am from the bulletin board system age, before the internet browser existed.
Speaker 2:And yeah, with this server that I rented in the US, I started to provide email and website hosting for small businesses and individuals that were looking to publish their content online, and that's how my cloud journey actually began. I have pivoted the company many, many times: from hosting, to providing VPS (virtual private servers), and then to becoming a specialized bare metal computing platform built for developers. I built a data center in Brazil, so up to 2019 we only had a data center in Brazil, and it was four years ago that I actually began expanding globally, outside Brazil. We began with two locations in the US. Just to step back on why we expanded globally: we had a lot of demand for gaming servers in Brazil from companies that were already using globally distributed infrastructure, so it made sense for these customers to use us in other locations like the US, Europe, and Asia. But we began with the US, and it was quite successful, and we just continued expanding purely based on customer feedback.
Speaker 1:And that's compute or bare metal at that point in time, or a mixture?
Speaker 2:Yes, correct.
Speaker 1:And presumably that's about latency, so the response time. I assume it's all the different styles of gaming that need it; you know, my son loves Fortnite, as an example, something like that that needs to be delivered locally, close to each location around the world. Is that sort of the use case?
Speaker 2:Absolutely, yes. Today we have six locations in the US, covering East, West, and Central, and it's purely based on latency. For many of the high-performance use cases, 20 milliseconds or 10 milliseconds matters. So we decided at that point in time to put our compute strategically, as close as possible to the users, and we took a developer approach to bare metal. We have tried to bring the same user experience that users have on hyperscalers, with programmatic APIs where they can provision on-demand servers, to bare metal. So we have two very different teams inside Latitude: we are a software company that builds the platform and the APIs, and then we are an infrastructure company. That's what makes us really unique in the bare metal space.
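For readers who want to picture what provisioning on-demand bare metal "programmatically" looks like, here is a minimal sketch of the kind of API call involved; the endpoint URL, field names, plan, and site codes below are illustrative assumptions, not Latitude.sh's documented API.

```python
import os
import requests

# Illustrative sketch only: the URL, payload fields, and plan names below are
# assumptions for demonstration, not a real provider's API surface.
API_URL = "https://api.example-baremetal.cloud/v1/servers"
API_TOKEN = os.environ["BARE_METAL_API_TOKEN"]

payload = {
    "project": "my-project",            # hypothetical project identifier
    "plan": "c3-large-x86",             # hypothetical bare metal plan
    "site": "DAL",                      # hypothetical site code (e.g. Dallas)
    "operating_system": "ubuntu_22_04",
    "hostname": "game-server-01",
}

resp = requests.post(
    API_URL,
    json=payload,
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    timeout=30,
)
resp.raise_for_status()
server = resp.json()
print("Provisioned server:", server.get("id"), server.get("status"))
```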
Speaker 1:I was going to say, that's very different from the traditional bare metal players, who are basically, here's a bunch of compute, off you go. You're actually adding more of a full stack from a software perspective.
Speaker 2:Exactly. We provide a great user experience and a pretty fast platform where users can get access to this compute either on a modern dashboard or programmatically. We don't just give the keys to the server inside the data center. We do more than that.
Speaker 1:Awesome. It's actually a little bit like how Megaport was founded. Everything we do is totally automated; the engineering team is passionate about building everything from an API perspective. I think for us it's something like 2,000-plus devices in 850 data centers around the globe, and not a single human ever touches any of it. It's fun.
Speaker 2:Yeah, I see many similarities with us, Michael. Even in Brazil, I don't know a single platform that has the automation that you have; they are just provisioning network traditionally. You have to email the account manager, they go to the data center, connect ports, give you access. You don't have a dashboard at all.
Speaker 1:It's pretty amazing what you're doing. So I'm curious, and this is the piece that's so interesting right now: maybe you did predict that AI was going to just appear and change the world. I mean, it's probably only, what, about a year and a half ago that ChatGPT entered the realm, and the world has just changed. Our world has changed so much.
Speaker 1:The data centers have been exploding, connectivity is changing, but what's been so fascinating is this NVIDIA play around getting access to these chips, and this rush to train a model and then deliver inferencing. For us, what we're seeing is movements of data to platforms like yours. And, as I said in the opening, the reason I came across you was because there was just so much traffic being pumped across this link, and I thought, well, who could possibly be using something like this? So I'd love you to share how you went from bare metal to this NVIDIA world, this GPU as a Service offering.
Speaker 2:Yeah, and from a traffic perspective, LLMs and machine learning workflows are really data in movement. They are constantly downloading and uploading data sets, and they push a lot of bandwidth across locations, and that's probably what caught your eye on the network, right?
Speaker 1:Yeah, it's crazy.
Speaker 2:We are pushing more than one terabit of egress as of today, which is quite a reasonable amount of bandwidth, to be honest.
Speaker 2:But, like everyone else, what caught our eye in the AI space was ChatGPT, when they actually turned these ultra-powerful large language models into a chatbot, and we were already in the compute space.
Speaker 2:And we were getting massive amounts of requests from users wanting to use our platform to get access to GPUs. So I started exploring, and we set up a bunch of H100 cards in two locations, Dallas and Frankfurt, and it was quite successful. We used the same developer platform that we provide compute on and extended it to GPUs, so users can deploy H100s programmatically, on demand, and just pay for what they use, from clusters of eight H100s to the A100 and the L40S, which is quite powerful for inference models. We are seeing unprecedented demand for GPUs, and what is most interesting is that, as we have a platform that provides both GPUs and CPUs, we are seeing a lot of customers running their AI workflows on CPUs as well. I see a lot of specialized GPU platforms that don't offer CPUs, so many of these users are using both. It's been pretty cool.
Speaker 1:Well, we've also seen this a lot. We see different cloud providers, and we've seen different data centers, actually trying to deliver different services.
Speaker 1:We saw a number of folks that came out with this idea, in theory, that they were going to build GPU as a Service, but it ended up being too hard to build the as-a-service piece, which is: you stage, you bring all the storage in, you use it for a period of time, and then you can pull it back out. So what they ended up doing was just selling the GPUs for a one-to-two-year period to a company, which is not really as a service; it just ends up more like an outsourced GPU farm. But in your case, and I think maybe it goes to the software side of your house, you've actually built a number of different ways to access the GPUs. You were explaining to me, when you and I were first chatting, that you could actually run it, maybe containerized or something, out of some of the Git repositories or whatever it was, and you can push to that.
Speaker 1:The other piece that I thought was really interesting: you can send a whole heap of data to a storage farm next to your GPU farm, so you fill up that storage with all the information that you want to use to train on the GPUs. How does that work? This is such an interesting space, and no one quite gets it. Can you articulate how that comes to be?
Speaker 2:Yeah, just going back to the on-demand piece, I would like to make a point there. It's a nightmare to build hourly billing, or metered billing, actually. When ChatGPT came out, we were already well positioned because we had that for CPUs, so we just had to extend it to GPUs; we didn't have to make meaningful changes to the billing structure. It's quite challenging. It took over a year to make that work correctly, and we already had it, so we just plugged in the GPUs. Today you can go to the platform and get hourly billing for multiple clusters, which is quite unique.
Speaker 2:And we are going one step further with Launchpad, which is the container orchestration platform that you mentioned, and we are building per-second billing. Most of these users deploying Docker images, from both public and private repositories, want to spin up a container and, when they don't have demand for that application, just kill it, and they don't want to pay for a full hour every time they do that, because it's pretty dynamic. So we are coming out with per-second billing for Launchpad, the Docker orchestration platform, and we are looking to build serverless as well: machine learning functions, where customers just use APIs to call functions on GPUs, so they don't have to manage infrastructure at all. On the storage side, we have built storage in-house based on open-source Ceph, and we built this distributed storage to give our GPU users somewhere to store the models and data sets, so they don't have to download these big data sets from the cloud every time.
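To make the hourly-versus-per-second billing point concrete, here is a small sketch with a made-up hourly rate (not a real price from any provider) showing what the same bursty container workload would cost under each granularity.

```python
import math

# Illustrative only: the hourly rate below is a placeholder, not a real price.
HOURLY_RATE = 2.50  # USD per GPU-hour (hypothetical)

def cost_hourly(runtime_seconds: float) -> float:
    """Hourly billing: every started hour is charged in full."""
    hours_billed = math.ceil(runtime_seconds / 3600)
    return hours_billed * HOURLY_RATE

def cost_per_second(runtime_seconds: float) -> float:
    """Per-second billing: pay only for the seconds actually used."""
    return runtime_seconds * (HOURLY_RATE / 3600)

# A dynamic workload: 50 short container runs of 90 seconds each.
runs, seconds_per_run = 50, 90
print("hourly billing:     $", round(sum(cost_hourly(seconds_per_run) for _ in range(runs)), 2))
print("per-second billing: $", round(sum(cost_per_second(seconds_per_run) for _ in range(runs)), 2))
```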
Speaker 1:So they can connect, and it's just the incremental changes every time they want to retrain the model. Is that the theory?
Speaker 2:Correct, yes.
Speaker 1:So for the folks watching, this is my take on it. Basically, you're in a few different locations. You've got these big GPU farms sitting there, but next to them you have a whole range of storage. So if I'm trying to take all my data from a cloud provider, for example, or even an on-premises data center, that's not where you're located; you're moving all of this data across the network and then sitting it in this big storage warehouse, so to speak. You're not sitting it inside your GPU platform, which is expensive; you're sitting it next to it, in what I would say is probably a much lower-cost solution. Then, when you need it, you're feeding it into the GPUs, training a model, and leaving the data there, so that when a customer gets updated financial information a month later, or whatever it may be, they can update the storage that you've got locally with just the changes, and then you can feed the model again and retrain. Is that how this sort of plays out?
Speaker 2:Yes, exactly, that's pretty much it, Michael, and the beauty of these file systems built locally is that we enable our users to connect their GPUs, CPUs, and Docker containers to the same file system at the same time.
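As a rough illustration of that staging pattern, assuming an S3-style object store as the source and a shared file system mounted on the GPU nodes, the workflow might look like the sketch below; the bucket, prefix, and mount point are hypothetical.

```python
import boto3
from pathlib import Path

# Hypothetical names for illustration: bucket, prefix, and the mount point of the
# shared (e.g. Ceph-backed) file system that the GPU, CPU, and container nodes all see.
BUCKET = "my-training-data"
PREFIX = "datasets/finance/2024-04/"
SHARED_FS = Path("/mnt/shared-storage/finance")

s3 = boto3.client("s3")
SHARED_FS.mkdir(parents=True, exist_ok=True)

# Stage only new or changed objects into the storage that sits next to the GPUs,
# so retraining runs read locally instead of re-downloading from the cloud each time.
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=BUCKET, Prefix=PREFIX):
    for obj in page.get("Contents", []):
        local_path = SHARED_FS / Path(obj["Key"]).name
        if not local_path.exists() or local_path.stat().st_size != obj["Size"]:
            s3.download_file(BUCKET, obj["Key"], str(local_path))
            print("staged", obj["Key"])

# Training jobs on the GPU cluster then point at SHARED_FS rather than the remote bucket.
```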
Speaker 1:Yeah, fascinating. So it's kind of like what you've been building since you were 16 has been leading up to this AI revolution. You've got down to per-second billing now, but also the fact that you did the compute, that you had the storage ready to go, all those elements, means you're able to offer a much more full suite. The really interesting piece is where you're going with containerizing, which I think is an even more next-level phase, which is exciting as well. So what are you seeing in terms of customers understanding what they're doing with AI yet? Obviously you're seeing traffic come in, so they're doing something with it, and you may not even know what they're doing in there, but have you got any insight as to how it's playing out right now? Are they massive data sets, or small? Any insight into what you're seeing?
Speaker 2:There are many enthusiasts and people playing with the LLMs. Last week Llama 3 came out, so we saw many deployments and people trying to fine-tune this model and play with it. We see surges in demand, and then people just kill the GPUs, and then they train again. But we see other use cases which are pretty consistent. We have a customer with a sports analytics platform, using computer vision, so they do analytics of soccer matches in real time: they can calculate how long the ball took to go from one player to the other, in real time, using machine learning.
Speaker 1:It's fascinating. Yeah, the use cases are crazy; it really opens up a different way of doing things. I mean, what you guys need to figure out is the AI trading platform that makes sports bets on who's going to score the next goal, automatically. You know, there are so many use cases.
Speaker 2:Yeah, yeah, that would be pretty cool.
Speaker 1:So why would someone use your GPU platform? Why wouldn't I, as a customer, just use an existing cloud provider? What's the value proposition for me?
Speaker 2:Having access to these GPUs on Latitude, as opposed to the hyperscalers: NVIDIA seems to not be providing chips to the hyperscalers, as they are building their own chips, right?
Speaker 1:That's really interesting. I mean, we'll unpack that a little bit.
Speaker 2:But yeah, NVIDIA has this framework called CUDA which is highly adopted by developers globally; it's by far the most used framework across developers. So we have this inventory available, and cost is a big factor as well. When you factor in the cost of the compute, we are about one-fifth of the cost compared to the hyperscalers. And then the bandwidth is crazy as well: we are about one-tenth of the cost on bandwidth.
Speaker 1:So cost. Is that an egress sort of situation, when you say the bandwidth?
Speaker 2:Yes, correct.
Speaker 1:So egress, for the folks listening, is basically the cost to remove or extract the data. Is that what becomes the challenge with some of the hyperscalers?
Speaker 2:Yes. When you are doing inference at scale, you are serving these models to the world, and you are outputting data, so every time someone accesses the inference, which is what you need, there is a charge in effect.
Speaker 1:So that's one component, and then the other is one-fifth of the cost for the actual GPUs themselves. I mean, that's a pretty good value proposition.
Speaker 2:Yeah. And also, having the ability to provision a cluster on demand is something that not many other providers are doing. They usually provide access to single GPUs, which are often in different cabinets or data centers, and it gets a bit complicated to train small to medium models. We provide a full cluster, so you can use an eight-H100 cluster and use the power of the interconnect between these GPUs, and you don't have to commit for a year for that; you're just paying by the hour.
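To put rough numbers on the cost discussion above, here is a small sketch that applies the ratios quoted in the conversation (about one-fifth on compute, one-tenth on egress) to purely illustrative baseline prices; none of these figures are actual rates from Latitude.sh, Megaport, or any hyperscaler.

```python
# Illustrative baseline prices only (not real rates from any provider).
hyperscaler_gpu_per_hour = 10.00   # USD per GPU-hour, hypothetical
hyperscaler_egress_per_gb = 0.09   # USD per GB, hypothetical

# Ratios quoted in the conversation: ~1/5 compute cost, ~1/10 bandwidth cost.
alt_gpu_per_hour = hyperscaler_gpu_per_hour / 5
alt_egress_per_gb = hyperscaler_egress_per_gb / 10

# A hypothetical month of inference: an 8-GPU cluster running 720 hours,
# serving 50 TB of egress to end users.
gpu_hours = 8 * 720
egress_gb = 50_000

def monthly_cost(gpu_rate: float, egress_rate: float) -> float:
    return gpu_hours * gpu_rate + egress_gb * egress_rate

print("hyperscaler:", round(monthly_cost(hyperscaler_gpu_per_hour, hyperscaler_egress_per_gb), 2))
print("alternative:", round(monthly_cost(alt_gpu_per_hour, alt_egress_per_gb), 2))
```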
Speaker 1:Amazing, and then eventually by the second with your containerized piece, which is next level. It's fascinating. And for us, what I loved about our engagement was: I think you and I caught up, then Maddy caught up, then I went to sleep and had a weekend, and I came back on Monday and you guys had provisioned connectivity to Megaport, or at least ports, so our 2,600 customers across our 850 data centers could access you by the time I got in on Monday, in, I think, eight different locations. And every time I log in there seem to be more locations. Our perspective is we want to be an on-ramp provider to you, like we are to any other cloud provider, and the focus for us is constantly giving our customers choice, particularly in this space, which is moving so fast. So thank you for the partnership, and we appreciate how fast you move.
Speaker 1:I'm impressed at how quickly you've delivered all of the platforms you've delivered. It certainly makes sense, having now discussed the fact that you've been building this since you were 16 and everything sort of beautifully led to this moment, but it's exciting. What do you see for the future? I mean, a year and a half ago you wouldn't have been having this conversation, and it's changed a lot. So where are you headed? Are you growing more? Are there more locations? What's the future for Latitude?
Speaker 2:Likewise, we are really happy with this partnership, Michael; Megaport helps us in a lot of ways. Connecting through Megaport, we are strategically putting compute in some data centers where we have good power costs and access to large numbers of cabinets all together, which is getting pretty complicated in carrier hotels. So we are using Megaport to connect our telecom PoPs to our compute PoPs, and we plan on doing that in many more locations where Megaport is available. On top of that, we are excited about MegaIX. Many of our use cases are super sensitive to latency, as we discussed in the beginning, so having access to local providers directly, without going through multiple paths, is pretty important for us, and being able to use the same Megaport port for that helps a lot.
Speaker 1:Well, firstly, we saw that connection in Miami spin up, and then that thing was, you know, pumping traffic. And then the IXs: when we saw you connect to the internet exchanges, again we saw huge traffic throughput, which usually takes time to occur, but we saw it almost instantly every time you've jumped on. That's always been the value prop for Megaport: it's one port, and you can use it to connect to an IX, connect to a cloud provider, connect to your on-premises, connect to a data center, it doesn't really matter. Which is why our customers, and any of the customers listening, can jump inside the portal and connect through the marketplace to Latitude, basically one VXC straight to Latitude from wherever you are. Now, interestingly, on our side we're rapidly upgrading our backbone networks to 400 gigs so we can add 100-gig connectivity from, ideally, as many locations as we can around the planet, not just for yourselves but obviously for every customer. But this AI use case is really surprising in the amount of traffic throughput you're generating; it's one of the biggest use cases for us in terms of seeing tremendous amounts of traffic. So yeah, we're all in. Wherever you land, we'll just follow. You just let us know where you need us, and it will be great to see how customers leverage the platform.
Speaker 1:One other question I had for you is around the data centers that you end up rolling out. We know that heat and power are a huge component with these GPUs; I know that they burn hot, so they're churning through some power. Are you hitting limits of what data centers can provide you? Do you have to look at the styles of data centers that you land in? Are you getting to liquid cooling? Where is that progressing to?
Speaker 2:These data centers that are dense in connectivity are going through issues of power capacity, so a couple of them started pushing us to migrate workloads, and this is something that we don't like to do. So we started to look at alternative data centers, still Tier III-certified data centers, but more focused on compute, and the trade-off is they are not as well connected as the carrier hotels. That's where Megaport comes in: it enables us to connect both data centers in a redundant way, and we don't have to manage multiple providers; we can do all that through Megaport. And GPUs use a lot of power; it's insane compared to CPUs. We used to put 40 physical CPU systems in a single cabinet. For GPUs, there are cabinets where we put two or three systems, so two or three clusters, which is 24 total GPUs running in a full cabinet.
Speaker 1:What does that look like? What's the size of each?
Speaker 2:Each GPU system is usually 4U. And most of these specialized data centers are providing us with high-density cabinets, which not many of the data centers can provide. So we are building out three-phase circuits where we are able to put six or even seven clusters in a cabinet, which makes a lot of difference at scale.
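For a sense of why GPU cabinets hold so few systems, here is a back-of-the-envelope power sketch; the per-system wattages are rough assumptions for illustration, not figures from Latitude.sh or any data center operator.

```python
# Rough, assumed power draws for illustration only.
CPU_SERVER_KW = 0.5     # a typical 1U dual-socket CPU server (assumption)
GPU_SYSTEM_KW = 10.0    # an 8-GPU HGX-class system (assumption)

cpu_cabinet_kw = 40 * CPU_SERVER_KW          # 40 CPU systems per cabinet, as described
gpu_cabinet_kw_low = 2 * GPU_SYSTEM_KW       # 2 GPU systems in a standard cabinet
gpu_cabinet_kw_high = 7 * GPU_SYSTEM_KW      # 6-7 clusters on a high-density, three-phase build

print(f"CPU cabinet: ~{cpu_cabinet_kw:.0f} kW")
print(f"GPU cabinet (standard density): ~{gpu_cabinet_kw_low:.0f} kW")
print(f"GPU cabinet (high density): ~{gpu_cabinet_kw_high:.0f} kW")
```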
Speaker 1:Totally, and I'm assuming not all DCs can provide that at the moment. I mean, what we're hearing is that DCs are either at capacity in many cases and having to build new data centers, or they're at capacity from a power perspective or can't provide the cooling. It's an interesting space. We're seeing lots of investment, and it's all future investment; they're building now for two to three years out. But there are obviously data centers that you've landed in that can provide what you need today.
Speaker 2:Yes, yeah, we will have to manage that until all these data centers get ready. Right, it's been challenging.
Speaker 1:When do you go and build a data center at the bottom of Antarctica or something, to keep it nice and cold? Does it get to that point? Is there a value proposition for building a DC in a country with very low-cost power and then actually moving traffic there to train, or is it not really at that point? Does it still make sense to keep it local, say in the US?
Speaker 2:It will definitely make sense for companies that are building these big AI farms, right. That would be another play that could be interesting. Yeah, we're not there yet.
Speaker 1:Yeah, I get it now. That's cool, all right. Well, did you have anything else you'd like to share?
Speaker 2:Yeah. You asked about what we're building next, and we are really excited about building this future of the internet. So we are investing a lot in markets that are trying to disrupt; we are pretty much into blockchain use cases, and we are hosting many of the high-throughput blockchains.
Speaker 1:And that's a crypto story.
Speaker 2:Yes, correct. And also AI. So we are coming out with a new hardware line which uses new CPUs and new architecture, and we are always trying to be ahead and provide the latest compute possible to power the innovation in these markets. We are using the latest AMD and Intel chipsets, and we're really excited about what people will build with that on our platform.
Speaker 1:And each chipset relates to a different style of compute requirement or GPU requirement; i.e., Bitcoin is different, for example, and is processed on a different platform. Then you've got AI, which can use different GPUs, but there's obviously inferencing, and then you've got other crypto platforms that I think can use different platforms again. How hard is it to manage all the different hardware that you need to deliver for each of these specific use cases?
Speaker 2:We try to think about how we can provide the least amount of different compute possible to cover most, like 95%, of the use cases that people would want Latitude for. So in this new generation we have only about six different hardware specs, but they can cover a lot of cases that use high-throughput bandwidth and are CPU intensive. It goes from 32 gig of RAM up to 2 terabytes of RAM, and from 2 terabytes of NVMe up to 32 terabytes of NVMe. So we are covering a large set of use cases, and we standardize this compute across the globe.
Speaker 1:And is this all on demand?
Speaker 2:All on demand.
Speaker 1:Yeah, billed by the hour, or per second in some cases?
Speaker 2:Correct, yeah.
Speaker 2:We have another project called Build, where we customize hardware for a specific use case, but all of this compute, the six different specs, we provide on demand in all of these locations. We are launching Singapore as well, and we're excited to connect with Megaport there; it's a location that hosts many of these disruptive workloads. We are pretty excited about this location, and with Singapore we will be close to 90% or more of the internet's users.
Speaker 1:I was going to say, where are you seeing the most demand? Obviously I presume it's the US today, but also Asia; I think you had Japan as well, and potentially Australia. How is it all playing out, more from the GPU side?
Speaker 2:Europe is in high demand. There are not many GPU offerings there, and companies that are located in the EU cannot host data outside the EU due to regulation, and there are not as many compute offerings as we have in the US. But the US is pretty hot as well, and from Asia Pacific we see a lot of demand in Tokyo.
Speaker 1:Yeah, I saw that. So from our side, you know we love having you as a partner, as a customer. We're super excited about your growth; we'll be wherever you turn up. So how could a Megaport customer access Latitude? What's the process? I'm assuming they just jump on your website and make magic happen. Is that the easiest way to do it?
Speaker 2:Yeah, I would say it's pretty seamless. We try to remove friction from the platform, so we don't require KYC at all. The customer just goes there, puts in a valid credit card, and in three minutes they can spin up the machines. And if they are running on a hyperscaler, they can go to the Megaport portal, connect to our infrastructure in the locations where we are enabled, and extend the resources from the hyperscalers back to Latitude.
Speaker 1:It's pretty seamless. What we expect to see is, say, someone's got a whole heap of data that could be in a cloud provider. They'll pull that out, potentially across Megaport, land it in your storage, train their model, and leave it back in the storage for inference. What do they do once they've trained the model? What happens from there?
Speaker 2:They can use resources from multiple clouds, right. So if they are used to S3, they can extend the private network from S3 to Latitude compute, train, and then push data back to S3. And when they do that through Megaport, they save a lot in terms of data transfer, since they are not using the public internet; it's also much faster, as they don't have to traverse multiple paths, because it's connected directly with the hyperscaler, and the latency is also very different.
Speaker 1:It's perfect. It's a great use case for us; that's why it's perfect. It's funny, because you're spinning up some pretty complex stuff in three minutes, and everything we try to do is in 60 seconds, so as long as we're not slowing you down, I think we're a great partnership. So, hey, look, we could chat forever.
Speaker 1:But what you've built is clearly impressive. I can see how it's come together: you started your business at 13, then at 16 went on to build this out, and the world's changing so fast. But huge respect to you for doubling down, obviously extremely early, when this AI revolution took hold, getting access to NVIDIA, then building the models, deploying them in all these different locations, and having it ready for the world when it needed it. That's an entrepreneurial spirit right there. So massive respect for what you've been doing. Really impressive.
Speaker 1:And I think anyone listening will love to watch the Latitude journey, and obviously, if anyone's interested in using Latitude, jump on their website. It's latitude.sh, and it's very slick. I was checking it out, and you can go and look at all the H100s and whatever's available, and whatever other compute platforms you decide to add to it, and I'm sure the next NVIDIA chips will end up there as well at some point, in all these different locations around the world. So thank you for the partnership. It's been a pleasure.
Speaker 2:Thank you so much, Michael, for having me. It was a pleasure talking to you, and I'm looking forward to collaborating more with Megaport. Love it, cheers.