Anyway, thank you so much for joining everyone. As Marcy mentioned, this is a continuation of our really successful IBC panel. We actually didn't have very much time at IBC. So, I'm really excited to be sharing some questions live from the audience while also keeping Richard and Fred on track.
We get very nerdy together and get very excited and just go off on lots of tangents. So it's my job to keep everyone together.
So thank you so much for joining. As I'm sure you all know, now more than ever, social media reach and content production require much faster turnaround times: the ability to produce even more content, as quickly as possible, across different platforms and formats, for vast, diverse audiences. And the challenge that we all face is to maintain both the quality of output and the urgency of publication while maximizing the value of that content.
So, a bit of scene setting about LucidLink and Moments Lab. I can see on that poll that we have a real mix of people together, so we're going to build from the ground up and assume that people might not know anything about either organization. LucidLink is a real-time cloud storage collaboration platform which enables your remote teams, from anywhere in the world, to access their content in the cloud and work on it without any requirement to download files locally. That means a vast improvement in your turnaround time and creative output.
Meanwhile, Moments Lab, where I used to work before joining LucidLink, is a multimodal, real-time intelligence layer for indexing, searching, and discovering your media shot by shot. Every moment within your live media, as well as your media archive, is cataloged and searchable.
LucidLink and Moments Lab have worked together since twenty twenty-three, supporting high-pressure and high-volume workflows such as the Cannes Film Festival, which we'll be focusing on today with our customer Brut, by enabling live and near-live content to be discovered, cut, produced, and shared as quickly as possible. As I mentioned, really maximizing the value of the content in that urgency.
We've mentioned this workflow with Brut, and we now have a really quick case study from our customer themselves highlighting the workflow and their successes. Together, Moments Lab and LucidLink enabled Brut to achieve over six hundred million views across their global social media platforms, a ninety percent year-on-year increase in views across all of their social platforms, including Instagram and TikTok. So Marcy, if we could cue up the video, please.
Brut became the official partner of the Cannes Film Festival three years ago, and also the producer of the festival's television. We wanted to capture live red carpet moments, interviews, and masterclasses, and publish highlights or key moments from Cannes as fast as possible. That's why we partnered with Moments Lab and LucidLink: to capture, index, search, and edit faster.
My mission for the Cannes Film Festival is to retrieve all the images: red carpets, photos, press conferences. The journalist will ask me for the exact sequence he needs. Then I go to Moments Lab, directly set an in point and an out point to extract the clip, and immediately send it via LucidLink so he has it directly in his editing suite. I get the footage, I do the cutting, I send it for editing, and that's it.
Having this integrated workflow allows us to accelerate all of these steps and go live in a fraction of the time previously possible. The files are quickly available on LucidLink and then ingested into Premiere Pro so that our journalists or editors can edit the desired content. The video data is instantly usable in Premiere Pro anywhere, without downloading or relinking media, saving massive amounts of time.
We could publish faster to our social media: the community managers could monitor streams directly on Moments Lab and, based on the sequences that were interesting, retrieve the images directly and then edit them on their side.
With MomentsLab and LucidLink, we have found the perfect combination to minimize the production time of our digital content in Cannes.
The time savings we achieved in the process allowed us to produce and publish more content. And as a result, Brut achieved record-breaking views of its Cannes Film Festival coverage. Previously, we covered the red carpet a bit less because of the production time it required. But now we can do it much more easily, thanks to Moments Lab and LucidLink.
Awesome. Thank you. Well, if anyone has any questions off the back of that, please do continue popping them in the audience chat and we'll field them towards the end. But now, just to delve even more into that workflow, because it's so fantastic, I'm going to ask Marcy to share a little workflow diagram and then ask Fred to walk through it, just to delve further into that workflow and its ramifications.
Yeah, sure. The whole idea of that workflow is that we take live streams as input, and we want those live streams to be searchable and editable as fast as possible. That's actually the value proposition.
Our mission at Moments Lab is to make any moment searchable, to help our users find the right moment they need across livestreams and petabytes of archive in less than two seconds. Those two seconds are really important, but I think we'll talk a bit more about that later on. So from a technical point of view, you basically have livestreams coming in, mostly SRT here, that are then recorded and written as growing files on a LucidLink filespace.
And it means two things. First, you can go live on the web UI in Moments Lab, and you can clip, extract, and reuse.
Also, if you open the panel in Premiere and drag and drop your asset onto the timeline as a growing file, it's a bit like magic: you will see your stream growing on your timeline. It means you're literally glass to glass at thirty seconds between what's happening on the red carpet and what's landing on your timeline, and that's kind of crazy. And what it enables, since we are indexing everything, is that you can find instantly any moment, whether it's happening live or it just happened, maybe an hour or two ago or the day before.
And once you have your footage in Premiere, you can then publish to social media, etcetera.
Cool. Yeah. I do believe the thing here is all about reactivity. It's important for live, which is important for many parts of the business here.
And that capacity to support growing files: if you think about it, a few years ago it was hard to find a system that could manage that. Working with object storage, with S3 for example, or any object storage, you don't have native support for growing files, because once you send a file there, it will only appear at the end of the transmission. So you might say, well, if I have HLS, maybe I can do something and chunk it. But no, Adobe Premiere, for example, is not going to like that. So that's why you need an alternative solution.
And that's why LucidLink works so well here: we can write that file as a growing file and, while it sits on object storage, still have it growing on the LucidLink filespace and on the timeline.
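To make that concrete, here is a minimal sketch of what the writer side of a growing file can look like, assuming a hypothetical mount path and a generic source of byte chunks; it is not LucidLink's or Moments Lab's actual recorder code, just the general append-and-flush pattern:

```python
# Minimal sketch: recording an incoming stream as a growing file on a
# mounted filespace. The mount path is hypothetical; a real recorder
# would pull packets from an SRT listener rather than a Python iterable.
import time
from pathlib import Path

RECORDING = Path("/Volumes/production/live/red_carpet.mxf")  # hypothetical mount

def record(stream_chunks, flush_every: float = 1.0) -> None:
    """Append chunks as they arrive, flushing regularly so downstream
    readers (the indexer, Premiere) see the file grow in near real time."""
    RECORDING.parent.mkdir(parents=True, exist_ok=True)
    last_flush = time.monotonic()
    with open(RECORDING, "ab") as f:        # append-only: bytes are added at the end
        for chunk in stream_chunks:
            f.write(chunk)
            if time.monotonic() - last_flush >= flush_every:
                f.flush()                   # make the new bytes visible to readers
                last_flush = time.monotonic()
```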
Absolutely.
So Richard, I'm going to ask you a quick question: how does that always-on data layer fit into virtually any production tech stack?
Yes. So it's a great question, and I appreciate that Fred described us as almost magic. We do get that reaction pretty regularly from folks who experience LucidLink for the first time, in conjunction with Moments Lab or just by itself as the cloud-native storage collaboration platform. We are always on, and collaborators can access all the files that are stored in cloud storage from anywhere. And the magic there is exactly as Fred described: it's instantaneous.
It is on demand. There is no need to wait for a download or any kind of syncing to your local drive. We essentially act, behave, and mount just like a local drive for all of your application and file access needs. And we are combining this ease of a local file system with the scalability, flexibility, and frankly the cost efficiency and effectiveness of cloud storage, of S3 and Azure cloud storage.
So on top of that, there are specific capabilities like growing files, where a file can be accessed and used in editing and post-production work even while that large file is still being uploaded. So there's quite a bit of magic there.
So that's how we kind of fit in, and we fit in for both: I'll call it this more real-time use case that Moments Lab and LucidLink powered for Brut at the festival, and also, as Fred and I will talk about a little bit more, lots of other production workflows, where we can access any data, whether it's incoming live streams, archives, or work in progress. And we'll go through a few of those scenarios.
I think I saw a question on translation. Yeah. Indeed.
Yes, they were using AI for both transcription and translation. There's a buffer time, so you can basically take thirty seconds of context to get proper transcription quality, and then translate it; translation basically follows on. And if you were to open your Premiere panel, you'd have a Moments Lab panel here, and you would use all the AI-native features, whether it's searching or prompting.
And when you work with a LucidLink filespace, you configure your Moments Lab panel to load the locally mounted LucidLink filespace. It's very, very easy to set up.
We've just had a follow-up question, which is: can you clarify what you mean when you say growing files? Do you want to talk about growing files here?
Good good point.
So a file can be open or it can be closed. Let's say it's closed.
Well, it's like the MP4 that you're dragging and dropping in your Finder or Windows Explorer: it's just there. And I would say maybe ninety percent of files in the world, or more, are closed. Files are open when you start recording something live, and then you've got two solutions. Either you keep the file open, which means more bytes are continuously being appended at the end of the file; that's what we call a growing file.
Or you don't display the file at all, and you only show it at the end of the transmission.
Being able to work with a growing file means you're working with technology that accepts files that are not closed.
And talking about object storage: object storage does not natively support that, because it's meant to scale big time, not for immediate live access. But when you're working with livestreams, and here it's entertainment, but think about live news or sports,
you have to work with open files. Otherwise, you need to wait for the end of the event to start doing your clipping and your highlights, and it means you're missing the momentum of publishing. So that's what a growing file is.
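And the reader side of the same idea, sketched under the assumption that the file lives on a locally mounted path, is to poll the size and read only the newly appended bytes. This is the generic "tail a growing file" pattern, not LucidLink's client implementation:

```python
# Minimal sketch: following a growing file by polling its size and reading
# only the newly appended bytes. This is the generic pattern a player or
# indexer uses to consume a file that is not yet closed.
import time
from pathlib import Path

def follow(path: Path, poll_seconds: float = 0.5):
    """Yield each newly appended byte range as the file grows (runs forever)."""
    offset = 0
    while True:
        size = path.stat().st_size
        if size > offset:                   # the writer appended new bytes
            with open(path, "rb") as f:
                f.seek(offset)
                yield f.read(size - offset)
            offset = size
        else:
            time.sleep(poll_seconds)        # wait for the writer to catch up
```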
Great. Thank you.
So, back to our regularly scheduled... Maybe I'll jump in, because I think Enzo in the chat is asking a natural follow-on question about growing files.
And the question is: if every editor imports a high-res feed, wouldn't that require massive bandwidth? And in a normal situation, you're absolutely right, Enzo, it would.
But this concept of a growing file that LucidLink enables is specifically tackling that problem, that challenge, if you will.
So if you think about a high-res feed: if you need a closed file to work with, or if you don't have LucidLink as an access technology, then you would have that challenge where you would need to wait for all that data to be transported over whatever bandwidth you have available, and it would consume a massive amount of bandwidth during that time. But the magic of LucidLink is that, just like we stream down only the pieces of data that you need from an existing file, imagine that streaming going up in the same way. We are basically streaming it in, I'll call it, bite-sized chunks. And that enables both the growing-file support and a way to import high-res feeds without needing massive bandwidth. Right? We can dive into this more if you'd like, but that's a quick answer to your question.
Back to you, Yermi.
Yeah. Great. Sorry, I've got the Daft Punk song "Harder, Better, Faster, Stronger" stuck in my head now, with that workflow in mind.
So, back to you, Fred, just quickly. Can you explain a little more? You've already touched on it a bit with the AI question, but could you explain in a bit more detail how the Moments Lab AI actually processes and understands the incoming media when paired with LucidLink real-time storage?
Yeah. So there are two ways to ingest media into our platform: as a livestream, or as a file-based asset that could sit on the LucidLink filespace.
The key thing to have in mind, first, and let's take the example with the high-res files you had before, say you have an asset of one hour: the key thing today lies in how you split your content to get very fast processing and analysis of it. So I'm going to share my screen quickly here.
And this is our whole philosophy, and also a strong differentiator: what is a media asset, in our language and for AI systems? It's basically a mix of what we call shots, audio segments, and sequences. For shots, today we're using what we call a content-aware algorithm that splits shots depending on lightness, on camera movements, etcetera. So if you think about shots, it's like, okay: I'm doing this, then this, then going back. Okay, here you've got two shots.
Humans in twenty twenty-five are using roughly six hundred shots to tell a story.
If you go on TikTok, it's going to be three times this amount because you need something to happen every two seconds to keep your audience engaged.
So you have shots, and on all of those shots, here is what we analyze. We detect the type of shot: is it a wide shot, a long shot, a full shot of someone?
We're detecting the faces. Who is it? Is it Tom Cruise at the red carpet? And what is he doing? That's captioning.
Is it Tom Cruise playing the guitar?
Brand logos, reading on-screen text, very helpful.
For example, in news, when you have someone speaking, you get the name that is on the lower third, or physically on the desk where they're talking to you; you have the capacity to read that. And then we have audio, which is basically audio segments.
For audio segments, the first thing you do is called diarization: you try to know who is talking and when, and you reconcile who is talking at what moment.
All of that, we group into sequences. You might have, let's say, Tom Cruise playing the guitar, this actress singing, and this person playing the drums. Instead of having three shots, you would have one sequence which would say: well, this is a concert.
And today we have people searching for specific shots, for sequences, for audio. But that's basically how we split content and use all modalities to understand it, generate vector embeddings, reconcile everything together, and then display that in the UI and make it searchable and promptable.
So that's basically how the system works.
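As a rough mental model of that shot, audio segment, and sequence split, here is a hypothetical sketch in Python. All the field names are assumptions for illustration; MXT's real schema isn't public:

```python
# Sketch of the data model Fred describes: shots and audio segments that get
# reconciled into sequences. Field names are hypothetical stand-ins.
from dataclasses import dataclass, field

@dataclass
class Shot:
    start: float                  # seconds from the start of the asset
    end: float
    shot_type: str                # e.g. "wide", "long", "full"
    faces: list[str] = field(default_factory=list)       # recognized people
    caption: str = ""             # action description, e.g. "playing guitar"
    logos: list[str] = field(default_factory=list)        # detected brand logos
    on_screen_text: str = ""      # OCR of lower thirds, signs, desk plates
    embedding: list[float] = field(default_factory=list)  # vector for search

@dataclass
class AudioSegment:
    start: float
    end: float
    speaker: str                  # from diarization: who talks when
    transcript: str = ""

@dataclass
class Sequence:
    """Shots plus audio reconciled into one semantic unit, e.g. 'a concert'."""
    label: str
    shots: list[Shot]
    audio: list[AudioSegment]
```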
I can give a quick example of what it looks like for one specific piece of content.
Click there.
Sharing this part of my screen.
So if you go there, that's basically how the system looks at the content. You have all the shots here. Okay: this is a medium shot of a person wrapped in a blanket, standing in front of a bed, displaying her brand. Basically, we read that.
We identified the type of shot, but also that those two entrepreneurs, named Matt and Angie Copper, were seen there, because we recognize those faces. So what we do here is summarize everything into a big sequence, which means that if you want a specific shot, or if you want a summary, you're able to read all of that, and the system is able to read through it. I'm going to stop here because I don't want to take too much time, but just to give you a mental image of it. From that point, you can add content to a cart, export it, repurpose it, publish it, etcetera.
But what makes it really interesting when you have a LucidLink filespace here is that you would have exactly the same experience in an Adobe panel, for example. But instead of reading proxy data, everything you see, every picture, every frame, is coming from the data on the underlying LucidLink filespace, meaning that you don't need to download, duplicate, or copy the content at any point. It's immediate access.
Thank you. It's so cool to see it really in context and actually understand how useful metadata can be, instead of just a random bunch of tags thrown at the screen. Which brings me on to my next question. We've gone through a little bit of why this is so particularly suited to live and near-live workflows. Do we want to talk a little about how the combination, specifically, of the tagging and LucidLink's instant access can produce work that much more quickly, and the ramifications of that for the wider industry?
Yeah. So this association is really interesting, because it's all about the momentum I was talking about just before.
And if you think about momentum, of course, the use case for Brut is key. I mean, if you think about it, people are sometimes buying rights and paying extra money to get a thirty-minute extra window after a football game, to actually have the right to publish to digital and social media during those thirty minutes. And why? Because the first to publish is the first to get the audience.
But it's not just a matter of minutes. You know? If you think about it, for a brand, for example, every time there's an event happening, let's say on Instagram, or something new, or you've seen the company where the HR person and the CEO got caught together on screen, or, you know, you remember maybe the blue dress that looks like a golden dress and nobody agrees which it is. Anytime this happens, you've got ten days.
You've got ten days as a brand, as an agency, to actually bounce on that on Instagram and make sure you can leverage the momentum of it. And if you think about it, the time it takes to have the ideas, source content, find content, and retrieve content is so long that you don't have unlimited time to actually leverage that. And I'm not taking into account all the validation loops, etcetera. So, of course, content creators can go very fast, because they don't need a lot of approvals and they don't deal with the same amount of content.
They don't necessarily deal with copyright infringement all the time. That's why, when you start to be a bigger company with more content, you need to be equipped to actually make sure you can leverage that momentum. So, finding clips: the promise we have of finding that moment in less than two seconds works very well, because when you have a LucidLink filespace, you don't need to retrieve it. It's already there.
So you find it using metadata, and then you load it right away in your video editor, or download it to your laptop. Right? And I'm saying downloading, but it's actually not even downloading. It's already there on your desktop.
It just looks like it's been downloaded. By magic.
Exactly.
Yeah, if I may add a little bit more to what Fred shared, I think the magic here is really something that hopefully everybody in the audience can sort of envision a world where you're not spending the majority of your time trying to find the right moments, right, trying to find the shots and the sequences to tell a story.
You're able to think of the story you want to tell in your final product and focus on that part of your creative process. And then, like magic, with LucidLink and with Moments Lab, you're able to find all the actual shots and sequences that support the narrative, support your story, support what you're trying to communicate and share with your audience. Right?
So it's almost like taking the workflow and turning it from this sort of inverted way, where we have whatever assets we can get our hands on and have to construct a narrative, and turning that right side up again: start with a narrative and, using our technologies, find all the shots, sequences, and moments you need to easily support that story.
Yeah, to bounce back on what you were saying, Richard, about making creativity easy. I shared this at IBC, and I will share it again: the way you access your content is going to impact the way you tell your story.
And that happens to us every day, you know, when you're putting together a presentation. Oh, I actually need that picture, but I could not find it. So I'm using another one.
And we often see technology as a way to increase ROI, you know, or improve productivity. But it's not just about that; there's another way to see it. History has been like that. Engines are measured in horsepower, right? But an engine brings you so much more than horses in front of your carriage. And I think it's pretty much the same thing here. So let's take my example.
Every country has video editors in newsrooms, and at some point, you need to talk about inflation. Right? So let's take a French example. Anytime we're talking about price rises, or even taxes or purchasing power, any newsroom will use exactly the same picture.
They'll take someone going into a boulangerie and picking up a baguette, because you know the Big Mac index; well, in France, it's not a Big Mac index, it's the baguette index: the money you pay for your baguette. And of course, when you want to tell that story, you're going to use a close-up shot of a hand taking a baguette.
And guess what? Well, there is one that is indexed in all the newsrooms. So every year, every month, you're always using the same one. And we have exactly the same thing with people in the Middle East.
They need to talk about prices, so they need an oil derrick. And I'm not making that up. Right? It can sound like a cliché, just like the baguette, but it's not; that's what is happening. And you need that shot, but it's always the same one that is indexed, with the Saudi Aramco logo.
And well, if you have a way to actually expand what you can search for, find, and retrieve, you will tell the story in a different way and maybe have more creative impact and quality impact. That does actually segue very nicely into a poll that we were about to run. Marcy, if you can publish that.
What kind of content are you using in your workflows? Is it all new content? Is it existing content? Or is it both kinds of content?
And Rich, while people are answering that, I do have to ask a question off the cuff.
Is there an American equivalent of the baguette inflation graphic that you use? Yeah.
Yeah.
I was just thinking about that. I think that's right.
Right. So I mentioned the Big Mac index. I guess that's kind of an index, but, you know, full disclosure, I don't eat many Big Macs, so I don't quite know what a Big Mac costs nowadays at McDonald's. But I think it would be the dozen-eggs index; the cost of eggs is a big thing here. And, you know, it's gone, I don't know, from three or four dollars for a dozen eggs to probably six, seven, eight dollars, right? So we see that a lot.
And Chad, you're spot on. Gas prices are a big thing too, but gas prices tend to have some nuances: states and cities add a bunch of tax, so you never really know what the true underlying cost of a gallon of gas is. But eggs are a pretty sharp indicator, and they're a staple we all eat. You go in the store and you see the dozen eggs doubling; that's a big thing.
Snickers, that's a good index too. Yeah. But I think the first point is, you know, as a broadcaster you might be stuck, in a way, with using the same standard footage, stock footage, all the time when you're talking about the baguette index.
But, you know, if you can get a little creative with that and find different ways to describe what you're trying to communicate, whether it's inflation or cost-of-living increases, that's where the idea of telling a story or creating a narrative first, and then finding the right footage, the right shots and sequences, to support it is pretty powerful.
Absolutely.
Oh, yeah. Yep. Maria from Argentina. Yeah. Cost of living. Yeah. Totally get it.
Great. So I think the question now is: what's next for the integration between Moments Lab and LucidLink? What are we allowed to disclose? What can we talk about? Which will then bring us on to MXT-2, Fred.
Yeah. The one thing to have in mind is that LucidLink is built with one concept at its core, and I don't want to overlap with you guys, so correct me if I'm wrong, but that is security. And when we built that integration, we needed to make sure we had the proper authentication delegation, because it means you have machines that are working on, and writing to, that storage.
So I think one of the secret sauces of that integration is how we manage that: creating, instancing, a kind of gateway that is able to write to the LucidLink filespace while still staying within the zero-knowledge approach that LucidLink has. It means that we shall not know what is actually on that storage, but we are writing to it.
And that was, I think, the hard part to tackle. But with that gateway, with specific rights and permissions, we're able to keep that concept end to end.
The second part was more about live, where you need a smart way to write to a growing file on that space; using different technologies, we could achieve that. And we implemented something that is very unique to keep, again, that security end to end. It means the system writes the files and also has the capacity to write the metadata along with the file on the LucidLink filespace.
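A minimal sketch of that "metadata along the file" idea, assuming a hypothetical sidecar convention where indexing results land in a JSON file next to the media on the filespace; the naming convention and fields are illustrative, not the actual integration format:

```python
# Sketch: drop a JSON sidecar next to the media so the index travels with
# the asset. The sidecar convention and metadata fields are hypothetical.
import json
from pathlib import Path

def write_sidecar(media_path: Path, metadata: dict) -> Path:
    """Write <asset>.<ext>.json next to the media with the indexing results."""
    sidecar = media_path.with_suffix(media_path.suffix + ".json")
    sidecar.write_text(json.dumps(metadata, indent=2))
    return sidecar

# Hypothetical usage; in practice the path would sit on the mounted filespace.
write_sidecar(
    Path("red_carpet.mxf"),
    {"shots": 612, "faces": ["Tom Cruise"], "language": "fr"},
)
```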
Rich, same question.
Yeah, I think what Fred's describing just illustrates how deeply we're partnered here. And for those of you who don't quite know what the zero-knowledge concept is, allow me to elaborate a little more. At LucidLink, we think of the security and privacy of your data as being very, very important.
Just as an example, and I know this is a little bit outside of our creative and media and entertainment workflows, we have customers who are using LucidLink to manage data for healthcare and data for the government. And I won't mention the agency in the US that uses us, but it's a three-letter-acronym agency, and every one of us who has an income in the US has to deal with this agency, right?
So there's a lot of very highly private data being accessed. But maybe to bring it closer to home:
Those of you who might have worked on, maybe, a streaming service show, or even a major motion picture production, know that leaks are a really big deal, right? People don't like leaks, and your clients, your own companies, the studios, I'm sure you want to keep those leaks to a minimum. And that's part of the value of what LucidLink brings: we don't have the ability to actually see what's in your storage.
We enable this incredibly fast access, uploads and, well, not downloads, but access to the data stored in the cloud, yet we do not have access to the actual data itself. It's fully encrypted, and it's not just normal encryption; it's encrypted in a way that only you have the keys to your encrypted storage. And so it did present a little bit of a challenge when Moments Lab and LucidLink started working together, because, as you can imagine, for Moments Lab to do their magic, to index and create this incredible multimodal search capability on all the content that you have, Moments Lab has to see the data.
So, in a way, we worked together to create this highly secure way to access your data that still preserves that zero knowledge, so that LucidLink, because we're not actually processing the data, still doesn't have any access to it. But Moments Lab, because Moments Lab is processing and working with your data, with your permission obviously, has access.
And we've been able to preserve the zero-knowledge capability in that way.
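For intuition, here is the zero-knowledge principle in miniature, using AES-GCM from Python's cryptography package as a stand-in. This is not LucidLink's actual scheme, just an illustration of a key that only the customer holds, so the storage provider only ever sees ciphertext:

```python
# Illustration: client-side encryption where the key never leaves the
# customer. The cloud stores only ciphertext; without the key it is opaque.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # stays on the client, never uploaded
aesgcm = AESGCM(key)

nonce = os.urandom(12)                      # unique per encryption
plaintext = b"frame data from red_carpet.mxf"
ciphertext = aesgcm.encrypt(nonce, plaintext, None)   # what the cloud stores

# Only a holder of `key` can recover the frames:
assert aesgcm.decrypt(nonce, ciphertext, None) == plaintext
```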
So that's all I wanted to add to that. Maybe it's just an illustration of how closely we're partnering in terms of bringing very valuable capability to our joint customers and users.
Fred, does that bring us on nicely to talking a bit about MXT-2?
Yes. We have a video understanding pipeline, a model, that is called MXT-2.
This is, well, the second version of it. Actually, it's a sub-version, because we had an intermediary version. That's what I was showcasing before.
MXT is the core technology that does the indexing. You can access that technology as a product in two ways. We have something we call Media, which is basically a UI.
You bring your own storage, ideally a LucidLink filespace, for example, and then you can start working. It's like having your own Netflix, you know, but with your content, in a UI, in a panel. And MXT is there to index. That's what we call video understanding: splitting shots, understanding the type of content, then storing it, making it searchable, etcetera.
The second version has the capacity to understand the content a little more deeply and kind of predict what people will be looking for. For example, I was showing entertainment shows before.
In the US there's Shark Tank; in the UK you've got Dragons' Den; and you have other shows of that type where entrepreneurs are pitching their ideas. And most of the time, if you want to repurpose or post highlights of those shows, what you'd be looking for are the emotional moments or all the pitches. I'm taking the example of the partnership we recently announced with Banijay. Banijay is the production company that has the license for Peaky Blinders and MasterChef, for example.
When you're working on MasterChef, well, you want to know where the dishes are being presented to the judges; that's really the highlight. And the system, MXT-2, on top of generating metadata and describing shots, will actually find those dishes. We even classify whether it's a fish dish or a veggie dish, whether it was ranked first or bottom, etcetera. So those are what we call fine-tuned models, sometimes industry-specific ones, that help creators and video editors find their content even faster.
Moving further on, that also enables prompting capabilities. That's one thing we released at IBC: an agentic flow where you can basically prompt that whole pipeline. For a few months, our users had been starting to say, okay, I really like your search engine, but could I prompt in that search engine?
I said, well, that's not exactly a search engine, but I get the point. I mean, they were kind of looking for: okay, I want a shot of this, I want a shot of that.
And at some point, they were saying: maybe you tell me the story, and you can suggest some shots.
And that's what we did with the discovery agent that is enabled by that MXT-2 technology. Maybe we can share a quick video of that, Marcy, if you have it.
AI tools have changed the way we search.
We've gone from typing keywords into search engines to simply asking for what we need.
But for teams working with video, the search evolution hasn't kept up.
Hours of great footage remain hidden behind outdated folder structures and meaningless metadata, or reliant on that one colleague who knows how and where to look.
It's time for change.
Meet the MomentsLab discovery agent, your personal research assistant for video that knows your entire media library.
Powered by MXT Multimodal AI Indexing, the Discovery Agent finds the right clip, quote, or scene in seconds.
Simply ask it a question, request a theme, or describe what you're looking for, and it'll deliver exact moments.
It can suggest what to create, uncover hidden gems you didn't even know you had, and bring context to your stories, pulling insights from beyond your own archive.
So whether you're repurposing footage for a recap, a best of compilation, trailer, documentary, shorts, or reels, forget about all that time spent scrolling and scrubbing. Find the moment, not just the file, with the MomentsLab discovery agent.
So yeah. We turned that request into reality, and we now have what we call a discovery agent, which is basically the capacity to prompt, to talk to the hundreds of thousands of hours of video library that you have, and turn that into a new story. You can search like you think. And the more context you provide, the more the system does what we call reasoning.
Some of you may have tried Perplexity. It has something called Deep Research, and when you're doing deep research, you're enabling the system to actually query the Internet to expand its knowledge.
That's why you have a loading bar on Perplexity when it does that. Well, it's exactly the same here. And for the fun story: as a CTO, I was quite against it querying the Internet at the very beginning.
And somehow, in the initial version, one of the engineers let that happen, and we saw the results in our alpha version. It was crazy, because the system was actually getting way beyond the LLM's training dataset. And it became one of our main value propositions: this capacity to reason, to get inspiration from a Twitter or Reddit thread when it applies, when you authorize the system to go there, or from other sources. It's been in beta for six months, and now it's being used; I think we're at roughly ten to twenty thousand prompts, which is causing a new challenge for our product team. But that's another topic: how do you understand prompts at that scale?
The system basically works on the assumption that most people don't yet know how to prompt. So the first thing it does is rewrite the prompt and gather extensive knowledge. If someone is building a trailer for MasterChef, we assume the system might need more information about MasterChef. Let's say, as I did the other day: okay, I want the top ten dishes from MasterChef UK where there is basil.
Don't ask me why I was searching for that; I was just searching for something. Well, chances are the Internet may have been talking about that before. So the first thing the system does is query anything that might already exist.
Then it will actually build and suggest a narrative, if the user asks for it, and then suggest moments that would match and would work.
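Sketched in Python, that flow looks roughly like the following. Every function here is a hypothetical stand-in, and the ranking is a naive keyword-overlap substitute for the real embedding search, since the agent's actual internals aren't public:

```python
# Hypothetical sketch of the discovery-agent flow: rewrite the prompt,
# optionally pull outside context, then rank indexed moments.
def rewrite_prompt(user_prompt: str) -> str:
    """Expand a terse prompt; the real system adds show/season/style context."""
    return user_prompt + " MasterChef UK dishes judges"

def fetch_outside_context(prompt: str, allow_web: bool) -> list[str]:
    """If authorized, query the web (articles, social threads) for inspiration."""
    return ["web snippet: viral basil dishes on MasterChef UK"] if allow_web else []

def suggest_moments(prompt: str, context: list[str], moments: list[dict], k: int = 10):
    """Rank indexed moments against the enriched prompt (keyword stand-in)."""
    terms = set((prompt + " " + " ".join(context)).lower().split())
    return sorted(moments,
                  key=lambda m: len(terms & set(m["caption"].lower().split())),
                  reverse=True)[:k]

def discovery_agent(user_prompt: str, moments: list[dict], allow_web: bool = True):
    enriched = rewrite_prompt(user_prompt)          # step 1: enrich the prompt
    context = fetch_outside_context(enriched, allow_web)  # step 2: outside context
    return suggest_moments(enriched, context, moments)    # step 3: rank moments

# Toy usage against two "indexed" moments:
index = [{"caption": "close-up of a basil dish presented to the judges"},
         {"caption": "wide shot of the red carpet"}]
print(discovery_agent("top basil dishes", index)[0]["caption"])
```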
So, yeah. I saw you're asking a question about storage. It's nothing to do with the agent here, but the answer is yes, with limitations. For example, growing-file capabilities can only work on the LucidLink filespace. That's really the only technology mentioned here that can achieve that.
And I think also the speed of reading assets on LucidLink is especially fast.
That brings me to another point on MXT-2: we all know that indexing and AI take compute.
And the reality is, we had a customer from the US that had a hundred and eighty thousand hours of video. You can imagine that when you want to read through that and analyze everything, it's going to take a lot of compute. But that's not actually what is very slow. What is slow is reading the files.
And if you think about it: at first, they had to digitize tape. Okay. Then they had to move it to LTO, then they had to move it to object storage. But object storage is super slow to read.
If you think about it, it's milliseconds: anytime you make a request, you have, like, a hundred milliseconds, which is super slow. And for AI runs, it means maybe you're going to consume a lot of GPU, but the system is going to spend a lot of time waiting to retrieve and read the files, and most of the time we're not aware of that. But the reading speed impacts your compute, at the end of the day, and your speed of analysis. So again, reading from a LucidLink filespace is way faster, and at the end of the day you're consuming less energy, because it's faster to read.
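As a back-of-envelope illustration of that point (the request count and the cached-read latency are illustrative assumptions, not measurements; only the ~100 ms object-storage figure comes from the discussion):

```python
# Why per-request latency dominates indexing: many small reads multiplied
# by first-byte latency become hours of idle compute.
requests = 200_000                 # assumed: small ranged reads over an archive
object_store_ms = 100              # ~100 ms per request, as Fred describes
cached_fs_ms = 5                   # assumed latency via a caching filesystem

to_hours = lambda ms: ms / 1000 / 3600
print(f"object storage wait: {to_hours(requests * object_store_ms):.1f} h")   # ~5.6 h
print(f"cached filesystem wait: {to_hours(requests * cached_fs_ms):.1f} h")   # ~0.3 h
```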
Yeah, and let me jump in here, if I may; there are quite a few questions, so I'll try to address them in sort of reverse order. As Fred mentioned, there's now this value prop, both in the solution that Moments Lab and LucidLink are creating here and maybe just in general terms, right? The notion of getting AI access to the data that it has to either do inferencing on or, in other situations, do training on is super important.
At the scale that we're now approaching with AI and large models, the cost of compute time is really becoming one of the dominant factors. So the wait time, the total cost, and the efficiency of that system in being able to get to the data is super important. And I think that kind of speaks to Kyle's question, where we're talking about online, nearline, and offline storage, for sure.
But you still have this value proposition that applies to nearline and offline, even if it's not something you need for instantaneous production, because you have this cost and the need to fully index it. Right? And as more and more of your content gets put into nearline or maybe offline storage, you still want to be able to access and monetize or leverage all that existing content. So that's where I think there's still a really strong value proposition across the different tiers of storage and the different types of content.
Just to go back a little, there were a couple of questions about whether we can provide LucidLink access, or the Moments Lab and LucidLink joint solution, on other types of storage.
At this time, it's really S3 and Azure Blob. Those are the main types of object storage that we work with.
To answer your specific questions about some of the other large cloud storage providers out there: we are exploring those, and I'm happy to chat one on one about them. I'm not ready to talk about that right now, but happy to take feedback and input on which ones are important. And we're always looking to unlock our capabilities; clearly, there's lots of cloud storage out there, but they're sort of proprietary, right? All the ones that were mentioned are somewhat closed, versus the open standards, I guess, of S3 and Blob.
Do we want to... I'm just looking at, A, the time, and B, some of these questions.
Do you want to talk a little bit more about hybrid data setups as well, Rich?
Sure, sure. Let's talk about that. Oh, I'm sorry.
Sorry, I interrupted you.
Yeah, I was just thinking of some of the public questions.
Yeah. I'm trying to find the question. Hybrid data setups?
Would you be able to connect LucidLink to local storage as well, or does it only work with object storage? Would it allow for hybrid data setups, or would you need a complete copy on the cloud storage? We can hear that from Rich and also from Fred.
Yep. Yep. So I'll get us started. From the LucidLink perspective, we can support on-prem storage in the sense that if you have on-prem S3, for example, there are solutions out there. If you're talking about an on-prem file system, that's a little bit trickier for us. But again, if you have a specific example, please feel free to reach out to us and we can explore possible solutions with you. And I'll turn it over to Fred for the Moments Lab perspective.
Yeah. I think the ideal scenario is when you have everything referenced on the LucidLink filespace; then, of course, out of the box it's going to work. We have a few examples where we deploy a kind of gateway that is able to be compliant with other types of storage. Thinking, for example, of NAS: NAS covers a very wide range of products, and you always have that question of firewalls and so on. So happy to also get into a one-on-one conversation on this one.
Yeah. And I'm scanning the questions, and I think there was one about file locking.
Great question.
You know, the kind of multiplayer collaboration that we have right now is really: hey, we all have the file open. And that is something that, if we step out of the AI piece for a second and just talk about real people, real human beings, collaborating,
LucidLink absolutely supports, right? We also support specific needs for file locking, and it's supported on the Windows operating system. So if your local application is running on Windows and it requires file locking, then yes, we have that capability.
As of right now, we don't have that if you're running on macOS.
So, you know, again, if that's a need, please reach out to us; I'd love to hear more about it and see how we can actually meet that need for you.
Regarding this latest question on the discovery agent and growing files: I like that question.
I think we didn't really push that, but yes, it does work. It works for the kinds of AI features that run on live. Take a summary, for example.
We're not generating summaries on live streams; we wait for the end to produce summaries and sequences. But, for example, the agent has the capacity to search through faces that are being recognized in real time, or transcription that is being done in real time, and suggest moments that happened a few minutes or seconds before, indeed.
I'm reading through the questions. Thank you so much for asking all of them.
I think we're going to struggle to answer all of them, but if there are any outstanding inquiries at the end, please do get in touch with us via our websites, and we'll try to answer them.
Following on from growing files: Premiere primarily supports growing files in an MXF container. Is that what we used in the Brut case, Fred?
I would specify, yeah, OP1a, to be even more precise. So I think you can take that answer as a yes, Ruta.
Great.
Okay, we only have time for a couple of questions left. Rich and Fred, pick your favourite and answer quickly.
Maybe I'll just take the chance to ask a thought-provoking question to the audience. Obviously, there are different types of storage, as mentioned earlier: nearline and online.
But then there's also this notion of real time, as Fred described, with some of the broadcasting and the real-time live productions that we have.
I would love to ask the audience: what happens in a potential future world when actual production and post-production fully converge? Right? So, the notion of having to shoot, then work with the footage, have editing happening, with work and collaboration being done by different people, maybe in different locations; that's kind of how we work today. But now there's the advent of AI, with technologies like LucidLink's to provide real-time access, and with Moments Lab's technologies to provide really smart agentic-type workflows.
Is there a future where all the creators out there go straight from shoot to final product, where the workflow to create the final product is driven largely by your prompting and your creation of AI agentic workflows? Right? There's a lot of technology in place now. There's the ubiquity of the cloud. There's the incredible rate of content creation and capture that's happening. There are direct-to-cloud capabilities emerging from a hardware perspective. And then there's all this incredible stuff that companies like LucidLink and Moments Lab are trying to pave the path for, in terms of production, indexing, VFX, and kind of AI-driven reshoots and things like that.
So, is there a future world where that time to final product is almost instantaneous?
Join LucidLink and Moments Lab for a practical look at how real-time access and AI-powered automation are reshaping creative workflows.
Moments Lab's multimodal real-time intelligence layer indexes and searches media shot by shot, making all content quickly accessible and searchable. LucidLink's cloud-native storage collaboration platform allows remote teams to work on media files without downloading them, significantly speeding up workflows. We’ll demo how they work together to enable you to deliver content and coverage faster than ever.
We’ll discuss how AI and cloud solutions enable faster and more efficient content production, particularly in high-pressure environments where quick turnaround is crucial — specifically how Brut Media tackled its record-breaking coverage of the Cannes Film Festival, from live feeds and AI tagging to instant global editing, all without downloads or delays.