About LucidLink

A LucidLink retrospective, and a glimpse of the future

January 15, 2020, marks LucidLink's fourth (official) anniversary: four years of growing it from the seed of an idea into a real company, and of making a difference in the way companies access their data.

When you work for a start-up, it is easy to focus on what’s missing while operating in that frenetic survival mode, building and growing something from nothing, concentrating on the next milestone and the next feature.

This year we made sure to pause over the holidays and New Year break to raise ourselves out of the morass to survey (and celebrate) some of the successes and progress we’ve made. Reviewing the past is also the perfect time to contemplate the future, and I’d like to share a couple of the highlights here.

From General Availability (GA) to hundreds of customers

We were told by many that our goal was neither possible nor practical, yet a few supported us and shared in our vision. Fueled by a desire to prove the naysayers wrong, we started coding. By the end of 2018, we had assembled an incredible team of engineers that created a reliable beta product, and our nascent sales team had introduced it to a handful of customers, who were using it in production.

As word spread over the ensuing year, we experienced over 3,000% growth of data being accessed via our service and ten times the number of customers. The LucidLink family doubled in size. (Ain’t early-stage company growth figures grand?)

All joking aside, there is something humbling and satisfying (yet terrifying) about having customers place their trust in you to secure a critical component of their business.

Adding enterprise features

After delivering on the core technology in the form of a cloud file service allowing full control over a “bring your own storage” SaaS model, we attacked a few of the most often requested features.

First, we implemented no-overhead snapshots, which preserve a point-in-time view of an entire bucket (as opposed to versioning individual objects).

This was closely followed by user access controls, which grant individual users access to specific files or folders, all while maintaining our strict “zero-knowledge” encryption model. (Come for the streaming, stay for the security!)

Finally, we made good on our commitment to support global file locking — the mechanism for checking out portions of a larger model so that collaborators can work together without stepping on each other’s edits. (See the AEC use case below.) The finishing touches on that feature were completed in Q4.

Expanded partnerships and use cases

Throughout the year, we qualified several different cloud providers and on-prem object storage vendors. The fact that most of the industry is compatible with the AWS S3 API made this a bit easier. Still, we certainly observed many differences in the ability to scale diverse workloads. (Not all clouds are created equal — but more on that in a later blog post.)

You may have noted that I qualified “most of the world” above. We rounded out the year by rolling out support for another major cloud player, thereby supporting nearly all object storage vendors. Almost. (Looking at you, Backblaze.) We actually haven’t officially announced this yet, and I better not reveal all the details or Marketing will kill me. Stay tuned for further info on this front!

Of course, throughout all of the release cycles, continuous improvements to stability, performance, and usability provided a steady drumbeat. Our DNA, stemming from high-performance, data center-class SAN enterprise storage, runs deep. (We publish the changelog on our support site in case you are interested.)

From day one, customers expressed their desire for high-performing cloud storage that they could simply “mount and use as a local disk.” Our vision of a streaming cloud file system with the performance to be used anywhere definitely struck a chord. We worked with dozens of customers as they explored a variety of use cases.

Decoupling applications from data in terms of proximity opens the door to profound change regarding how companies manage their storage. In other words, users can make storage decisions independently from considerations around where and how that data must be accessed, and without having to constantly move, copy, or sync that data. This is the “big idea” behind LucidLink.

The workflows that benefit most from this paradigm involve distributed teams collaborating on large files or big data sets. Highly distributed teams have different requirements and must navigate issues of sovereignty, proximity (distance-to-data), and infrastructure differences when dealing with home users, branch offices, and HQ.

We began supporting customers working with Adobe products collaborating on media production, CAD/CAM/BIM software in the Architecture, Engineering, and Construction (AEC) space, radiology imaging in Healthcare, and security video footage combined with analytics.

Spoiler alerts!

A high-level contemplation of the past provides the perspective to consider a roadmap of the future. The following is the overall direction we plan to take our technology as well as a couple of specific features that are either in the current release cycle or on the near-term roadmap.

Since Filespaces is a bring-your-own-storage SaaS model, integrating multiple vendors, regions, or even a mix of on-prem and cloud-based services is a natural extension of our core technology. LucidLink will continue to blur the line between local access and securely consolidated storage built on the most modern protocols available in cloud infrastructure.

Naturally, getting from here to there involves several smaller steps. As we continue to make the service more powerful and better performing, we remain focused on providing the best possible user experience. To that end, here are the latest things we’re working on:

  • Active Directory integration – The first step, as noted above, was implementing a user authentication model compatible with our encryption model. Next, we will tie that to the most popular authentication services, starting with AD.

  • Bigger, faster, stronger – scale up and scale out to support massive datasets. Why? Because as datasets grow, the metadata associated with the files becomes a big-data problem in its own right. We will extend our streaming model to the metadata service as well, maintaining performance and usability while accommodating hundreds of millions of files.

  • Any-to-any cloud replication – We will enable access between vendors, regions, or hybrid environments. Users who want to work with a combination of vendors due to locale or just to make sure their eggs are not all in the same basket will be able to do so easily.

Let us know what would be killer features for you — just leave a comment here or reach out directly. We love the feedback, and I promise that your voice matters!

Of course, that is not all. These are just the highlights. But as we embark on a new year and more good things to come, we’d once again like to express our deep and sincere thanks to all of the people who have made this journey possible.