Like many other Development Operations (DevOps) teams in software today, LucidLink uses Jenkins as our core platform for continuous integration and testing.
And like many other DevOps teams, managing and backing up the underlying storage is a challenge and a distraction.
Currently LucidLink supports five different OSes, which translates into 12 different builds, and over time our build artifacts have grown in size.
The result is that every two to three days, we end up with around 30 GB of archived artifacts.
We originally did our best to plan, estimate, and pre-allocate the storage we thought we would need, as per best practices. We would often choose not to archive things we deemed unnecessary in order to save storage space. However, hitting the limits of traditional storage is inevitable, and before long we were looking at the time- and resource-consuming task of migrating our master Jenkins instance just to increase its storage.
Wouldn’t it be great if we could simply mount an object store and make use of its elasticity, durability, and low cost instead? No existing solution was really any simpler; each required significant changes to our infrastructure and added a lot of complexity.
Luckily for us, LucidLink is in the business of building a distributed file system that uses object storage as a backend and supports all the OSes we need (including Linux)! And with our beta headed out the door, it was time to eat our own dog food for critical business applications. In fact, it was a version of this very challenge that sparked the idea for LucidLink in the first place.
What follows is the step-by-step process of implementing LucidLink in our very standard Jenkins instance.
From a 30,000-foot view, implementing a LucidLink filespace is pretty simple. We leverage AWS S3 as the back-end storage, overlay a distributed, streaming architecture, sell it as a subscription service, and expose it on your devices as a configurable mount point. Here is what you need.
There are three major init systems in the Linux world: System V, Upstart and systemd. They are all completely different.
System V is the oldest and its scripts are stored in /etc/init.d/.
Upstart is something in between: it supersedes System V, but it has itself been deprecated in favor of systemd. Upstart's scripts are located in /etc/init/.
systemd is the newest init system.
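If you are not sure which init system a given host is running, a quick heuristic is to check for systemd's runtime directory and then ask the init binary itself. This is a rough sketch, not an exhaustive detector:

```shell
#!/bin/sh
# Rough heuristic for identifying the init system on a Linux host.
# Assumptions: systemd exposes /run/systemd/system, and Upstart's init
# responds to --version; anything else is treated as System V.
init_system() {
    if [ -d /run/systemd/system ]; then
        echo systemd
    elif /sbin/init --version 2>/dev/null | grep -q upstart; then
        echo upstart
    else
        echo sysv
    fi
}

init_system
```

On our Ubuntu 14.04.5 master, this reports upstart, which is what drove the design below.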
Our Jenkins master machine runs Ubuntu 14.04.5 LTS, which supports the following two init systems: System V and Upstart.
Unfortunately, Jenkins’ service configuration is written for System V (/etc/init.d/jenkins).
Supporting System V is a bit harder and not as flexible. For this reason, we implemented the Lucid service as an Upstart service.
Upstart services may depend on each other, but they cannot depend on System V services (nor can System V services depend on Upstart services). We, however, need to start the Lucid service before starting Jenkins. To work around this, we first disable Jenkins' automatic start and then let the Lucid service manage the Jenkins service.
We have a script to share upon request: lucid.conf – the actual service configuration; note, that this is an Upstart service configuration (and it also manages the jenkins service).
The script is pretty straightforward and self-explanatory. A short summary:
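To give a feel for what such a configuration looks like, here is a minimal hypothetical sketch of an Upstart job that mounts the filespace first and then manages Jenkins. The binary path and the mount-wait loop are our assumptions for illustration; the actual lucid.conf is available on request:

```
# /etc/init/lucid.conf -- hypothetical sketch, not the actual configuration
description "Mount the LucidLink filespace and manage Jenkins"

start on filesystem and net-device-up IFACE!=lo
stop on runlevel [016]

# Main process: the Lucid daemon (binary path assumed for illustration)
exec /var/lib/jenkins/Lucid daemon --config-path /var/lib/jenkins/lucid/.lucid

post-start script
    # Wait until the filespace is actually mounted, then start Jenkins
    while ! mountpoint -q /var/lib/jenkins/builds; do sleep 1; done
    service jenkins start
end script

pre-stop exec service jenkins stop
```

The key idea is that Jenkins is started from the Lucid job's post-start stanza, so it never comes up before its storage does.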
Jenkins' service is System V and the host supports the Upstart init system.
Jenkins -> Manage Jenkins -> Configure System -> Home directory -> Advanced -> Build Record Root Directory set to
update-rc.d jenkins disable
./Lucid daemon --config-path /var/lib/jenkins/lucid/.lucid &
./Lucid init-s3 --fs <name> --access-key XXXX --secret-key XXXX --https --s3 <region> --root-path /var/lib/jenkins/.lucid
./Lucid link <name> --mount-point /var/lib/jenkins/builds --root-path /var/lib/jenkins/lucid/.lucid
Note: make sure there’s no running Lucid process.
service jenkins stop && service lucid start
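As a guard for the note above, a small pre-flight check can refuse to switch services while a Lucid process is still running. This is a sketch that assumes the process is named Lucid; the actual switch-over command is left commented out so the check can be tried safely on its own:

```shell
#!/bin/sh
# Pre-flight check: only switch services if no Lucid process is running.
# Assumption: the daemon's process name is "Lucid".
if pgrep -x Lucid >/dev/null 2>&1; then
    echo "Lucid is already running; stop it before switching services." >&2
else
    echo "no running Lucid process found"
    # Safe to switch over:
    # service jenkins stop && service lucid start
fi
```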
With no change to our workflow, and only a few minimal configuration tweaks, we now have highly durable and elastic storage where we can store and access our build artifacts.
This approach could be applied to any process or utility in the DevOps space where you want to elegantly replace local storage (physical, EBS, etc.) with object storage.
We believe cloud object storage has the power to fundamentally change the way individuals and businesses store and access their files.