Internal S3 Storage for Media Files on Top of Ceph Storage Technology

by Gerhard Sulzberger,
Infrastructure Engineer @Runtastic
As a member of the Infrastructure team, I always find it exciting to work with new technologies in our company. This year we had a request from our internal Media team to create a storage solution for them within our office environment.
Our Media team created around 30TB of video, audio and image files in 2016, and the amount of data is growing faster and faster. There are a lot of 4GB files because that is the maximum file size of many video cameras.
The requirements from the Media team:
- A minimum of 100TB of storage space, easily extendable
- Data redundancy
- 1Gbit/s download & upload bandwidth for multiple concurrent Media team members
- Support for files bigger than 4GB
- Usable with macOS, Windows and Linux
The requirements from the Infrastructure team:
- Easy to scale out
- High availability
- Data redundancy
- Easy to automate
- Technology should also fit for other use cases
Based on the requirements listed above, we started to evaluate which technology would be the best fit for us. After a short evaluation phase, we decided to give the Ceph Object Gateway a try. We could extend the Ceph cluster we were already using for our virtualization infrastructure, which is based on OpenNebula, KVM and the Ceph Storage Cluster. If you've never heard of Ceph before: the official website describes it as a unified, distributed storage system designed for excellent performance, reliability and scalability.
Extending our existing Ceph cluster would be a perfect synergy: our virtualization doesn't need that much space, but the more disks there are, the better its IO performance gets. Media storage, on the other hand, needs a lot of space but little disk IO. With the Ceph Object Gateway, it is possible to create an S3-compatible gateway which is scalable and very easy to automate with our tools, and there are quite a lot of free and commercial S3 clients out there for all the platforms we need. After a short test, we knew that this was the way to go, so we calculated and ordered additional servers and disks.
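To give a feeling for what "S3-compatible" means in practice, here is a minimal Python sketch using boto3 against an internal gateway endpoint. The endpoint URL, credentials, bucket and file names are placeholders rather than our real values; the multipart settings just illustrate that large uploads are split into chunks, so files well beyond the 4GB camera limit are no problem on the storage side.

```python
# Minimal boto3 sketch against an S3-compatible endpoint such as the Ceph Object Gateway.
# Endpoint URL, credentials, bucket and file names are placeholders.
import boto3
from boto3.s3.transfer import TransferConfig

s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.office.example.com",  # internal gateway behind the load balancer (placeholder)
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

# Uploads above the threshold are automatically split into multipart chunks,
# so object size is not limited to 4GB on the S3 side.
transfer_config = TransferConfig(
    multipart_threshold=64 * 1024 * 1024,  # 64 MB
    multipart_chunksize=64 * 1024 * 1024,
)

# Assumes the "media" bucket already exists on the gateway.
s3.upload_file("raw_footage.mov", "media", "2016/raw_footage.mov", Config=transfer_config)
print(s3.list_objects_v2(Bucket="media").get("KeyCount", 0), "objects in the bucket")
```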
Media storage architecture

In the image above, you can see the architecture. At the bottom is the Ceph Storage Cluster. It includes the monitor nodes (MON NODE), which maintain the maps of the cluster state and other important cluster information. The data is stored on the object storage daemon nodes (OSD NODE), which handle data replication, recovery and rebalancing and report back to the Ceph monitors. This layer is connected via a 10G network with low-latency switches.
The second layer is a virtualized environment based on the Kernel-based Virtual Machine (KVM). The KVM hosts and guests are managed by OpenNebula. The Ceph Object Gateways, their load balancers and the SSL offloaders run in this virtual environment.
The third layer consists of the client machines. The office network is connected via 10G, which allows more than one Media team S3 client to use the full bandwidth of its 1G office connection at the same time.
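As a side note to the storage layer described above, the cluster state that the monitors maintain can also be queried programmatically. The following is just a small sketch using the librados Python bindings (the python-rados package), assuming it runs on a host with a readable ceph.conf and client keyring; it is not part of the deployment itself.

```python
# Sketch: query aggregated cluster capacity from the monitors via the librados Python bindings.
# Assumes python-rados is installed and /etc/ceph/ceph.conf plus a keyring are readable.
import rados

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()

stats = cluster.get_cluster_stats()  # totals across all OSDs, as reported by the monitors
print("used: %d GB of %d GB, objects: %d" % (
    stats["kb_used"] // (1024 * 1024),
    stats["kb"] // (1024 * 1024),
    stats["num_objects"],
))

cluster.shutdown()
```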
Deploying the Ceph extension
The configuration of the base system is done by Chef. The existing Ceph Hammer LTS cluster was originally deployed with a Chef ceph-cookbook, but maintenance of this cookbook had stopped by the time we started to extend the cluster. So instead of the ceph-cookbook, we decided to use the ceph-ansible playbooks for the installation and configuration of the Ceph part, because they support the newer Ceph versions much better. Along the way, we upgraded to the latest available Ceph LTS release, which was Ceph Jewel at the time.
In the end, we had an automatically configured Ceph Storage Cluster ready to use.
The Ceph Object Gateway nodes run in our virtualization layer on top of KVM, managed by OpenNebula. The rollout of those nodes is done with Terraform, which creates and configures the HAProxy, NGINX and Ceph Object Gateway VMs within our environment. We can also scale out the whole front-end part with those Terraform plans.
Conclusion
All the components in our system are easy and quick to scale. If we need more space, we just add more Ceph OSD nodes, and if we need more throughput, we just add more Ceph Object Gateways.
On the client side, the Media team can use any S3-compatible client that supports AWS signature version 2, such as CloudBerry Explorer or Cyberduck. It's possible to saturate the 1Gbit/s network connections of multiple workstations at the same time. With the next Ceph release, Luminous, we will also be able to support newer S3 signature versions.
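For scripted access, the signature version can be pinned explicitly. The sketch below again uses placeholder credentials and an assumed endpoint URL; in botocore, "s3" selects signature version 2, while "s3v4" would be the value to switch to once Luminous makes version 4 available.

```python
# Sketch: an S3 client pinned to AWS signature version 2 for the internal gateway.
# Endpoint URL and credentials are placeholders.
import boto3
from botocore.client import Config

s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.office.example.com",
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
    config=Config(signature_version="s3"),  # "s3" = signature v2; "s3v4" after the Luminous upgrade
)

for bucket in s3.list_buckets()["Buckets"]:
    print(bucket["Name"])
```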
In the end, this setup is a win-win situation for the whole company. We have an S3-compatible object gateway in the office which is easy to use from every client, and in the future, it can also be offered to other teams and services within the company.

***