Dashboards in Ceph have always been a bit of a problem. In the past, I first tried to deploy and run Calamari, but it was a complete failure. I talked about my misadventures in this blog post, where I also suggested a far better solution: Ceph Dash. But now, with the release of Luminous, Ceph is once again trying to ship its own dashboard. Will it be good this time?
How to migrate Ceph storage volumes from Filestore to Bluestore
In my two previous posts about the new Ceph 12.2 release, named Luminous, I first described the new BlueStore storage technology, and I then upgraded my cluster to the 12.2 release. By default, Ceph can run OSDs using both FileStore and BlueStore, so existing clusters can be safely migrated to Luminous. In the long run, however, users who have previously deployed FileStore are likely to want to transition to BlueStore in order to take advantage of the improved performance and robustness. However, an individual OSD cannot be converted in place: the "conversion" is, in reality, the destruction of the FileStore OSD and the creation of a BlueStore one, while the cluster each time takes care of evacuating the old OSD, replicating its content onto other OSDs, and then rebalancing the data once the new BlueStore OSD is added to the cluster.
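As a rough illustration of that flow, here is a minimal sketch (Python wrapping the standard ceph and ceph-volume CLI) of replacing a single FileStore OSD with a BlueStore one on a Luminous cluster. The OSD id and the backing disk are made-up placeholders, and the exact ceph-volume flags should be checked against your version before trying anything like this on a real cluster.

```python
#!/usr/bin/env python3
# Minimal sketch: replace one FileStore OSD with a BlueStore OSD.
# OSD_ID and DATA_DEVICE are hypothetical placeholders for this example.
import subprocess
import time

OSD_ID = "3"              # the OSD to convert (placeholder)
DATA_DEVICE = "/dev/sdb"  # the disk backing that OSD (placeholder)

def run(*cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# 1. Mark the OSD out so the cluster evacuates its data to other OSDs.
run("ceph", "osd", "out", OSD_ID)

# 2. Wait until all placement groups have been replicated elsewhere.
while subprocess.run(["ceph", "osd", "safe-to-destroy", f"osd.{OSD_ID}"]).returncode != 0:
    time.sleep(60)

# 3. Stop the daemon, unmount the FileStore data dir and wipe the disk.
run("systemctl", "stop", f"ceph-osd@{OSD_ID}")
run("umount", f"/var/lib/ceph/osd/ceph-{OSD_ID}")
run("ceph-volume", "lvm", "zap", DATA_DEVICE, "--destroy")

# 4. Destroy the old OSD entry (keeping its id) and recreate it as BlueStore;
#    the cluster then backfills data onto the new OSD and rebalances.
run("ceph", "osd", "destroy", OSD_ID, "--yes-i-really-mean-it")
run("ceph-volume", "lvm", "create", "--bluestore", "--data", DATA_DEVICE, "--osd-id", OSD_ID)
```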
How I upgraded my Ceph cluster to Luminous
After the release of the latest version of Ceph, Luminous (v12.2.x), I read all the announcements and blog posts, and based on the list of interesting new features such as BlueStore, I decided to upgrade the Ceph cluster running in my lab. This post walks you through the step-by-step procedure to upgrade a Ceph Jewel cluster to Luminous.
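To give an idea of the overall sequence (the detailed procedure is in the post), here is a minimal sketch assuming the ceph CLI on an admin node; the per-node package upgrade and daemon restarts are only described in comments and are left to your usual tooling.

```python
#!/usr/bin/env python3
# Minimal sketch of the Jewel -> Luminous upgrade sequence (cluster-wide steps only).
import subprocess

def ceph(*args):
    print("+ ceph", " ".join(args))
    subprocess.run(["ceph", *args], check=True)

# 1. Keep the cluster from rebalancing while daemons restart one by one.
ceph("osd", "set", "noout")

# 2. On every node: upgrade the ceph packages to 12.2.x, then restart the
#    daemons in order (monitors first, then OSDs), e.g.
#      systemctl restart ceph-mon.target
#      systemctl restart ceph-osd.target
#    (done with yum/apt, ceph-deploy or your configuration management,
#     not from this script)

# 3. Once all daemons report 12.2.x, finalize the upgrade.
ceph("versions")                                   # available from Luminous on
ceph("osd", "require-osd-release", "luminous")

# 4. Allow rebalancing again.
ceph("osd", "unset", "noout")
```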
The new Ceph 12.2 Luminous and its BlueStore storage backend
With the release of Ceph Luminous 12.2 and its new BlueStore storage backend finally declared stable and ready for production, it was time to learn more about this new version of the open-source distributed storage system and plan the upgrade of my Ceph cluster.
My adventures with Ceph Storage. Part 10: Upgrade the cluster
Like any software, Ceph goes through minor and major releases. My entire series of posts was written using the Giant release (0.87), but by the time I completed the series, Hammer (0.94) had been released. Note that Ceph, like other Linux software, names its major releases after successive letters of the alphabet, so Giant is the 7th major release. Since Ceph publishes both minor and major versions, it's important to know how to upgrade it.
My adventures with Ceph Storage. Part 8: Veeam clustered repository
Over the last few months, I've refreshed my knowledge of Ceph storage, an open-source scale-out storage solution implemented entirely in software. As I've walked through my own learning path, I've created a series of blog posts explaining the basics, how to deploy and configure it, and my use cases. In this 8th part: Veeam clustered repository.
My adventures with Ceph Storage. Part 7: Add a node and expand the cluster storage
Over the last few months, I've refreshed my knowledge of Ceph storage, an open-source scale-out storage solution implemented entirely in software. As I've walked through my own learning path, I've created a series of blog posts explaining the basics, how to deploy and configure it, and my use cases. In this 7th part: Add a node and expand the cluster storage.
My adventures with Ceph Storage. Part 6: Mount Ceph as a block device on linux machines
Over the last few months, I've refreshed my knowledge of Ceph storage, an open-source scale-out storage solution implemented entirely in software. As I've walked through my own learning path, I've created a series of blog posts explaining the basics, how to deploy and configure it, and my use cases. In this 6th part: Mount Ceph as a block device on Linux machines.
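As a taste of what the post covers, here is a minimal sketch of exposing an RBD image as a local block device through the kernel RBD driver on a client; the pool name, image name, size and mount point are made-up placeholders for the example.

```python
#!/usr/bin/env python3
# Minimal sketch: create, map and mount a Ceph RBD image on a Linux client.
# Assumes the rbd CLI, a valid ceph.conf/keyring and the kernel rbd module.
import subprocess

def run(*cmd):
    print("+", " ".join(cmd))
    return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout.strip()

POOL, IMAGE = "rbd", "blockvol01"   # hypothetical pool and image names

# Create a 10 GiB image, map it through the kernel RBD driver,
# put a filesystem on it and mount it like any other disk.
run("rbd", "create", f"{POOL}/{IMAGE}", "--size", "10240")
device = run("rbd", "map", f"{POOL}/{IMAGE}")      # prints e.g. /dev/rbd0
run("mkfs.xfs", device)
run("mkdir", "-p", "/mnt/ceph-block")
run("mount", device, "/mnt/ceph-block")
```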