
We're having some trouble deploying Kilo on a system with 3 controllers and 3 computes, through Mirantis Fuel 7.0.

The problems involve creating and attaching volumes, especially the ones stored on a NetApp SAN. As a result, I had to delete some stuck volumes and instances by accessing the cinder and nova databases directly and deleting rows from the instances, volumes, volume_admin_metadata, volume_attachment and volume_glance_metadata tables.

The problem is that the volume count on the project's "Overview" page still includes those deleted volumes and instances, so I'd like to know which part of the database that information is read from and how to correct / synchronize it.

I'd also like to know how to remove the physical LVM volumes corresponding to those deleted volumes, since they still show up when I run "lsblk" on the controller that was storing them.

Thanks

1 Answer


I think you are using a multi-backend cinder that can create volumes using both the NetApp and LVM drivers. Volumes can become stuck in any kind of transitional status (creating, extending, snapshotting, deleting, etc.), and since you can't delete a volume that is stuck in one of those states, there is already a CLI and Horizon tool for resetting the status of stuck volumes:

cinder reset-state --state available uuid

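As a sketch of the full cleanup flow (the UUID below is a placeholder for the ID you get from `cinder list`, and `force-delete` is only needed if the ordinary delete gets stuck again):

```shell
# Reset a stuck volume back to an actionable state, then delete it.
# <volume-uuid> is a placeholder - take the real ID from `cinder list`.
cinder reset-state --state available <volume-uuid>
cinder delete <volume-uuid>

# If the delete hangs again, force it past the state checks:
cinder force-delete <volume-uuid>
```

This avoids having to touch the database at all for volumes that are merely stuck rather than orphaned.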

As for where the LVM volumes are: they will be on the node to which you assigned the cinder role. From the Fuel master:

fuel role list

and then SSH onto the cinder node and inspect the logical volumes with `lvs`.
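To remove the leftover LVs whose database rows you already deleted, something like the following should work. Note the volume group name `cinder-volumes` is cinder's default and an assumption here; confirm it with `vgs` first, and make sure no instance still has the LV attached before removing it:

```shell
# Confirm the volume group name first - "cinder-volumes" is an assumption.
vgs

# List the logical volumes cinder created in that group.
lvs cinder-volumes

# Remove a leftover volume. Cinder names LVs "volume-<uuid>";
# <uuid> is a placeholder for the ID of the volume you deleted from the DB.
lvremove cinder-volumes/volume-<uuid>
```

After `lvremove`, the device should also disappear from `lsblk` output on that node.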

If you don't intend to use the LVM driver (it's the reference driver, there so you can see how storage-as-a-service works), make sure to remove the reference to the LVM driver in your cinder.conf.
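A minimal sketch of what that looks like, assuming your NetApp backend section is named `netapp` (the section name and the option values are illustrative; keep whatever Fuel generated for your SAN):

```ini
[DEFAULT]
# List only the backends you actually use - drop the LVM entry here
# and delete its [lvm] section below.
enabled_backends = netapp

[netapp]
# Illustrative values - keep the settings Fuel generated for your SAN.
volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
volume_backend_name = netapp
```

Restart the cinder-volume service after editing so the LVM backend stops being scheduled.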

You shouldn't normally have to go into the database to remove resources, but it is sometimes necessary.
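On the "Overview" counts specifically: those numbers come from cinder's quota usage records, not from the volumes table itself, which is why they survive manual row deletion. A hedged sketch of re-syncing them by hand (table and column names are from the cinder schema; back up the database first, and check the real counts against `cinder list --all-tenants` for the project before writing anything):

```sql
-- See what cinder currently thinks the project is using.
-- <project-id> is a placeholder for the tenant's ID.
SELECT resource, in_use, reserved
  FROM cinder.quota_usages
 WHERE project_id = '<project-id>';

-- Correct the counters to match reality (the value 0 is a placeholder -
-- set in_use to the actual number of remaining volumes / gigabytes).
UPDATE cinder.quota_usages
   SET in_use = 0
 WHERE project_id = '<project-id>'
   AND resource IN ('volumes', 'gigabytes');
```

The same pattern applies to nova's quota_usages table for the instance counts.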

Sum1sAdmin