- 20 Apr, 2022 2 commits
- 06 Apr, 2022 1 commit
-
-
Rajat Dhasmana authored
Currently the schema validation for attachment create assumes that the instance UUID will always be present in the request, but that is not the case when Glance calls Cinder for an attachment. There is also no schema validation for microversion 3.54, which accepts an attachment mode in the request, so requests that pass a mode fail. This patch removes instance_uuid from the required parameters and adds schema validation for microversion 3.54. Also includes a squash to add a release note from Change-Id: I4a6e93ea98cfd4988d38bedca6e5538391c1f74d

Change-Id: I5108fd51effa4d72581654ed450d191a13e0e964
(cherry picked from commit 560318c8)
(cherry picked from commit 58250aae)
Conflicts: cinder/api/v3/attachments.py
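A rough sketch of the kind of schema change described, in plain Python. The field names, the schema layout, and the `check_required` helper are illustrative assumptions, not Cinder's actual attachment-create schema or validator:

```python
# Illustrative sketch only: not Cinder's real schema definitions.
ATTACHMENT_CREATE_V354 = {
    'type': 'object',
    'properties': {
        'volume_uuid': {'type': 'string'},
        'instance_uuid': {'type': 'string'},    # no longer required
        'connector': {'type': ['object', 'null']},
        'mode': {'enum': ['ro', 'rw']},         # accepted from MV 3.54 on
    },
    # instance_uuid removed from 'required' so callers like Glance can omit it.
    'required': ['volume_uuid'],
    'additionalProperties': False,
}


def check_required(schema, body):
    """Minimal required-field check standing in for a schema validator."""
    missing = [f for f in schema['required'] if f not in body]
    if missing:
        raise ValueError('missing required fields: %s' % ', '.join(missing))
    return True
```

With this shape, a request that omits instance_uuid but supplies a mode passes the required-field check.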
-
- 08 Mar, 2022 1 commit
-
-
Eric Harney authored
If a volume driver takes an excessively long time to return volume stats info, the only indication of this is a generic "Function outlasted interval" message that doesn't really explain what is going on. Introduce a specific log message when this happens so that this situation is clearer for someone looking at cinder-volume logs. The message is set to trigger if the driver takes more than half of the stats polling interval to complete.

Change-Id: Id6b27eb7b90c8a8c91fb46de69aa94b8210da18d
(cherry picked from commit 6d85d05f)
(cherry picked from commit 63abc404)
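The half-interval trigger described above can be sketched like this (illustrative names; Cinder's real code hooks into the periodic stats task):

```python
import logging
import time

LOG = logging.getLogger(__name__)


def collect_stats(get_stats, polling_interval):
    """Run the driver's stats call and warn if it used more than half of
    the polling interval. Sketch of the behaviour described above; the
    function and parameter names are assumptions, not Cinder's code."""
    start = time.monotonic()
    stats = get_stats()
    elapsed = time.monotonic() - start
    if elapsed > polling_interval / 2:
        LOG.warning('Driver stats collection took %.2fs, more than half of '
                    'the %ss polling interval; the periodic task may soon be '
                    'reported as "Function outlasted interval".',
                    elapsed, polling_interval)
    return stats
```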
-
- 11 Feb, 2022 1 commit
-
-
Zuul authored
-
- 10 Feb, 2022 2 commits
- 09 Feb, 2022 5 commits
- 03 Feb, 2022 1 commit
-
-
Brian Rosmaita authored
Cinder only supports uploading volumes of encrypted volume types as images with disk format 'raw' and container format 'bare'. Screen for this at the REST API layer when the request is made.

This change is applied directly to the VolumeActionsController in the cinder.api.contrib module, so it affects both the Block Storage API v2 and v3.

Change-Id: Ibb77b8b1be6c35c5db3b07fdc4056afd51d48782
Closes-bug: #1935688
(cherry picked from commit de8b3b0b)
(cherry picked from commit 78682022)
(cherry picked from commit 12f7376f)
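A minimal sketch of the REST-layer screen described above (the real check lives in VolumeActionsController and raises an HTTP error rather than ValueError):

```python
def screen_encrypted_upload(volume_encrypted, disk_format, container_format):
    """Reject image uploads of encrypted volumes unless raw/bare.

    Illustrative sketch of the check described in the commit message;
    the function name and error type are assumptions.
    """
    if volume_encrypted and (disk_format != 'raw' or container_format != 'bare'):
        raise ValueError("Encrypted volumes can only be uploaded with "
                         "disk_format='raw' and container_format='bare'")
```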
-
- 01 Feb, 2022 1 commit
-
-
Rajat Dhasmana authored
There is an initial policy check in the transfers accept API[1] which correctly validates whether the user is authorized to perform the operation. However, we have a duplicate check in the volume API layer which passes a target object (the volume) while authorizing, which is wrong for this API. While authorizing, we enforce the check against the project id of the target object, i.e. the volume in this case, which, before the transfer operation is completed, still contains the project id of the source project, making the validation wrong. In the case of the transfers API, any project is able to accept the transfer as long as it has the auth key required to secure the transfer accept. So this patch removes the duplicate policy check.

[1] https://opendev.org/openstack/cinder/src/branch/master/cinder/transfer/api.py#L225

Closes-Bug: #1950474
Change-Id: I3930bff90df835d9d8bbf7e6e91458db7e5654be
(cherry picked from commit 7ba9935a)
(cherry picked from commit 46415541)
(cherry picked from commit b86da5d4)
-
- 26 Jan, 2022 1 commit
-
-
Brian Rosmaita authored
Taking the advice given here: http://lists.openstack.org/pipermail/openstack-discuss/2022-January/026905.html to fix the openstacksdk-functional-devstack job.

NOTE: in stable/victoria, this job was not specified in cinder's .zuul.yaml, but was brought in by the integrated-gate-storage template: https://opendev.org/openstack/tempest/src/commit/86db21ea6afb6c26a21fc143a7d061e947c31a93/zuul.d/integrated-gate.yaml#L382-L398 So we add the job here so that we can override its definition.

Change-Id: Iee14d5efd9286f478da373dcf2fa2c86aabc9975
(cherry picked from commit b66dbb15)
(cherry picked from commit f9d85e60)
-
- 21 Jan, 2022 1 commit
-
-
Zuul authored
-
- 18 Jan, 2022 8 commits
-
-
Zuul authored
-
Gorka Eguileor authored
There are cases where requests to delete an attachment made by Nova can race other third-party requests to delete the overall volume. This has been observed when running cinder-csi, where it first requests that Nova detaches a volume before itself requesting that the overall volume is deleted once it becomes `available`.

This is a cinder race condition, and like most race conditions is not simple to explain. Some context on the issue:

- Cinder API uses the volume "status" field as a locking mechanism to prevent concurrent request processing on the same volume.
- Most cinder operations are asynchronous, so the API returns before the operation has been completed by the cinder-volume service, but the attachment operations such as creating/updating/deleting an attachment are synchronous, so the API only returns to the caller after the cinder-volume service has completed the operation.
- Our current code **incorrectly** modifies the status of the volume both on the cinder-volume and the cinder-api services on the attachment delete operation.

The actual set of events that leads to the issue reported in this bug are:

[Cinder-CSI]
- Requests Nova to detach volume (Request R1)

[Nova]
- R1: Asks cinder-api to delete the attachment and **waits**

[Cinder-API]
- R1: Checks the status of the volume
- R1: Sends terminate connection request (R1) to cinder-volume and **waits**

[Cinder-Volume]
- R1: Ask the driver to terminate the connection
- R1: The driver asks the backend to unmap and unexport the volume
- R1: The last attachment is removed from the DB and the status of the volume is changed in the DB to "available"

[Cinder-CSI]
- Checks that there are no attachments in the volume and asks Cinder to delete it (Request R2)

[Cinder-API]
- R2: Check that the volume's status is valid. It doesn't have attachments and is available, so it can be deleted.
- R2: Tell cinder-volume to delete the volume and return immediately.

[Cinder-Volume]
- R2: Volume is deleted and DB entry is deleted
- R1: Finish the termination of the connection

[Cinder-API]
- R1: Now that cinder-volume has finished the termination the code continues
- R1: Try to modify the volume in the DB
- R1: DB layer raises VolumeNotFound since the volume has been deleted from the DB
- R1: VolumeNotFound is converted to HTTP 404 status code which is returned to Nova

[Nova]
- R1: Cinder responds with 404 on the attachment delete request
- R1: Nova leaves the volume as attached, since the attachment delete failed

At this point the Cinder and Nova DBs are out of sync, because Nova thinks that the attachment is connected and Cinder has detached the volume and even deleted it.

Hardening is also being done on the Nova side [2] to accept that the volume attachment may be gone.

This patch fixes the issue mentioned above, but there is a request on Cinder-CSI [1] to use Nova as the source of truth regarding its attachments that, when implemented, would also fix the issue.

[1]: https://github.com/kubernetes/cloud-provider-openstack/issues/1645
[2]: https://review.opendev.org/q/topic:%2522bug/1937084%2522+project:openstack/nova

Closes-Bug: #1937084
Change-Id: Iaf149dadad5791e81a3c0efd089d0ee66a1a5614
(cherry picked from commit 2ec22228)
Conflicts: cinder/tests/unit/attachments/test_attachments_manager.py cinder/volume/manager.py
(cherry picked from commit ed0be0c8)
(cherry picked from commit 7210c914)
Conflicts: cinder/db/sqlalchemy/api.py
-
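The "status field as a locking mechanism" idiom from the race description above can be illustrated with a minimal compare-and-swap sketch (an in-memory dict stands in for the DB; the helper is hypothetical, not Cinder's actual DB API):

```python
def conditional_update(db, volume_id, expected_status, new_status):
    """Compare-and-swap on the volume 'status' column; an in-memory dict
    stands in for the database (hypothetical helper, not Cinder's code)."""
    row = db.get(volume_id)
    if row is None:
        # This is the race hit by request R1 above: the row vanished.
        raise KeyError('VolumeNotFound: %s' % volume_id)
    if row['status'] != expected_status:
        return False  # another request holds the "lock"
    row['status'] = new_status
    return True
```

A request that finds the status already changed backs off instead of proceeding, which is why updating the status from two services at once breaks the scheme.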
Gorka Eguileor authored
In change-id Iabf9c3fab56ffef50695ce45745f193273822b39 we left the `volume_attachment` out of the expected attributes of the Volume OVO (even when the DB layer is providing that information) because it was breaking some unit tests. This means that in some cases we unnecessarily load the attachments again (manually or via lazy loading) once we have the Volume OVO, because that field is not set. In this patch we populate the `volume_attachment` field when we load the Volume OVO from the DB.

Change-Id: I6576832b2c2722ab749cfe70bbc2058ead816c36
(cherry picked from commit e07bf378)
Conflicts: cinder/objects/volume.py cinder/tests/unit/objects/test_volume.py
(cherry picked from commit e85c22ce)
(cherry picked from commit a5b67fd9)
Conflicts: cinder/tests/unit/volume/flows/test_create_volume_flow.py
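The expected-attributes mechanism involved here can be sketched with a toy `from_db_object` (illustrative only, not Cinder's OVO implementation):

```python
def from_db_object(db_row, expected_attrs=()):
    """Build an in-memory object from a DB row, copying the joined
    'volume_attachment' data only when it is listed in expected_attrs.
    Toy sketch of the OVO pattern the commit fixes."""
    obj = {k: v for k, v in db_row.items() if k != 'volume_attachment'}
    if 'volume_attachment' in expected_attrs:
        obj['volume_attachment'] = db_row.get('volume_attachment', [])
    return obj
```

When `volume_attachment` is left out of expected_attrs, the field stays unset on the object even though the DB already returned it, which is what forced the extra loads the commit removes.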
-
Gorka Eguileor authored
When we create a volume in the DB using the OVO interface, some fields were missing from the in-memory volume after it had been created. This was caused by the create method not passing the expected attributes to _from_db_object. Because of this missing information there are places in the code where we forcefully reload the whole Volume OVO when it shouldn't be necessary. This patch fixes the create method of the Volume OVO and removes an instance of the forceful reload of the volume, reducing our DB calls.

Change-Id: Ia59cbc5a4eb279e56f07ff9f44aa40b582aea829
(cherry picked from commit bea45866)
-
Gorka Eguileor authored
In Change-ID Ic8a8ba2271d6ed672b694d3991dabd46bd9a69f4 we added:

  vref.multiattach = self._is_multiattach(volume_type)
  vref.save()

We later removed the assignment but forgot to remove the save. This patch removes that unnecessary save call.

Change-Id: I993444ba3b6e976d40ae7c5858b32999eb337c66
(cherry picked from commit 9607e2e6)
-
Gorka Eguileor authored
The old attachment API has a mix of OVO and DB method calls that can result in admin metadata being removed. When we automatically update the admin metadata using DB methods, those changes are not reflected in the volume OVO, so when we pass the OVO via RPC the receiver will assume that the admin metadata present in the OVO is up to date, and will delete key-value pairs that exist in the DB but not in memory.

That is happening in the old `attach` method with the `readonly` key, which gets removed by the volume service after it was added (calling the DB) on the API service.

The patch doesn't include a release note to avoid making unnecessary noise in our release notes, because it is unlikely to affect existing users since Nova will be using the new attachment API.

Change-Id: Id3c7783a80614e8a980d942343ecb9f47a5a805a
(cherry picked from commit 779b0249)
-
Gorka Eguileor authored
When deleting an attachment, if the driver's remove_export or detach_volume call fails in the cinder driver, then the attachment status is changed to error_detaching but the REST API call doesn't fail. The end result is:

- Volume status is "available"
- Volume attach_status is "detached"
- There is a volume_attachment record for the volume
- The volume may still be exported in the backend

The volume still being exported in the storage array is not a problem, since the next attach-detach cycle will give it another opportunity to succeed, and we also remove the export on volume deletion. So in the end leaving the attachment in error_detaching status has no use and creates confusion.

This patch removes the attachment record on an attachment delete request if the error happens in the remove_export or detach_volume calls. This doesn't change how the REST API attachment delete operation behaves; the change is that there will no longer be a leftover attachment record with the volume in available and detached status.

Closes-Bug: #1935057
Change-Id: I442a42b0c098775935a799876ad8efbe141829ad
(cherry picked from commit 3aa00b08)
Conflicts: cinder/volume/manager.py
(cherry picked from commit 1328c68a)
(cherry picked from commit be7b0012)
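A minimal sketch of the behaviour the patch describes: failures from the export-cleanup calls are logged but no longer leave an attachment record behind. The names and structure are assumptions, not the volume manager's actual code:

```python
import logging

LOG = logging.getLogger(__name__)


def attachment_delete(driver, attachment, attachments):
    """Delete an attachment; export-cleanup errors no longer leave an
    'error_detaching' record behind. Illustrative sketch only."""
    try:
        driver.detach_volume(attachment)
        driver.remove_export(attachment)
    except Exception:
        LOG.exception('Detach/remove_export failed; the next attach-detach '
                      'cycle or volume deletion will retry the cleanup.')
    # Remove the record either way, so the volume ends up available/detached
    # without a leftover attachment row.
    attachments.remove(attachment)
```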
-
Gorka Eguileor authored
Our current `attachment_delete` methods in the volume API and the manager use DB methods directly, which makes the OVOs present in those methods get out of sync with the latest data, leading to notifications having the wrong data when we send them on volume detach.

This patch replaces DB method calls with OVO calls and moves the notification call to the end of the method, where we have the final status of the volume. It also adds the missing detach.start notification when deleting an attachment in the reserved state.

Closes-Bug: #1916980
Closes-Bug: #1935011
Change-Id: Ie48cf55deacd08e7716201dac00ede8d57e6632f
(cherry picked from commit 68d49445)
Conflicts: cinder/volume/api.py
Changes: cinder/volume/manager.py
(cherry picked from commit c0197c6f)
(cherry picked from commit ed06fc74)
-
- 04 Jan, 2022 1 commit
-
-
Zuul authored
-
- 29 Dec, 2021 1 commit
-
-
Ivan Kolodyazhny authored
We always create a full backup for snapshots. It means we have to generate a correct backup name for a base backup.

Closes-Bug: #1860739
Change-Id: Ia08c252d747148e624f8d9e8b0e43f94773421e0
(cherry picked from commit f04d905a)
-
- 16 Dec, 2021 1 commit
-
-
Felipe Rodrigues authored
The ONTAP documentation states that the `clone-create` ZAPI call fails when the `block-ranges` and `space-reserve` parameters are sent together. The sub-clone operation uses `block-ranges` and is failing because of that restriction. This patch fixes the `clone-create` operation by using exactly one of `block-ranges` or `space-reserve`.

Change-Id: I05d83d73de69c57d885e0c417e8a376f7cfb1e4f
Closes-Bug: #1924643
(cherry picked from commit dd0b1076)
(cherry picked from commit 344f3e8b)
(cherry picked from commit fffe9b57)
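A sketch of the mutual exclusion the fix enforces; the parameter dictionary and helper name are illustrative, not the NetApp driver's actual ZAPI code:

```python
def clone_create_args(volume, src_path, dest_path, block_ranges=None):
    """Build clone-create parameters sending exactly one of
    block-ranges / space-reserve, per the ONTAP restriction described
    above. Hypothetical helper for illustration only."""
    args = {'volume': volume,
            'source-path': src_path,
            'destination-path': dest_path}
    if block_ranges:
        # Sub-clone: restrict the clone to the given block ranges.
        args['block-ranges'] = block_ranges
    else:
        args['space-reserve'] = 'none'
    return args
```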
-
- 06 Dec, 2021 1 commit
-
-
Raghavendra Tilay authored
For HPE Primera 4.2 or higher versions, the iSCSI driver is now supported along with the existing FC driver. The code and documentation are updated accordingly.

Change-Id: Ie2542fc4b21050c4f14aea67ea488d9f9eeaae79
(cherry picked from commit 6cabe11c)
-
- 13 Nov, 2021 1 commit
-
-
Rajat Dhasmana authored
Currently there is no way to verify that the connection info returned from the driver to cinder is the same as what cinder sends to nova (or other consumers) to connect to the volume. This log will help narrow down issues during attachments, since there are many components involved (nova, cinder, os-brick).

Change-Id: I8ed3567f8ae6c6384244cc1d07f1eaafbd7bf58e
(cherry picked from commit 6a0b41a8)
(cherry picked from commit 74c5a333)
-
- 09 Nov, 2021 1 commit
-
-
Eric Harney authored
In cases where we don't need to modify the image, open rbd images in read-only mode.

Closes-Bug: #1947518
Change-Id: I8287460b902dd525aa5313861142f5fb8490e60a
(cherry picked from commit e644e358)
(cherry picked from commit 5b169aee)
(cherry picked from commit f2fe6cc1)
-
- 08 Nov, 2021 1 commit
-
-
Zuul authored
-
- 27 Oct, 2021 1 commit
-
-
Eric Harney authored
stable/victoria lists that ddt 1.2.1 is supported, but this decoration does not work with that version of ddt. Remove the TestNameFormat parameter for these tests.

(stable-branch only fix)

Closes-Bug: #1948934
Change-Id: Ic91a978303f11617bf29c88bf82235998947552a
-
- 22 Sep, 2021 1 commit
-
-
Helen Walsh authored
When checking if a storage group is a child of a parent storage group, the check is currently case sensitive. We should allow for a pattern match that is not case sensitive. For example, myStorageGroup should equal MYSTORAGEGROUP or mystoragegroup.

Closes-Bug: #1929429
Change-Id: I8dd114fedece8e9d8f85c1ed237c31aede907d67
(cherry picked from commit 1f65c2a9)
(cherry picked from commit 80ef8c10)
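The fix amounts to comparing the names case-insensitively, e.g. (illustrative helper, not the PowerMax driver's actual method):

```python
def is_child_storage_group(child_name, parent_child_names):
    """Case-insensitive membership test for child storage group names,
    as the fix above describes. Hypothetical helper for illustration."""
    return child_name.casefold() in (n.casefold() for n in parent_child_names)
```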
-
- 17 Sep, 2021 2 commits
- 16 Sep, 2021 2 commits
-
-
Luigi Toscano authored
Port the legacy legacy-tempest-dsvm-multibackend-matrix job to the native Zuul v3 syntax, and rename it following the guidelines (cinder-multibackend-matrix-migration).

This job tests the migration between two different backends specified through the volume.backend_names configuration key in tempest.conf. Now the job leverages the existing zuul code, namely the run-tempest role, which is called multiple times with all the possible combinations of the 3 tested backends (lvm, ceph, nfs) where the source and the destination differ. The final JUnitXML output summarizes the test results for each of the tested combinations.

Conflicts: .zuul.yaml -> due to the addition and removal of several jobs in the newest .zuul.yaml

Change-Id: I34e7e48ee63c4c269f82ae178a7118ed402cad6d
(cherry picked from commit 1c0c25ba)
(cherry picked from commit 326fa62d)
-
Zuul authored
-
- 15 Sep, 2021 3 commits
-
-
Eric Harney authored
Retry lvextend commands upon segfault, similar to other LVM calls. This affects the volume extend path.

Change-Id: I0c0cb5308246a3dce736eade67b40be063aa78bb
Related-Bug: #1901783
Related-Bug: #1932188
Closes-Bug: #1940436
(cherry picked from commit c4b89567)
(cherry picked from commit 2425f3ef)
-
Sofia Enriquez authored
This is a follow-up to I0a2420f3e4a411f5fa52ebe2d22859b138ef387f.

LVM commands segfault occasionally, exiting with code 139. Change I6824ba4f introduced a workaround to retry the command when code 139 is returned, which generally works.

Closes-Bug: #1932188
Change-Id: I7c0f4d4ea7de635afede3c8514a5da9e85ad9b48
(cherry picked from commit a8552ed2)
(cherry picked from commit 77d4aa6a)
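The retry-on-139 workaround these commits describe can be sketched as follows; `ProcessError` and `run_lvm` are stand-ins, not Cinder's actual ProcessExecutionError handling or retry decorator:

```python
class ProcessError(Exception):
    """Stand-in for the process-execution error the LVM code sees."""
    def __init__(self, exit_code):
        super().__init__('command failed with exit code %d' % exit_code)
        self.exit_code = exit_code


def run_lvm(cmd_fn, attempts=3, retry_codes=(139,)):
    """Retry a command that occasionally segfaults (exit code 139 is
    128 + SIGSEGV). Sketch of the workaround pattern described above."""
    for attempt in range(1, attempts + 1):
        try:
            return cmd_fn()
        except ProcessError as exc:
            if exc.exit_code not in retry_codes or attempt == attempts:
                raise
```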
-
Zuul authored
-