1. 14 Apr, 2022 2 commits
    • Now packaging 19.1.0 · ec2fd7cc
      Thomas Goirand authored
    • Merge tag '19.1.0' into debian/xena · d17ab462
      Thomas Goirand authored
      cinder 19.1.0 release
      
      meta:version: 19.1.0
      meta:diff-start: -
      meta:series: xena
      meta:release-type: release
      meta:pypi: no
      meta:first: no
      meta:release:Author: whoami-rajat <rajatdhasmana@gmail.com>
      meta:release:Commit: whoami-rajat <rajatdhasmana@gmail.com>
      meta:release:Change-Id: I132116612caa217919a52075b8e07793c29a189a
      meta:release:Code-Review+1: Brian Rosmaita <rosmaita.fossdev@gmail.com>
      meta:release:Code-Review+2: Thierry Carrez <thierry@openstack.org>
      meta:release:Code-Review+2: Elod Illes <elod.illes@est.tech>
      meta:release:Workflow+1: Elod Illes <elod.illes@est.tech>
  2. 02 Feb, 2022 3 commits
  3. 28 Jan, 2022 3 commits
  4. 27 Jan, 2022 4 commits
  5. 26 Jan, 2022 1 commit
  6. 25 Jan, 2022 1 commit
  7. 18 Jan, 2022 2 commits
  8. 17 Jan, 2022 4 commits
    • Fix: Race between attachment and volume deletion · ed0be0c8
      Gorka Eguileor authored
      There are cases where requests to delete an attachment made by Nova can
      race other third-party requests to delete the overall volume.
      
      This has been observed when running cinder-csi, which first asks Nova
      to detach a volume and then requests deletion of the volume itself
      once it becomes `available`.
      
      This is a cinder race condition, and like most race conditions is not
      simple to explain.
      
      Some context on the issue:
      
      - Cinder API uses the volume "status" field as a locking mechanism to
        prevent concurrent request processing on the same volume.
      
      - Most cinder operations are asynchronous, so the API returns before the
        operation has been completed by the cinder-volume service, but the
        attachment operations such as creating/updating/deleting an attachment
        are synchronous, so the API only returns to the caller after the
        cinder-volume service has completed the operation.
      
      - Our current code **incorrectly** modifies the status of the volume
        both on the cinder-volume and the cinder-api services on the
        attachment delete operation.
      
      The actual sequence of events that leads to the issue reported in this
      bug is:
      
      [Cinder-CSI]
      - Requests Nova to detach volume (Request R1)
      
      [Nova]
      - R1: Asks cinder-api to delete the attachment and **waits**
      
      [Cinder-API]
      - R1: Checks the status of the volume
      - R1: Sends terminate connection request (R1) to cinder-volume and
        **waits**
      
      [Cinder-Volume]
      - R1: Asks the driver to terminate the connection
      - R1: The driver asks the backend to unmap and unexport the volume
      - R1: The last attachment is removed from the DB and the status of the
            volume is changed in the DB to "available"
      
      [Cinder-CSI]
      - Checks that there are no attachments in the volume and asks Cinder to
        delete it (Request R2)
      
      [Cinder-API]
      - R2: Checks that the volume's status is valid. It doesn't have
        attachments and is available, so it can be deleted.
      - R2: Tells cinder-volume to delete the volume and returns immediately.
      
      [Cinder-Volume]
      - R2: The volume is deleted and its DB entry is removed
      - R1: Finishes the termination of the connection
      
      [Cinder-API]
      - R1: Now that cinder-volume has finished the termination, the code
        continues
      - R1: Tries to modify the volume in the DB
      - R1: The DB layer raises VolumeNotFound since the volume has been
        deleted from the DB
      - R1: VolumeNotFound is converted to HTTP 404 status code which is
        returned to Nova
      
      [Nova]
      - R1: Cinder responds with 404 on the attachment delete request
      - R1: Nova leaves the volume as attached, since the attachment delete
        failed
      
      At this point the Cinder and Nova DBs are out of sync, because Nova
      thinks that the attachment is connected and Cinder has detached the
      volume and even deleted it.
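      The sequence above can be reduced to a minimal Python sketch. The
      dict-backed `db`, the `volume_update` helper, and the `VolumeNotFound`
      class here are illustrative stand-ins, not cinder's actual DB API:

```python
# Hypothetical reproduction of the race: cinder-api (R1) performs its
# incorrect extra volume update after cinder-volume (R2) has already
# deleted the volume, so the update raises VolumeNotFound -> HTTP 404.

class VolumeNotFound(Exception):
    pass

db = {"vol-1": {"status": "in-use", "attachments": ["att-1"]}}

def volume_update(vol_id, **values):
    if vol_id not in db:
        raise VolumeNotFound(vol_id)
    db[vol_id].update(values)

# R1, cinder-volume: terminate connection, mark volume available
db["vol-1"]["attachments"].clear()
db["vol-1"]["status"] = "available"

# R2: the volume looks available and unattached, so it gets deleted
del db["vol-1"]

# R1, cinder-api: the (incorrect) extra status update now blows up
try:
    volume_update("vol-1", status="available")
except VolumeNotFound:
    result = 404  # what Nova sees on the attachment delete request
```

      The fix described above removes the redundant status update from the
      cinder-api side, so R1 never touches the DB after cinder-volume has
      finished.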
      
      Hardening is also being done on the Nova side [2] to accept that the
      volume attachment may be gone.
      
      This patch fixes the issue mentioned above, but there is a request on
      Cinder-CSI [1] to use Nova as the source of truth regarding its
      attachments that, when implemented, would also fix the issue.
      
      [1]: https://github.com/kubernetes/cloud-provider-openstack/issues/1645
      [2]: https://review.opendev.org/q/topic:%2522bug/1937084%2522+project:openstack/nova
      
      Closes-Bug: #1937084
      Change-Id: Iaf149dadad5791e81a3c0efd089d0ee66a1a5614
      (cherry picked from commit 2ec22228)
    • Expose volume_attachments in Volume OVO · e85c22ce
      Gorka Eguileor authored
      In change-id Iabf9c3fab56ffef50695ce45745f193273822b39 we left the
      `volume_attachment` out of the expected attributes of the Volume OVO
      (even though the DB layer is providing that information) because it
      was breaking some unit tests.
      
      This means that in some cases we are unnecessarily loading the
      attachments again (manually or via lazy loading) once we have the volume
      OVO because that field is not set.
      
      In this patch we populate the `volume_attachment` when we load the
      Volume OVO from the DB.
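      The cost being avoided can be sketched with a toy class. The `Volume`
      class, `expected_attrs` handling, and `_loads` counter below are
      simplified stand-ins for the OVO machinery, not cinder's real code:

```python
# Hypothetical sketch of eager vs lazy attribute loading: when the field
# is not populated at load time, the first access forces an extra trip
# to the DB layer.

class Volume:
    def __init__(self, db_row, expected_attrs=()):
        self._db_row = db_row
        self._loads = 0
        if "volume_attachment" in expected_attrs:
            self.volume_attachment = db_row["volume_attachment"]

    def __getattr__(self, name):
        if name == "volume_attachment":
            self._loads += 1  # lazy load: an extra DB round trip
            value = self._db_row["volume_attachment"]
            setattr(self, name, value)
            return value
        raise AttributeError(name)

row = {"id": "vol-1", "volume_attachment": ["att-1"]}
lazy = Volume(row)                                     # field not set
eager = Volume(row, expected_attrs=("volume_attachment",))

lazy.volume_attachment  # triggers the lazy load
```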
      
      Change-Id: I6576832b2c2722ab749cfe70bbc2058ead816c36
      (cherry picked from commit e07bf378)
    • Delete attachment on remove_export failure · 1328c68a
      Gorka Eguileor authored
      When deleting an attachment, if the remove_export or detach_volume
      call fails in the cinder driver, the attachment status is changed to
      error_detaching but the REST API call doesn't fail.
      
      The end result is:
      - Volume status is "available"
      - Volume attach_status is "detached"
      - There is a volume_attachment record for the volume
      - The volume may still be exported in the backend
      
      The volume still being exported in the storage array is not a problem,
      since the next attach-detach cycle gives it another opportunity to
      succeed, and we also remove the export on volume deletion.
      
      So in the end leaving the attachment in error_detaching status doesn't
      have any use and creates confusion.
      
      This patch removes the attachment record on an attachment delete
      request if the error happens in the remove_export or detach_volume
      calls.
      
      This doesn't change how the REST API attachment delete operation
      behaves; the difference is that there will no longer be a leftover
      attachment record with the volume in available and detached status.
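      The new behaviour can be sketched as follows. `FakeDriver`, the
      `attachments` dict, and `attachment_delete` are illustrative stand-ins
      for the driver and DB layers:

```python
# Hypothetical sketch: on remove_export failure, drop the now-useless
# attachment record instead of parking it in error_detaching.

class FakeDriver:
    def remove_export(self, attachment):
        raise RuntimeError("backend unreachable")

attachments = {"att-1": {"status": "attached"}}

def attachment_delete(driver, att_id):
    try:
        driver.remove_export(attachments[att_id])
    except Exception:
        # Old behaviour: attachments[att_id]["status"] = "error_detaching"
        # New behaviour: the record serves no purpose, so fall through
        # and remove it entirely.
        pass
    attachments.pop(att_id, None)

attachment_delete(FakeDriver(), "att-1")
```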
      
      Closes-Bug: #1935057
      Change-Id: I442a42b0c098775935a799876ad8efbe141829ad
      (cherry picked from commit 3aa00b08)
    • Fix detach notification · c0197c6f
      Gorka Eguileor authored
      Our current `attachment_delete` methods in the volume API and the
      manager use DB methods directly, which causes the OVOs in those
      methods to get out of sync with the latest data and leads to
      notifications carrying the wrong data when we send them on volume
      detach.
      
      This patch replaces DB method calls with OVO calls and moves the
      notification call to the end of the method, where we have the final
      status on the volume.
      
      It also adds the missing detach.start notification when deleting an
      attachment in the reserved state.
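      The reordering can be sketched like this. The `Volume`, `notify`, and
      `attachment_delete` names below are simplified stand-ins, not cinder's
      actual notifier API:

```python
# Hypothetical sketch: send the final notification only after the volume
# object holds its final state, so the payload is never stale.

events = []

class Volume:
    def __init__(self):
        self.status = "in-use"

    def save(self):
        pass  # stands in for the OVO persisting itself to the DB

def notify(volume, event):
    events.append((event, volume.status))

def attachment_delete(volume):
    notify(volume, "detach.start")  # the previously missing notification
    volume.status = "available"     # final state computed via the OVO
    volume.save()
    notify(volume, "detach.end")    # sent last, with up-to-date data

attachment_delete(Volume())
```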
      
      Closes-Bug: #1916980
      Closes-Bug: #1935011
      Change-Id: Ie48cf55deacd08e7716201dac00ede8d57e6632f
      (cherry picked from commit 68d49445)
  9. 12 Jan, 2022 1 commit
    • Volume transfers: Remove duplicate policy check · 46415541
      Rajat Dhasmana authored
      There is an initial policy check in the transfers accept API [1]
      which correctly validates whether the user is authorized to perform
      the operation. However, we have a duplicate check in the volume API
      layer that passes a target object (the volume) while authorizing,
      which is wrong for this API. While authorizing, we enforce the check
      against the project id of the target object, i.e. the volume, which,
      until the transfer operation completes, still contains the project id
      of the source project, making the validation wrong.
      In the case of the transfers API, any project is able to accept the
      transfer provided it has the auth key required to secure the transfer
      accept. So this patch removes the duplicate policy check.
      
      [1] https://opendev.org/openstack/cinder/src/branch/master/cinder/transfer/api.py#L225
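      Why the duplicate check fails can be sketched with a toy enforcement
      function. The `enforce` helper and project names are illustrative,
      not oslo.policy's real API:

```python
# Hypothetical sketch: a project-scoped rule compares the caller's
# project against the target's project. Before the transfer accept
# completes, the volume still belongs to the SOURCE project, so a
# target-based check wrongly rejects the accepting project.

def enforce(context_project, target_project=None):
    if target_project is not None and target_project != context_project:
        raise PermissionError("policy does not allow transfers:accept")
    return True

source, dest = "proj-src", "proj-dst"
volume = {"project_id": source}   # ownership not yet transferred

ok_initial = enforce(dest)        # API-level check, no target: passes
try:
    enforce(dest, volume["project_id"])   # duplicate check with target
    dup_passed = True
except PermissionError:
    dup_passed = False
```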
      
      Conflicts:
            cinder/volume/api.py
      
      Closes-Bug: #1950474
      Change-Id: I3930bff90df835d9d8bbf7e6e91458db7e5654be
      (cherry picked from commit 7ba9935a)
  10. 11 Jan, 2022 1 commit
    • Reject bad img formats for uploaded encrypted vols · 78682022
      Brian Rosmaita authored
      Cinder only supports uploading volumes of encrypted volume types as
      images with disk format 'raw' and container format 'bare'.  Screen
      for this at the REST API layer when the request is made.
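      The screen amounts to a simple format check at request time. The
      `validate_upload` function below is an illustrative sketch, not the
      actual cinder API code:

```python
# Hypothetical sketch of the REST-layer screen: volumes of encrypted
# types may only be uploaded as disk_format=raw, container_format=bare.

def validate_upload(encrypted, disk_format, container_format):
    if encrypted and (disk_format, container_format) != ("raw", "bare"):
        # in the API this surfaces as an HTTP 400 response
        raise ValueError("encrypted volumes require disk_format=raw "
                         "and container_format=bare")

validate_upload(True, "raw", "bare")        # accepted
try:
    validate_upload(True, "qcow2", "bare")  # rejected
    rejected = False
except ValueError:
    rejected = True
```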
      
      Change-Id: Ibb77b8b1be6c35c5db3b07fdc4056afd51d48782
      Closes-bug: #1935688
      (cherry picked from commit de8b3b0b)
  11. 05 Jan, 2022 1 commit
  12. 15 Dec, 2021 1 commit
    • NetApp ONTAP: Fix sub-clone zapi call · 344f3e8b
      Felipe Rodrigues authored
      The ONTAP documentation states that the `clone-create` ZAPI call
      fails when the `block-ranges` and `space-reserve` parameters are
      sent together. The sub-clone operation uses `block-ranges` and fails
      because of that restriction.
      
      This patch fixes the `clone-create` operation by using exactly one
      of `block-ranges` or `space-reserve`.
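      The mutually-exclusive parameter handling can be sketched as below.
      The `clone_create_args` helper is an illustrative stand-in for the
      driver's ZAPI request builder:

```python
# Hypothetical sketch: build clone-create arguments with exactly one of
# block-ranges (sub-clone path) or space-reserve (full-clone path).

def clone_create_args(block_ranges=None, space_reserve=None):
    args = {"clone-create": {}}
    if block_ranges is not None:
        # sub-clone path: space-reserve must be omitted entirely
        args["clone-create"]["block-ranges"] = block_ranges
    elif space_reserve is not None:
        args["clone-create"]["space-reserve"] = space_reserve
    return args

sub = clone_create_args(block_ranges=[(0, 1024)])
full = clone_create_args(space_reserve="none")
```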
      
      Change-Id: I05d83d73de69c57d885e0c417e8a376f7cfb1e4f
      Closes-Bug: #1924643
      (cherry picked from commit dd0b1076)
  13. 01 Dec, 2021 1 commit
  14. 21 Nov, 2021 1 commit
    • [stable-xena-only] update xena personas doc · 93363e3c
      Brian Rosmaita authored
      - update the description of the personas implemented in Xena
      - update the implementation schedule for the remaining personas
      - remove system-reader and project-admin from the matrix
        since (a) they're not implemented in Xena, and (b) the
        range of action of these personas in Yoga will be different
        than what's defined here
      
      Change-Id: If0391cef88a2476ed0f85ac5eb618cdeee380992
  15. 19 Nov, 2021 1 commit
  16. 18 Nov, 2021 1 commit
  17. 15 Nov, 2021 2 commits
    • Dell PowerVault: Fix "cinder manageable-list" · 5bebe93e
      Chris M authored
      The Dell PowerVault ME driver claims support for importing volumes,
      and although it does support "cinder manage", some related functions
      were missing, so it did not support "cinder manageable-list" or the
      related snapshot functions.
      
      Partial-Bug: #1922255
      Depends-on: https://review.opendev.org/c/openstack/cinder/+/809968
      
      Change-Id: I73958099b32e44e7e4875d0eba0e2c0096a12252
      (cherry picked from commit bff1d26f)
    • Seagate driver: fix get_volume_size() · b15107b3
      Chris M authored
      The driver was reporting volume sizes in units of GB (10^9), the
      default format of the native array UIs, rather than in GiB (2^30).
      This change also affects the HPE MSA, Dell PowerVault, and Lenovo
      drivers, as they use the same code.
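      The arithmetic behind the fix:

```python
# GB (decimal, 10**9 bytes) vs GiB (binary, 2**30 bytes): arrays report
# decimal GB, while Cinder expects binary GiB.

GB, GiB = 10**9, 2**30

def gb_to_gib(size_gb):
    return size_gb * GB / GiB

# A "100 GB" LUN is only ~93.13 GiB, so treating the GB figure as GiB
# overstated every reported size by roughly 7%.
approx = round(gb_to_gib(100), 2)
```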
      
      Change-Id: I038f1399812e6c053622751eb62df89eff8a33db
      (cherry picked from commit 4653773b)
  18. 10 Nov, 2021 1 commit
    • Fixed copy-on-write mode in GPFS NFS driver · 6c64d92a
      digvijay2016 authored
      The IBM Spectrum Scale cinder driver (GPFS) supports the
      copy-on-write feature in all configurations. Resolving the bugs
      mentioned below enables the mmclone feature of the IBM Spectrum Scale
      filesystem to provide better performance when configured in GPFS NFS
      mode.
      
      Closes-Bug: #1947134
      Closes-Bug: #1947123
      Change-Id: I3e77c890c7abca85dab92500eae989b4dff9824d
      (cherry picked from commit dcc19164)
  19. 09 Nov, 2021 1 commit
  20. 08 Nov, 2021 3 commits
  21. 04 Nov, 2021 1 commit
    • PowerMax Driver - Fix for legacy PowerMax OS around generations · fe1e6be5
      Helen Walsh authored
      In previous versions of PowerMax OS, SnapVX generations were used
      instead of the recently introduced unique snap ids. A generation can
      be returned as the integer 0, which evaluates to False in Python.
      The fix is to convert the int to a string if it is returned from
      REST as an int.
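      The falsy-zero pitfall and the string coercion can be sketched as
      follows. The `normalize_generation` name is illustrative, not the
      driver's actual helper:

```python
# Hypothetical sketch: generation 0 is a valid snap identifier, but a
# bare truthiness test (`if generation:`) treats it like "missing".

def normalize_generation(generation):
    # the fix: coerce ints from REST to strings so 0 stays meaningful
    if isinstance(generation, int):
        return str(generation)
    return generation

buggy_skipped = not 0            # `if generation:` wrongly skips gen 0
fixed = normalize_generation(0)  # "0" is truthy, so it is processed
```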
      
      Closes-Bug: #1938572
      Change-Id: I5b660776190f3026296d6d3237bd3b0d609f769f
      (cherry picked from commit ee1b5e2b)
  22. 02 Nov, 2021 1 commit
  23. 29 Oct, 2021 1 commit
  24. 26 Oct, 2021 2 commits