1. 28 Sep, 2019 3 commits
  2. 27 Sep, 2019 4 commits
  3. 26 Sep, 2019 1 commit
    • sharder: Keep cleaving on empty shard ranges · e9cd9f74
      Matthew Oliver authored
      When a container is being cleaved there is a possibility that we're
      dealing with an empty or near-empty container created on a handoff
      node. These containers may have a valid list of shard ranges, so
      would need to cleave to the new shards.
      Currently, when using a `cleave_batch_size` that is smaller than the
      number of shard ranges on the cleaving container, these containers
      will take a few sharding passes to shard, even though there may be
      nothing in them.
      
      This is worse when a really large container is sharding and, because
      it is slow, a node gets error-limited, causing a new container to be
      created on a handoff location. This empty container would have a
      large number of shard ranges and could take a _very_ long time to
      shard away, slowing the process down.
      
      This patch eliminates the issue by detecting when no objects are
      returned for a shard range. The `_cleave_shard_range` method now
      returns one of three possible results:
      
        - CLEAVE_SUCCESS
        - CLEAVE_FAILED
        - CLEAVE_EMPTY
      
      They are all pretty self-explanatory. When `CLEAVE_EMPTY` is returned
      the code will:
      
        - Log
        - Not replicate the empty temp shard container sitting in a
          handoff location
        - Not count the shard range in the `cleave_batch_size` count
        - Update the cleaving context so sharding can move forward
      
      If a shard range DB already exists on a handoff node, the sharder
      won't skip it even if it contains no objects; it will replicate it
      and treat it as normal, including using a `cleave_batch_size` slot.
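      
      A minimal sketch of this handling (the loop, constants, and helper
      names below are simplified assumptions, not the sharder's actual
      code):
      
      ```python
      # Hypothetical sketch: skip an empty cleave result without consuming
      # a cleave_batch_size slot. All names are illustrative only.
      CLEAVE_SUCCESS, CLEAVE_FAILED, CLEAVE_EMPTY = 'success', 'failed', 'empty'
      
      def cleave_batch(shard_ranges, cleave_one, update_context,
                       cleave_batch_size, logger):
          """Cleave up to cleave_batch_size non-empty shard ranges."""
          cleaved = 0
          for shard_range in shard_ranges:
              if cleaved >= cleave_batch_size:
                  break  # stop here; resume from the context on the next pass
              result = cleave_one(shard_range)
              if result == CLEAVE_FAILED:
                  return False
              if result == CLEAVE_EMPTY:
                  # Log, don't replicate the empty temp shard DB, and don't
                  # count this range against cleave_batch_size.
                  logger.info('No objects in %s; skipping', shard_range)
              else:  # CLEAVE_SUCCESS
                  cleaved += 1
              # Either way, record progress so sharding keeps moving forward.
              update_context(shard_range)
          return True
      ```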
      
      Change-Id: Id338f6c3187f93454bcdf025a32a073284a4a159
      Closes-Bug: #1839355
  4. 25 Sep, 2019 6 commits
  5. 24 Sep, 2019 1 commit
    • Add func test for changing versioning modes · 6271d88f
      Thiago da Silva authored
      Users are able to change versioning in a container
      from X-Versions-Location to X-History-Location, which affects
      how DELETEs are handled. We have some unit tests that check this
      behavior, but no functional tests.
      
      This patch adds a functional test that helps us understand and
      document how changing modes affects the handling of DELETE
      requests.
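      
      For illustration, this is roughly the kind of mode switch the test
      exercises (the endpoint, token, and container names below are made-up
      placeholders, not taken from the test):
      
      ```python
      # Hypothetical example of switching a container's versioning mode.
      import requests
      
      ENDPOINT = 'http://localhost:8080/v1/AUTH_test'  # assumed Swift endpoint
      HEADERS = {'X-Auth-Token': 'tk-example'}         # assumed auth token
      
      # Stack mode: a DELETE removes the current version and restores the
      # most recent prior version from the versions container.
      requests.post(f'{ENDPOINT}/mycontainer',
                    headers={**HEADERS, 'X-Versions-Location': 'versions'})
      
      # History mode: a DELETE archives the current version into the
      # versions container instead of restoring an older one.
      requests.post(f'{ENDPOINT}/mycontainer',
                    headers={**HEADERS, 'X-History-Location': 'versions'})
      ```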
      
      Change-Id: I5dbe5bdca17e624963cb3a3daba3b240cbb4bec4
  6. 23 Sep, 2019 3 commits
    • sharding: Update probe test to verify CleavingContext cleanup · 9495bc00
      Tim Burke authored
      Change-Id: I219bbbfd6a3c7adcaf73f3ee14d71aadd183633b
      Related-Change: I1e502c328be16fca5f1cca2186b27a0545fecc16
    • Sharding: Use the metadata timestamp as last_modified · 370ac4cd
      Matthew Oliver authored
      This is a follow-up to the cleaving context cleanup patch
      (patch 681970). Instead of tracking a separate last_modified
      timestamp and storing it in the context metadata, use the timestamp
      recorded when any metadata is stored.
      
      Reducing duplication is nice, but there's a more significant reason to
      do this: affected container DBs can start getting cleaned up as soon as
      they're running the new code rather than needing to wait for an
      additional reclaim_age.
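      
      Roughly, the idea is the following (this sketch and its helper names
      are simplified assumptions, not the actual CleavingContext code):
      
      ```python
      # Illustrative only: the timestamp recorded when the context's
      # sysmeta is written doubles as its last-modified time, so no
      # separate last_modified field is stored in the context itself.
      import time
      
      def store_context(sysmeta, key, value, now=None):
          ts = now if now is not None else time.time()
          sysmeta[key] = (value, ts)          # (value, metadata timestamp)
      
      def context_last_modified(sysmeta, key):
          _value, ts = sysmeta[key]
          return ts                           # reuse the metadata timestamp
      ```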
      
      Change-Id: I2cdbe11f06ffb5574e573c4a60ba4e5d41a00c50
    • proxy: Don't trust Content-Length for chunked transfers · 291873e7
      Tim Burke authored
      Previously we'd
      - complain that a client disconnected even though they finished their
        chunked transfer just fine, and
      - on EC, send an X-Backend-Obj-Content-Length for pre-allocation even
        though Content-Length doesn't determine request body size.
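      
      A rough sketch of the idea (header handling simplified; this is an
      illustration under assumptions, not the proxy's actual code):
      
      ```python
      # Illustrative helper: a chunked request's Content-Length, if any,
      # does not bound the body, so it must not be used e.g. for EC
      # pre-allocation.
      def expected_body_size(headers):
          """Return a trusted request body size, or None if it is unknown."""
          if 'chunked' in headers.get('Transfer-Encoding', '').lower():
              return None  # size is only known once the last chunk arrives
          if 'Content-Length' in headers:
              return int(headers['Content-Length'])
          return None
      ```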
      
      Change-Id: Ia80e595f713695cbb41dab575963f2cb9bebfa09
      Related-Bug: 1840507
  7. 21 Sep, 2019 2 commits
  8. 20 Sep, 2019 7 commits
  9. 19 Sep, 2019 3 commits
  10. 18 Sep, 2019 3 commits
    • 28f292f2
      Zuul authored
    • Merge "Add python3 to setup.cfg" · 450fc5bf
      Zuul authored
    • Sharding: Clean up old CleavingContexts during audit · 81a41da5
      Matthew Oliver authored
      There is a sharding edge case where extra CleavingContexts are
      generated and stored in the sharding container DB. If enough
      CleavingContexts build up in the DB, as in the linked bug, this can
      lead to 503s when attempting to list the container due to all the
      `X-Container-Sysmeta-Shard-Context-*` headers.
      
      This patch resolves the issue by tracking each CleavingContext's
      last-modified time; during the sharding audit, any context that
      hasn't been touched within reclaim_age is deleted.
      
      This, together with the skip-empty-shard-ranges patch, should improve
      the handling of these handoff shards.
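      
      A simplified sketch of the audit-time cleanup (the storage layout and
      helper names here are assumptions, not the real sharder code):
      
      ```python
      # Illustrative only: drop cleaving contexts whose last-modified time
      # is older than reclaim_age so their sysmeta headers stop piling up.
      import time
      
      RECLAIM_AGE = 7 * 24 * 3600  # seconds; a common default reclaim_age
      
      def prune_cleaving_contexts(contexts, reclaim_age=RECLAIM_AGE, now=None):
          """contexts: dict mapping sysmeta key -> (value, last_modified)."""
          now = time.time() if now is None else now
          for key, (_value, last_modified) in list(contexts.items()):
              if now - last_modified > reclaim_age:
                  del contexts[key]  # stale context from an old attempt
      ```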
      
      Change-Id: I1e502c328be16fca5f1cca2186b27a0545fecc16
      Closes-Bug: #1843313
  11. 17 Sep, 2019 3 commits
  12. 16 Sep, 2019 1 commit
  13. 15 Sep, 2019 1 commit
    • versioned_writes: checks for SLO object before copy · b4288b4a
      Nguyen Quoc Viet authored
      Previously, the versioned_writes middleware copied an already
      existing object using PUT. However, SLO objects need additional
      handling to properly report the object size in container listings.
      
      Proposed fix: in _put_versioned_obj, which is called both when
      creating a version object and when restoring one, if the
      'X-Object-Sysmeta-Slo-Size' header is present, add the headers the
      container needs to record the correct object size.
      
      Added a new functional test case with a size assertion for SLO.
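      
      A hedged sketch of that check (the override header name is an
      assumption based on Swift's container-update override convention,
      not confirmed from the patch):
      
      ```python
      # Illustrative only: propagate the SLO's assembled size so the
      # container listing shows the large object's size, not the size of
      # the manifest being copied.
      def add_slo_size_override(source_headers, put_headers):
          slo_size = source_headers.get('X-Object-Sysmeta-Slo-Size')
          if slo_size is not None:
              put_headers['X-Backend-Container-Update-Override-Size'] = slo_size
          return put_headers
      ```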
      
      Change-Id: I47e0663e67daea8f1cf4eaf3c47e7c8429fd81bc
      Closes-Bug: #1840322
  14. 14 Sep, 2019 2 commits