      sharding: Cache shard ranges for object writes · a1af3811
      Tim Burke authored
      Previously, we issued a GET to the root container for every object PUT,
      POST, and DELETE. This put load on the container server, potentially
      leading to timeouts, error limiting, and erroneous 404s (!).
      Now, cache the complete set of 'updating' shard ranges, and find the
      shard for this particular update in the proxy. Add a new config option,
      recheck_updating_shard_ranges, to control the cache time; it defaults to
      one hour. Set it to 0 to fall back to the previous behavior.
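The caching scheme can be sketched roughly as below. This is a minimal illustration, not Swift's actual proxy code; the class and helper names are hypothetical, and real shard ranges carry more state (timestamps, epochs, object counts) than the bare namespace bounds shown here.

```python
import time


class ShardRangeCache:
    """Sketch (not Swift's implementation) of caching a container's
    'updating' shard ranges and resolving the shard for an object name
    in the proxy, instead of issuing a GET to the root per update."""

    def __init__(self, recheck_updating_shard_ranges=3600):
        # Cache time, mirroring the new config option; 0 disables caching.
        self.ttl = recheck_updating_shard_ranges
        self._cache = {}  # container -> (expires_at, shard range list)

    def get_update_shard(self, container, obj, fetch_from_root):
        """Return the shard whose (lower, upper] namespace contains obj,
        refetching the full set from the root container only on a miss."""
        entry = self._cache.get(container)
        now = time.time()
        if self.ttl <= 0 or entry is None or entry[0] < now:
            # Miss (or caching disabled): one GET to the root container.
            ranges = fetch_from_root(container)
            self._cache[container] = (now + self.ttl, ranges)
        else:
            ranges = entry[1]
        for lower, upper, shard in ranges:
            # Swift namespaces are half-open: lower < name <= upper.
            if lower < obj <= upper:
                return shard
        return None
```

With a one-hour TTL, repeated PUTs to the same container resolve their target shard from the cached set; only the first update (or one per hour) touches the root container.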
      Note that we should be able to tolerate stale shard data just fine; we
      already have to worry about async pendings that got written down with
      one shard but may not get processed until that shard has itself sharded
      or shrunk into another shard.
      Also note that memcache has a default value limit of 1MiB, which may be
      exceeded if a container has thousands of shards. In that case, set()
      will act like a delete(), causing increased memcache churn but otherwise
      preserving existing behavior. In the future, we may want to add support
      for gzipping the cached shard ranges as they should compress well.
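The gzip idea mentioned above could look like the following sketch. The helper names are made up for illustration; the point is only that serialized shard ranges are repetitive ASCII (names, bounds, timestamps) and so compress well, which would keep large sets under memcache's default 1 MiB value limit.

```python
import gzip
import json


def pack_shard_ranges(shard_ranges):
    """Serialize and gzip a list of shard-range dicts for memcache.

    Hypothetical helper: shard-range JSON is highly repetitive, so the
    compressed blob is typically a small fraction of the raw size.
    """
    return gzip.compress(json.dumps(shard_ranges).encode('utf-8'))


def unpack_shard_ranges(blob):
    """Inverse of pack_shard_ranges: decompress and deserialize."""
    return json.loads(gzip.decompress(blob).decode('utf-8'))
```

The trade-off is a little CPU in the proxy on every cache read and write, in exchange for set() no longer silently failing (and behaving like a delete()) once a container accumulates thousands of shards.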
      Change-Id: Ic7a732146ea19a47669114ad5dbee0bacbe66919
      Closes-Bug: 1781291