Commit 7304a162 authored by Bernd Zeimetz

Merge branch 'master' into wheezy-backports

parents 8210f3fc f1595ebb
# 2.0 (unreleased master branch)
### New Features
### Breaking Changes
### Enhancements
### Bugfixes
# 1.11 (23-03-2016)
### Enhancements
* **collector** UDP connections are now suffixed with `-udp` in
destination target
* **router** `send statistics to` construct was added to direct internal
statistics to a specific cluster
### Bugfixes
* [Issue #159](https://github.com/grobian/carbon-c-relay/issues/159)
corrupted statistics for file clusters
* [Issue #160](https://github.com/grobian/carbon-c-relay/issues/160)
metricsBlackholed stays zero when blackhole target is used
# 1.10 (09-03-2016)
### Breaking Changes
* **statistics** dispatch\_busy and dispatch\_idle have been replaced with
wallTime\_us and sleepTime\_us
### Bugfixes
* [Issue #152](https://github.com/grobian/carbon-c-relay/issues/152)
crash in aggregator\_expire for data-contained aggregations
# 1.9 (07-03-2016)
### Enhancements
* **statistics** dispatch\_busy is slightly more realistic now
### Bugfixes
* [Issue #153](https://github.com/grobian/carbon-c-relay/issues/153)
aggregator statistics are garbage with `-m`
# 1.8 (23-02-2016)
### New Features
* **relay** new flags `-D` for daemon mode and `-p` for pidfile
creation
### Enhancements
* **dispatcher** server stalling (to slow down too fast writers) is now
based on a random timeout
* **server** write timeout is now large enough to deal with upstream
relay stalling
* **relay** number of workers/dispatchers is now determined in a way
that doesn't need OpenMP any more
# 1.7 (29-01-2016)
### New Features
* **relay** new flag `-B` to set the listen backlog for TCP and UNIX
connections, [issue #143](https://github.com/grobian/carbon-c-relay/issues/143)
### Enhancements
* **dispatcher** switch from select() to poll() to fix crashes when too
many connections were made to the relay
* Misc (memory) leak fixes
# 1.6 (27-01-2016)
### Breaking Changes
* **relay** startup and shutdown messages are now better in line
### Enhancements
* **relay** fixed segfault when issuing `SIGHUP` under active load
# 1.5 (13-01-2016)
### Enhancements
* **aggregator** metrics are now written directly to dispatchers to
avoid overload of the internal\_submission queue, which is likely to
happen with many aggregates
* **collector** properly report file-based servers in statistics
* **collector** re-introduce the internal destination in statistics
# 1.4 (04-01-2016)
### New Features
* **collector** when run in debug and submission mode, there is an
iostat-like output
### Enhancements
* **relay** reloading config now no longer unconditionally starts the
aggregator
* **aggregator** misc cleanup/free fixes
* **relay** allow reloading aggregator
### Bugfixes
* [Issue #133](https://github.com/grobian/carbon-c-relay/issues/133)
_stub_aggregator metrics seen after a reload
# 1.3 (16-12-2015)
### Enhancements
* **consistent-hash** fix jump\_fnv1a\_ch metric submission, it didn't
work at all
### Bugfixes
* [Issue #126](https://github.com/grobian/carbon-c-relay/issues/126)
double free crash
* [Issue #131](https://github.com/grobian/carbon-c-relay/issues/131)
segfault using stddev in aggregator
* [Issue #132](https://github.com/grobian/carbon-c-relay/issues/132)
crash with glibc double free message
# 1.2 (10-12-2015)
### New Features
* **consistent-hash** new algorithm jump\_fnv1a\_ch for near perfect
distribution of metrics
* **distributiontest** test program to inspect how unbalanced a cluster's
distribution is for a given input metric, see
[graphite-project/carbon#485](https://github.com/graphite-project/carbon/issues/485)
### Enhancements
* **router** fix cluster checking with regard to replication count and
the number of servers, to allow equal counts
### Bugfixes
* [Issue #126](https://github.com/grobian/carbon-c-relay/issues/126)
prevent calling read() too often
# 1.1 (25-11-2015)
### Enhancements
* **router** fix distribution of any\_of cluster if members have failed
# 1.0 (23-11-2015)
* many improvements
# 0.45 (05-11-2015)
* Many aggregator improvements, more flexible routing support.
# 0.44 (13-08-2015)
* Feature to set hash-keys for fnv1a\_ch.
# 0.43 (27-07-2015)
* Bugfix release for segfault when using any\_of clusters.
# 0.42 (24-07-2015)
* Reduced warning level for submission mode queue pileups. Allow
writing to a file (cluster type). Fix splay on aggregator not to
affect timestamps of input. No more dep on openssl for md5.
# 0.40 (11-05-2015)
* Hefty optimisations on aggregations. Fix for UDP port closure.
# Copyright 2013-2015 Fabian Groffen
# Copyright 2013-2016 Fabian Groffen
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
......@@ -13,11 +13,7 @@
# limitations under the License.
CFLAGS ?= -O2 -Wall
# if your compiler doesn't support OpenMP, comment out this line, or
# define OPENMP_FLAGS to be empty
OPENMP_FLAGS ?= -fopenmp
override CC += $(OPENMP_FLAGS)
CFLAGS ?= -O2 -Wall -Wshadow
GIT_VERSION := $(shell git describe --abbrev=6 --dirty --always || date +%F)
GVCFLAGS += -DGIT_VERSION=\"$(GIT_VERSION)\"
......
......@@ -51,11 +51,11 @@ The route file syntax is as follows:
# comments are allowed in any place and start with a hash (#)
cluster <name>
<forward | any_of [useall] | failover | <carbon_ch | fnv1a_ch> [replication <count>]>
<forward | any_of [useall] | failover | <carbon_ch | fnv1a_ch | jump_fnv1a_ch> [replication <count>]>
<host[:port][=instance] [proto <udp | tcp>]> ...
;
cluster <name>
file
file [ip]
</path/to/file> ...
;
match
......@@ -78,6 +78,9 @@ aggregate
[send to <cluster ...>]
[stop]
;
send statistics to <cluster ...>
[stop]
;
```
Multiple clusters can be defined, and need not be referenced by a
......@@ -120,7 +123,20 @@ key. For example, usage like
`10.0.0.1:2003=4d79d13554fa1301476c1f9fe968b0ac` would allow changing the
port and/or ip address of the server that receives data for the instance
key. Obviously, this way migration of data can be dealt with much more
conveniently.
conveniently. The `jump_fnv1a_ch` cluster is also a consistent hash
cluster like the previous two, but it does not take the server
information into account at all. Whether this is useful to you depends
on your scenario. The jump hash balances metrics much better over the
servers defined in the cluster, at the expense of not being able to
remove any server but the last one in order. This means the jump hash
is fine for ever-growing clusters in which older nodes are eventually
replaced as well. If you have a cluster where old nodes are removed
frequently, the jump hash is not suitable for you. The jump hash works
with servers in an ordered list without gaps. To influence the
ordering, the instance given to a server is used as the sorting key;
without instances, the order is as given in the file. It is good
practice to fix the order of the servers with instances so that it is
explicit which nodes the jump hash maps onto.
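As a purely hypothetical sketch (the cluster name, addresses and
instance keys below are made up), pinning the jump-hash order with
instance keys could look like this:

```
cluster metrics
    jump_fnv1a_ch replication 2
        10.0.0.1:2003=a
        10.0.0.2:2003=b
        10.0.0.3:2003=c
    ;
```

Here the instances `a`, `b` and `c` only determine the sort order; a
new server would be given an instance that sorts last (e.g. `d`), since
the end of the list is the only position at which a jump-hash cluster
can grow or shrink without remapping metrics of the other servers.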
DNS hostnames are resolved to a single address, according to the preference
rules in [RFC 3484](https://www.ietf.org/rfc/rfc3484.txt). The `any_of`
......@@ -185,6 +201,12 @@ possible. Like for match rules, it is possible to define multiple
cluster targets. Also, like match rules, the `stop` keyword can be used
to control the flow of metrics in the matching process.
The special `send statistics to` construct is much like a `match` rule
which matches the (internal) statistics produced by the relay. It can
be used to avoid router loops when sending the statistics to a certain
destination. The `send statistics` construct can only be used once, but
multiple destinations can be given when required.
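For illustration only (the cluster name `graphite` is assumed here, not
taken from this document), routing the relay's own statistics to a
dedicated cluster would look like:

```
send statistics to graphite
    stop
    ;
```

As with match rules, the `stop` keyword halts further processing of
these metrics once they have been sent to the given cluster(s).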
Examples
--------
......@@ -554,21 +576,6 @@ namespace:
client. The idle-connection disconnect in the relay is there to guard
against resource drain in such scenarios.
* dispatch\_busy
The number of dispatchers actively doing work at the moment of the
sample. This is just an indication of the work pressure on the relay.
* dispatch\_idle
The number of dispatchers sleeping at the moment of the sample. When
this number nears 0, dispatch\_busy should be high. When the
configured number of worker threads is low, this might mean more
worker threads should be added (if the system allows it) or the relay
is reaching its limits with regard to how much it can process. A
relay with no idle dispatchers will likely appear slow to clients,
since the relay has too much work to serve them instantly.
* dispatch\_wallTime\_us
The number of microseconds spent by the dispatchers to do their work.
......@@ -578,7 +585,20 @@ namespace:
from a socket, cleaning up the input metric, to adding the metric to
the appropriate queues. The larger the configuration, and more
complex in terms of matches, the more time the dispatchers will spend
on the cpu.
on the cpu. Time they do /not/ spend on the cpu is also included in
this number: it is the pure wallclock time during which the dispatcher
was serving a client.
* dispatch\_sleepTime\_us
The number of microseconds spent by the dispatchers sleeping while
waiting for work. When this value gets small (or even zero), the
dispatcher has so much work that it no longer sleeps, and likely can no
longer process the work in a timely fashion. This value plus the
wallTime from above roughly adds up to the total uptime of this
dispatcher. Expressing the wallTime as a percentage of this sum
therefore gives the busyness percentage, which climbs all the way to
100% as sleepTime approaches 0 (a small sketch of this calculation
follows after this excerpt).
* server\_wallTime\_us
......
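The busyness calculation hinted at above can be written out as a tiny
standalone sketch (this program is not part of the relay; the function
name and sample values are made up, only the counter semantics come
from the description above):

```c
#include <stdio.h>

/* Sketch: derive a busyness percentage from the dispatch wallTime_us
 * and sleepTime_us counters described above.  100% means the
 * dispatcher never slept during the sampling interval. */
static double
dispatch_busyness(size_t walltime_us, size_t sleeptime_us)
{
	size_t total = walltime_us + sleeptime_us;

	if (total == 0)
		return 0.0;  /* no samples yet */
	return 100.0 * (double)walltime_us / (double)total;
}

int
main(void)
{
	/* e.g. a dispatcher that served clients for 750ms of every second */
	printf("busy: %.1f%%\n", dispatch_busyness(750000, 250000));
	return 0;
}
```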
/*
* Copyright 2013-2015 Fabian Groffen
* Copyright 2013-2016 Fabian Groffen
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
......@@ -29,6 +29,8 @@ typedef struct _aggregator {
unsigned short expire; /* when incoming metrics are no longer valid */
enum _aggr_timestamp { TS_START, TS_MIDDLE, TS_END } tswhen;
unsigned char bucketcnt;
int disp_conn;
int fd;
size_t received;
size_t sent;
size_t dropped;
......@@ -55,9 +57,9 @@ typedef struct _aggregator {
} *invocations_ht[1 << AGGR_HT_POW_SIZE];
unsigned char entries_needed:1;
unsigned char percentile:7;
pthread_rwlock_t invlock;
struct _aggr_computes *next;
} *computes;
pthread_mutex_t bucketlock;
struct _aggregator *next;
} aggregator;
......@@ -65,15 +67,15 @@ aggregator *aggregator_new(unsigned int interval, unsigned int expire, enum _agg
char aggregator_add_compute(aggregator *s, const char *metric, const char *type);
void aggregator_set_stub(aggregator *s, const char *stubname);
void aggregator_putmetric(aggregator *s, const char *metric, const char *firstspace, size_t nmatch, regmatch_t *pmatch);
int aggregator_start(server *submission);
int aggregator_start(aggregator *aggrs);
void aggregator_stop(void);
size_t aggregator_numaggregators(void);
size_t aggregator_numcomputes(void);
size_t aggregator_get_received(void);
size_t aggregator_get_sent(void);
size_t aggregator_get_dropped(void);
size_t aggregator_get_received_sub(void);
size_t aggregator_get_sent_sub(void);
size_t aggregator_get_dropped_sub(void);
size_t aggregator_numaggregators(aggregator *agrs);
size_t aggregator_numcomputes(aggregator *aggrs);
size_t aggregator_get_received(aggregator *aggrs);
size_t aggregator_get_sent(aggregator *aggrs);
size_t aggregator_get_dropped(aggregator *aggrs);
size_t aggregator_get_received_sub(aggregator *aggrs);
size_t aggregator_get_sent_sub(aggregator *aggrs);
size_t aggregator_get_dropped_sub(aggregator *aggrs);
#endif
/*
* Copyright 2013-2015 Fabian Groffen
* Copyright 2013-2016 Fabian Groffen
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
......@@ -20,6 +20,7 @@
#include "dispatcher.h"
#include "router.h"
#include "aggregator.h"
#include "server.h"
#include "relay.h"
......@@ -28,9 +29,9 @@ extern int collector_interval;
#define timediff(X, Y) \
(Y.tv_sec > X.tv_sec ? (Y.tv_sec - X.tv_sec) * 1000 * 1000 + ((Y.tv_usec - X.tv_usec)) : Y.tv_usec - X.tv_usec)
void collector_start(dispatcher **d, cluster *c, server *submission, char cum);
void collector_start(dispatcher **d, router *rtr, server *submission, char cum);
void collector_stop(void);
void collector_schedulereload(cluster *c);
void collector_schedulereload(router *rtr);
char collector_reloadcomplete(void);
#endif
/*
* Copyright 2013-2015 Fabian Groffen
* Copyright 2013-2016 Fabian Groffen
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
......@@ -30,7 +30,8 @@
/* This value is hardwired in the carbon sources, and necessary to get
* fair (re)balancing of metrics in the hash ring. Because the value
* seems reasonable, we use the same value for all hash implementations. */
* seems reasonable, we use the same value for carbon and fnv1a hash
* implementations. */
#define HASH_REPLICAS 100
typedef struct _ring_entry {
......@@ -44,6 +45,8 @@ struct _ch_ring {
ch_type type;
unsigned char hash_replicas;
ch_ring_entry *entries;
ch_ring_entry **entrylist; /* only used with jump hash */
int entrycnt;
};
......@@ -77,6 +80,27 @@ fnv1a_hashpos(const char *key, const char *end)
return (unsigned short)((hash >> 16) ^ (hash & (unsigned int)0xFFFF));
}
/**
* Computes the bucket number for key in the range [0, bckcnt). The
* algorithm used is the jump consistent hash by Lamping and Veach.
*/
static unsigned int
jump_bucketpos(unsigned long long int key, int bckcnt)
{
long long int b = -1, j = 0;
while (j < bckcnt) {
b = j;
key = key * 2862933555777941757ULL + 1;
j = (long long int)((double)(b + 1) *
((double)(1LL << 31) / (double)((key >> 33) + 1))
);
}
/* b cannot exceed the range of bckcnt, see while condition */
return (int)b;
}
/**
* Sort comparator for ch_ring_entry structs on pos, ip and instance.
*/
......@@ -134,6 +158,18 @@ entrycmp_fnv1a(const void *l, const void *r)
return 0;
}
/**
* Sort comparator for ch_ring_entry structs on instance only.
*/
static int
entrycmp_jump_fnv1a(const void *l, const void *r)
{
char *si_l = server_instance(((ch_ring_entry *)l)->server);
char *si_r = server_instance(((ch_ring_entry *)r)->server);
return strcmp(si_l ? si_l : "", si_r ? si_r : "");
}
ch_ring *
ch_new(ch_type type)
{
......@@ -142,8 +178,18 @@ ch_new(ch_type type)
if (ret == NULL)
return NULL;
ret->type = type;
ret->hash_replicas = HASH_REPLICAS;
switch (ret->type) {
case CARBON:
case FNV1a:
ret->hash_replicas = HASH_REPLICAS;
break;
default:
ret->hash_replicas = 1;
break;
}
ret->entries = NULL;
ret->entrylist = NULL;
ret->entrycnt = 0;
return ret;
}
......@@ -215,6 +261,13 @@ ch_addnode(ch_ring *ring, server *s)
}
cmp = *entrycmp_fnv1a;
break;
case JUMP_FNV1a:
entries[0].pos = 0;
entries[0].server = s;
entries[0].next = NULL;
entries[0].malloced = 0;
cmp = *entrycmp_jump_fnv1a;
break;
}
/* sort to allow merge joins later down the road */
......@@ -232,7 +285,7 @@ ch_addnode(ch_ring *ring, server *s)
last = NULL;
assert(ring->hash_replicas > 0);
for (w = ring->entries; w != NULL && i < ring->hash_replicas; ) {
if (cmp(&w->pos, &entries[i].pos) <= 0) {
if (cmp(w, &entries[i]) <= 0) {
last = w;
w = w->next;
} else {
......@@ -255,6 +308,28 @@ ch_addnode(ch_ring *ring, server *s)
}
}
if (ring->type == JUMP_FNV1a) {
ch_ring_entry *w;
/* count the ring, pos is purely cosmetic, it isn't used */
for (w = ring->entries, i = 0; w != NULL; w = w->next, i++)
w->pos = i;
ring->entrycnt = i;
/* this is really wasteful, but optimising this isn't worth it
* since it's called only a few times during config parsing */
if (ring->entrylist != NULL)
free(ring->entrylist);
ring->entrylist = malloc(sizeof(ch_ring_entry *) * ring->entrycnt);
for (w = ring->entries, i = 0; w != NULL; w = w->next, i++)
ring->entrylist[i] = w;
if (i == CONN_DESTS_SIZE) {
logerr("ch_addnode: nodes in use exceeds CONN_DESTS_SIZE, "
"increase CONN_DESTS_SIZE in router.h\n");
return NULL;
}
}
return ring;
}
......@@ -284,6 +359,41 @@ ch_get_nodes(
case FNV1a:
pos = fnv1a_hashpos(metric, firstspace);
break;
case JUMP_FNV1a: {
/* this is really a short route, since the jump hash gives
* us a bucket immediately */
unsigned long long int hash;
ch_ring_entry *bcklst[CONN_DESTS_SIZE];
const char *p;
i = ring->entrycnt;
pos = replcnt;
memcpy(bcklst, ring->entrylist, sizeof(bcklst[0]) * i);
fnv1a_64(hash, p, metric, firstspace);
while (i > 0) {
j = jump_bucketpos(hash, i);
(*ret).dest = bcklst[j]->server;
(*ret).metric = strdup(metric);
ret++;
if (--pos == 0)
break;
/* use xorshift to generate a different hash for input
* in the jump hash again */
hash ^= hash >> 12;
hash ^= hash << 25;
hash ^= hash >> 27;
hash *= 2685821657736338717ULL;
/* remove the server we just selected, such that we can
* be sure the next iteration will fetch another server */
bcklst[j] = bcklst[--i];
}
} return;
}
assert(ring->entries);
......@@ -335,6 +445,8 @@ ch_printhashring(ch_ring *ring, FILE *f)
column = 0;
}
}
if (column != 0)
fprintf(f, "\n");
}
unsigned short
......@@ -345,6 +457,11 @@ ch_gethashpos(ch_ring *ring, const char *key, const char *end)
return carbon_hashpos(key, end);
case FNV1a:
return fnv1a_hashpos(key, end);
case JUMP_FNV1a: {
unsigned long long int hash;
fnv1a_64(hash, key, key, end);
return jump_bucketpos(hash, ring->entrycnt);
}
default:
assert(0); /* this shouldn't happen */
}
......@@ -353,7 +470,8 @@ ch_gethashpos(ch_ring *ring, const char *key, const char *end)
}
/**
* Frees the ring structure and its added nodes.
* Frees the ring structure and its added nodes, leaves the referenced
* servers untouched.
*/
void
ch_free(ch_ring *ring)
......@@ -362,9 +480,9 @@ ch_free(ch_ring *ring)
ch_ring_entry *w = NULL;
for (; ring->entries != NULL; ring->entries = ring->entries->next) {
server_shutdown(ring->entries->server);
if (ring->entries->malloced) {
free(ring->entries->server);
if (deletes == NULL) {
w = deletes = ring->entries;
} else {
......@@ -381,5 +499,8 @@ ch_free(ch_ring *ring)
deletes = w;
}
if (ring->entrylist != NULL)
free(ring->entrylist);
free(ring);
}
/*
* Copyright 2013-2015 Fabian Groffen
* Copyright 2013-2016 Fabian Groffen
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
......@@ -27,7 +27,7 @@
#define CH_RING void
#endif
typedef CH_RING ch_ring;
typedef enum { CARBON, FNV1a } ch_type;
typedef enum { CARBON, FNV1a, JUMP_FNV1a } ch_type;
ch_ring *ch_new(ch_type type);
ch_ring *ch_addnode(ch_ring *ring, server *s);
......
carbon-c-relay (1.1-1~bpo70+1) wheezy-backports-sloppy; urgency=medium
carbon-c-relay (1.11-1~bpo7+1) wheezy-backports-sloppy; urgency=medium
* Rebuild for wheezy-backports-sloppy.
* Adding debian/gbp.conf for git-buildpackage.
-- Bernd Zeimetz <bzed@debian.org> Mon, 30 Nov 2015 13:45:11 +0100
-- Bernd Zeimetz <bzed@debian.org> Mon, 04 Apr 2016 19:37:34 +0200
carbon-c-relay (1.11-1) unstable; urgency=medium
* [388ca67] Merge tag 'upstream/1.11'
Upstream version 1.11
* [2d6be49] Refresh patches.
-- Bernd Zeimetz <bzed@debian.org> Tue, 29 Mar 2016 20:53:58 +0200
carbon-c-relay (1.7-1) unstable; urgency=medium
* [ef4ac7e] Merge tag 'upstream/1.7'
Upstream version 1.7
-- Bernd Zeimetz <bzed@debian.org> Sun, 07 Feb 2016 15:05:09 +0100
carbon-c-relay (1.3-1) unstable; urgency=medium
* [dcdd982] Merge tag 'upstream/1.3'
Upstream version 1.3
* [a16ad80] Refreshing patches
-- Bernd Zeimetz <bzed@debian.org> Sat, 02 Jan 2016 14:55:46 +0100
carbon-c-relay (1.1-1) unstable; urgency=medium
......
commit d0c296dc860c206ab69a5d42fe8b187d3844486d
Author: Fabian Groffen <fabian.groffen@booking.com>
Date: Wed Nov 25 15:33:29 2015 +0100
router_readconfig: fix replication count vs servers check
diff --git a/issues/issue117.conf b/issues/issue117.conf
index 36fc410..759d10a 100644
--- a/issues/issue117.conf
+++ b/issues/issue117.conf
@@ -5,6 +5,13 @@ cluster should-work-fine
127.0.0.3
;
+cluster should-also-work
+ fnv1a_ch replication 3
+ 127.0.0.1
+ 127.0.0.2
+ 127.0.0.3
+ ;
+
cluster really-doesnt-make-sense
fnv1a_ch replication 10
127.0.0.1
diff --git a/router.c b/router.c
index 5930dc6..e5a935c 100644
--- a/router.c
+++ b/router.c
@@ -717,7 +717,7 @@ router_readconfig(cluster **clret, route **rret,
size_t i = 0;
for (w = cl->members.ch->servers; w != NULL; w = w->next)
i++;
- if (i <= cl->members.ch->repl_factor) {
+ if (i < cl->members.ch->repl_factor) {
logerr("invalid cluster '%s': replication count (%zd) is "
"larger than the number of servers (%zd)\n",
name, cl->members.ch->repl_factor, i);
--- a/Makefile
+++ b/Makefile
@@ -19,7 +19,7 @@ CFLAGS ?= -O2 -Wall
OPENMP_FLAGS ?= -fopenmp
override CC += $(OPENMP_FLAGS)
@@ -15,7 +15,7 @@
CFLAGS ?= -O2 -Wall -Wshadow
-GIT_VERSION := $(shell git describe --abbrev=6 --dirty --always || date +%F)
+GIT_VERSION := $(shell dpkg-parsechangelog | awk '/^Version:/ {print $$2}')
......
fix-release-version-info
d0c296dc860c206ab69a5d42fe8b187d3844486d_fix_replication_count.diff
/*
* Copyright 2013-2015 Fabian Groffen
* Copyright 2013-2016 Fabian Groffen
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
......@@ -29,20 +29,24 @@ int dispatch_addlistener(int sock);
int dispatch_addlistener_udp(int sock);
void dispatch_removelistener(int sock);
int dispatch_addconnection(int sock);
int dispatch_addconnection_aggr(int sock);
dispatcher *dispatch_new_listener(void);
dispatcher *dispatch_new_connection(route *routes, char *allowed_chars);
dispatcher *dispatch_new_connection(router *r, char *allowed_chars);
void dispatch_stop(dispatcher *d);
void dispatch_shutdown(dispatcher *d);
void dispatch_free(dispatcher *d);
size_t dispatch_get_ticks(dispatcher *self);
size_t dispatch_get_metrics(dispatcher *self);
size_t dispatch_get_blackholes(dispatcher *self);
size_t dispatch_get_sleeps(dispatcher *self);
size_t dispatch_get_ticks_sub(dispatcher *self);
size_t dispatch_get_metrics_sub(dispatcher *self);
size_t dispatch_get_blackholes_sub(dispatcher *self);
char dispatch_busy(dispatcher *self);
size_t dispatch_get_sleeps_sub(dispatcher *self);
size_t dispatch_get_accepted_connections(void);
size_t dispatch_get_closed_connections(void);
void dispatch_schedulereload(dispatcher *d, route *r);
void dispatch_hold(dispatcher *d);
void dispatch_schedulereload(dispatcher *d, router *r);
char dispatch_reloadcomplete(dispatcher *d);
......
/*
* Copyright 2013-2015 Fabian Groffen
* Copyright 2013-2016 Fabian Groffen
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
......@@ -24,3 +24,14 @@
hash = FNV1A_32_OFFSET; \
for (p = metric; p < firstspace; p++) \
hash = (hash ^ (unsigned int)*p) * FNV1A_32_PRIME;
#define FNV1A_64_OFFSET 14695981039346656037ULL
#define FNV1A_64_PRIME 1099511628211UL
/**
* 64-bit unsigned FNV1a returning into hash, using p as the variable to
* walk over metric up to firstspace
*/
#define fnv1a_64(hash, p, metric, firstspace) \
hash = FNV1A_64_OFFSET; \
for (p = metric; p < firstspace; p++) \
hash = (hash ^ (unsigned long long int)*p) * FNV1A_64_PRIME;
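To illustrate how this macro combines with the jump hash above, here is
a hypothetical standalone sketch (the metric string and the bucket
count of 8 are made up; `jump_bucketpos` mirrors the static helper
added in consistent-hash.c and is duplicated so the sketch compiles on
its own):

```c
#include <stdio.h>
#include <string.h>

#define FNV1A_64_OFFSET 14695981039346656037ULL
#define FNV1A_64_PRIME 1099511628211UL

/* same macro as in fnv1a.h above */
#define fnv1a_64(hash, p, metric, firstspace) \
	hash = FNV1A_64_OFFSET; \
	for (p = metric; p < firstspace; p++) \
		hash = (hash ^ (unsigned long long int)*p) * FNV1A_64_PRIME;

/* jump consistent hash, mirroring jump_bucketpos() in consistent-hash.c */
static unsigned int
jump_bucketpos(unsigned long long int key, int bckcnt)
{
	long long int b = -1, j = 0;

	while (j < bckcnt) {
		b = j;
		key = key * 2862933555777941757ULL + 1;
		j = (long long int)((double)(b + 1) *
				((double)(1LL << 31) / (double)((key >> 33) + 1)));
	}
	return (unsigned int)b;
}

int
main(void)
{
	const char *metric = "sys.cpu.loadavg 0.42 1458000000\n";
	const char *firstspace = strchr(metric, ' ');
	const char *p;
	unsigned long long int hash;

	/* hash only the metric name (up to the first space) */
	fnv1a_64(hash, p, metric, firstspace);
	printf("hash: %llu -> bucket: %u of 8\n", hash, jump_bucketpos(hash, 8));
	return 0;
}
```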