Merging upstream version 1.15.0.

Signed-off-by: Daniel Baumann <daniel.baumann@progress-linux.org>
parent 7dac1895
<!---
This is a generic issue template. We usually prefer contributors to use one
of the 3 other specific issue templates (bug report, feature request, question)
to allow our automation to classify them, so you can get a response faster.
However, if your issue doesn't fall into any of those 3 categories, use
this generic template.
--->
#### Summary
---
name: Bug report
about: Create a bug report to help us improve
---
<!---
When creating a bug report please:
- Verify first that your issue is not already reported on GitHub
- Test if the latest release and master branch are affected too.
- Provide a clear and concise description of what the bug is in the "Bug report
summary" section.
- Try to provide as much information about your environment (OS distribution,
running in a container, etc.) as possible to allow us to reproduce this bug faster.
- Write which component is affected. We group our components the same way our
code is structured, so basically:
component name = directory in the top level of the repository
- Describe how you found this bug and how we can reproduce it, preferably with
a minimal test-case scenario. You can paste gist.github.com links for larger
files.
- Provide a clear and concise description of what you expected to happen.
-->
##### Bug report summary
##### OS / Environment
##### Netdata version (output of `netdata -V`)
##### Component Name
##### Steps To Reproduce
##### Expected behavior
---
name: Feature request
about: Suggest an idea for our project
---
<!---
When creating a feature request please:
- Verify first that your issue is not already reported on GitHub
- Briefly explain the new feature in the "Feature idea summary" section
- Provide a clear and concise description of what you expect to happen.
--->
##### Feature idea summary
##### Expected behavior
---
name: Question
about: You just want to ask a question? Go on.
---
<!---
When asking a new question please:
- Verify first that your question wasn't asked before on GitHub.
HINT: Use label "question" when searching for such issues.
- Briefly explain the problem you are having
- Try to provide as much information about your environment (OS distribution,
running in a container, etc.) as possible to allow us to help you faster.
- Write which component is affected. We group our components the same way our
code is structured, so basically:
component name = directory in the top level of the repository
- Provide a clear and concise description of what you expected to happen.
-->
##### Question summary
##### OS / Environment
##### Component Name
##### Expected results
<!--
Describe the change in the "Summary" section, including the rationale and design decisions.
Include "Fixes #nnn" if you are fixing an existing issue.
In the "Component Name" section, write which component is changed in this PR. This
will help us review your PR quicker.
If you have more information you want to add, write it in the "Additional
Information" section. This is usually used to help others understand your
motivation behind this change. A step-by-step reproduction of the problem is
helpful if there is no related issue.
-->
##### Summary
##### Component Name
##### Additional Information
---
only: issues
limitPerRun: 30
daysUntilStale: 45
daysUntilClose: 60
exemptLabels:
  - bug
  - help wanted
  - feature request
exemptProjects: true
exemptMilestones: true
staleLabel: stale
markComment: >
  Currently the netdata team doesn't have enough capacity to work on this issue.
  We will be more than glad to accept a pull request with a solution to the problem
  described here. This issue will be closed after another 60 days of inactivity.
closeComment: >
  This issue has been automatically closed due to an extended period of inactivity.
  Please reopen it if it is still valid. Thank you for your contributions.
......@@ -112,6 +112,7 @@ collectors/charts.d.plugin/charts.d.plugin
collectors/node.d.plugin/node.d.plugin
collectors/python.d.plugin/python.d.plugin
collectors/fping.plugin/fping.plugin
collectors/ioping.plugin/ioping.plugin
collectors/go.d.plugin
# installer generated files
......@@ -129,6 +130,7 @@ cmake_install.cmake
.jetbrains*
.DS_Store
webcopylocal*
contrib/debian/changelog
......
......@@ -4,7 +4,7 @@
- GITHUB_TOKEN - GitHub token with push access to repository
- DOCKER_USERNAME - Username (netdatabot) with write access to docker hub repository
-- DOCKER_PASS - Password to docker hub
+- DOCKER_PWD - Password to docker hub
- encrypted_8daf19481253_key - key needed by openssl to decrypt GCS credentials file
- encrypted_8daf19481253_iv - IV needed by openssl to decrypt GCS credentials file
- COVERITY_SCAN_TOKEN - Token to allow coverity test analysis uploads
......
-#!/bin/bash
+#!/usr/bin/env bash
#
# Artifacts creation script.
# This script generates two things:
# 1) The static binary that can run on all linux distros (with built-in dependencies, etc.)
# 2) The distribution source tarball
#
# Copyright: SPDX-License-Identifier: GPL-3.0-or-later
#
# Author: Paul Emm. Katsoulakis <paul@netdata.cloud>
#
# shellcheck disable=SC2230
set -e
-if [ ! -f .gitignore ]; then
-  echo "Run as ./travis/$(basename "$0") from top level directory of git repository"
-  exit 1
+# If we are not in netdata git repo, at the top level directory, fail
+TOP_LEVEL=$(basename "$(git rev-parse --show-toplevel)")
+CWD=$(git rev-parse --show-cdup || echo "")
+if [ -n "${CWD}" ] || [ ! "${TOP_LEVEL}" == "netdata" ]; then
+  echo "Run as .travis/$(basename "$0") from top level directory of netdata git repository"
+  exit 1
 fi
if [ ! "${TRAVIS_REPO_SLUG}" == "netdata/netdata" ]; then
......@@ -13,12 +26,9 @@ if [ ! "${TRAVIS_REPO_SLUG}" == "netdata/netdata" ]; then
exit 0
fi;
echo "--- Initialize git configuration ---"
export GIT_MAIL="bot@netdata.cloud"
export GIT_USER="netdatabot"
git config user.email "${GIT_MAIL}"
git config user.name "${GIT_USER}"
-git checkout master
+git checkout "${1-master}"
git pull
# Everything from this directory will be uploaded to GCS
......
......@@ -23,10 +23,6 @@ if [ ! -f .gitignore ]; then
fi
echo "--- Initialize git configuration ---"
-export GIT_MAIL="bot@netdata.cloud"
-export GIT_USER="netdatabot"
-git config user.email "${GIT_MAIL}"
-git config user.name "${GIT_USER}"
git checkout master
git pull
......
......@@ -48,8 +48,6 @@ fi
echo "--- Initialize git configuration ---"
export GIT_MAIL="bot@netdata.cloud"
export GIT_USER="netdatabot"
-git config user.email "${GIT_MAIL}"
-git config user.name "${GIT_USER}"
git checkout master
git pull
......@@ -62,7 +60,7 @@ echo "---- GENERATE CHANGELOG -----"
git add CHANGELOG.md
echo "---- COMMIT AND PUSH CHANGES ----"
git commit -m "[ci skip] release $GIT_TAG"
git commit -m "[ci skip] release $GIT_TAG" --author "${GIT_USER} <${GIT_MAIL}>"
git tag "$GIT_TAG" -a -m "Automatic tag generation for travis build no. $TRAVIS_BUILD_NUMBER"
git push "https://${GITHUB_TOKEN}:@$(git config --get remote.origin.url | sed -e 's/^https:\/\///')"
git push "https://${GITHUB_TOKEN}:@$(git config --get remote.origin.url | sed -e 's/^https:\/\///')" --tags
......
......@@ -38,8 +38,6 @@ if [ ! "${TRAVIS_REPO_SLUG}" == "netdata/netdata" ]; then
exit 0
fi
-git config user.email "${GIT_MAIL}"
-git config user.name "${GIT_USER}"
git checkout master
git pull
......@@ -58,7 +56,7 @@ echo "Changelog created! Adding packaging/version(${NEW_VERSION}) and CHANGELOG.
echo "${NEW_VERSION}" > packaging/version
git add packaging/version && echo "1) Added packaging/version to repository" || FAIL=1
git add CHANGELOG.md && echo "2) Added changelog file to repository" || FAIL=1
-git commit -m '[ci skip] create nightly packages and update changelog' && echo "3) Committed changes to repository" || FAIL=1
+git commit -m '[ci skip] create nightly packages and update changelog' --author "${GIT_USER} <${GIT_MAIL}>" && echo "3) Committed changes to repository" || FAIL=1
git push "https://${GITHUB_TOKEN}:@${PUSH_URL}" && echo "4) Pushed changes to remote ${PUSH_URL}" || FAIL=1
# In case of a failure, wrap it up and bail out cleanly
......
......@@ -311,6 +311,24 @@ RRD_PLUGIN_FILES = \
database/rrdvar.h \
$(NULL)
if ENABLE_DBENGINE
RRD_PLUGIN_FILES += \
database/engine/rrdengine.c \
database/engine/rrdengine.h \
database/engine/rrddiskprotocol.h \
database/engine/datafile.c \
database/engine/datafile.h \
database/engine/journalfile.c \
database/engine/journalfile.h \
database/engine/rrdenginelib.c \
database/engine/rrdenginelib.h \
database/engine/rrdengineapi.c \
database/engine/rrdengineapi.h \
database/engine/pagecache.c \
database/engine/pagecache.h \
$(NULL)
endif
API_PLUGIN_FILES = \
web/api/badges/web_buffer_svg.c \
web/api/badges/web_buffer_svg.h \
......@@ -412,6 +430,13 @@ BACKENDS_PLUGIN_FILES = \
backends/prometheus/backend_prometheus.h \
$(NULL)
KINESIS_BACKEND_FILES = \
backends/aws_kinesis/aws_kinesis.c \
backends/aws_kinesis/aws_kinesis.h \
backends/aws_kinesis/aws_kinesis_put_record.cc \
backends/aws_kinesis/aws_kinesis_put_record.h \
$(NULL)
DAEMON_FILES = \
daemon/common.c \
daemon/common.h \
......@@ -470,14 +495,23 @@ NETDATA_COMMON_LIBS = \
$(OPTIONAL_MATH_LIBS) \
$(OPTIONAL_ZLIB_LIBS) \
$(OPTIONAL_UUID_LIBS) \
$(OPTIONAL_UV_LIBS) \
$(OPTIONAL_LZ4_LIBS) \
$(OPTIONAL_JUDY_LIBS) \
$(OPTIONAL_SSL_LIBS) \
$(NULL)
# TODO: Find a more graceful way to add libs for AWS Kinesis
sbin_PROGRAMS += netdata
netdata_SOURCES = $(NETDATA_FILES)
netdata_LDADD = \
$(NETDATA_COMMON_LIBS) \
$(NULL)
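# the kinesis backend adds C++ sources, so when it is enabled the final
# link must be done with the C++ linker instead of the C linker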
if ENABLE_BACKEND_KINESIS
netdata_LINK = $(CXXLD) $(CXXFLAGS) $(LDFLAGS) -o $@
else
netdata_LINK = $(CCLD) $(CFLAGS) $(LDFLAGS) -o $@
endif
if ENABLE_PLUGIN_APPS
plugins_PROGRAMS += apps.plugin
......@@ -531,3 +565,8 @@ if ENABLE_PLUGIN_XENSTAT
$(OPTIONAL_XENSTAT_LIBS) \
$(NULL)
endif
if ENABLE_BACKEND_KINESIS
netdata_SOURCES += $(KINESIS_BACKEND_FILES)
netdata_LDADD += $(OPTIONAL_KINESIS_LIBS)
endif
......@@ -8,6 +8,7 @@ SUBDIRS = \
json \
opentsdb \
prometheus \
aws_kinesis \
$(NULL)
dist_noinst_DATA = \
......
......@@ -32,24 +32,28 @@ X seconds (though, it can send them per second if you need it to).
- **prometheus** is described at [prometheus page](prometheus/) since it pulls data from netdata.
- **AWS Kinesis Data Streams**
metrics are sent to the service in `JSON` format.
2. Only one backend may be active at a time.
3. Netdata can filter metrics (at the chart level) to send only a subset of the collected metrics.
4. Netdata supports three modes of operation for all backends:
- `as-collected` sends to backends the metrics as they are collected, in the units they are collected.
So, counters are sent as counters and gauges are sent as gauges, much like all data collectors do.
For example, to calculate CPU utilization in this format, you need to know how to convert kernel ticks to percentage.
- `average` sends to backends normalized metrics from the netdata database.
In this mode, all metrics are sent as gauges, in the units netdata uses. This abstracts data collection
and simplifies visualization, but you will not be able to copy and paste queries from other sources to convert units.
For example, CPU utilization percentage is calculated by netdata, so netdata will convert ticks to percentage and
send the average percentage to the backend.
- `sum` or `volume`: the sum of the interpolated values shown on the netdata graphs is sent to the backend.
So, if netdata is configured to send data to the backend every 10 seconds, the sum of the 10 values shown on the
netdata charts will be used.
Time-series databases suggest collecting the raw values (`as-collected`). If you plan to invest in building your monitoring around a time-series database, and you already know (or will invest in learning) how to convert units and normalize the metrics in Grafana or other visualization tools, we suggest using `as-collected`.
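To make the three modes concrete, here is a small illustrative sketch (not netdata code) of what the backend would receive for one dimension, assuming a 10 second backend interval and the per-second values shown on the chart:

```
#include <stdio.h>

int main(void) {
    // per-second values shown on the netdata chart for the last 10 seconds
    double shown[10] = { 5, 7, 6, 5, 8, 9, 4, 6, 7, 5 };

    double sum = 0;
    for(int i = 0; i < 10; i++) sum += shown[i];

    // as-collected: the raw value is sent instead, e.g. the kernel's
    // monotonically increasing counter itself, not these per-second rates
    printf("average: %g\n", sum / 10); // one gauge: 6.2
    printf("sum/volume: %g\n", sum);   // one gauge: 62
    return 0;
}
```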
......@@ -66,9 +70,9 @@ of `netdata.conf` from your netdata):
```
[backend]
enabled = yes | no
-type = graphite | opentsdb | json
+type = graphite | opentsdb | json | kinesis
host tags = list of TAG=VALUE
-destination = space separated list of [PROTOCOL:]HOST[:PORT] - the first working will be used
+destination = space separated list of [PROTOCOL:]HOST[:PORT] - the first working will be used, or a region for kinesis
data source = average | sum | as collected
prefix = netdata
hostname = my-name
......@@ -82,7 +86,7 @@ of `netdata.conf` from your netdata):
- `enabled = yes | no`, enables or disables sending data to a backend
-- `type = graphite | opentsdb | json`, selects the backend type
+- `type = graphite | opentsdb | json | kinesis`, selects the backend type
- `destination = host1 host2 host3 ...`, accepts **a space separated list** of hostnames,
IPs (IPv4 and IPv6) and ports to connect to.
......@@ -105,7 +109,7 @@ of `netdata.conf` from your netdata):
```
Example IPv6 and IPv4 together:
```
destination = [ffff:...:0001]:2003 10.11.12.1:2003
```
......@@ -118,6 +122,8 @@ of `netdata.conf` from your netdata):
time-series database when it becomes available again. It can also be used to monitor / trace / debug
the metrics netdata generates.
For the kinesis backend, `destination` should be set to an AWS region (for example, `us-east-1`).
- `data source = as collected`, or `data source = average`, or `data source = sum`, selects the kind of
data that will be sent to the backend.
......@@ -170,7 +176,7 @@ netdata provides 5 charts:
1. **Buffered metrics**, the number of metrics netdata added to the buffer for dispatching them to the
backend server.
2. **Buffered data size**, the amount of data (in KB) netdata added to the buffer.
3. ~~**Backend latency**, the time the backend server needed to process the data netdata sent.
......@@ -178,7 +184,7 @@ netdata provides 5 charts:
(this chart has been removed, because it only measures the time netdata needs to give the data
to the O/S - since the backend servers do not ack the reception, netdata does not have any means
to measure this properly).
4. **Backend operations**, the number of operations performed by netdata.
5. **Backend thread CPU usage**, the CPU resources consumed by the netdata thread that is responsible
......
# SPDX-License-Identifier: GPL-3.0-or-later
AUTOMAKE_OPTIONS = subdir-objects
MAINTAINERCLEANFILES = $(srcdir)/Makefile.in
dist_noinst_DATA = \
README.md \
$(NULL)
dist_libconfig_DATA = \
aws_kinesis.conf \
$(NULL)
\ No newline at end of file
# Using netdata with AWS Kinesis Data Streams
## Prerequisites
To use AWS Kinesis as a backend, the AWS SDK for C++ should be [installed](https://docs.aws.amazon.com/en_us/sdk-for-cpp/v1/developer-guide/setup.html) first. `libcrypto`, `libssl`, and `libcurl` are also required to compile netdata with Kinesis support enabled. Next, netdata should be re-installed from source; the installer will detect that the required libraries are now available.

If the AWS SDK for C++ is being built from source, it is useful to set `-DBUILD_ONLY="kinesis"`; otherwise, the build could take a very long time.
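For reference, a minimal SDK build with only the Kinesis module might look like this (clone location and install prefix are just examples):

```
git clone https://github.com/aws/aws-sdk-cpp.git
cd aws-sdk-cpp
cmake -DCMAKE_BUILD_TYPE=Release -DBUILD_ONLY="kinesis" .
make
make install
```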
## Configuration
To enable data sending to the kinesis backend set the following options in `netdata.conf`:
```
[backend]
enabled = yes
type = kinesis
destination = us-east-1
```
Set the `destination` option to an AWS region.
In the netdata configuration directory run `./edit-config aws_kinesis.conf` and set AWS credentials and stream name:
```
# AWS credentials
aws_access_key_id = your_access_key_id
aws_secret_access_key = your_secret_access_key
# destination stream
stream name = your_stream_name
```
Alternatively, AWS credentials can be set for the *netdata* user using the AWS SDK for C++ [standard methods](https://docs.aws.amazon.com/sdk-for-cpp/v1/developer-guide/credentials.html).
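For example, one of those standard methods is the shared credentials file in the home directory of the user netdata runs as (the exact home directory depends on your installation):

```
# ~/.aws/credentials
[default]
aws_access_key_id = your_access_key_id
aws_secret_access_key = your_secret_access_key
```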
A partition key for every record is computed automatically by netdata to distribute records evenly across the available shards.
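Kinesis places a record into a shard by hashing its partition key (MD5), so using many distinct keys spreads records across shards. A minimal sketch of such a key generator (illustrative only, not netdata's exact scheme):

```
#include <stdio.h>

// illustrative only: vary the partition key per record so records
// hash to different shards
void make_partition_key(char *dst, size_t dst_len, unsigned long long sequence) {
    snprintf(dst, dst_len, "netdata_%llu", sequence % 100ULL);
}
```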
[![analytics](https://www.google-analytics.com/collect?v=1&aip=1&t=pageview&_s=1&ds=github&dr=https%3A%2F%2Fgithub.com%2Fnetdata%2Fnetdata&dl=https%3A%2F%2Fmy-netdata.io%2Fgithub%2Fbackends%2Faws_kinesis%2FREADME&_u=MAC~&cid=5792dfd7-8dc4-476b-af31-da2fdb9f93d2&tid=UA-64295674-3)]()
// SPDX-License-Identifier: GPL-3.0-or-later
#define BACKENDS_INTERNALS
#include "aws_kinesis.h"
#define CONFIG_FILE_LINE_MAX ((CONFIG_MAX_NAME + CONFIG_MAX_VALUE + 1024) * 2)
// ----------------------------------------------------------------------------
// kinesis backend
// read the aws_kinesis.conf file
int read_kinesis_conf(const char *path, char **access_key_id_p, char **secret_access_key_p, char **stream_name_p)
{
    char *access_key_id = *access_key_id_p;
    char *secret_access_key = *secret_access_key_p;
    char *stream_name = *stream_name_p;

    if(unlikely(access_key_id)) freez(access_key_id);
    if(unlikely(secret_access_key)) freez(secret_access_key);
    if(unlikely(stream_name)) freez(stream_name);
    access_key_id = NULL;
    secret_access_key = NULL;
    stream_name = NULL;

    // reset the caller's pointers immediately, so they cannot dangle
    // if we return early below
    *access_key_id_p = NULL;
    *secret_access_key_p = NULL;
    *stream_name_p = NULL;
    int line = 0;

    char filename[FILENAME_MAX + 1];
    snprintfz(filename, FILENAME_MAX, "%s/aws_kinesis.conf", path);

    char buffer[CONFIG_FILE_LINE_MAX + 1], *s;

    debug(D_BACKEND, "BACKEND: opening config file '%s'", filename);

    FILE *fp = fopen(filename, "r");
    if(!fp) {
        return 1;
    }

    while(fgets(buffer, CONFIG_FILE_LINE_MAX, fp) != NULL) {
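        // fgets() already NUL-terminates; the next line defensively terminates the last byte of the buffer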
        buffer[CONFIG_FILE_LINE_MAX] = '\0';
        line++;

        s = trim(buffer);
        if(!s || *s == '#') {
            debug(D_BACKEND, "BACKEND: ignoring line %d of file '%s', it is empty.", line, filename);
            continue;
        }

        char *name = s;
        char *value = strchr(s, '=');
        if(unlikely(!value)) {
            error("BACKEND: ignoring line %d ('%s') of file '%s', there is no = in it.", line, s, filename);
            continue;
        }
        *value = '\0';
        value++;

        name = trim(name);
        value = trim(value);

        if(unlikely(!name || *name == '#')) {
            error("BACKEND: ignoring line %d of file '%s', name is empty.", line, filename);
            continue;
        }

        if(!value) value = "";

        // strip quotes
        if(*value == '"' || *value == '\'') {
            value++;

            s = value;
            while(*s) s++;
            if(s != value) s--;

            if(*s == '"' || *s == '\'') *s = '\0';
        }
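        // cheap character checks first, so the full strcmp() runs only on likely matches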
        if(name[0] == 'a' && name[4] == 'a' && !strcmp(name, "aws_access_key_id")) {
            access_key_id = strdupz(value);
        }
        else if(name[0] == 'a' && name[4] == 's' && !strcmp(name, "aws_secret_access_key")) {
            secret_access_key = strdupz(value);
        }
        else if(name[0] == 's' && !strcmp(name, "stream name")) {
            stream_name = strdupz(value);
        }
    }

    fclose(fp);
    if(unlikely(!stream_name || !*stream_name)) {
        error("BACKEND: stream name is a mandatory Kinesis parameter but it is not configured");
        // free any values we may have parsed before bailing out
        freez(access_key_id);
        freez(secret_access_key);
        freez(stream_name);
        return 1;
    }
    *access_key_id_p = access_key_id;
    *secret_access_key_p = secret_access_key;
    *stream_name_p = stream_name;

    return 0;
}
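
// Illustrative usage sketch (not netdata source). "netdata_configured_user_config_dir"
// is assumed to hold the user config directory; empty credentials are acceptable,
// since the AWS SDK then falls back to its standard credential providers.
static int kinesis_backend_init_sketch(void) {
    char *access_key_id = NULL, *secret_access_key = NULL, *stream_name = NULL;

    if(read_kinesis_conf(netdata_configured_user_config_dir,
                         &access_key_id, &secret_access_key, &stream_name)) {
        error("BACKEND: kinesis backend is not configured properly");
        return 1;
    }

    // ... hand the values to the Kinesis client, then freez() them when done
    return 0;
}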
# AWS Kinesis Data Streams backend configuration
#
# All options in this file are mandatory
# AWS credentials
aws_access_key_id =
aws_secret_access_key =
# destination stream
stream name =
\ No newline at end of file
// SPDX-License-Identifier: GPL-3.0-or-later
#ifndef NETDATA_BACKEND_KINESIS_H
#define NETDATA_BACKEND_KINESIS_H
#include "backends/backends.h"
#include "aws_kinesis_put_record.h"
#define KINESIS_PARTITION_KEY_MAX 256
#define KINESIS_RECORD_MAX (1024 * 1024)

extern int read_kinesis_conf(const char *path, char **access_key_id_p, char **secret_access_key_p, char **stream_name_p);
#endif //NETDATA_BACKEND_KINESIS_H
// SPDX-License-Identifier: GPL-3.0-or-later
#include <aws/core/Aws.h>
#include <aws/core/client/ClientConfiguration.h>
#include <aws/core/auth/AWSCredentials.h>
#include <aws/core/utils/Outcome.h>
#include <aws/kinesis/KinesisClient.h>
#include <aws/kinesis/model/PutRecordRequest.h>
#include "aws_kinesis_put_record.h"
using namespace Aws;
SDKOptions options;
Kinesis::KinesisClient *client;
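
// What follows is an illustrative sketch, not netdata's exact implementation:
// a typical init/put sequence with this SDK and the globals declared above.
// Function names and error handling here are assumptions.

void kinesis_init_sketch(const char *region, const char *access_key_id, const char *secret_access_key) {
    InitAPI(options);

    Client::ClientConfiguration config;
    config.region = region;

    if(access_key_id && *access_key_id && secret_access_key && *secret_access_key)
        client = New<Kinesis::KinesisClient>("netdata",
                                             Auth::AWSCredentials(access_key_id, secret_access_key),
                                             config);
    else
        // no explicit credentials: the SDK falls back to its default provider chain
        client = New<Kinesis::KinesisClient>("netdata", config);
}

void kinesis_put_record_sketch(const char *stream_name, const char *partition_key,
                               const char *data, size_t data_len) {
    Kinesis::Model::PutRecordRequest request;

    request.SetStreamName(stream_name);
    request.SetPartitionKey(partition_key);
    request.SetData(Utils::ByteBuffer((const unsigned char *) data, data_len));

    // a production implementation would inspect the returned outcome
    client->PutRecord(request);
}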