Merging upstream version 1.11.1+dfsg

Signed-off-by: Daniel Baumann <daniel.baumann@progress-linux.org>
parent b3735ccd
Pipeline #29974 failed with stage
in 3 minutes and 58 seconds
@@ -4,9 +4,9 @@ exclude_paths:
- collectors/python.d.plugin/python_modules/pyyaml3/**
- collectors/python.d.plugin/python_modules/urllib3/**
- collectors/python.d.plugin/python_modules/third_party/**
- web/css/**
- web/lib/**
- web/old/**
- web/gui/src/**
- collectors/node.d.plugin/node_modules/**
- web/gui/css/**
- web/gui/lib/**
- web/gui/old/**
- web/gui/src/**
- tests/**
@@ -2,30 +2,47 @@
* @ktsaou
# Ownership by directory structure
.travis/ @paulfantom
# backends/
.travis/ @paulfantom
build/ @paulfantom
collectors/python.d.plugin/ @l2isbad @Ferroin
contrib/ @paulfantom
daemon/ @ktsaou
database/ @ktsaou
backends/ @ktsaou @vlvkobal
backends/graphite/ @ktsaou @vlvkobal
backends/json/ @ktsaou @vlvkobal
backends/opentsdb/ @ktsaou @vlvkobal
backends/prometheus/ @ktsaou @vlvkobal @paulfantom
collectors/ @ktsaou @vlvkobal
collectors/charts.d.plugin/ @ktsaou @paulfantom
collectors/freebsd.plugin/ @vlvkobal
collectors/macos.plugin/ @vlvkobal
collectors/node.d.plugin/ @ktsaou @gmosx
collectors/node.d.plugin/fronius/ @ktsaou @gmosx @ccremer
collectors/node.d.plugin/snmp/ @ktsaou @gmosx @cakrit
collectors/node.d.plugin/stiebeleltron/ @ktsaou @gmosx @ccremer
collectors/python.d.plugin/ @l2isbad
daemon/ @ktsaou @vlvkobal
database/ @ktsaou @mfundul
docker/ @paulfantom
# health/
# installer/
# libnetdata/
# makeself/
health/ @ktsaou @mfundul
health/health.d/ @ktsaou @cakrit
health/notifications/ @ktsaou @Ferroin
installer/ @ktsaou @paulfantom
libnetdata/ @ktsaou @vlvkobal
makeself/ @ktsaou @paulfantom
packaging/ @paulfantom
# registry/
# streaming/
# system/
# tests/
# web/
registry/ @ktsaou @gmosx
streaming/ @ktsaou @mfundul
web/ @ktsaou @vlvkobal @gmosx
# Ownership by filetype (overwrites ownership by directory)
*.md @ktsaou @cakrit
*.am @paulfantom @ktsaou
# Ownership of specific files
CHANGELOG.md @netdatabot
.travis.yml @paulfantom
.lgtm.yml @paulfantom
.eslintrc @paulfantom
.eslintignore @paulfantom
.csslintrc @paulfantom
.codeclimate.yml @paulfantom
.codacy.yml @paulfantom
# Ownership by filetype (overwrites ownership by directory)
*.am @paulfantom @ktsaou
*.c *.h @ktsaou @vlvkobal
CHANGELOG.md @netdatabot
@@ -143,3 +143,8 @@ sitespeed-result/
# tests and temp files
python.d/python-modules-installer.sh
# documentation generated files
htmldoc/src
htmldoc/build
htmldoc/mkdocs.yml
@@ -18,6 +18,7 @@ path_classifiers:
- collectors/node.d.plugin/node_modules/net-snmp.js
- collectors/node.d.plugin/node_modules/pixl-xml.js
- web/gui/lib/
- web/gui/src/
- web/gui/css/
test:
- tests/
@@ -29,40 +29,43 @@ installations of netdata. Jobs are run on the following operating systems:
- CentOS 7 (containerized)
- alpine (containerized)
### Release
### Packaging
This stage is executed only on the "master" branch and allows us to create a new tag just by looking at the git commit message.
It also has an option to automatically generate a changelog based on GitHub labels and sync it with the GitHub release.
For the sake of simplicity, and to use Travis features, this stage cannot be integrated with the next stage.
Releases are generated by searching for a keyword in the last commit message. Keywords are:
- [patch] or [fix] to bump patch number
- [minor], [feature] or [feat] to bump minor number
- [major] or [breaking change] to bump major number
All keywords MUST be surrounded with square brackets.
An alternative is to push a tag to the master branch.
### Packaging
It executes one script called `releaser.sh`, which is responsible for creating a release on GitHub by using
[hub](https://github.com/github/hub). This script also executes other scripts, which can be used in other
CI jobs:
- `tagger.sh`
- `generate_changelog.sh`
- `build.sh`
- `create_artifacts.sh`
This stage is executed only on the "master" branch and is separated into 3 jobs:
- Update Changelog/Create release
- Nightly tarball and self-extractor build
- Nightly docker images
Alternatively, a new release can also be created by pushing a new tag to the master branch.
##### Update Changelog/Create release
##### tagger.sh
This job runs one script called `releaser.sh`, which is responsible for a couple of things. First of all, it
automatically updates our CHANGELOG.md file based on GitHub features (mostly labels and pull requests). Apart from
that, it can also create a new git tag and a GitHub draft release connected to that tag.
Releases are generated by searching for a keyword in the last commit message. Keywords are:
This script determines the next tag based on a keyword in the last commit message. Keywords are:
- `[netdata patch release]` to bump patch number
- `[netdata minor release]` to bump minor number
- `[netdata major release]` to bump major number
- `[netdata release candidate]` to create a new release candidate (appends or modifies suffix `-rcX` of previous tag)
All keywords MUST be surrounded with square brackets.
Tag is then stored in `GIT_TAG` variable.
Alternatively, a new release can also be created by pushing a new tag to the master branch.
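The keyword matching described above boils down to a shell `case` statement. As a standalone sketch (the commit message below is invented for illustration), this is how a bump type gets resolved:

```shell
# Sketch of tagger.sh-style keyword matching (the commit message is made up)
TRAVIS_COMMIT_MESSAGE="cgroups: fix parsing [netdata patch release]"
case "${TRAVIS_COMMIT_MESSAGE}" in
    *"[netdata patch release]"*)     BUMP="patch" ;;
    *"[netdata minor release]"*)     BUMP="minor" ;;
    *"[netdata major release]"*)     BUMP="major" ;;
    *"[netdata release candidate]"*) BUMP="rc" ;;
    *)                               BUMP="none" ;;
esac
echo "$BUMP"
```

Note the square brackets in the keywords: without them, ordinary words like "patch" in a commit message would trigger accidental releases.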
##### generate_changelog.sh
##### Nightly tarball and self-extractor build AND Nightly docker images
Automatic changelog generator which updates our CHANGELOG.md file based on GitHub features (mostly labels and pull
requests). Internally it uses
[github-changelog-generator](https://github.com/github-changelog-generator/github-changelog-generator); more
information can be found on that project's site.
##### build.sh and create_artifacts.sh
Scripts used to build new container images and provide release artifacts (tar.gz and makeself archives).
### Nightlies
##### Tarball and self-extractor build AND Nightly docker images
As the names suggest, these two jobs are responsible for nightly netdata package creation and run every day (via
cron). Combined they produce:
@@ -70,11 +73,16 @@ cron). Combined they produce:
- tar.gz archive (soon to be removed)
- self-extracting package
Currently, "Nightly tarball and self-extractor build" uses the old firehol script and is planned to be replaced with
a new design.
This is achieved by running 2 scripts described earlier:
- `create_artifacts.sh`
- `build.sh`
##### Nightly changelog generation
##### Changelog generation
This job regenerates the changelog every day by executing the `generate_changelog.sh` script. This is done
only once a day because of the GitHub rate limiter.
##### Labeler
Once a day we perform automatic label assignment by executing `labeler.sh`. This script is a temporary workaround until
we start using GitHub Actions. For more information about what it currently does, read its code.
#!/bin/bash
set -e
# Decrypt our private files; changes to this file should be inspected
# closely to ensure they do not create information leaks
eval key="\${encrypted_${1}_key}"
eval iv="\${encrypted_${1}_iv}"
if [ ! "$key" ]
then
echo "No aes key present - skipping decryption"
exit 0
fi
for i in .travis/*.enc
do
u=$(echo "$i" | sed -e 's/\.enc$//')
openssl aes-256-cbc -K "$key" -iv "$iv" -in "$i" -out "$u" -d
done
if [ -f .travis/travis_rsa ]
then
echo "ssh key present - loading to agent"
# add key, then remove to prevent leaks
chmod 600 .travis/travis_rsa
ssh-add .travis/travis_rsa
rm -f .travis/travis_rsa
touch /tmp/ssh-key-loaded
else
echo "No ssh key present - skipping agent start"
fi
#!/bin/bash
set -e
# Deploy tar-files and checksums to the firehol website
if [ ! -f /tmp/ssh-key-loaded ]
then
echo "No ssh key decrypted - skipping deployment to website"
exit 0
fi
case "$TRAVIS_BRANCH" in
master|stable-*)
:
;;
*)
echo "Not on master or stable-* branch - skipping deployment to website"
exit 0
;;
esac
# TRAVIS_PULL_REQUEST is either "false" or the PR number, never "true"
if [ "$TRAVIS_PULL_REQUEST" != "false" ]
then
echo "Building pull request - skipping deployment to website"
exit 0
fi
if [ "$TRAVIS_TAG" != "" ]
then
echo "Building tag - skipping deployment to website"
exit 0
fi
if [ "$TRAVIS_OS_NAME" != "linux" ]
then
echo "Building non-linux version - skipping deployment to website"
exit 0
fi
if [ "$CC" != "gcc" ]
then
echo "Building non-gcc version - skipping deployment to website"
exit 0
fi
ssh-keyscan -H firehol.org >> ~/.ssh/known_hosts
ssh travis@firehol.org mkdir -p uploads/netdata/$TRAVIS_BRANCH/
scp -p *.tar.gz travis@firehol.org:uploads/netdata/$TRAVIS_BRANCH/
scp -p *.tar.gz.sha travis@firehol.org:uploads/netdata/$TRAVIS_BRANCH/
scp -p *.tar.gz.asc travis@firehol.org:uploads/netdata/$TRAVIS_BRANCH/
scp -p *.gz.run travis@firehol.org:uploads/netdata/$TRAVIS_BRANCH/
scp -p *.gz.run.sha travis@firehol.org:uploads/netdata/$TRAVIS_BRANCH/
scp -p *.gz.run.asc travis@firehol.org:uploads/netdata/$TRAVIS_BRANCH/
ssh travis@firehol.org touch uploads/netdata/$TRAVIS_BRANCH/complete.txt
#!/bin/bash
# shellcheck disable=SC2230
# WARNING: This script is deprecated and placed here until @paulfantom figures out how to fully replace it
if [ ! -f .gitignore ]
then
echo "Run as ./travis/$(basename "$0") from top level directory of git repository"
exit 1
fi
eval "$(ssh-agent -s)"
./.travis/decrypt-if-have-key decb6f6387c4
export KEYSERVER=ipv4.pool.sks-keyservers.net
./packaging/gpg-recv-key phil@firehol.org "0762 9FF7 89EA 6156 012F 9F50 C406 9602 1359 9237"
./packaging/gpg-recv-key costa@tsaousis.gr "4DFF 624A E564 3B51 2872 1F40 29CA 3358 89B9 A863"
# Run the commit hooks in case the developer didn't
# (4b825dc642cb6eb9a060e54bf8d69288fbee4904 is git's well-known empty-tree object,
# so this diffs every tracked file in the repository)
git diff 4b825dc642cb6eb9a060e54bf8d69288fbee4904 | ./packaging/check-files -
fakeroot ./packaging/git-build
# Make sure stdout is in blocking mode. If we don't, then conda create will barf during downloads.
# See https://github.com/travis-ci/travis-ci/issues/4704#issuecomment-348435959 for details.
python -c 'import os,sys,fcntl; flags = fcntl.fcntl(sys.stdout, fcntl.F_GETFL); fcntl.fcntl(sys.stdout, fcntl.F_SETFL, flags&~os.O_NONBLOCK);'
echo "--- Create tarball ---"
make dist
echo "--- Create self-extractor ---"
./makeself/build-x86_64-static.sh
echo "--- Create checksums ---"
for i in *.tar.gz; do sha512sum -b "$i" > "$i.sha"; done #FIXME remove?
for i in *.gz.run; do sha512sum -b "$i" > "$i.sha"; done #FIXME remove?
sha256sum -b ./*.tar.gz ./*.gz.run > "sha256sums.txt"
./.travis/deploy-if-have-key
@@ -10,8 +10,8 @@ fi
ORGANIZATION=$(echo "$TRAVIS_REPO_SLUG" | awk -F '/' '{print $1}')
PROJECT=$(echo "$TRAVIS_REPO_SLUG" | awk -F '/' '{print $2}')
GIT_MAIL="pawel+bot@netdata.cloud"
GIT_USER="netdatabot"
GIT_MAIL=${GIT_MAIL:-"pawel+bot@netdata.cloud"}
GIT_USER=${GIT_USER:-"netdatabot"}
echo "--- Initialize git configuration ---"
git config user.email "${GIT_MAIL}"
@@ -32,5 +32,5 @@ docker run -it -v "$(pwd)":/project markmandel/github-changelog-generator:latest
echo "--- Uploading changelog ---"
git add CHANGELOG.md
git commit -m '[ci skip] Automatic changelog update'
git commit -m '[ci skip] Automatic changelog update' || exit 0
git push "https://${GITHUB_TOKEN}:@$(git config --get remote.origin.url | sed -e 's/^https:\/\///')"
#!/bin/bash
# This is a simple script which applies labels to unlabelled issues from the last 3 days.
# It will soon be deprecated by GitHub Actions, so no further development on it is planned.
if [ "$GITHUB_TOKEN" == "" ]; then
echo "GITHUB_TOKEN is needed"
exit 1
fi
# Download hub
HUB_VERSION=${HUB_VERSION:-"2.5.1"}
wget "https://github.com/github/hub/releases/download/v${HUB_VERSION}/hub-linux-amd64-${HUB_VERSION}.tgz" -O "/tmp/hub-linux-amd64-${HUB_VERSION}.tgz"
tar -C /tmp -xvf "/tmp/hub-linux-amd64-${HUB_VERSION}.tgz" &>/dev/null
export PATH=$PATH:"/tmp/hub-linux-amd64-${HUB_VERSION}/bin"
echo "Looking up available labels"
LABELS_FILE=/tmp/exclude_labels
hub issue labels > $LABELS_FILE
for STATE in "open" "closed"; do
for ISSUE in $(hub issue -f "%I %l%n" -s "$STATE" -d "$(date +%F -d '3 days ago')" | grep -v -f $LABELS_FILE); do
echo "Processing $STATE issue no. $ISSUE"
URL="https://api.github.com/repos/netdata/netdata/issues/$ISSUE"
BODY="$(curl "${URL}" | jq .body 2>/dev/null)"
case "${BODY}" in
*"# Question summary"* ) curl -H "Authorization: token $GITHUB_TOKEN" -d '{"labels":["question"]}' -X PATCH "${URL}" ;;
*"# Bug report summary"* ) curl -H "Authorization: token $GITHUB_TOKEN" -d '{"labels":["bug"]}' -X PATCH "${URL}" ;;
* ) curl -H "Authorization: token $GITHUB_TOKEN" -d '{"labels":["needs triage"]}' -X PATCH "${URL}" ;;
esac
done
done
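The classification in the loop above is just substring matching on the issue body. Pulled out as a standalone sketch (the `BODY` text is invented), the mapping works like this:

```shell
# Standalone sketch of labeler.sh's body-to-label mapping (BODY is invented)
BODY="# Bug report summary
netdata segfaults on startup"
case "${BODY}" in
    *"# Question summary"*)   LABEL="question" ;;
    *"# Bug report summary"*) LABEL="bug" ;;
    *)                        LABEL="needs triage" ;;
esac
echo "$LABEL"
```

The headings matched here come from the repository's issue templates, which is why an issue that ignores the templates falls through to "needs triage".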
@@ -21,49 +21,24 @@
# Requirements:
# - GITHUB_TOKEN variable set with GitHub token. Access level: repo.public_repo
# - docker
# - git-semver python package (pip install git-semver)
set -e
if [ ! -f .gitignore ]
then
echo "Run as ./travis/$(basename "$0") from top level directory of git repository"
exit 1
if [ ! -f .gitignore ]; then
echo "Run as ./travis/$(basename "$0") from top level directory of git repository"
exit 1
fi
echo "---- GENERATING CHANGELOG -----"
./.travis/generate_changelog.sh
export GIT_MAIL="pawel+bot@netdata.cloud"
export GIT_USER="netdatabot"
echo "--- Initialize git configuration ---"
git config user.email "${GIT_MAIL}"
git config user.name "${GIT_USER}"
echo "---- FIGURING OUT TAGS ----"
# Check if current commit is tagged or not
GIT_TAG=$(git tag --points-at)
if [ -z "${GIT_TAG}" ]; then
git semver
# Figure out next tag based on commit message
GIT_TAG=HEAD
echo "Last commit message: $TRAVIS_COMMIT_MESSAGE"
case "${TRAVIS_COMMIT_MESSAGE}" in
*"[netdata patch release]"* ) GIT_TAG="v$(git semver --next-patch)" ;;
*"[netdata minor release]"* ) GIT_TAG="v$(git semver --next-minor)" ;;
*"[netdata major release]"* ) GIT_TAG="v$(git semver --next-major)" ;;
*) echo "Keyword not detected. Doing nothing" ;;
esac
# Tag it!
if [ "$GIT_TAG" != "HEAD" ]; then
echo "Assigning a new tag: $GIT_TAG"
git tag "$GIT_TAG" -a -m "Automatic tag generation for travis build no. $TRAVIS_BUILD_NUMBER"
# git is able to push due to configuration already being initialized in `generate_changelog.sh` script
git push "https://${GITHUB_TOKEN}:@$(git config --get remote.origin.url | sed -e 's/^https:\/\///')" --tags
fi
fi
if [ "${GIT_TAG}" == "HEAD" ]; then
echo "Not creating a release since neither of two conditions was met:"
echo " - keyword in commit message"
echo " - commit is tagged"
exit 0
fi
# tagger.sh is sourced since we need environment variables it sets
#shellcheck source=/dev/null
source .travis/tagger.sh || exit 0
echo "---- CREATING TAGGED DOCKER CONTAINERS ----"
export REPOSITORY="netdata/netdata"
@@ -80,4 +55,20 @@ tar -C /tmp -xvf "/tmp/hub-linux-amd64-${HUB_VERSION}.tgz"
export PATH=$PATH:"/tmp/hub-linux-amd64-${HUB_VERSION}/bin"
# Create a release draft
hub release create --draft -a "netdata-${GIT_TAG}.tar.gz" -a "netdata-${GIT_TAG}.gz.run" -a "sha256sums.txt" -m "${GIT_TAG}" "${GIT_TAG}"
if [ -z ${GIT_TAG+x} ]; then
echo "Variable GIT_TAG is not set. Something went terribly wrong! Exiting."
exit 1
fi
if [ "${GIT_TAG}" != "$(git tag --points-at)" ]; then
echo "ERROR! Current commit is not tagged. Stopping release creation."
exit 1
fi
# RC is set by tagger.sh (sourced above) only for release candidates
if [ -z "${RC+x}" ]; then
	hub release create --draft -a "netdata-${GIT_TAG}.tar.gz" -a "netdata-${GIT_TAG}.gz.run" -a "sha256sums.txt" -m "${GIT_TAG}" "${GIT_TAG}"
else
	hub release create --prerelease --draft -a "netdata-${GIT_TAG}.tar.gz" -a "netdata-${GIT_TAG}.gz.run" -a "sha256sums.txt" -m "${GIT_TAG}" "${GIT_TAG}"
fi
# The changelog needs to be created AFTER the new release to avoid circular dependencies and wrong entries in the changelog file
echo "---- GENERATING CHANGELOG -----"
./.travis/generate_changelog.sh
#!/bin/bash
# SPDX-License-Identifier: MIT
# Copyright (C) 2018 Pawel Krupa (@paulfantom) - All Rights Reserved
# Permission to copy and modify is granted under the MIT license
#
# Original script is available at https://github.com/paulfantom/travis-helper/blob/master/releasing/releaser.sh
#
# Tags are generated by searching for a keyword in last commit message. Keywords are:
# - [patch] or [fix] to bump patch number
# - [minor], [feature] or [feat] to bump minor number
# - [major] or [breaking change] to bump major number
# All keywords MUST be surrounded with square brackets.
#
# Requirements:
# - GITHUB_TOKEN variable set with GitHub token. Access level: repo.public_repo
# - git-semver python package (pip install git-semver)
set -e
if [ ! -f .gitignore ]; then
echo "Run as ./travis/$(basename "$0") from top level directory of git repository"
exit 1
fi
# Embed new version in files which need it.
# This wouldn't be needed if we could use `git tag` everywhere.
function embed_version {
VERSION="$1"
MAJOR=$(echo "$VERSION" | cut -d . -f 1 | cut -d v -f 2)
MINOR=$(echo "$VERSION" | cut -d . -f 2)
PATCH=$(echo "$VERSION" | cut -d . -f 3 | cut -d '-' -f 1)
sed -i "s/\\[VERSION_MAJOR\\], \\[.*\\]/\\[VERSION_MAJOR\\], \\[$MAJOR\\]/" configure.ac
sed -i "s/\\[VERSION_MINOR\\], \\[.*\\]/\\[VERSION_MINOR\\], \\[$MINOR\\]/" configure.ac
sed -i "s/\\[VERSION_PATCH\\], \\[.*\\]/\\[VERSION_PATCH\\], \\[$PATCH\\]/" configure.ac
git add configure.ac
}
# Figure out the next release candidate tag based only on previous ones.
# This assumes that releases are in the format "v0.1.2" and prereleases (RCs) use "v0.1.2-rc0"
function release_candidate {
LAST_TAG=$(git semver)
if [[ $LAST_TAG =~ -rc ]]; then
LAST_RELEASE=$(echo "$LAST_TAG" | cut -d'-' -f 1)
LAST_RC=$(echo "$LAST_TAG" | cut -d'c' -f 2)
RC=$((LAST_RC + 1))
else
LAST_RELEASE=$LAST_TAG
RC=0
fi
GIT_TAG="v$LAST_RELEASE-rc$RC"
export GIT_TAG
}
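Traced by hand, and assuming `git semver` prints bare versions like `1.11.0-rc2`, the suffix arithmetic in `release_candidate` works out like this (a standalone sketch, not the function itself):

```shell
# Standalone trace of the rc-suffix math (the LAST_TAG value is assumed)
LAST_TAG="1.11.0-rc2"
case "$LAST_TAG" in
    *-rc*)
        LAST_RELEASE=$(echo "$LAST_TAG" | cut -d'-' -f 1)   # version without the suffix
        LAST_RC=$(echo "$LAST_TAG" | cut -d'c' -f 2)        # number after "rc"
        RC=$((LAST_RC + 1))
        ;;
    *)
        LAST_RELEASE=$LAST_TAG
        RC=0
        ;;
esac
GIT_TAG="v$LAST_RELEASE-rc$RC"
echo "$GIT_TAG"
```

The `cut -d'c' -f 2` trick works because the first `c` in a tag like `1.11.0-rc2` is the one in `rc`; it would break on versions containing another `c`.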
# Check if current commit is tagged or not
GIT_TAG=$(git tag --points-at)
if [ -z "${GIT_TAG}" ]; then
git semver
# Figure out next tag based on commit message
echo "Last commit message: $TRAVIS_COMMIT_MESSAGE"
case "${TRAVIS_COMMIT_MESSAGE}" in
*"[netdata patch release]"*) GIT_TAG="v$(git semver --next-patch)" ;;
*"[netdata minor release]"*) GIT_TAG="v$(git semver --next-minor)" ;;
*"[netdata major release]"*) GIT_TAG="v$(git semver --next-major)" ;;
*"[netdata release candidate]"*) release_candidate ;;
*) echo "Keyword not detected. Exiting..."; exit 1;;
esac
# Tag it!
if [ "$GIT_TAG" != "HEAD" ]; then
echo "Assigning a new tag: $GIT_TAG"
embed_version "$GIT_TAG"
git commit -m "[ci skip] release $GIT_TAG"
git tag "$GIT_TAG" -a -m "Automatic tag generation for travis build no. $TRAVIS_BUILD_NUMBER"
git push "https://${GITHUB_TOKEN}:@$(git config --get remote.origin.url | sed -e 's/^https:\/\///')"
git push "https://${GITHUB_TOKEN}:@$(git config --get remote.origin.url | sed -e 's/^https:\/\///')" --tags
fi
else
embed_version "$GIT_TAG"
git commit -m "[ci skip] release $GIT_TAG"
git push "https://${GITHUB_TOKEN}:@$(git config --get remote.origin.url | sed -e 's/^https:\/\///')"
fi
export GIT_TAG
# Contributing
Thank you for considering contributing to Netdata.
We love to receive contributions. Maintaining a platform for monitoring everything imaginable requires a broad understanding of a plethora of technologies, systems and applications. We rely on community contributions and user feedback to continue providing the best monitoring solution out there.
There are many ways to contribute, with varying requirements of skills:
## All NetData Users
### Give Netdata a GitHub star
This is the minimum open-source users should contribute back to the projects they use. GitHub stars help the project gain visibility and stand out. So, if you use Netdata, consider pressing that button. **It really matters**.
### Spread the word
Community growth allows the project to attract new talent willing to contribute. That talent then develops new features and improves the project. These new features and improvements attract more users, and so on; it is a loop. So, post about netdata, present it at local meetups you attend, and let your social networks (Twitter, Facebook, Reddit, etc.) know you are using it. **The more people involved, the faster the project evolves**.
### Provide feedback
Is there anything that bothers you about netdata? Did you experience an issue while installing it or using it? Would you like to see it evolve to meet your needs? Let us know. [Open a github issue](https://github.com/netdata/netdata/issues) to discuss it. Feedback is very important for open-source projects. We can't commit to doing everything, but your feedback significantly influences our roadmap. **We rely on your feedback to make Netdata better**.
#### Help the developers understand what they have to do
NetData is all about simplicity and meaningful presentation. It's impossible for a handful of people to know which metrics really matter when monitoring a particular software or hardware component you are interested in. Be specific about what should be collected, how the information should be presented in the dashboard and which alarms make sense in most situations.
## Experienced Users
### Help other users
As the project grows, an increasing share of our time is spent supporting this community of users: answering questions, and helping users understand how netdata works and find their way with it. Helping other users is crucial. It allows the developers and maintainers of the project to focus on improving it.
### Improve documentation
Most of our documentation is in markdown (.md) files inside the netdata GitHub project. What remains in our Wiki will soon be moved there as well. Don't be afraid to edit any of these documents and submit a GitHub Pull Request with your corrections/additions.
## Developers
We expect most contributions to be for new data collection plugins. You can read about how external plugins work [here](collectors/plugins.d/). Additional instructions are available for [Node.js plugins](collectors/node.d.plugin) and [Python plugins](collectors/python.d.plugin).
Of course we appreciate contributions for any other part of the NetData agent, including the [daemon](daemon), [backends for long term archiving](backends/), innovative ways of using the [REST API](web/api) to create cool [Custom Dashboards](web/gui/custom/) or to include NetData charts in other applications, similarly to what can be done with [Confluence](web/gui/confluence/).
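For orientation, an external plugin is simply a program that prints the plugins.d text protocol on stdout. The sketch below is illustrative only: the chart, dimension, and sample values are invented, and a real collector would loop forever rather than emit three samples:

```shell
#!/bin/sh
# Minimal sketch of a plugins.d external collector.
# Chart, dimension and sample values below are invented for illustration.
emit_metrics() {
    # CHART type.id name title units family context charttype priority update_every
    echo "CHART example.random '' 'A Random Number' 'number' example example.random line 90000 1"
    echo "DIMENSION random '' absolute 1 1"
    # every collection cycle emits one BEGIN/SET/END block
    for sample in 41 42 43; do
        echo "BEGIN example.random"
        echo "SET random = $sample"
        echo "END"
    done
}
emit_metrics
```

Because the protocol carries the title, units, family, and context up front, the daemon can render a fully labelled chart without any extra configuration.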
### Contributions Ground Rules
#### Code of Conduct and CLA
We expect all contributors to abide by the [Contributor Covenant Code of Conduct](CODE_OF_CONDUCT.md). For a pull request to be accepted, you will also need to accept the [netdata contributors license agreement](CONTRIBUTORS.md), as part of the PR process.
#### Performance and efficiency
Everything in Netdata is about efficiency. We need netdata to always be the most lightweight monitoring solution available, so we will reject PRs that are not optimal in resource utilization and efficiency.
Of course there are cases where such technical excellence is either not reasonable or not feasible. In these cases, we may require the submitted feature or code to be disabled by default.
#### Meaningful metrics
Unlike other monitoring solutions, Netdata requires all metrics collected to have some structure attached to them. So, Netdata metrics have a name, units, belong to a chart that has a title, a family, a context, belong to an application, etc.
This structure is what makes netdata different. Most other monitoring solutions collect bulk metrics as name-value pairs and then expect their users to give meaning to them during visualization. This does not work: it is neither practical nor reasonable to hand someone 2000 metrics and expect them to visualize the data in a meaningful way.
So, netdata requires all metrics to have a meaning at the time they are collected. We will reject PRs that loosely collect just a "bunch of metrics", but we are very keen to help you fix this.
#### Automated Testing
Netdata is too large an application to have end-to-end automated testing, but automated testing is required for its crucial functions.
Generally, all pull requests should be coupled with automated testing scenarios. However, since we do not currently have a framework in place for testing every little bit of it, we require automated tests for the parts of Netdata that seem too risky to change without them.
Of course, manual testing is always required.
#### Netdata is a distributed application
Netdata is a distributed monitoring application. A few basic features can become quite complicated for such applications. We may reject features that alter or influence the nature of netdata, though we usually discuss the requirements with contributors and help them adapt their code to be better suited for Netdata.
#### Operating systems supported
Netdata should be running everywhere, on every production system out there.
Although we focus on **supported operating systems**, we still want Netdata to run even on non-supported systems. This, of course, may require some more manual work from the users (to prepare their environment, or enable certain flags, etc).
If your contribution limits the number of supported operating systems, we will ask you to improve it.
#### Documentation
Your contributions should be bundled with related documentation to help users understand how to use the features you introduce.
#### Maintenance
When you contribute code to Netdata, you automatically accept that you will be responsible for maintaining that code in the future. So, if users need help or report bugs, we will invite you to the related GitHub issues to help them, or to fix the issues and bugs in your contributions.
@@ -10,7 +10,7 @@ This agreement is part of the legal framework of the open-source ecosystem
that adds some red tape, but protects both the contributor and the project.
To understand why this is needed, please read [a well-written chapter from
Karl Fogel’s Producing Open Source Software on CLAs](http://producingoss.com/en/copyright-assignment.html).
Karl Fogel’s Producing Open Source Software on CLAs](https://producingoss.com/en/copyright-assignment.html).
By signing this agreement, you do not change your rights to use your own
contributions for any other purpose.
......
# SPDX-License-Identifier: GPL-3.0-or-later
AUTOMAKE_OPTIONS=foreign subdir-objects 1.10
AUTOMAKE_OPTIONS=foreign subdir-objects 1.11
ACLOCAL_AMFLAGS = -I build/m4
MAINTAINERCLEANFILES= \
@@ -44,6 +44,7 @@ EXTRA_DIST = \
CODE_OF_CONDUCT.md \
LICENSE \
REDISTRIBUTED.md \
CONTRIBUTING.md \
$(NULL)
SUBDIRS = \
@@ -61,6 +62,27 @@ dist_noinst_DATA= \
netdata.cppcheck \
netdata.spec \