Commit f8e0454c authored by Raphaël Hertzog

New upstream version 0.1.28

parent 1cdde8cc
repos:
- repo: git://github.com/pre-commit/pre-commit-hooks
-sha: 6b1aa19425b35f8c34f43ada8f1045b028dc19a5
+sha: v1.1.1
hooks:
- id: trailing-whitespace
- id: end-of-file-fixer
......@@ -13,6 +14,6 @@
- id: requirements-txt-fixer
- id: name-tests-test
- repo: git://github.com/asottile/reorder_python_imports
-sha: 3d86483455ab5bd06cc1069fdd5ac57be5463f10
+sha: v0.3.5
hooks:
- id: reorder-python-imports
......@@ -87,8 +87,23 @@ Eg: ``--rule this_rule.yaml``
``--config`` allows you to specify the location of the configuration. By default, it will look for config.yaml in the current directory.
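For example, pointing ElastAlert at an explicit config and a single rule might look like this (paths are illustrative):

```bash
python -m elastalert.elastalert --verbose --config /etc/elastalert/config.yaml --rule this_rule.yaml
```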
## Third Party Tools And Extras
-### Bitsensor Kibana plugin
-[Configure and test rules via a Kibana plugin](https://bitsensor.io/blog/elastalert-kibana-plugin-centralized-logging-with-integrated-alerting)
+### Kibana plugin
+![img](https://www.bitsensor.io/assets/img/screenshots/template.gif)
+Available at the [ElastAlert Kibana plugin repository](https://github.com/bitsensor/elastalert-kibana-plugin)
+### Docker
+A [Dockerized version](https://github.com/bitsensor/elastalert) of ElastAlert, including a REST API, is built from `master` to `bitsensor/elastalert:latest`.
+```bash
+git clone https://github.com/bitsensor/elastalert.git; cd elastalert
+docker run -d -p 3030:3030 \
+    -v `pwd`/config/elastalert.yaml:/opt/elastalert/config.yaml \
+    -v `pwd`/config/config.json:/opt/elastalert-server/config/config.json \
+    -v `pwd`/rules:/opt/elastalert/rules \
+    -v `pwd`/rule_templates:/opt/elastalert/rule_templates \
+    --net="host" \
+    --name elastalert bitsensor/elastalert:latest
+```
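With the port mapping above, the bundled REST API is then reachable on port 3030 of the host.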
## Documentation
......
# Change Log
## v0.1.28
### Added
- Added support for Stride formatting of simple HTML tags
- Added support for custom titles in Opsgenie alerts
- Added a denominator to percentage match based alerts
### Fixed
- Fixed a bug with Stomp alerter connections
- Removed escaping of some characters in Slack messages
## v0.1.27
### Added
- Added support for a value other than <MISSING VALUE> in formatted alerts
### Fixed
- Fixed failed creation of the elastalert indices when using Elasticsearch 6
- Truncate Telegram alerts to avoid API errors
## v0.1.26
### Added
- Added support for Elasticsearch 6
- Added support for mentions in Hipchat
### Fixed
- Fixed an issue where a nested field lookup would crash if one of the intermediate fields was null
## v0.1.25
### Fixed
- Fixed a bug causing new term rule to break unless you passed a start time
- Added a slight clarification on the localhost:9200 reported in es_debug_trace
## v0.1.24
### Fixed
- Pinned pytest
- create-index reads index name from config.yaml
- top_count_keys now works for context on a flatline rule type
- Fixed JIRA behavior for issues with statuses that have spaces in the name
## v0.1.22
### Added
- Added Stride alerter
- Allow custom string formatters for aggregation percentage
- Added a field to disable rules from config
- Added support for subaggregations for the metric rule type
### Fixed
- Fixed a bug causing create-index to fail if missing config.yaml
- Fixed a bug when using ES5 with query_key and top_count_keys
- Allow enhancements to set and clear arbitrary JIRA fields
- Fixed a bug causing timestamps to be formatted in scientific notation
- Stop attempting to initialize alerters in debug mode
- Changed default alert ordering so that JIRA tickets end up in other alerts
- Fixed a bug when using Stomp alerter with complex query_key
- Fixed a bug preventing hipchat room ID from being an integer
- Fixed a bug causing duplicate alerts when using spike with alert_on_new_data
- Minor fixes to summary table formatting
- Fixed elastalert-test-rule when using new term rule type
## v0.1.21
### Fixed
......
......@@ -223,7 +223,10 @@ or its subdirectories.
``--es_debug`` will enable logging for all queries made to Elasticsearch.
-``--es_debug_trace`` will enable logging curl commands for all queries made to Elasticsearch to a file.
+``--es_debug_trace <trace.log>`` will enable logging curl commands for all queries made to Elasticsearch to the
+specified log file. ``--es_debug_trace`` is passed through to `elasticsearch.py
+<http://elasticsearch-py.readthedocs.io/en/master/index.html#logging>`_ which logs `localhost:9200`
+instead of the actual ``es_host``:``es_port``.
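For instance, an entry in the trace file looks roughly like this (an illustrative sketch; the exact URL and request body depend on the rule)::

    curl -XGET 'http://localhost:9200/elastalert_status/_search?pretty' -d '{"query": ...}'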
``--end <timestamp>`` will force ElastAlert to stop querying after the given time, instead of the default,
querying to the present time. This really only makes sense when running standalone. The timestamp is formatted
......
......@@ -66,6 +66,7 @@ alerts_mapping = {
'command': alerts.CommandAlerter,
'sns': alerts.SnsAlerter,
'hipchat': alerts.HipChatAlerter,
+    'stride': alerts.StrideAlerter,
'ms_teams': alerts.MsTeamsAlerter,
'slack': alerts.SlackAlerter,
'pagerduty': alerts.PagerDutyAlerter,
......@@ -342,8 +343,10 @@ def load_modules(rule, args=None):
rule['type'] = rule['type'](rule, args)
except (KeyError, EAException) as e:
raise EAException('Error initializing rule %s: %s' % (rule['name'], e)), None, sys.exc_info()[2]
-    # Instantiate alert
-    rule['alert'] = load_alerts(rule, alert_field=rule['alert'])
+    # Instantiate alerts only if we're not in debug mode
+    # In debug mode alerts are not actually sent so don't bother instantiating them
+    if not args or not args.debug:
+        rule['alert'] = load_alerts(rule, alert_field=rule['alert'])
def isyaml(filename):
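A self-contained toy of the guard added above (not ElastAlert's actual code; names mirror the diff):

```python
import argparse

def load_modules(rule, args=None):
    # Alerter objects are only created when not running in debug mode
    if not args or not args.debug:
        rule['alert'] = ['<alerter instances would be created here>']
    return rule

print(load_modules({'name': 'r1'}, argparse.Namespace(debug=True)))   # no 'alert' key
print(load_modules({'name': 'r2'}, argparse.Namespace(debug=False)))  # 'alert' populated
```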
......@@ -399,7 +402,7 @@ def load_alerts(rule, alert_field):
alert_field = [alert_field]
alert_field = [normalize_config(x) for x in alert_field]
-    alert_field = sorted(alert_field, key=lambda (a, b): alerts_order.get(a, -1))
+    alert_field = sorted(alert_field, key=lambda (a, b): alerts_order.get(a, 1))
# Convert all alerts into Alerter objects
alert_field = [create_alert(a, b) for a, b in alert_field]
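Why the new default matters, as a runnable sketch (the contents of alerts_order are assumed here, purely for illustration):

```python
# Unlisted alerters now default to sort key 1, so an alerter pinned at a lower
# order (e.g. jira at 0) runs first and the ticket it creates can be
# referenced by the alerts that fire after it.
alerts_order = {'jira': 0}  # assumed contents, for illustration only
alert_field = [('email', {}), ('slack', {}), ('jira', {})]
alert_field = sorted(alert_field, key=lambda ab: alerts_order.get(ab[0], 1))
print([name for name, _ in alert_field])  # ['jira', 'email', 'slack']
```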
......@@ -459,6 +462,9 @@ def load_rules(args):
for rule_file in rule_files:
try:
rule = load_configuration(rule_file, conf, args)
+            # By setting "is_enabled: False" in rule file, a rule is easily disabled
+            if 'is_enabled' in rule and not rule['is_enabled']:
+                continue
if rule['name'] in names:
raise EAException('Duplicate rule named %s' % (rule['name']))
except EAException as e:
......
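A minimal sketch of a rule file using the new flag (every field except is_enabled is an ordinary rule option, shown with illustrative values):

```yaml
# example_rule.yaml -- skipped at load time because of is_enabled
name: example frequency rule
type: frequency
index: logstash-*
num_events: 50
timeframe:
  hours: 4
filter:
- term:
    status: "error"
is_enabled: false
```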
......@@ -10,10 +10,10 @@ import time
import elasticsearch.helpers
import yaml
from auth import Auth
-from envparse import Env
from elasticsearch import RequestsHttpConnection
from elasticsearch.client import Elasticsearch
from elasticsearch.client import IndicesClient
+from envparse import Env
env = Env(ES_USE_SSL=bool)
......@@ -73,9 +73,11 @@ def main():
ca_certs = data.get('ca_certs')
client_cert = data.get('client_cert')
client_key = data.get('client_key')
+        index = args.index if args.index is not None else data.get('writeback_index')
+        old_index = args.old_index if args.old_index is not None else None
    else:
-        username = args.username if args.username else data.get('es_username')
-        password = args.password if args.password else data.get('es_password')
+        username = args.username if args.username else None
+        password = args.password if args.password else None
aws_region = args.aws_region
host = args.host if args.host else raw_input('Enter Elasticsearch host: ')
port = args.port if args.port else int(raw_input('Enter Elasticsearch port: '))
......@@ -92,9 +94,14 @@ def main():
url_prefix = (args.url_prefix if args.url_prefix is not None
else raw_input('Enter optional Elasticsearch URL prefix (prepends a string to the URL of every request): '))
send_get_body_as = args.send_get_body_as
-        ca_certs = data.get('ca_certs')
-        client_cert = data.get('client_cert')
-        client_key = data.get('client_key')
+        ca_certs = None
+        client_cert = None
+        client_key = None
+        index = args.index if args.index is not None else raw_input('New index name? (Default elastalert_status) ')
+        if not index:
+            index = 'elastalert_status'
+        old_index = (args.old_index if args.old_index is not None
+                     else raw_input('Name of existing index to copy? (Default None) '))
timeout = args.timeout
auth = Auth()
......@@ -117,45 +124,66 @@ def main():
ca_certs=ca_certs,
client_key=client_key)
-    silence_mapping = {'silence': {'properties': {'rule_name': {'index': 'not_analyzed', 'type': 'string'},
+    esversion = es.info()["version"]["number"]
+    print("Elastic Version:" + esversion.split(".")[0])
+    elasticversion = int(esversion.split(".")[0])
+    if(elasticversion > 5):
+        mapping = {'type': 'keyword'}
+    else:
+        mapping = {'index': 'not_analyzed', 'type': 'string'}
+    print("Mapping used for string:"+str(mapping))
+    silence_mapping = {'silence': {'properties': {'rule_name': mapping,
'until': {'type': 'date', 'format': 'dateOptionalTime'},
'@timestamp': {'format': 'dateOptionalTime', 'type': 'date'}}}}
-    ess_mapping = {'elastalert_status': {'properties': {'rule_name': {'index': 'not_analyzed', 'type': 'string'},
+    ess_mapping = {'elastalert_status': {'properties': {'rule_name': mapping,
'@timestamp': {'format': 'dateOptionalTime', 'type': 'date'}}}}
-    es_mapping = {'elastalert': {'properties': {'rule_name': {'index': 'not_analyzed', 'type': 'string'},
+    es_mapping = {'elastalert': {'properties': {'rule_name': mapping,
'@timestamp': {'format': 'dateOptionalTime', 'type': 'date'},
'alert_time': {'format': 'dateOptionalTime', 'type': 'date'},
'match_time': {'format': 'dateOptionalTime', 'type': 'date'},
'match_body': {'enabled': False, 'type': 'object'},
-                                                'aggregate_id': {'index': 'not_analyzed', 'type': 'string'}}}}
-    past_mapping = {'past_elastalert': {'properties': {'rule_name': {'index': 'not_analyzed', 'type': 'string'},
+                                                'aggregate_id': mapping}}}
+    past_mapping = {'past_elastalert': {'properties': {'rule_name': mapping,
'match_body': {'enabled': False, 'type': 'object'},
'@timestamp': {'format': 'dateOptionalTime', 'type': 'date'},
-                                                     'aggregate_id': {'index': 'not_analyzed', 'type': 'string'}}}}
+                                                     'aggregate_id': mapping}}}
error_mapping = {'elastalert_error': {'properties': {'data': {'type': 'object', 'enabled': False},
'@timestamp': {'format': 'dateOptionalTime', 'type': 'date'}}}}
-    index = args.index if args.index is not None else raw_input('New index name? (Default elastalert_status) ')
-    if not index:
-        index = 'elastalert_status'
-    old_index = (args.old_index if args.old_index is not None
-                 else raw_input('Name of existing index to copy? (Default None) '))
es_index = IndicesClient(es)
if es_index.exists(index):
print('Index ' + index + ' already exists. Skipping index creation.')
return None
-    es.indices.create(index)
+    if (elasticversion > 5):
+        es.indices.create(index)
+        es.indices.create(index+'_status')
+        es.indices.create(index+'_silence')
+        es.indices.create(index+'_error')
+        es.indices.create(index+'_past')
+    else:
+        es.indices.create(index)
# To avoid a race condition. TODO: replace this with a real check
time.sleep(2)
-    es.indices.put_mapping(index=index, doc_type='elastalert', body=es_mapping)
-    es.indices.put_mapping(index=index, doc_type='elastalert_status', body=ess_mapping)
-    es.indices.put_mapping(index=index, doc_type='silence', body=silence_mapping)
-    es.indices.put_mapping(index=index, doc_type='elastalert_error', body=error_mapping)
-    es.indices.put_mapping(index=index, doc_type='past_elastalert', body=past_mapping)
-    print('New index %s created' % index)
+    if(elasticversion > 5):
+        es.indices.put_mapping(index=index, doc_type='elastalert', body=es_mapping)
+        es.indices.put_mapping(index=index+'_status', doc_type='elastalert_status', body=ess_mapping)
+        es.indices.put_mapping(index=index+'_silence', doc_type='silence', body=silence_mapping)
+        es.indices.put_mapping(index=index+'_error', doc_type='elastalert_error', body=error_mapping)
+        es.indices.put_mapping(index=index+'_past', doc_type='past_elastalert', body=past_mapping)
+        print('New index %s created' % index)
+    else:
+        es.indices.put_mapping(index=index, doc_type='elastalert', body=es_mapping)
+        es.indices.put_mapping(index=index, doc_type='elastalert_status', body=ess_mapping)
+        es.indices.put_mapping(index=index, doc_type='silence', body=silence_mapping)
+        es.indices.put_mapping(index=index, doc_type='elastalert_error', body=error_mapping)
+        es.indices.put_mapping(index=index, doc_type='past_elastalert', body=past_mapping)
+        print('New index %s created' % index)
if old_index:
print("Copying all data from old index '{0}' to new index '{1}'".format(old_index, index))
......
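The per-type index suffixes above follow from Elasticsearch 6 permitting only a single mapping type per index, whereas earlier versions kept all five document types in one index. A hedged example of running the script against a local node (flag spellings assumed from the argument handling shown):

```bash
elastalert-create-index --host localhost --port 9200 --index elastalert_status
```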
......@@ -54,7 +54,8 @@ class RuleType(object):
ts = self.rules.get('timestamp_field')
if ts in event:
event[ts] = dt_to_ts(event[ts])
-        self.matches.append(event)
+        self.matches.append(copy.deepcopy(event))
def get_match_str(self, match):
""" Returns a string that gives more context about a match.
......@@ -404,7 +405,6 @@ class SpikeRule(RuleType):
def clear_windows(self, qk, event):
# Reset the state and prevent alerts until windows filled again
self.cur_windows[qk].clear()
self.ref_windows[qk].clear()
self.first_event.pop(qk)
self.skip_checks[qk] = event[self.ts_field] + self.rules['timeframe'] * 2
......@@ -593,7 +593,7 @@ class NewTermsRule(RuleType):
window_size = datetime.timedelta(**self.rules.get('terms_window_size', {'days': 30}))
field_name = {"field": "", "size": 2147483647} # Integer.MAX_VALUE
query_template = {"aggs": {"values": {"terms": field_name}}}
-        if args and args.start:
+        if args and hasattr(args, 'start') and args.start:
end = ts_to_dt(args.start)
else:
end = ts_now()
......@@ -974,13 +974,43 @@ class MetricAggregationRule(BaseAggregationRule):
return {self.metric_key: {self.rules['metric_agg_type']: {'field': self.rules['metric_agg_key']}}}
def check_matches(self, timestamp, query_key, aggregation_data):
-        metric_val = aggregation_data[self.metric_key]['value']
-        if self.crossed_thresholds(metric_val):
-            match = {self.rules['timestamp_field']: timestamp,
-                     self.metric_key: metric_val}
-            if query_key is not None:
-                match[self.rules['query_key']] = query_key
-            self.add_match(match)
+        if "compound_query_key" in self.rules:
+            self.check_matches_recursive(timestamp, query_key, aggregation_data, self.rules['compound_query_key'], dict())
+        else:
+            metric_val = aggregation_data[self.metric_key]['value']
+            if self.crossed_thresholds(metric_val):
+                match = {self.rules['timestamp_field']: timestamp,
+                         self.metric_key: metric_val}
+                if query_key is not None:
+                    match[self.rules['query_key']] = query_key
+                self.add_match(match)
+    def check_matches_recursive(self, timestamp, query_key, aggregation_data, compound_keys, match_data):
+        if len(compound_keys) < 1:
+            # shouldn't get to this point, but checking for safety
+            return
+        match_data[compound_keys[0]] = aggregation_data['key']
+        if 'bucket_aggs' in aggregation_data:
+            for result in aggregation_data['bucket_aggs']['buckets']:
+                self.check_matches_recursive(timestamp,
+                                             query_key,
+                                             result,
+                                             compound_keys[1:],
+                                             match_data)
+        else:
+            metric_val = aggregation_data[self.metric_key]['value']
+            if self.crossed_thresholds(metric_val):
+                match_data[self.rules['timestamp_field']] = timestamp
+                match_data[self.metric_key] = metric_val
+                # add compound key to payload to allow alerts to trigger for every unique occurrence
+                compound_value = [match_data[key] for key in self.rules['compound_query_key']]
+                match_data[self.rules['query_key']] = ",".join(compound_value)
+                self.add_match(match_data)
def crossed_thresholds(self, metric_value):
if metric_value is None:
......@@ -1005,10 +1035,12 @@ class PercentageMatchRule(BaseAggregationRule):
self.rules['aggregation_query_element'] = self.generate_aggregation_query()
def get_match_str(self, match):
-        message = 'Percentage violation, value: %s (min: %s max : %s) \n\n' % (
-            match['percentage'],
+        percentage_format_string = self.rules.get('percentage_format_string', None)
+        message = 'Percentage violation, value: %s (min: %s max : %s) of %s items\n\n' % (
+            percentage_format_string % (match['percentage']) if percentage_format_string else match['percentage'],
            self.rules.get('min_percentage'),
-            self.rules.get('max_percentage')
+            self.rules.get('max_percentage'),
+            match['denominator']
)
return message
......@@ -1041,7 +1073,7 @@ class PercentageMatchRule(BaseAggregationRule):
else:
match_percentage = (match_bucket_count * 1.0) / (total_count * 1.0) * 100
if self.percentage_violation(match_percentage):
-            match = {self.rules['timestamp_field']: timestamp, 'percentage': match_percentage}
+            match = {self.rules['timestamp_field']: timestamp, 'percentage': match_percentage, 'denominator': total_count}
if query_key is not None:
match[self.rules['query_key']] = query_key
self.add_match(match)
......
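A sketch of a rule exercising the new options (values illustrative; option names taken from the code above and the schema below):

```yaml
type: percentage_match
match_bucket_filter:
- term:
    status: "error"
min_percentage: 25
max_percentage: 75
percentage_format_string: "%.2f"  # renders e.g. 76.16 instead of 76.1589403974
```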
......@@ -180,6 +180,7 @@ properties:
alert_text_args: {type: array, items: {type: string}}
alert_text_kw: {type: object}
alert_text_type: {enum: [alert_text_only, exclude_fields]}
+alert_missing_value: {type: string}
timestamp_field: {type: string}
field: {}
......@@ -214,11 +215,18 @@ properties:
### HipChat
hipchat_auth_token: {type: string}
-hipchat_room_id: {type: string}
+hipchat_room_id: {type: [string, integer]}
hipchat_domain: {type: string}
hipchat_ignore_ssl_errors: {type: boolean}
hipchat_notify: {type: boolean}
hipchat_from: {type: string}
+hipchat_mentions: {type: array, items: {type: string}}
+### Stride
+stride_access_token: {type: string}
+stride_cloud_id: {type: string}
+stride_converstation_id: {type: string}
+stride_ignore_ssl_errors: {type: boolean}
### Slack
slack_webhook_url: *arrayOfString
......@@ -250,6 +258,7 @@ properties:
victorops_api_key: {type: string}
victorops_routing_key: {type: string}
victorops_message_type: {enum: [INFO, WARNING, ACKNOWLEDGEMENT, CRITICAL, RECOVERY]}
victorops_entity_id: {type: string}
victorops_entity_display_name: {type: string}
### Telegram
......@@ -265,4 +274,3 @@ properties:
### Simple
simple_webhook_url: *arrayOfString
simple_proxy: {type: string}
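The new Stride options would then appear in a rule file along these lines (token values are placeholders; identifiers are spelled exactly as in the schema above):

```yaml
alert:
- stride
stride_access_token: "<access token>"
stride_cloud_id: "<cloud id>"
stride_converstation_id: "<conversation id>"
```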
......@@ -204,7 +204,11 @@ class MockElastAlerter(object):
""" Creates an ElastAlert instance and run's over for a specific rule using either real or mock data. """
# Load and instantiate rule
-    load_modules(rule)
+    # Pass an args containing the context of whether we're alerting or not
+    # It is needed to prevent unnecessary initialization of unused alerters
+    load_modules_args = argparse.Namespace()
+    load_modules_args.debug = not args.alert
+    load_modules(rule, load_modules_args)
conf['rules'] = [rule]
# If using mock data, make sure it's sorted and find appropriate time range
......
......@@ -64,6 +64,9 @@ def _find_es_dict_by_key(lookup_dict, term):
subkey = ''
while len(subkeys) > 0:
+        if not dict_cursor:
+            return {}, None
subkey += subkeys.pop(0)
if subkey in dict_cursor:
......@@ -233,7 +236,7 @@ def unix_to_dt(ts):
def dt_to_unix(dt):
-    return total_seconds(dt - datetime.datetime(1970, 1, 1, tzinfo=dateutil.tz.tzutc()))
+    return int(total_seconds(dt - datetime.datetime(1970, 1, 1, tzinfo=dateutil.tz.tzutc())))
def dt_to_unixms(dt):
......
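A quick illustration of the behavioural change to dt_to_unix (self-contained; mirrors the arithmetic above):

```python
import datetime
import dateutil.tz

epoch = datetime.datetime(1970, 1, 1, tzinfo=dateutil.tz.tzutc())
dt = datetime.datetime(2018, 1, 1, tzinfo=dateutil.tz.tzutc())
# Previously returned a float such as 1514764800.0; now truncated to an int
print(int((dt - epoch).total_seconds()))  # 1514764800
```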
......@@ -2,7 +2,7 @@ coverage
flake8
pre-commit
pylint<1.4
-pytest
+pytest<3.3.0
setuptools
sphinx_rtd_theme
tox<2.0
......@@ -8,7 +8,7 @@ from setuptools import setup
base_dir = os.path.dirname(__file__)
setup(
name='elastalert',
-    version='0.1.21',
+    version='0.1.28',
description='Runs custom filters on Elasticsearch and alerts on matches',
author='Quentin Long',
author_email='qlo@yelp.com',
......
......@@ -940,6 +940,17 @@ def test_rule_changes(ea):
assert len(ea.rules) == 3
assert not any(['new' in rule for rule in ea.rules])
# A new rule with is_enabled=False won't load
new_hashes = copy.copy(new_hashes)
new_hashes.update({'rules/rule4.yaml': 'asdf'})
with mock.patch('elastalert.elastalert.get_rule_hashes') as mock_hashes:
with mock.patch('elastalert.elastalert.load_configuration') as mock_load:
mock_load.return_value = {'filter': [], 'name': 'rule4', 'new': 'stuff', 'is_enabled': False, 'rule_file': 'rules/rule4.yaml'}
mock_hashes.return_value = new_hashes
ea.load_rule_changes()
assert len(ea.rules) == 3
assert not any(['new' in rule for rule in ea.rules])
# An old rule which didn't load gets reloaded
new_hashes = copy.copy(new_hashes)
new_hashes['rules/rule4.yaml'] = 'qwerty'
......
......@@ -44,6 +44,7 @@ test_rule = {'es_host': 'test_host',
test_args = mock.Mock()
test_args.config = 'test_config'
test_args.rule = None
+test_args.debug = False
def test_import_rules():
......@@ -229,6 +230,20 @@ def test_load_ssl_env_true():
assert rules['use_ssl'] is True
def test_load_disabled_rules():
test_rule_copy = copy.deepcopy(test_rule)
test_rule_copy['is_enabled'] = False
test_config_copy = copy.deepcopy(test_config)
with mock.patch('elastalert.config.yaml_loader') as mock_open:
mock_open.side_effect = [test_config_copy, test_rule_copy]
with mock.patch('os.listdir') as mock_ls:
mock_ls.return_value = ['testrule.yaml']
rules = load_rules(test_args)
# The rule is not loaded because it has "is_enabled: False"
assert len(rules['rules']) == 0
def test_compound_query_key():
test_rule_copy = copy.deepcopy(test_rule)
test_rule_copy.pop('use_count_query')
......
......@@ -407,6 +407,58 @@ def test_spike_terms():
assert rule.matches[0]['username'] == 'userD'
def test_spike_terms_query_key_alert_on_new_data():
rules = {'spike_height': 1.5,
'timeframe': datetime.timedelta(minutes=10),
'spike_type': 'both',
'use_count_query': False,
'timestamp_field': 'ts',
'query_key': 'username',
'use_term_query': True,
'alert_on_new_data': True}
terms1 = {ts_to_dt('2014-01-01T00:01:00Z'): [{'key': 'userA', 'doc_count': 10}]}
terms2 = {ts_to_dt('2014-01-01T00:06:00Z'): [{'key': 'userA', 'doc_count': 10}]}
terms3 = {ts_to_dt('2014-01-01T00:11:00Z'): [{'key': 'userA', 'doc_count': 10}]}
terms4 = {ts_to_dt('2014-01-01T00:21:00Z'): [{'key': 'userA', 'doc_count': 20}]}
terms5 = {ts_to_dt('2014-01-01T00:26:00Z'): [{'key': 'userA', 'doc_count': 20}]}
terms6 = {ts_to_dt('2014-01-01T00:31:00Z'): [{'key': 'userA', 'doc_count': 20}]}
terms7 = {ts_to_dt('2014-01-01T00:36:00Z'): [{'key': 'userA', 'doc_count': 20}]}
terms8 = {ts_to_dt('2014-01-01T00:41:00Z'): [{'key': 'userA', 'doc_count': 20}]}
rule = SpikeRule(rules)
# Initial input
rule.add_terms_data(terms1)
assert len(rule.matches) == 0
# No spike for UserA because windows not filled
rule.add_terms_data(terms2)
assert len(rule.matches) == 0
rule.add_terms_data(terms3)
assert len(rule.matches) == 0
rule.add_terms_data(terms4)
assert len(rule.matches) == 0
# Spike
rule.add_terms_data(terms5)
assert len(rule.matches) == 1
rule.matches[:] = []
# There will be no more spikes since all terms have the same doc_count
rule.add_terms_data(terms6)
assert len(rule.matches) == 0
rule.add_terms_data(terms7)
assert len(rule.matches) == 0
rule.add_terms_data(terms8)
assert len(rule.matches) == 0
def test_blacklist():
events = [{'@timestamp': ts_to_dt('2014-09-26T12:34:56Z'), 'term': 'good'},
{'@timestamp': ts_to_dt('2014-09-26T12:34:57Z'), 'term': 'bad'},
......@@ -1080,6 +1132,30 @@ def test_metric_aggregation():
assert rule.matches[0]['qk'] == 'qk_val'
def test_metric_aggregation_complex_query_key():
rules = {'buffer_time': datetime.timedelta(minutes=5),
'timestamp_field': '@timestamp',
'metric_agg_type': 'avg',
'metric_agg_key': 'cpu_pct',
'compound_query_key': ['qk', 'sub_qk'],
'query_key': 'qk,sub_qk',
'max_threshold': 0.8}
query = {"bucket_aggs": {"buckets": [
{"cpu_pct_avg": {"value": 0.91}, "key": "sub_qk_val1"},
{"cpu_pct_avg": {"value": 0.95}, "key": "sub_qk_val2"},
{"cpu_pct_avg": {"value": 0.89}, "key": "sub_qk_val3"}]
}, "key": "qk_val"}
rule = MetricAggregationRule(rules)
rule.check_matches(datetime.datetime.now(), 'qk_val', query)
assert len(rule.matches) == 3
assert rule.matches[0]['qk'] == 'qk_val'
assert rule.matches[1]['qk'] == 'qk_val'
assert rule.matches[0]['sub_qk'] == 'sub_qk_val1'
assert rule.matches[1]['sub_qk'] == 'sub_qk_val2'
def test_percentage_match():
rules = {'match_bucket_filter': {'term': 'term_val'},
'buffer_time': datetime.timedelta(minutes=5),
......@@ -1129,5 +1205,8 @@ def test_percentage_match():
rules['query_key'] = 'qk'
rule = PercentageMatchRule(rules)
-    rule.check_matches(datetime.datetime.now(), 'qk_val', create_percentage_match_agg(76, 24))
+    rule.check_matches(datetime.datetime.now(), 'qk_val', create_percentage_match_agg(76.666666667, 24))
assert rule.matches[0]['qk'] == 'qk_val'
assert '76.1589403974' in rule.get_match_str(rule.matches[0])
rules['percentage_format_string'] = '%.2f'
assert '76.16' in rule.get_match_str(rule.matches[0])
# -*- coding: utf-8 -*-
-from elastalert.util import lookup_es_key, set_es_key, add_raw_postfix, replace_dots_in_field_names
-from elastalert.util import (
-    parse_deadline,
-    parse_duration,
-)
+from datetime import datetime
+from datetime import timedelta
import mock
import pytest
-from datetime import (
-    datetime,
-    timedelta,
-)
from dateutil.parser import parse as dt
+from elastalert.util import add_raw_postfix
+from elastalert.util import lookup_es_key
+from elastalert.util import parse_deadline
+from elastalert.util import parse_duration
+from elastalert.util import replace_dots_in_field_names
+from elastalert.util import set_es_key
@pytest.mark.parametrize('spec, expected_delta', [
('hours=2', timedelta(hours=2)),
('minutes=30', timedelta(minutes=30)),
('seconds=45', timedelta(seconds=45)),
])
......@@ -24,7 +25,7 @@ def test_parse_duration(spec, expected_delta):
@pytest.mark.parametrize('spec, expected_deadline', [
('hours=2', dt('2017-07-07T12:00:00.000Z')),
('minutes=30', dt('2017-07-07T10:30:00.000Z')),
('seconds=45', dt('2017-07-07T10:00:45.000Z')),
])
......@@ -64,12 +65,15 @@ def test_looking_up_missing_keys(ea):
'Message': '12345',
'Fields': {
'severity': 'large',
-        'user': 'jimmay'
+        'user': 'jimmay',
+        'null': None
}
}
assert lookup_es_key(record, 'Fields.ts') is None
assert lookup_es_key(record, 'Fields.null.foo') is None
def test_looking_up_nested_keys(ea):
expected = 12467267
......