Commit 41a38ae1 authored by Scott Kitterman's avatar Scott Kitterman
New upstream version 1.3.0

parent c23aec9e
Metadata-Version: 2.1
Name: tinycss2
Version: 1.2.1
Version: 1.3.0
Summary: A tiny CSS parser
Keywords: css,parser
Author-email: Simon Sapin <simon.sapin@exyr.org>
Maintainer-email: CourtBouillon <contact@courtbouillon.org>
Requires-Python: >=3.7
Requires-Python: >=3.8
Description-Content-Type: text/x-rst
Classifier: Development Status :: 5 - Production/Stable
Classifier: Intended Audience :: Developers
......@@ -14,10 +14,11 @@ Classifier: Operating System :: OS Independent
Classifier: Programming Language :: Python
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3 :: Only
Classifier: Programming Language :: Python :: 3.7
Classifier: Programming Language :: Python :: 3.8
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Programming Language :: Python :: Implementation :: CPython
Classifier: Programming Language :: Python :: Implementation :: PyPy
Classifier: Topic :: Text Processing
......@@ -25,8 +26,7 @@ Requires-Dist: webencodings >=0.4
Requires-Dist: sphinx ; extra == "doc"
Requires-Dist: sphinx_rtd_theme ; extra == "doc"
Requires-Dist: pytest ; extra == "test"
Requires-Dist: isort ; extra == "test"
Requires-Dist: flake8 ; extra == "test"
Requires-Dist: ruff ; extra == "test"
Project-URL: Changelog, https://github.com/Kozea/tinycss2/releases
Project-URL: Code, https://github.com/Kozea/tinycss2/
Project-URL: Documentation, https://doc.courtbouillon.org/tinycss2/
......@@ -45,7 +45,7 @@ CSS but doesn't know specific rules, properties or values supported in various
CSS modules.
* Free software: BSD license
* For Python 3.7+, tested on CPython and PyPy
* For Python 3.8+, tested on CPython and PyPy
* Documentation: https://doc.courtbouillon.org/tinycss2
* Changelog: https://github.com/Kozea/tinycss2/releases
* Code, issues, tests: https://github.com/Kozea/tinycss2
......
......@@ -7,7 +7,7 @@ CSS but doesn't know specific rules, properties or values supported in various
CSS modules.
* Free software: BSD license
* For Python 3.7+, tested on CPython and PyPy
* For Python 3.8+, tested on CPython and PyPy
* Documentation: https://doc.courtbouillon.org/tinycss2
* Changelog: https://github.com/Kozea/tinycss2/releases
* Code, issues, tests: https://github.com/Kozea/tinycss2
......
......@@ -21,6 +21,7 @@ functions.
.. autofunction:: parse_stylesheet
.. autofunction:: parse_rule_list
.. autofunction:: parse_one_rule
.. autofunction:: parse_blocks_contents
.. autofunction:: parse_declaration_list
.. autofunction:: parse_one_declaration
.. autofunction:: parse_component_value_list
......
......@@ -2,6 +2,15 @@ Changelog
=========
Version 1.3.0
-------------
Released on 2024-04-23.
* Support CSS nesting
* Deprecate parse_declaration_list, use parse_blocks_contents instead
Version 1.2.1
-------------
......
......@@ -59,22 +59,22 @@ Parsing a list of declarations is possible from a list of tokens (given by the
string (given by the ``style`` attribute of an HTML element, for example).
The high-level function used to parse declarations is
:func:`tinycss2.parse_declaration_list`.
:func:`tinycss2.parse_blocks_contents`.
.. code-block:: python
rules = tinycss2.parse_stylesheet('body div {width: 50%;height: 50%}')
tinycss2.parse_declaration_list(rules[0].content)
tinycss2.parse_blocks_contents(rules[0].content)
# [<Declaration width: …>, <Declaration height: …>]
tinycss2.parse_declaration_list('width: 50%;height: 50%')
tinycss2.parse_blocks_contents('width: 50%;height: 50%')
# [<Declaration width: …>, <Declaration height: …>]
You can then get the name and value of each declaration:
.. code-block:: python
declarations = tinycss2.parse_declaration_list('width: 50%;height: 50%')
declarations = tinycss2.parse_blocks_contents('width: 50%;height: 50%')
declarations[0].name, declarations[0].value
# ('width', [<WhitespaceToken>, <PercentageToken 50%>])
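Parsed nodes keep enough information to be serialized back; a small round-trip sketch (assuming tinycss2 >= 1.3.0 for ``parse_blocks_contents``; the CSS string is illustrative):

```python
import tinycss2

source = 'color: #123; width: 7px !important;'
declarations = tinycss2.parse_blocks_contents(source)

# With comments and whitespace kept (the defaults), serializing the
# parsed nodes reproduces the original source.
assert tinycss2.serialize(declarations) == source
```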
......
......@@ -52,6 +52,9 @@ html_theme_options = {
'collapse_navigation': False,
}
# Favicon URL
html_favicon = 'https://www.courtbouillon.org/static/images/favicon.png'
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
......
......@@ -45,15 +45,12 @@ You can launch tests using the following command::
venv/bin/python -m pytest
tinycss2 also uses isort_ to check imports and flake8_ to check the coding
style::
tinycss2 also uses ruff_ to check the coding style::
venv/bin/python -m isort . --check --diff
venv/bin/python -m flake8 --exclude tests/css-parsing-tests
venv/bin/python -m ruff check
.. _pytest: https://docs.pytest.org/
.. _isort: https://pycqa.github.io/isort/
.. _flake8: https://flake8.pycqa.org/
.. _ruff: https://docs.astral.sh/ruff/
Documentation
......
......@@ -8,7 +8,7 @@ description = 'A tiny CSS parser'
keywords = ['css', 'parser']
authors = [{name = 'Simon Sapin', email = 'simon.sapin@exyr.org'}]
maintainers = [{name = 'CourtBouillon', email = 'contact@courtbouillon.org'}]
requires-python = '>=3.7'
requires-python = '>=3.8'
readme = {file = 'README.rst', content-type = 'text/x-rst'}
license = {file = 'LICENSE'}
dependencies = ['webencodings >=0.4']
......@@ -20,10 +20,11 @@ classifiers = [
'Programming Language :: Python',
'Programming Language :: Python :: 3',
'Programming Language :: Python :: 3 :: Only',
'Programming Language :: Python :: 3.7',
'Programming Language :: Python :: 3.8',
'Programming Language :: Python :: 3.9',
'Programming Language :: Python :: 3.10',
'Programming Language :: Python :: 3.11',
'Programming Language :: Python :: 3.12',
'Programming Language :: Python :: Implementation :: CPython',
'Programming Language :: Python :: Implementation :: PyPy',
'Topic :: Text Processing',
......@@ -40,7 +41,7 @@ Donation = 'https://opencollective.com/courtbouillon'
[project.optional-dependencies]
doc = ['sphinx', 'sphinx_rtd_theme']
test = ['pytest', 'isort', 'flake8']
test = ['pytest', 'ruff']
[tool.flit.sdist]
exclude = ['.*']
......@@ -56,7 +57,9 @@ include = ['tests/*', 'tinycss2/*']
exclude_lines = ['pragma: no cover', 'def __repr__', 'raise NotImplementedError']
omit = ['.*']
[tool.isort]
default_section = 'FIRSTPARTY'
multi_line_output = 4
extend_skip = ['tests/css-parsing-tests']
\ No newline at end of file
[tool.ruff]
extend-exclude = ['tests/css-parsing-tests']
[tool.ruff.lint]
select = ['E', 'W', 'F', 'I', 'N', 'RUF']
ignore = ['RUF001', 'RUF002', 'RUF003']
......@@ -67,6 +67,12 @@ associated with the expected result.
The Unicode input is represented by a JSON string,
the output as an array of declarations_ and at-rules_.
``blocks_contents.json``
Tests `Parse a block’s contents
<http://dev.w3.org/csswg/css-syntax-3/#parse-block-contents>`_.
The Unicode input is represented by a JSON string,
the output as an array of declarations_, at-rules_ and `qualified rules`_.
``one_declaration.json``
Tests `Parse a declaration
<http://dev.w3.org/csswg/css-syntax-3/#parse-a-declaration>`_.
......
[
";; /**/ ; ;", [],
"a:b; c:d 42!important;\n", [
["declaration", "a", [["ident", "b"]], false],
["declaration", "c", [["ident", "d"], " ", ["number", "42", 42, "integer"]], true]
],
"z;a:b", [
["error", "invalid"],
["declaration", "a", [["ident", "b"]], false]
],
"z:x!;a:b", [
["declaration", "z", [["ident", "x"], "!"], false],
["declaration", "a", [["ident", "b"]], false]
],
"a:b; c+:d", [
["declaration", "a", [["ident", "b"]], false],
["error", "invalid"]
],
"@import 'foo.css'; a:b; @import 'bar.css'", [
["at-rule", "import", [" ", ["string", "foo.css"]], null],
["declaration", "a", [["ident", "b"]], false],
["at-rule", "import", [" ", ["string", "bar.css"]], null]
],
"@media screen { div{;}} a:b;; @media print{div{", [
["at-rule", "media", [" ", ["ident", "screen"], " "], [" ", ["ident", "div"], ["{}", ";"]]],
["declaration", "a", [["ident", "b"]], false],
["at-rule", "media", [" ", ["ident", "print"]], [["ident", "div"], ["{}"]]]
],
"@ media screen { div{;}} a:b;; @media print{div{", [
["qualified rule", ["@", " ", ["ident", "media"], " ", ["ident", "screen"], " "], [" ", ["ident", "div"], ["{}", ";"]]],
["declaration", "a", [["ident", "b"]], false],
["at-rule", "media", [" ", ["ident", "print"]], [["ident", "div"], ["{}"]]]
],
"z:x;a b{c:d;;e:f}", [
["declaration", "z", [["ident", "x"]], false],
["qualified rule", [["ident", "a"], " ", ["ident", "b"]], [["ident", "c"], ":", ["ident", "d"], ";", ";", ["ident", "e"], ":", ["ident", "f"]]]
],
"a {c:1}", [
["qualified rule", [["ident", "a"], " "], [["ident", "c"], ":", ["number", "1", 1, "integer"]]]
],
"a:hover {c:1}", [
["qualified rule", [["ident", "a"], ":", ["ident", "hover"], " "], [["ident", "c"], ":", ["number", "1", 1, "integer"]]]
],
"z:x;a b{c:d}e:f", [
["declaration", "z", [["ident", "x"]], false],
["qualified rule", [["ident", "a"], " ", ["ident", "b"]], [["ident", "c"], ":", ["ident", "d"]]],
["declaration", "e", [["ident", "f"]], false]
],
"", []
]
......@@ -35,12 +35,6 @@
], false],
"foo:important", ["declaration", "foo", [
["ident", "important"]
], false],
"foo: 9000 @bar{ !important", ["declaration", "foo", [
" ", ["number", "9000", 9000, "integer"], " ", ["at-keyword", "bar"], ["{}",
" ", "!", ["ident", "important"]
]
], false]
]
......@@ -4,19 +4,19 @@ import pprint
from pathlib import Path
import pytest
from tinycss2 import (
parse_component_value_list, parse_declaration_list,
parse_one_component_value, parse_one_declaration, parse_one_rule,
parse_rule_list, parse_stylesheet, parse_stylesheet_bytes, serialize)
from tinycss2.ast import (
AtKeywordToken, AtRule, Comment, CurlyBracketsBlock, Declaration,
DimensionToken, FunctionBlock, HashToken, IdentToken, LiteralToken,
NumberToken, ParenthesesBlock, ParseError, PercentageToken, QualifiedRule,
SquareBracketsBlock, StringToken, UnicodeRangeToken, URLToken,
WhitespaceToken)
from webencodings import Encoding, lookup
from tinycss2 import ( # isort:skip
parse_blocks_contents, parse_component_value_list, parse_declaration_list,
parse_one_component_value, parse_one_declaration, parse_one_rule, parse_rule_list,
parse_stylesheet, parse_stylesheet_bytes, serialize)
from tinycss2.ast import ( # isort:skip
AtKeywordToken, AtRule, Comment, CurlyBracketsBlock, Declaration, DimensionToken,
FunctionBlock, HashToken, IdentToken, LiteralToken, NumberToken, ParenthesesBlock,
ParseError, PercentageToken, QualifiedRule, SquareBracketsBlock, StringToken,
UnicodeRangeToken, URLToken, WhitespaceToken)
from tinycss2.color3 import RGBA, parse_color
from tinycss2.nth import parse_nth
from webencodings import Encoding, lookup
def generic(func):
......@@ -39,8 +39,8 @@ def to_json():
type(None): lambda _: None,
str: lambda s: s,
int: lambda s: s,
list: lambda l: [to_json(el) for el in l],
tuple: lambda l: [to_json(el) for el in l],
list: lambda li: [to_json(el) for el in li],
tuple: lambda li: [to_json(el) for el in li],
Encoding: lambda e: e.name,
ParseError: lambda e: ['error', e.kind],
......@@ -49,26 +49,25 @@ def to_json():
LiteralToken: lambda t: t.value,
IdentToken: lambda t: ['ident', t.value],
AtKeywordToken: lambda t: ['at-keyword', t.value],
HashToken: lambda t: ['hash', t.value,
'id' if t.is_identifier else 'unrestricted'],
HashToken: lambda t: [
'hash', t.value, 'id' if t.is_identifier else 'unrestricted'],
StringToken: lambda t: ['string', t.value],
URLToken: lambda t: ['url', t.value],
NumberToken: lambda t: ['number'] + numeric(t),
PercentageToken: lambda t: ['percentage'] + numeric(t),
DimensionToken: lambda t: ['dimension'] + numeric(t) + [t.unit],
NumberToken: lambda t: ['number', *numeric(t)],
PercentageToken: lambda t: ['percentage', *numeric(t)],
DimensionToken: lambda t: ['dimension', *numeric(t), t.unit],
UnicodeRangeToken: lambda t: ['unicode-range', t.start, t.end],
CurlyBracketsBlock: lambda t: ['{}'] + to_json(t.content),
SquareBracketsBlock: lambda t: ['[]'] + to_json(t.content),
ParenthesesBlock: lambda t: ['()'] + to_json(t.content),
FunctionBlock: lambda t: ['function', t.name] + to_json(t.arguments),
CurlyBracketsBlock: lambda t: ['{}', *to_json(t.content)],
SquareBracketsBlock: lambda t: ['[]', *to_json(t.content)],
ParenthesesBlock: lambda t: ['()', *to_json(t.content)],
FunctionBlock: lambda t: ['function', t.name, *to_json(t.arguments)],
Declaration: lambda d: ['declaration', d.name,
to_json(d.value), d.important],
AtRule: lambda r: ['at-rule', r.at_keyword, to_json(r.prelude),
to_json(r.content)],
QualifiedRule: lambda r: ['qualified rule', to_json(r.prelude),
to_json(r.content)],
Declaration: lambda d: ['declaration', d.name, to_json(d.value), d.important],
AtRule: lambda r: [
'at-rule', r.at_keyword, to_json(r.prelude), to_json(r.content)],
QualifiedRule: lambda r: [
'qualified rule', to_json(r.prelude), to_json(r.content)],
RGBA: lambda v: [round(c, 10) for c in v],
}
......@@ -112,6 +111,11 @@ def test_declaration_list(input):
return parse_declaration_list(input, **SKIP)
@json_test()
def test_blocks_contents(input):
return parse_blocks_contents(input, **SKIP)
@json_test()
def test_one_declaration(input):
return parse_one_declaration(input, skip_comments=True)
......@@ -213,7 +217,7 @@ def test_serialize_rules():
def test_serialize_declarations():
source = 'color: #123; /**/ @top-left {} width:7px !important;'
rules = parse_declaration_list(source)
rules = parse_blocks_contents(source)
assert serialize(rules) == source
......
......@@ -10,9 +10,9 @@ corresponding to these objects.
from .bytes import parse_stylesheet_bytes # noqa
from .parser import ( # noqa
parse_declaration_list, parse_one_component_value, parse_one_declaration,
parse_one_rule, parse_rule_list, parse_stylesheet)
parse_blocks_contents, parse_declaration_list, parse_one_component_value,
parse_one_declaration, parse_one_rule, parse_rule_list, parse_stylesheet)
from .serializer import serialize, serialize_identifier # noqa
from .tokenizer import parse_component_value_list # noqa
VERSION = __version__ = '1.2.1'
VERSION = __version__ = '1.3.0'
......@@ -88,7 +88,7 @@ def parse_b(tokens, a):
def parse_signless_b(tokens, a, b_sign):
token = _next_significant(tokens)
if (token.type == 'number' and token.is_integer and
not token.representation[0] in '-+'):
token.representation[0] not in '-+'):
return parse_end(tokens, a, b_sign * token.int_value)
......
from itertools import chain
from .ast import AtRule, Declaration, ParseError, QualifiedRule
from .tokenizer import parse_component_value_list
......@@ -12,7 +14,6 @@ def _to_token_iterator(input, skip_comments=False):
:returns: An iterator yielding :term:`component values`.
"""
# Accept ASCII-only byte strings on Python 2, with implicit conversion.
if isinstance(input, str):
input = parse_component_value_list(input, skip_comments)
return iter(input)
......@@ -83,7 +84,15 @@ def parse_one_declaration(input, skip_comments=False):
return _parse_declaration(first_token, tokens)
def _parse_declaration(first_token, tokens):
def _consume_remnants(input, nested):
for token in input:
if token == ';':
return
elif nested and token == '}':
return
def _parse_declaration(first_token, tokens, nested=True):
"""Parse a declaration.
Consume :obj:`tokens` until the end of the declaration or the first error.
......@@ -92,6 +101,8 @@ def _parse_declaration(first_token, tokens):
:param first_token: The first component value of the rule.
:type tokens: :term:`iterator`
:param tokens: An iterator yielding :term:`component values`.
:type nested: :obj:`bool`
:param nested: Whether the declaration is nested or top-level.
:returns:
A :class:`~tinycss2.ast.Declaration`
or :class:`~tinycss2.ast.ParseError`.
......@@ -99,41 +110,89 @@ def _parse_declaration(first_token, tokens):
"""
name = first_token
if name.type != 'ident':
return ParseError(name.source_line, name.source_column, 'invalid',
'Expected <ident> for declaration name, got %s.'
% name.type)
_consume_remnants(tokens, nested)
return ParseError(
name.source_line, name.source_column, 'invalid',
f'Expected <ident> for declaration name, got {name.type}.')
colon = _next_significant(tokens)
if colon is None:
return ParseError(name.source_line, name.source_column, 'invalid',
"Expected ':' after declaration name, got EOF")
_consume_remnants(tokens, nested)
return ParseError(
name.source_line, name.source_column, 'invalid',
"Expected ':' after declaration name, got EOF")
elif colon != ':':
return ParseError(colon.source_line, colon.source_column, 'invalid',
"Expected ':' after declaration name, got %s."
% colon.type)
_consume_remnants(tokens, nested)
return ParseError(
colon.source_line, colon.source_column, 'invalid',
f"Expected ':' after declaration name, got {colon.type}.")
value = []
state = 'value'
contains_non_whitespace = False
contains_simple_block = False
for i, token in enumerate(tokens):
if state == 'value' and token == '!':
state = 'bang'
bang_position = i
elif state == 'bang' and token.type == 'ident' \
and token.lower_value == 'important':
elif (state == 'bang' and token.type == 'ident'
and token.lower_value == 'important'):
state = 'important'
elif token.type not in ('whitespace', 'comment'):
state = 'value'
if token.type == '{} block':
if contains_non_whitespace:
contains_simple_block = True
else:
contains_non_whitespace = True
else:
contains_non_whitespace = True
value.append(token)
if state == 'important':
del value[bang_position:]
return Declaration(name.source_line, name.source_column, name.value,
name.lower_value, value, state == 'important')
# TODO: Handle custom property names
if contains_simple_block and contains_non_whitespace:
return ParseError(
colon.source_line, colon.source_column, 'invalid',
'Declaration contains {} block')
# TODO: Handle unicode-range
return Declaration(
name.source_line, name.source_column, name.value, name.lower_value,
value, state == 'important')
def _consume_blocks_content(first_token, tokens):
"""Consume declaration or nested rule."""
declaration_tokens = []
semicolon_token = []
if first_token != ';' and first_token.type != '{} block':
for token in tokens:
if token == ';':
semicolon_token.append(token)
break
declaration_tokens.append(token)
if token.type == '{} block':
break
declaration = _parse_declaration(
first_token, iter(declaration_tokens), nested=True)
if declaration.type == 'declaration':
return declaration
else:
tokens = chain(declaration_tokens, semicolon_token, tokens)
return _consume_qualified_rule(first_token, tokens, stop_token=';', nested=True)
def _consume_declaration_in_list(first_token, tokens):
"""Like :func:`_parse_declaration`, but stop at the first ``;``."""
"""Like :func:`_parse_declaration`, but stop at the first ``;``.
Deprecated, use :func:`_consume_blocks_content` instead.
"""
other_declaration_tokens = []
for token in tokens:
if token == ';':
......@@ -142,16 +201,70 @@ def _consume_declaration_in_list(first_token, tokens):
return _parse_declaration(first_token, iter(other_declaration_tokens))
def parse_blocks_contents(input, skip_comments=False, skip_whitespace=False):
"""Parse a block’s contents.
This is used e.g. for the :attr:`~tinycss2.ast.QualifiedRule.content`
of a style rule or ``@page`` rule, or for the ``style`` attribute of an
HTML element.
In contexts that don’t expect any at-rule and/or qualified rule,
all :class:`~tinycss2.ast.AtRule` and/or
:class:`~tinycss2.ast.QualifiedRule` objects should simply be rejected as
invalid.
:type input: :obj:`str` or :term:`iterable`
:param input: A string or an iterable of :term:`component values`.
:type skip_comments: :obj:`bool`
:param skip_comments:
Ignore CSS comments at the top-level of the list.
If the input is a string, ignore all comments.
:type skip_whitespace: :obj:`bool`
:param skip_whitespace:
Ignore whitespace at the top-level of the list.
Whitespace is still preserved
in the :attr:`~tinycss2.ast.Declaration.value` of declarations
and the :attr:`~tinycss2.ast.AtRule.prelude`
and :attr:`~tinycss2.ast.AtRule.content` of at-rules.
:returns:
A list of
:class:`~tinycss2.ast.Declaration`,
:class:`~tinycss2.ast.AtRule`,
:class:`~tinycss2.ast.QualifiedRule`,
:class:`~tinycss2.ast.Comment` (if ``skip_comments`` is false),
:class:`~tinycss2.ast.WhitespaceToken`
(if ``skip_whitespace`` is false),
and :class:`~tinycss2.ast.ParseError` objects
"""
tokens = _to_token_iterator(input, skip_comments)
result = []
for token in tokens:
if token.type == 'whitespace':
if not skip_whitespace:
result.append(token)
elif token.type == 'comment':
if not skip_comments:
result.append(token)
elif token.type == 'at-keyword':
result.append(_consume_at_rule(token, tokens))
elif token != ';':
result.append(_consume_blocks_content(token, tokens))
return result
def parse_declaration_list(input, skip_comments=False, skip_whitespace=False):
"""Parse a :diagram:`declaration list` (which may also contain at-rules).
Deprecated and removed from CSS Syntax Level 3. Use
:func:`parse_blocks_contents` instead.
This is used e.g. for the :attr:`~tinycss2.ast.QualifiedRule.content`
of a style rule or ``@page`` rule,
or for the ``style`` attribute of an HTML element.
of a style rule or ``@page`` rule, or for the ``style`` attribute of an
HTML element.
In contexts that don’t expect any at-rule,
all :class:`~tinycss2.ast.AtRule` objects
should simply be rejected as invalid.
In contexts that don’t expect any at-rule, all
:class:`~tinycss2.ast.AtRule` objects should simply be rejected as invalid.
:type input: :obj:`str` or :term:`iterable`
:param input: A string or an iterable of :term:`component values`.
......@@ -229,6 +342,9 @@ def parse_one_rule(input, skip_comments=False):
def parse_rule_list(input, skip_comments=False, skip_whitespace=False):
"""Parse a non-top-level :diagram:`rule list`.
Deprecated and removed from CSS Syntax. Use :func:`parse_blocks_contents`
instead.
This is used for parsing the :attr:`~tinycss2.ast.AtRule.content`
of nested rules like ``@media``.
This differs from :func:`parse_stylesheet` in that
......@@ -332,22 +448,7 @@ def _consume_rule(first_token, tokens):
"""
if first_token.type == 'at-keyword':
return _consume_at_rule(first_token, tokens)
if first_token.type == '{} block':
prelude = []
block = first_token
else:
prelude = [first_token]
for token in tokens:
if token.type == '{} block':
block = token
break
prelude.append(token)
else:
return ParseError(
prelude[-1].source_line, prelude[-1].source_column, 'invalid',
'EOF reached before {} block for a qualified rule.')
return QualifiedRule(first_token.source_line, first_token.source_column,
prelude, block.content)
return _consume_qualified_rule(first_token, tokens)
def _consume_at_rule(at_keyword, tokens):
......@@ -359,6 +460,8 @@ def _consume_at_rule(at_keyword, tokens):
:param at_keyword: The at-rule keyword token starting this rule.
:type tokens: :term:`iterator`
:param tokens: An iterator yielding :term:`component values`.
:type nested: :obj:`bool`
:param nested: Whether the at-rule is nested or top-level.
:returns:
A :class:`~tinycss2.ast.AtRule`,
or :class:`~tinycss2.ast.ParseError`.
......@@ -368,10 +471,58 @@ def _consume_at_rule(at_keyword, tokens):
content = None
for token in tokens:
if token.type == '{} block':
# TODO: handle nested at-rules
# https://drafts.csswg.org/css-syntax-3/#consume-at-rule
content = token.content
break
elif token == ';':
break
prelude.append(token)
return AtRule(at_keyword.source_line, at_keyword.source_column,
at_keyword.value, at_keyword.lower_value, prelude, content)
return AtRule(
at_keyword.source_line, at_keyword.source_column, at_keyword.value,
at_keyword.lower_value, prelude, content)
def _rule_error(token, name):
"""Create rule parse error raised because of given token."""
return ParseError(
token.source_line, token.source_column, 'invalid',
f'{name} reached before {{}} block for a qualified rule.')
def _consume_qualified_rule(first_token, tokens, nested=False,
stop_token=None):
"""Consume a qualified rule.
Consume just enough of :obj:`tokens` for this rule.
:type first_token: :term:`component value`
:param first_token: The first component value of the rule.
:type tokens: :term:`iterator`
:param tokens: An iterator yielding :term:`component values`.
:type nested: :obj:`bool`
:param nested: Whether the rule is nested or top-level.
:type stop_token: :class:`~tinycss2.ast.Node`
:param stop_token: A token that ends rule parsing when met.
"""
if first_token == stop_token:
return _rule_error(first_token, 'Stop token')
if first_token.type == '{} block':
prelude = []
block = first_token
else:
prelude = [first_token]
for token in tokens:
if token == stop_token:
return _rule_error(token, 'Stop token')
if token.type == '{} block':
block = token
# TODO: handle special case for CSS variables (using "nested")
# https://drafts.csswg.org/css-syntax-3/#consume-qualified-rule
break
prelude.append(token)
else:
return _rule_error(prelude[-1], 'EOF')
return QualifiedRule(
first_token.source_line, first_token.source_column, prelude, block.content)
......@@ -121,8 +121,7 @@ def _serialize_to(nodes, write):
BAD_PAIRS = set(
[(a, b)
for a in ('ident', 'at-keyword', 'hash', 'dimension', '#', '-',
'number')
for a in ('ident', 'at-keyword', 'hash', 'dimension', '#', '-', 'number')
for b in ('ident', 'function', 'url', 'number', 'percentage',
'dimension', 'unicode-range')] +
[(a, b)
......
......@@ -3,11 +3,11 @@ import sys
from webencodings import ascii_lower
from .ast import (
from .ast import ( # isort: skip
AtKeywordToken, Comment, CurlyBracketsBlock, DimensionToken, FunctionBlock,
HashToken, IdentToken, LiteralToken, NumberToken, ParenthesesBlock,
ParseError, PercentageToken, SquareBracketsBlock, StringToken,
UnicodeRangeToken, URLToken, WhitespaceToken)
HashToken, IdentToken, LiteralToken, NumberToken, ParenthesesBlock, ParseError,
PercentageToken, SquareBracketsBlock, StringToken, UnicodeRangeToken, URLToken,
WhitespaceToken)
from .serializer import serialize_string_value, serialize_url
_NUMBER_RE = re.compile(r'[-+]?([0-9]*\.)?[0-9]+([eE][+-]?[0-9]+)?')
......@@ -108,11 +108,9 @@ def parse_component_value_list(css, skip_comments=False):
line, column, value, int_value, repr_, unit))
elif css.startswith('%', pos):
pos += 1
tokens.append(PercentageToken(
line, column, value, int_value, repr_))
tokens.append(PercentageToken(line, column, value, int_value, repr_))
else:
tokens.append(NumberToken(
line, column, value, int_value, repr_))
tokens.append(NumberToken(line, column, value, int_value, repr_))
elif c == '@':
pos += 1
if pos < length and _is_ident_start(css, pos):
......@@ -175,12 +173,10 @@ def parse_component_value_list(css, skip_comments=False):
pos = css.find('*/', pos + 2)
if pos == -1:
if not skip_comments:
tokens.append(
Comment(line, column, css[token_start_pos + 2:]))
tokens.append(Comment(line, column, css[token_start_pos + 2:]))
break
if not skip_comments:
tokens.append(
Comment(line, column, css[token_start_pos + 2:pos]))
tokens.append(Comment(line, column, css[token_start_pos + 2:pos]))
pos += 2
elif css.startswith('<!--', pos):
tokens.append(LiteralToken(line, column, '<!--'))
......@@ -219,8 +215,7 @@ def _is_ident_start(css, pos):
pos += 1
return (
# Name-start code point or hyphen:
(pos < len(css) and (
_is_name_start(css, pos) or css[pos] == '-')) or
(pos < len(css) and (_is_name_start(css, pos) or css[pos] == '-')) or
# Valid escape:
(css.startswith('\\', pos) and not css.startswith('\\\n', pos)))
elif css[pos] == '\\':
......