+ Version 2.19 (2018.09.19)
- PR #277: Fix parsing of floating point literals
- PR #254: Add support for parsing empty structs
- PR #240: Fix enum formatting in generated C code (also #216)
- PR #222: Add support for #pragma in struct declarations
+ Version 2.18 (2017.07.04)
- PR #161 & #184: Update bundled PLY version to 3.10
- PR #158: Add support for the __int128 type.
- PR #169: Handle more tricky TYPEID in declarators.
- PR #178: Add columns to the coord of each node
+ Version 2.17 (2016.10.29)
- Again functionality identical to 2.15 and 2.16; the difference is that the
tarball now contains Python files with properly set permissions.
+ Version 2.16 (2016.10.18)
- Functionally identical to 2.15, but fixes a packaging problem that caused
failed installation (_build_tables wasn't rerun in the pycparser/ dir).
+ Version 2.15 (2016.10.18)
- PR #121: Update bundled PLY version to 3.8
- Issue #117: Fix parsing of extra semi-colons inside structure declarations.
......@@ -28,7 +35,7 @@
- Issue #116: Fix line numbers recorded for empty and compound statements.
- Minor performance improvement to the invalid string literal regex.
+ Version 2.14 (2015.06.09)
- Added CParser parameter to specify output directory for generated parsing
tables (#84).
......@@ -36,7 +43,7 @@
is no longer recommended, now that Clang has binary builds available for
Windows.
+ Version 2.13 (2015.05.12)
- Added support for offsetof() the way gcc implements it (special builtin
that takes a type as an argument).
......@@ -45,13 +52,13 @@
like Git and SQLite without modifications to pycparser.
- Added support for empty initializer lists (#79).
+ Version 2.12 (2015.04.21)
- This is a fix release for 2.11; the memory optimization with __slots__ on
Coord and AST nodes didn't take weakrefs into account, which broke cffi and
its many dependents (issue #76). Fixed by adding __weakref__ to __slots__.
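The ``__slots__``/``__weakref__`` pitfall fixed in 2.12 can be sketched in plain Python. The class names below are illustrative, not pycparser's actual ``Coord``/AST classes:

```python
import weakref

class SlottedNoWeakref(object):
    # __slots__ suppresses the per-instance __dict__ *and* __weakref__,
    # so instances cannot be the target of weakref.ref().
    __slots__ = ('value',)

class SlottedWithWeakref(object):
    # Adding '__weakref__' to __slots__ restores weak-reference support
    # while keeping the memory savings -- the essence of the 2.12 fix.
    __slots__ = ('value', '__weakref__')

try:
    weakref.ref(SlottedNoWeakref())
    broken = False
except TypeError:
    # CPython raises: cannot create weak reference to 'SlottedNoWeakref' object
    broken = True

r = weakref.ref(SlottedWithWeakref())  # succeeds
```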
+ Version 2.11 (2015.04.21)
- Add support for C99 6.5.3.7 p7 - qualifiers within array dimensions in
function declarations. Started with issue #21 (reported with initial patch
......@@ -65,7 +72,7 @@
- Reduce memory usage of AST nodes (issue #72).
- Parsing order of nested pointer declarations fixed (issue #68).
+ Version 2.10 (2013.08.03)
- A number of improvements in the handling of typedef-name ambiguities,
contributed by Sye van der Veen in GitHub issue #1:
......@@ -81,13 +88,13 @@
- Relax the lexer a bit w.r.t. some integer suffixes and $ in identifier names
(which is supported by some other compilers).
+ Version 2.09.1 (2012.12.29)
- No actual functionality changes.
- The source distribution was re-packaged to contain the pre-generated Lex and
Yacc tables of PLY.
+ Version 2.09 (2012.12.27)
- The pycparser project has moved to Bitbucket. For this version, issue
numbers still refer to the old Googlecode project, unless stated otherwise.
......@@ -104,7 +111,7 @@
- Issues #86 and #87: improve location reporting for parse errors.
- Issue #89: fix C generation for K&R-style function definitions.
+ Version 2.08 (2012.08.10)
- Issue 73: initial support for #pragma directives. Consume them without
errors and ignore (no tokens are returned). Line numbers are preserved.
......@@ -118,7 +125,7 @@
can also be used as a utility.
- Issue 74: some Windows include paths were handled incorrectly.
+ Version 2.07 (2012.06.16)
- Issue 54: added an optional parser argument to parse_file
- Issue 59: added some more fake headers for C99
......@@ -126,7 +133,7 @@
- Issue 57: support for C99 hexadecimal float constants
- Made running tests that call on 'cpp' a bit more robust.
+ Version 2.06 (2012.02.04)
- Issue 48: gracefully handle parsing of empty files
- Issues 49 & 50: handle more escaped chars in paths to #line - "..\..\test.h".
......@@ -139,7 +146,7 @@
- Improved the AST created for switch statements, making it closer to the
semantic meaning than to the grammar.
+ Version 2.05 (2011.10.16)
- Added support for the C99 ``_Bool`` type and ``stdbool.h`` header file
- Expanded ``examples/explore_ast.py`` with more details on working with the
......@@ -150,7 +157,7 @@
* Fixed spacing issue for some type declarations
* Issue 47: display empty statements (lone ';') correctly after parsing
+ Version 2.04 (2011.05.21)
- License changed from LGPL to BSD
- Bug fixes:
......@@ -161,7 +168,7 @@
- Added C99 integer types to fake headers
- Added unit tests for the c-to-c.py example
+ Version 2.03 (2011.03.06)
- Bug fixes:
......@@ -178,7 +185,7 @@
- Removed support for Python 2.5. ``pycparser`` supports Python 2
from 2.6 and on, and Python 3.
+ Version 2.02 (2010.12.10)
* The name of a ``NamedInitializer`` node was turned into a sequence of nodes
instead of an attribute, to make it discoverable by the AST node visitor.
......@@ -190,13 +197,13 @@
is done with a simple parser.
* Fixed issue 12: installation problems
+ Version 2.00 (2010.10.31)
* Support for C99 (read
`this wiki page <http://code.google.com/p/pycparser/wiki/C99support>`_
for more information).
+ Version 1.08 (2010.10.09)
* Bug fixes:
......@@ -204,12 +211,12 @@
+ Issues 6 & 7: Concatenation of string literals
+ Issue 9: Support for unnamed bitfields in structs
+ Version 1.07 (2010.05.18)
* Python 3.1 compatibility: ``pycparser`` was modified to run
on Python 3.1 as well as 2.6
+ Version 1.06 (2010.04.10)
* Bug fixes:
......@@ -220,33 +227,33 @@
* Linux compatibility: fixed end-of-line and ``cpp`` path issues to allow
all tests and examples run on Linux
+ Version 1.05 (2009.10.16)
* Fixed the ``parse_file`` auxiliary function to handle multiple arguments to
``cpp`` correctly
+ Version 1.04 (2009.05.22)
* Added the ``fake_libc_include`` directory to allow parsing of C code that
uses standard C library include files without dependency on a real C
library.
* Tested with Python 2.6 and PLY 3.2
+ Version 1.03 (2009.01.31)
* Accept enumeration lists with a comma after the last item (C99 feature).
+ Version 1.02 (2009.01.16)
* Fixed problem of parsing struct/enum/union names that were named similarly
to previously defined ``typedef`` types.
+ Version 1.01 (2009.01.09)
* Fixed subprocess invocation in the helper function parse_file - now
it's more portable
+ Version 1.0 (2008.11.15)
* Initial release
* Support for ANSI C89
......@@ -6,6 +6,7 @@ include README.*
include LICENSE
include CHANGES
include setup.*
exclude setup.pyc
recursive-exclude tests yacctab.* lextab.*
recursive-exclude examples yacctab.* lextab.*
Metadata-Version: 1.2
Name: pycparser
Version: 2.19
Summary: C parser in Python
Home-page: https://github.com/eliben/pycparser
Author: Eli Bendersky
......@@ -13,5 +13,12 @@ Description:
C compilers or analysis tools.
Platform: Cross Platform
Classifier: Development Status :: 5 - Production/Stable
Classifier: License :: OSI Approved :: BSD License
Classifier: Programming Language :: Python :: 2
Classifier: Programming Language :: Python :: 2.7
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.4
Classifier: Programming Language :: Python :: 3.5
Classifier: Programming Language :: Python :: 3.6
Requires-Python: >=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*
===============
pycparser v2.19
===============
:Author: `Eli Bendersky <https://eli.thegreenplace.net/>`_
.. contents::
......@@ -79,7 +79,7 @@ Installing
Prerequisites
-------------
* **pycparser** was tested on Python 2.7, 3.4-3.6, on both Linux and
Windows. It should work on any later version (in both the 2.x and 3.x lines)
as well.
......@@ -87,6 +87,13 @@ Prerequisites
uses is PLY, which is bundled in ``pycparser/ply``. The current PLY version is
3.10, retrieved from `<http://www.dabeaz.com/ply/>`_
Note that **pycparser** (and PLY) uses docstrings for grammar specifications.
Python installations that strip docstrings (such as when using the Python
``-OO`` option) will fail to instantiate and use **pycparser**. You can try to
work around this problem by making sure the PLY parsing tables are pre-generated
in normal mode; this isn't an officially supported/tested mode of operation,
though.
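The docstring dependency described above can be demonstrated with a PLY-style grammar rule. The rule below is a hypothetical example for illustration, not pycparser's actual grammar:

```python
# PLY-based parsers read each grammar rule from the rule function's
# docstring. This is a made-up toy rule, not pycparser's real grammar.
def p_expression_plus(p):
    'expression : expression PLUS term'
    p[0] = p[1] + p[3]

# In a normal run the docstring (and thus the grammar rule) is present:
grammar_rule = p_expression_plus.__doc__
print(grammar_rule)  # expression : expression PLUS term

# Under `python -OO`, __doc__ would be None, and PLY would have no
# grammar text from which to build its parsing tables.
```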
Installation process
--------------------
......@@ -96,7 +103,7 @@ setup script will then place the ``pycparser`` module into ``site-packages`` in
your Python's installation library.
Alternatively, since **pycparser** is listed in the `Python Package Index
<https://pypi.org/project/pycparser/>`_ (PyPI), you can install it using your
favorite Python packaging/distribution tool, for example with::
> pip install pycparser
......@@ -151,14 +158,21 @@ the source is a previously defined type. This is essential in order to be able
to parse C correctly.
See `this blog post
<https://eli.thegreenplace.net/2015/on-parsing-c-type-declarations-and-fake-headers>`_
for more details.
Basic usage
-----------
Take a look at the |examples|_ directory of the distribution for a few examples
of using **pycparser**. These should be enough to get you started. Please note
that most realistic C code samples would require running the C preprocessor
before passing the code to **pycparser**; see the previous sections for more
details.
.. |examples| replace:: ``examples``
.. _examples: examples
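A minimal usage sketch (assuming **pycparser** is installed; the snippet passed in is already free of preprocessor directives, so no ``cpp`` invocation is needed):

```python
# Parse a small, already-preprocessed C snippet directly with CParser.
# See the examples directory for fuller programs.
from pycparser import c_parser

parser = c_parser.CParser()
ast = parser.parse('int x = 3;', filename='<stdin>')

decl = ast.ext[0]   # the single top-level (external) declaration
print(decl.name)    # x
```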
Advanced usage
--------------
......
pycparser (2.19-1) unstable; urgency=medium
[ Stefano Rivera ]
* New upstream release.
* Refresh patches.
* Declare Rules-Requires-Root: no.
* Bump Standards-Version to 4.2.1, no changes needed.
[ Ondřej Nový ]
* d/tests: Use AUTOPKGTEST_TMP instead of ADTTMP
* d/control: Remove ancient X-Python-Version field
* d/control: Remove ancient X-Python3-Version field
-- Stefano Rivera <stefanor@debian.org> Sat, 13 Oct 2018 13:08:05 +0200
pycparser (2.18-2) unstable; urgency=medium
......
......@@ -14,7 +14,7 @@ Forwarded: not-needed
3 files changed, 4 insertions(+), 4 deletions(-)
diff --git a/pycparser/c_lexer.py b/pycparser/c_lexer.py
index de8445e..ffa921a 100644
--- a/pycparser/c_lexer.py
+++ b/pycparser/c_lexer.py
@@ -9,8 +9,8 @@
......@@ -29,7 +29,7 @@ index d9941c1..2c05cbb 100644
class CLexer(object):
diff --git a/pycparser/c_parser.py b/pycparser/c_parser.py
index 0e6e755..eee9738 100644
--- a/pycparser/c_parser.py
+++ b/pycparser/c_parser.py
@@ -8,7 +8,7 @@
......@@ -42,13 +42,13 @@ index f84d6bc..b1ece10 100644
from . import c_ast
from .c_lexer import CLexer
diff --git a/setup.py b/setup.py
index 62eddc2..9a6df29 100644
--- a/setup.py
+++ b/setup.py
@@ -60,7 +60,7 @@ setup(
'Programming Language :: Python :: 3.6',
],
python_requires=">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*",
- packages=['pycparser', 'pycparser.ply'],
+ packages=['pycparser'],
package_data={'pycparser': ['*.cfg']},
......
......@@ -4,7 +4,7 @@
# Example of using pycparser.c_generator, serving as a simplistic translator
# from C to AST and back to C.
#
# Eli Bendersky [https://eli.thegreenplace.net/]
# License: BSD
#------------------------------------------------------------------------------
from __future__ import print_function
......
......@@ -2,7 +2,7 @@
// Statically-allocated memory manager
//
// by Eli Bendersky (eliben@gmail.com)
//
// This code is in the public domain.
//----------------------------------------------------------------
#include "memmgr.h"
......@@ -11,7 +11,7 @@ typedef ulong Align;
union mem_header_union
{
struct
{
// Pointer to the next block in the free list
//
......@@ -19,7 +19,7 @@ union mem_header_union
// Size of the block (in quantas of sizeof(mem_header_t))
//
ulong size;
} s;
// Used to align headers in memory to a boundary
......@@ -80,9 +80,9 @@ static mem_header_t* get_mem_from_pool(ulong nquantas)
// Allocations are done in 'quantas' of header size.
// The search for a free block of adequate size begins at the point 'freep'
// where the last block was found.
// If a too-big block is found, it is split and the tail is returned (this
// way the header of the original needs only to have its size adjusted).
// The pointer returned to the user points to the free space within the block,
// which begins one quanta after the header.
......@@ -100,7 +100,7 @@ void* memmgr_alloc(ulong nbytes)
// First alloc call, and no free list yet ? Use 'base' for an initial
// degenerate block of size 0, which points to itself
//
if ((prevp = freep) == 0)
{
base.s.next = freep = prevp = &base;
......@@ -110,7 +110,7 @@ void* memmgr_alloc(ulong nbytes)
for (p = prevp->s.next; ; prevp = p, p = p->s.next)
{
// big enough ?
if (p->s.size >= nquantas)
{
// exactly ?
if (p->s.size == nquantas)
......@@ -151,7 +151,7 @@ void* memmgr_alloc(ulong nbytes)
}
// Scans the free list, starting at freep, looking for the place to insert the
// free block. This is either between two existing blocks or at the end of the
// list. In any case, if the block being freed is adjacent to either neighbor,
// the adjacent blocks are combined.
......@@ -169,9 +169,9 @@ void memmgr_free(void* ap)
//
for (p = freep; !(block > p && block < p->s.next); p = p->s.next)
{
// Since the free list is circular, there is one link where a
// higher-addressed block points to a lower-addressed block.
// This condition checks if the block should be actually
// inserted between them
//
if (p >= p->s.next && (block > p || block < p->s.next))
......
......@@ -2,45 +2,45 @@
// Statically-allocated memory manager
//
// by Eli Bendersky (eliben@gmail.com)
//
// This code is in the public domain.
//----------------------------------------------------------------
#ifndef MEMMGR_H
#define MEMMGR_H
//
// Memory manager: dynamically allocates memory from
// a fixed pool that is allocated statically at link-time.
//
// Usage: after calling memmgr_init() in your
// initialization routine, just use memmgr_alloc() instead
// of malloc() and memmgr_free() instead of free().
// Naturally, you can use the preprocessor to define
// malloc() and free() as aliases to memmgr_alloc() and
// memmgr_free(). This way the manager will be a drop-in
// replacement for the standard C library allocators, and can
// be useful for debugging memory allocation problems and
// leaks.
//
// Preprocessor flags you can define to customize the
// memory manager:
//
// DEBUG_MEMMGR_FATAL
// Allow printing out a message when allocations fail
//
// DEBUG_MEMMGR_SUPPORT_STATS
// Allow printing out of stats in function
// memmgr_print_stats When this is disabled,
// memmgr_print_stats does nothing.
//
// Note that in production code on an embedded system
// you'll probably want to keep those undefined, because
// they cause printf to be called.
//
// POOL_SIZE
// Size of the pool for new allocations. This is
// effectively the heap size of the application, and can
// be changed in accordance with the available memory
// resources.
//
// MIN_POOL_ALLOC_QUANTAS
......@@ -49,19 +49,19 @@
// minimize pool fragmentation in case of multiple allocations
// and deallocations, it is advisable to not allocate
// blocks that are too small.
// This flag sets the minimal amount of quantas for
// an allocation. If the size of a ulong is 4 and you
// set this flag to 16, the minimal size of an allocation
// will be 4 * 2 * 16 = 128 bytes
// If you have a lot of small allocations, keep this value
// low to conserve memory. If you have mostly large
// allocations, it is best to make it higher, to avoid
// fragmentation.
//
// Notes:
// 1. This memory manager is *not thread safe*. Use it only
// for single thread/task applications.
//
#define DEBUG_MEMMGR_SUPPORT_STATS 1
......
......@@ -29,7 +29,7 @@
# explain_c_declaration(c_decl, expand_struct=True)
# => p is a struct P containing {x is a int, y is a int}
#
# Eli Bendersky [https://eli.thegreenplace.net/]
# License: BSD
#-----------------------------------------------------------------
import copy
......
......@@ -3,7 +3,7 @@
#
# Basic example of parsing a file and dumping its parsed AST.
#
# Eli Bendersky [https://eli.thegreenplace.net/]
# License: BSD
#-----------------------------------------------------------------
from __future__ import print_function
......
......@@ -9,7 +9,7 @@
# information from the AST.
# It helps to have the pycparser/_c_ast.cfg file in front of you.
#
# Eli Bendersky [https://eli.thegreenplace.net/]
# License: BSD
#-----------------------------------------------------------------
from __future__ import print_function
......
......@@ -4,7 +4,7 @@
# Using pycparser for printing out all the calls of some function
# in a C file.
#
# Eli Bendersky [https://eli.thegreenplace.net/]
# License: BSD
#-----------------------------------------------------------------
from __future__ import print_function
......
......@@ -7,7 +7,7 @@
# This is a simple example of traversing the AST generated by
# pycparser. Call it from the root directory of pycparser.
#
# Eli Bendersky [https://eli.thegreenplace.net/]
# License: BSD
#-----------------------------------------------------------------
from __future__ import print_function
......
......@@ -3,7 +3,7 @@
#
# Tiny example of rewriting a AST node
#
# Eli Bendersky [https://eli.thegreenplace.net/]
# License: BSD
#-----------------------------------------------------------------
from __future__ import print_function
......
......@@ -4,7 +4,7 @@
# Simple example of serializing AST
#
# Hart Chu [https://github.com/CtheSky]
# Eli Bendersky [https://eli.thegreenplace.net/]
# License: BSD
#-----------------------------------------------------------------
from __future__ import print_function
......
......@@ -5,7 +5,7 @@
# the 'real' cpp if you're on Linux/Unix) and "fake" libc includes
# to parse a file that includes standard C headers.
#
# Eli Bendersky [https://eli.thegreenplace.net/]
# License: BSD
#-----------------------------------------------------------------
import sys
......
......@@ -5,7 +5,7 @@
# of 'cpp'. The same can be achieved with Clang instead of gcc. If you have
# Clang installed, simply replace 'gcc' with 'clang' here.
#
# Eli Bendersky [https://eli.thegreenplace.net/]
# License: BSD
#-------------------------------------------------------------------------------
import sys
......
Metadata-Version: 1.2
Name: pycparser
Version: 2.19
Summary: C parser in Python
Home-page: https://github.com/eliben/pycparser
Author: Eli Bendersky
......@@ -13,5 +13,12 @@ Description:
C compilers or analysis tools.
Platform: Cross Platform
Classifier: Development Status :: 5 - Production/Stable
Classifier: License :: OSI Approved :: BSD License
Classifier: Programming Language :: Python :: 2
Classifier: Programming Language :: Python :: 2.7
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.4
Classifier: Programming Language :: Python :: 3.5
Classifier: Programming Language :: Python :: 3.6
Requires-Python: >=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*
......@@ -61,11 +61,13 @@ utils/fake_libc_include/_ansi.h
utils/fake_libc_include/_fake_defines.h
utils/fake_libc_include/_fake_typedefs.h
utils/fake_libc_include/_syslist.h
utils/fake_libc_include/aio.h
utils/fake_libc_include/alloca.h
utils/fake_libc_include/ar.h
utils/fake_libc_include/argz.h
utils/fake_libc_include/assert.h
utils/fake_libc_include/complex.h
utils/fake_libc_include/cpio.h
utils/fake_libc_include/ctype.h
utils/fake_libc_include/dirent.h
utils/fake_libc_include/dlfcn.h
......@@ -77,7 +79,11 @@ utils/fake_libc_include/fcntl.h
utils/fake_libc_include/features.h
utils/fake_libc_include/fenv.h
utils/fake_libc_include/float.h
utils/fake_libc_include/fmtmsg.h
utils/fake_libc_include/fnmatch.h
utils/fake_libc_include/ftw.h
utils/fake_libc_include/getopt.h
utils/fake_libc_include/glob.h
utils/fake_libc_include/grp.h
utils/fake_libc_include/iconv.h
utils/fake_libc_include/ieeefp.h
......@@ -90,9 +96,14 @@ utils/fake_libc_include/limits.h
utils/fake_libc_include/locale.h
utils/fake_libc_include/malloc.h
utils/fake_libc_include/math.h
utils/fake_libc_include/monetary.h
utils/fake_libc_include/mqueue.h
utils/fake_libc_include/ndbm.h
utils/fake_libc_include/netdb.h
utils/fake_libc_include/newlib.h
utils/fake_libc_include/nl_types.h
utils/fake_libc_include/paths.h
utils/fake_libc_include/poll.h
utils/fake_libc_include/process.h
utils/fake_libc_include/pthread.h
utils/fake_libc_include/pwd.h
......@@ -104,6 +115,7 @@ utils/fake_libc_include/search.h
utils/fake_libc_include/semaphore.h
utils/fake_libc_include/setjmp.h
utils/fake_libc_include/signal.h
utils/fake_libc_include/spawn.h
utils/fake_libc_include/stdarg.h
utils/fake_libc_include/stdbool.h
utils/fake_libc_include/stddef.h
......@@ -111,24 +123,34 @@ utils/fake_libc_include/stdint.h
utils/fake_libc_include/stdio.h
utils/fake_libc_include/stdlib.h
utils/fake_libc_include/string.h
utils/fake_libc_include/strings.h
utils/fake_libc_include/stropts.h
utils/fake_libc_include/syslog.h
utils/fake_libc_include/tar.h
utils/fake_libc_include/termios.h
utils/fake_libc_include/tgmath.h
utils/fake_libc_include/time.h
utils/fake_libc_include/trace.h
utils/fake_libc_include/ulimit.h
utils/fake_libc_include/unctrl.h
utils/fake_libc_include/unistd.h
utils/fake_libc_include/utime.h
utils/fake_libc_include/utmp.h
utils/fake_libc_include/utmpx.h
utils/fake_libc_include/wchar.h
utils/fake_libc_include/wctype.h
utils/fake_libc_include/wordexp.h
utils/fake_libc_include/zlib.h
utils/fake_libc_include/X11/Intrinsic.h
utils/fake_libc_include/X11/Xlib.h
utils/fake_libc_include/X11/_X11_fake_defines.h
utils/fake_libc_include/X11/_X11_fake_typedefs.h
utils/fake_libc_include/arpa/inet.h
utils/fake_libc_include/asm-generic/int-ll64.h
utils/fake_libc_include/linux/socket.h
utils/fake_libc_include/linux/version.h
utils/fake_libc_include/mir_toolkit/client_types.h
utils/fake_libc_include/net/if.h
utils/fake_libc_include/netinet/in.h
utils/fake_libc_include/netinet/tcp.h
utils/fake_libc_include/openssl/err.h
......@@ -137,14 +159,20 @@ utils/fake_libc_include/openssl/hmac.h
utils/fake_libc_include/openssl/ssl.h
utils/fake_libc_include/openssl/x509v3.h
utils/fake_libc_include/sys/ioctl.h
utils/fake_libc_include/sys/ipc.h
utils/fake_libc_include/sys/mman.h
utils/fake_libc_include/sys/msg.h
utils/fake_libc_include/sys/poll.h
utils/fake_libc_include/sys/resource.h
utils/fake_libc_include/sys/select.h
utils/fake_libc_include/sys/sem.h
utils/fake_libc_include/sys/shm.h
utils/fake_libc_include/sys/socket.h
utils/fake_libc_include/sys/stat.h
utils/fake_libc_include/sys/statvfs.h
utils/fake_libc_include/sys/sysctl.h
utils/fake_libc_include/sys/time.h
utils/fake_libc_include/sys/times.h
utils/fake_libc_include/sys/types.h
utils/fake_libc_include/sys/uio.h
utils/fake_libc_include/sys/un.h
......
......@@ -4,13 +4,14 @@
# This package file exports some convenience functions for
# interacting with pycparser
#
# Eli Bendersky [https://eli.thegreenplace.net/]
# License: BSD
#-----------------------------------------------------------------
__all__ = ['c_lexer', 'c_parser', 'c_ast']
__version__ = '2.19'
import io
from subprocess import check_output
from .c_parser import CParser
......@@ -38,11 +39,7 @@ def preprocess_file(filename, cpp_path='cpp', cpp_args=''):
try:
# Note the use of universal_newlines to treat all newlines
# as \n for Python's purpose
text = check_output(path_list, universal_newlines=True)
except OSError as e:
raise RuntimeError("Unable to invoke 'cpp'. " +
'Make sure its path was passed correctly\n' +
......@@ -85,7 +82,7 @@ def parse_file(filename, use_cpp=False, cpp_path='cpp', cpp_args='',
if use_cpp:
text = preprocess_file(filename, cpp_path, cpp_args)
else:
with io.open(filename) as f:
text = f.read()
if parser is None:
......
......@@ -7,7 +7,7 @@
# The design of this module was inspired by astgen.py from the
# Python 2.5 code-base.
#
# Eli Bendersky [https://eli.thegreenplace.net/]
# License: BSD
#-----------------------------------------------------------------
import pprint
......@@ -63,6 +63,7 @@ class NodeCfg(object):
contents: a list of contents - attributes and child nodes
See comment at the top of the configuration file for details.
"""
def __init__(self, name, contents):
self.name = name
self.all_entries = []
......@@ -84,6 +85,8 @@ class NodeCfg(object):
def generate_source(self):
src = self._gen_init()
src += '\n' + self._gen_children()
src += '\n' + self._gen_iter()
src += '\n' + self._gen_attr_names()
return src
......@@ -131,6 +134,33 @@ class NodeCfg(object):
return src
def _gen_iter(self):
src = ' def __iter__(self):\n'
if self.all_entries:
for child in self.child:
src += (
' if self.%(child)s is not None:\n' +
' yield self.%(child)s\n') % (dict(child=child))
for seq_child in self.seq_child:
src += (
' for child in (self.%(child)s or []):\n'
' yield child\n') % (dict(child=seq_child))
if not (self.child or self.seq_child):
# Empty generator
src += (
' return\n' +
' yield\n')
else:
# Empty generator
src += (
' return\n' +
' yield\n')
return src
def _gen_attr_names(self):
src = " attr_names = (" + ''.join("%r, " % nm for nm in self.attr) + ')'
return src
......@@ -150,7 +180,7 @@ r'''#-----------------------------------------------------------------
#
# AST Node classes.
#
# Eli Bendersky [https://eli.thegreenplace.net/]
# License: BSD
#-----------------------------------------------------------------
......@@ -159,11 +189,38 @@ r'''#-----------------------------------------------------------------
_PROLOGUE_CODE = r'''
import sys
def _repr(obj):
"""
Get the representation of an object, with dedicated pprint-like format for lists.
"""
if isinstance(obj, list):
return '[' + (',\n '.join((_repr(e).replace('\n', '\n ') for e in obj))) + '\n]'
else:
return repr(obj)
class Node(object):
__slots__ = ()
""" Abstract base class for AST nodes.
"""
def __repr__(self):
""" Generates a python representation of the current node
"""
result = self.__class__.__name__ + '('
indent = ''
separator = ''
for name in self.__slots__[:-2]:
result += separator
result += indent
result += name + '=' + (_repr(getattr(self, name)).replace('\n', '\n ' + (' ' * (len(name) + len(self.__class__.__name__)))))
separator = ','
indent = '\n ' + (' ' * len(self.__class__.__name__))
result += indent + ')'
return result
def children(self):
""" A sequence of all children that are Nodes
"""
......@@ -253,26 +310,29 @@ class NodeVisitor(object):
* Modeled after Python's own AST visiting facilities
(the ast module of Python 3.0)
"""
_method_cache = None
def visit(self, node):
""" Visit a node.
"""
if self._method_cache is None:
self._method_cache = {}
visitor = self._method_cache.get(node.__class__.__name__, None)
if visitor is None:
method = 'visit_' + node.__class__.__name__
visitor = getattr(self, method, self.generic_visit)
self._method_cache[node.__class__.__name__] = visitor
return visitor(node)
def generic_visit(self, node):
""" Called if no explicit visitor function exists for a
node. Implements preorder visiting of the node.
"""
for c in node:
self.visit(c)
'''
if __name__ == "__main__":
import sys
ast_gen = ASTCodeGenerator('_c_ast.cfg')
ast_gen.generate(open('c_ast.py', 'w'))
......@@ -6,7 +6,7 @@
# Also generates AST code from the configuration file.
# Should be called from the pycparser directory.
#
# Eli Bendersky [https://eli.thegreenplace.net/]
# License: BSD
#-----------------------------------------------------------------
......
......@@ -9,7 +9,7 @@
# <name>** - a sequence of child nodes
# <name> - an attribute
#
# Eli Bendersky [https://eli.thegreenplace.net/]
# License: BSD
#-----------------------------------------------------------------
......
......@@ -3,7 +3,7 @@
#
# Some utilities used by the parser to create a friendlier AST.
#
# Eli Bendersky [https://eli.thegreenplace.net/]
# License: BSD
#------------------------------------------------------------------------------
......
......@@ -3,7 +3,7 @@
#
# C code generator from pycparser AST nodes.
#
# Eli Bendersky [https://eli.thegreenplace.net/]
# License: BSD
#------------------------------------------------------------------------------
from . import c_ast
......@@ -39,7 +39,7 @@ class CGenerator(object):
def visit_ID(self, n):
return n.name
def visit_Pragma(self, n):
ret = '#pragma'
if n.string:
@@ -135,18 +135,20 @@ class CGenerator(object):
return ', '.join(visited_subexprs)
def visit_Enum(self, n):
s = 'enum'
if n.name: s += ' ' + n.name
if n.values:
s += ' {'
for i, enumerator in enumerate(n.values.enumerators):
s += enumerator.name
if enumerator.value:
s += ' = ' + self.visit(enumerator.value)
if i != len(n.values.enumerators) - 1:
s += ', '
s += '}'
return s
return self._generate_struct_union_enum(n, name='enum')
def visit_Enumerator(self, n):
if not n.value:
return '{indent}{name},\n'.format(
indent=self._make_indent(),
name=n.name,
)
else:
return '{indent}{name} = {value},\n'.format(
indent=self._make_indent(),
name=n.name,
value=self.visit(n.value),
)
def visit_FuncDef(self, n):
decl = self.visit(n.decl)
@@ -268,43 +270,58 @@ class CGenerator(object):
return '...'
def visit_Struct(self, n):
return self._generate_struct_union(n, 'struct')
return self._generate_struct_union_enum(n, 'struct')
def visit_Typename(self, n):
return self._generate_type(n.type)
def visit_Union(self, n):
return self._generate_struct_union(n, 'union')
return self._generate_struct_union_enum(n, 'union')
def visit_NamedInitializer(self, n):
s = ''
for name in n.name:
if isinstance(name, c_ast.ID):
s += '.' + name.name
elif isinstance(name, c_ast.Constant):
s += '[' + name.value + ']'
else:
s += '[' + self.visit(name) + ']'
s += ' = ' + self._visit_expr(n.expr)
return s
def visit_FuncDecl(self, n):
return self._generate_type(n)
def _generate_struct_union(self, n, name):
""" Generates code for structs and unions. name should be either
'struct' or 'union'.
def _generate_struct_union_enum(self, n, name):
""" Generates code for structs, unions, and enums. name should be
'struct', 'union', or 'enum'.
"""
if name in ('struct', 'union'):
members = n.decls
body_function = self._generate_struct_union_body
else:
assert name == 'enum'
members = None if n.values is None else n.values.enumerators
body_function = self._generate_enum_body
s = name + ' ' + (n.name or '')
if n.decls:
if members is not None:
# None means no members
# Empty sequence means an empty list of members
s += '\n'
s += self._make_indent()
self.indent_level += 2
s += '{\n'
for decl in n.decls:
s += self._generate_stmt(decl)
s += body_function(members)
self.indent_level -= 2
s += self._make_indent() + '}'
return s
def _generate_struct_union_body(self, members):
return ''.join(self._generate_stmt(decl) for decl in members)
def _generate_enum_body(self, members):
# `[:-2] + '\n'` removes the final `,` from the enumerator list
return ''.join(self.visit(value) for value in members)[:-2] + '\n'
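The `[:-2] + '\n'` in `_generate_enum_body` works because `visit_Enumerator` always emits `'<indent><name>[ = value],\n'` — so the joined body always ends in `,\n`, and slicing off the last two characters removes exactly the final comma. A tiny demonstration with hypothetical enumerator strings:

```python
# Each rendered enumerator ends in ',\n'; the last comma must not appear
# in the generated C. Slice off the trailing ',\n' and restore the newline.
lines = ['  RED,\n', '  GREEN = 5,\n', '  BLUE,\n']
body = ''.join(lines)[:-2] + '\n'
```

After the slice, only the last enumerator loses its comma; the others keep theirs.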
def _generate_stmt(self, n, add_indent=False):
""" Generation from a statement node. This method exists as a wrapper
for individual visit_* methods to handle different treatment of
@@ -407,5 +424,5 @@ class CGenerator(object):
""" Returns True for nodes that are "simple" - i.e. nodes that always
have higher precedence than operators.
"""
return isinstance(n,( c_ast.Constant, c_ast.ID, c_ast.ArrayRef,
c_ast.StructRef, c_ast.FuncCall))
return isinstance(n, (c_ast.Constant, c_ast.ID, c_ast.ArrayRef,
c_ast.StructRef, c_ast.FuncCall))
@@ -3,7 +3,7 @@
#
# CLexer class: lexer for the C language
#
# Eli Bendersky [http://eli.thegreenplace.net]
# Eli Bendersky [https://eli.thegreenplace.net/]
# License: BSD
#------------------------------------------------------------------------------
import re
@@ -482,4 +482,3 @@ class CLexer(object):
def t_error(self, t):
msg = 'Illegal character %s' % repr(t.value[0])
self._error(msg, t)
@@ -3,7 +3,7 @@
#
# CParser class: Parser and AST builder for the C language
#
# Eli Bendersky [http://eli.thegreenplace.net]
# Eli Bendersky [https://eli.thegreenplace.net/]
# License: BSD
#------------------------------------------------------------------------------
import re
@@ -616,6 +616,59 @@ class CParser(PLYParser):
"""
p[0] = p[1]
# A pragma is generally considered a decorator rather than an actual
# statement. Still, for the purposes of analyzing an abstract syntax tree of
# C code, pragmas should not be ignored, and they were previously treated as
# statements. This presents a problem for constructs that take a statement,
# such as labeled_statements, selection_statements, and iteration_statements,
# causing a misleading structure in the AST. For example, consider the
# following C code.
#
# for (int i = 0; i < 3; i++)
# #pragma omp critical
# sum += 1;
#
# This code will compile and execute "sum += 1;" as the body of the for loop.
# Previous implementations of pycparser would render the AST for this
# block of code as follows:
#
# For:
# DeclList:
# Decl: i, [], [], []
# TypeDecl: i, []
# IdentifierType: ['int']
# Constant: int, 0
# BinaryOp: <
# ID: i
# Constant: int, 3
# UnaryOp: p++
# ID: i
# Pragma: omp critical
# Assignment: +=
# ID: sum
# Constant: int, 1
#
# This AST misleadingly takes the Pragma as the body of the loop and the
# assignment then becomes a sibling of the loop.
#
# To solve edge cases like these, the pragmacomp_or_statement rule groups
# a pragma and its following statement (which would otherwise be orphaned)
# using a compound block, effectively turning the above code into:
#
# for (int i = 0; i < 3; i++) {
# #pragma omp critical
# sum += 1;
# }
def p_pragmacomp_or_statement(self, p):
""" pragmacomp_or_statement : pppragma_directive statement
| statement
"""
if isinstance(p[1], c_ast.Pragma) and len(p) == 3:
p[0] = c_ast.Compound(
block_items=[p[1], p[2]],
coord=self._token_coord(p, 1))
else:
p[0] = p[1]
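The grammar action above can be sketched as a plain function, using stand-in classes rather than pycparser's real `c_ast` nodes: when a pragma directive precedes the statement, both are wrapped in a synthetic compound block so the statement is not orphaned outside the loop or branch body.

```python
# Stand-in node classes (hypothetical, not pycparser's c_ast).
class Pragma:
    def __init__(self, text):
        self.text = text

class Compound:
    def __init__(self, block_items):
        self.block_items = block_items

def pragmacomp_or_statement(*parsed):
    # Mirrors the rule: pppragma_directive statement | statement
    if len(parsed) == 2 and isinstance(parsed[0], Pragma):
        # Group the pragma with its following statement in one block.
        return Compound(block_items=list(parsed))
    return parsed[0]

stmt = 'sum += 1;'
wrapped = pragmacomp_or_statement(Pragma('omp critical'), stmt)
plain = pragmacomp_or_statement(stmt)
```

With the pragma present, the result is a compound containing both items; a bare statement passes through unchanged.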
# In C, declarations can come several in a line:
# int x, *px, romulo = 5;
#
@@ -855,6 +908,7 @@ class CParser(PLYParser):
| struct_or_union TYPEID
"""
klass = self._select_struct_union_class(p[1])
# None means no list of members
p[0] = klass(
name=p[2],
decls=None,
@@ -862,22 +916,40 @@ class CParser(PLYParser):
def p_struct_or_union_specifier_2(self, p):
""" struct_or_union_specifier : struct_or_union brace_open struct_declaration_list brace_close
| struct_or_union brace_open brace_close
"""
klass = self._select_struct_union_class(p[1])
p[0] = klass(
name=None,
decls=p[3],
coord=self._token_coord(p, 2))
if len(p) == 4:
# Empty sequence means an empty list of members
p[0] = klass(
name=None,
decls=[],
coord=self._token_coord(p, 2))
else:
p[0] = klass(
name=None,
decls=p[3],
coord=self._token_coord(p, 2))
def p_struct_or_union_specifier_3(self, p):
""" struct_or_union_specifier : struct_or_union ID brace_open struct_declaration_list brace_close
| struct_or_union ID brace_open brace_close
| struct_or_union TYPEID brace_open struct_declaration_list brace_close
| struct_or_union TYPEID brace_open brace_close
"""
klass = self._select_struct_union_class(p[1])
p[0] = klass(
name=p[2],
decls=p[4],
coord=self._token_coord(p, 2))
if len(p) == 5:
# Empty sequence means an empty list of members
p[0] = klass(
name=p[2],
decls=[],
coord=self._token_coord(p, 2))
else:
p[0] = klass(
name=p[2],
decls=p[4],
coord=self._token_coord(p, 2))
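The `len(p)` check above preserves a distinction the AST relies on: `decls=None` means a bare reference with no member list at all, while `decls=[]` means an explicitly empty definition. A stand-in illustration (hypothetical class, not pycparser's `c_ast.Struct`):

```python
class Struct:
    """Stand-in for a struct node: decls is None or a list of members."""
    def __init__(self, name, decls):
        self.name = name
        self.decls = decls

declared = Struct('T', decls=None)  # models: struct T;   (no member list)
empty = Struct('T', decls=[])       # models: struct T {}; (empty member list)
```

Consumers can then distinguish a forward reference from an empty definition by checking `decls is None` rather than truthiness.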
def p_struct_or_union(self, p):
""" struct_or_union : STRUCT
@@ -939,6 +1011,11 @@ class CParser(PLYParser):
"""
p[0] = None
def p_struct_declaration_3(self, p):
""" struct_declaration : pppragma_directive
"""
p[0] = [p[1]]
def p_struct_declarator_list(self, p):
""" struct_declarator_list : struct_declarator
| struct_declarator_list COMMA struct_declarator
@@ -1405,44 +1482,44 @@ class CParser(PLYParser):
coord=self._token_coord(p, 1))
def p_labeled_statement_1(self, p):
""" labeled_statement : ID COLON statement """
""" labeled_statement : ID COLON pragmacomp_or_statement """
p[0] = c_ast.Label(p[1], p[3], self._token_coord(p, 1))
def p_labeled_statement_2(self, p):
""" labeled_statement : CASE constant_expression COLON statement """
""" labeled_statement : CASE constant_expression COLON pragmacomp_or_statement """
p[0] = c_ast.Case(p[2], [p[4]], self._token_coord(p, 1))
def p_labeled_statement_3(self, p):
""" labeled_statement : DEFAULT COLON statement """
""" labeled_statement : DEFAULT COLON pragmacomp_or_statement """
p[0] = c_ast.Default([p[3]], self._token_coord(p, 1))
def p_selection_statement_1(self, p):
""" selection_statement : IF LPAREN expression RPAREN statement """
""" selection_statement : IF LPAREN expression RPAREN pragmacomp_or_statement """
p[0] = c_ast.If(p[3], p[5], None, self._token_coord(p, 1))
def p_selection_statement_2(self, p):
""" selection_statement : IF LPAREN expression RPAREN statement ELSE statement """
""" selection_statement : IF LPAREN expression RPAREN statement ELSE pragmacomp_or_statement """
p[0] = c_ast.If(p[3], p[5], p[7], self._token_coord(p, 1))
def p_selection_statement_3(self, p):
""" selection_statement : SWITCH LPAREN expression RPAREN statement """
""" selection_statement : SWITCH LPAREN expression RPAREN pragmacomp_or_statement """
p[0] = fix_switch_cases(
c_ast.Switch(p[3], p[5], self._token_coord(p, 1)))
def p_iteration_statement_1(self, p):
""" iteration_statement : WHILE LPAREN expression RPAREN statement """
""" iteration_statement : WHILE LPAREN expression RPAREN pragmacomp_or_statement """
p[0] = c_ast.While(p[3], p[5], self._token_coord(p, 1))
def p_iteration_statement_2(self, p):
""" iteration_statement : DO statement WHILE LPAREN expression RPAREN SEMI """
""" iteration_statement : DO pragmacomp_or_statement WHILE LPAREN expression RPAREN SEMI """
p[0] = c_ast.DoWhile(p[5], p[2], self._token_coord(p, 1))
def p_iteration_statement_3(self, p):
""" iteration_statement : FOR LPAREN expression_opt SEMI expression_opt SEMI expression_opt RPAREN statement """
""" iteration_statement : FOR LPAREN expression_opt SEMI expression_opt SEMI expression_opt RPAREN pragmacomp_or_statement """
p[0] = c_ast.For(p[3], p[5], p[7], p[9], self._token_coord(p, 1))
def p_iteration_statement_4(self, p):
""" iteration_statement : FOR LPAREN declaration expression_opt SEMI expression_opt RPAREN statement """
""" iteration_statement : FOR LPAREN declaration expression_opt SEMI expression_opt RPAREN pragmacomp_or_statement """
p[0] = c_ast.For(c_ast.DeclList(p[3], self._token_coord(p, 1)),
p[4], p[6], p[8], self._token_coord(p, 1))
@@ -1697,8 +1774,18 @@ class CParser(PLYParser):
""" constant : FLOAT_CONST
| HEX_FLOAT_CONST
"""
if 'x' in p[1].lower():
t = 'float'
else:
if p[1][-1] in ('f', 'F'):
t = 'float'
elif p[1][-1] in ('l', 'L'):
t = 'long double'
else:
t = 'double'
p[0] = c_ast.Constant(
'float', p[1], self._token_coord(p, 1))
t, p[1], self._token_coord(p, 1))
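The branch above can be restated as a standalone helper (hypothetical name, for illustration): a hex float constant is always typed `float`, otherwise the `f`/`F` and `l`/`L` suffixes select the type, defaulting to `double`.

```python
def float_const_type(token):
    """Classify a C floating-point literal token by its form."""
    if 'x' in token.lower():
        # Hex float constant, e.g. 0x1.8p3
        return 'float'
    if token[-1] in ('f', 'F'):
        return 'float'
    if token[-1] in ('l', 'L'):
        return 'long double'
    return 'double'
```

This fixes the pre-2.19 behavior, which typed every floating-point constant as `'float'` regardless of suffix.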
def p_constant_3(self, p):
""" constant : CHAR_CONST
@@ -1761,22 +1848,3 @@ class CParser(PLYParser):
column=self.clex.find_tok_column(p)))
else:
self._parse_error('At end of input', self.clex.filename)
#------------------------------------------------------------------------------
if __name__ == "__main__":
import pprint
import time, sys
#t1 = time.time()
#parser = CParser(lex_optimize=True, yacc_debug=True, yacc_optimize=False)
#sys.write(time.time() - t1)
#buf = '''
#int (*k)(int);
#'''
## set debuglevel to 2 for debugging
#t = parser.parse(buf, 'x.c', debuglevel=0)
#t.show(showcoord=True)
@@ -7,8 +7,6 @@
#
# This module implements an ANSI-C style lexical preprocessor for PLY.
# -----------------------------------------------------------------------------
from __future__ import generators
import sys
# Some Python 3 compatibility shims
@@ -309,7 +309,7 @@ class LRParser:
# certain kinds of advanced parsing situations where the lexer and parser interact with
# each other or change states (i.e., manipulation of scope, lexer states, etc.).
#
# See: http://www.gnu.org/software/bison/manual/html_node/Default-Reductions.html#Default-Reductions
# See: https://www.gnu.org/software/bison/manual/html_node/Default-Reductions.html#Default-Reductions
def set_defaulted_states(self):
self.defaulted_states = {}
for state, actions in self.action.items():
@@ -4,10 +4,11 @@
# PLYParser class and other utilities for simplifying programming
# parsers with PLY
#
# Eli Bendersky [http://eli.thegreenplace.net]
# Eli Bendersky [https://eli.thegreenplace.net/]
# License: BSD
#-----------------------------------------------------------------
import warnings
class Coord(object):
""" Coordinates of a syntactic element. Consists of:
@@ -87,12 +88,28 @@ def template(cls):
See `parameterized` for more information on parameterized rules.
"""
issued_nodoc_warning = False
for attr_name in dir(cls):
if attr_name.startswith('p_'):
method = getattr(cls, attr_name)
if hasattr(method, '_params'):
delattr(cls, attr_name) # Remove template method
_create_param_rules(cls, method)
# Remove the template method
delattr(cls, attr_name)
# Create parameterized rules from this method; only run this if
# the method has a docstring. This is to address an issue when
# pycparser's users are installed in -OO mode which strips
# docstrings away.
# See: https://github.com/eliben/pycparser/pull/198/ and
# https://github.com/eliben/pycparser/issues/197
# for discussion.
if method.__doc__ is not None:
_create_param_rules(cls, method)
elif not issued_nodoc_warning:
warnings.warn(
'parsing methods must have __doc__ for pycparser to work properly',
RuntimeWarning,
stacklevel=2)
issued_nodoc_warning = True
return cls
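The `method.__doc__ is not None` guard above matters because PLY stores each grammar rule in the method's docstring; under `python -OO`, docstrings are stripped and the rule would silently vanish. A standalone sketch of that guard (stand-in code, not pycparser's actual `template` decorator):

```python
import warnings

class Rules:
    def p_good(self):
        """ statement : expression SEMI """

    def p_stripped(self):
        pass  # simulates a docstring removed by -OO

def collect_rules(cls):
    """Gather p_* methods that still carry their grammar docstring."""
    rules, warned = [], False
    for attr_name in dir(cls):
        if not attr_name.startswith('p_'):
            continue
        method = getattr(cls, attr_name)
        if method.__doc__ is not None:
            rules.append(attr_name)
        elif not warned:
            # Warn once instead of registering a broken, rule-less method.
            warnings.warn('parsing methods must have __doc__',
                          RuntimeWarning, stacklevel=2)
            warned = True
    return rules

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter('always')
    found = collect_rules(Rules)
```

Only the method with an intact docstring is collected; the stripped one triggers a single `RuntimeWarning`.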
[bdist_wheel]
universal = 1
[metadata]
license_file = LICENSE
[egg_info]
tag_build =
tag_date = 0
@@ -10,9 +10,12 @@ except ImportError:
def _run_build_tables(dir):
from subprocess import call
call([sys.executable, '_build_tables.py'],
cwd=os.path.join(dir, 'pycparser'))
from subprocess import check_call
# This is run inside the install staging directory (that had no .pyc files)
# We don't want to generate any.
# https://github.com/eliben/pycparser/pull/135
check_call([sys.executable, '-B', '_build_tables.py'],
cwd=os.path.join(dir, 'pycparser'))
class install(_install):
@@ -40,15 +43,23 @@ setup(
C compilers or analysis tools.
""",
license='BSD',
version='2.18',
version='2.19',
author='Eli Bendersky',
maintainer='Eli Bendersky',
author_email='eliben@gmail.com',
url='https://github.com/eliben/pycparser',
platforms='Cross Platform',
classifiers = [
'Development Status :: 5 - Production/Stable',
'License :: OSI Approved :: BSD License',
'Programming Language :: Python :: 2',
'Programming Language :: Python :: 3',],
'Programming Language :: Python :: 2.7',
'Programming Language :: Python :: 3',
'Programming Language :: Python :: 3.4',
'Programming Language :: Python :: 3.5',
'Programming Language :: Python :: 3.6',
],
python_requires=">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*",
packages=['pycparser', 'pycparser.ply'],
package_data={'pycparser': ['*.cfg']},
cmdclass={'install': install, 'sdist': sdist},
@@ -16,11 +16,11 @@ class Test_c_ast(unittest.TestCase):
left=c_ast.Constant(type='int', value='6'),
right=c_ast.ID(name='joe'))
self.failUnless(isinstance(b1.left, c_ast.Constant))
self.assertIsInstance(b1.left, c_ast.Constant)
self.assertEqual(b1.left.type, 'int')
self.assertEqual(b1.left.value, '6')
self.failUnless(isinstance(b1.right, c_ast.ID))
self.assertIsInstance(b1.right, c_ast.ID)
self.assertEqual(b1.right.name, 'joe')
def test_weakref_works_on_nodes(self):
@@ -38,7 +38,6 @@ class Test_c_ast(unittest.TestCase):
self.assertEqual(weakref.getweakrefcount(coord), 1)
class TestNodeVisitor(unittest.TestCase):
class ConstantVisitor(c_ast.NodeVisitor):
def __init__(self):
@@ -94,7 +93,57 @@ class TestNodeVisitor(unittest.TestCase):
cv.visit(comp)
self.assertEqual(cv.values,
['5.6', 't', '5.6', 't', 't', '5.6', 't'])
['5.6', 't', '5.6', 't', 't', '5.6', 't'])
def test_repr(self):
c1 = c_ast.Constant(type='float', value='5.6')
c2 = c_ast.Constant(type='char', value='t')
b1 = c_ast.BinaryOp(
op='+',
left=c1,
right=c2)
b2 = c_ast.BinaryOp(
op='-',
left=b1,
right=c2)
comp = c_ast.Compound(
block_items=[b1, b2, c1, c2])
expected = ("Compound(block_items=[BinaryOp(op='+',\n"
" left=Constant(type='float',\n"
" value='5.6'\n"
" ),\n"
" right=Constant(type='char',\n"
" value='t'\n"
" )\n"
" ),\n"
" BinaryOp(op='-',\n"