Commit 2d63e231 authored by Fabian Wolff's avatar Fabian Wolff

New upstream version 5.5.0

parent 725843de
Version 5.5.0 (29 April 2018)
* Fix accidental API breakage that impacts OpenCog.
* Fix memory leak when parsing with null links.
* Python bindings: Add an optional parse-option argument to parse().
* Add an extended version API and use it in "link-parser --version".
* Fix spurious errors if the last dict line is a comment.
* Fix garbage report if EOF encountered in a quoted dict word.
* Fix garbage report if whitespace encountered in a quoted dict word.
* Add a per-command help in link-parser.
* Add a command line completion in link-parser.
* Enable build of word-graph printing support by default.
* Add idiom lookup in link-parser's dict lookup command (!!idiom_here).
* Improve handling of quoted words (e.g. single words in "scare quotes").
* Fix random selection of linkages so that it's actually random.
Version 5.4.4 (11 March 2018)
* Dictionary loading now thread safe.
......
......@@ -32,7 +32,7 @@ EXTRA_DIST = \
docker/docker-parser/Dockerfile \
docker/docker-python/Dockerfile \
docker/docker-server/Dockerfile \
m4/varcheckpoint.m4
m4/varcheckpoint.m4 \
msvc14/LGlib-features.props \
msvc14/LinkGrammarExe.vcxproj \
msvc14/LinkGrammarExe.vcxproj.filters \
......@@ -50,6 +50,7 @@ EXTRA_DIST = \
msvc14/Python3.vcxproj.filters \
msvc14/README.md \
msvc14/make-check.py \
mingw/README.Cygwin \
mingw/README.MSYS \
mingw/README-Cygwin.md \
mingw/README-MSYS.md \
mingw/README-MSYS2.md \
TODO
......@@ -269,6 +269,7 @@ EGREP = @EGREP@
EXEEXT = @EXEEXT@
FGREP = @FGREP@
GREP = @GREP@
HOST_OS = @HOST_OS@
HUNSPELL_CFLAGS = @HUNSPELL_CFLAGS@
HUNSPELL_LIBS = @HUNSPELL_LIBS@
INSTALL = @INSTALL@
......@@ -283,6 +284,7 @@ LDFLAGS = @LDFLAGS@
LEX = @LEX@
LEXLIB = @LEXLIB@
LEX_OUTPUT_ROOT = @LEX_OUTPUT_ROOT@
LG_DEFS = @LG_DEFS@
LG_PYDIR = @LG_PYDIR@
LIBEDIT_CFLAGS = @LIBEDIT_CFLAGS@
LIBEDIT_LIBS = @LIBEDIT_LIBS@
......@@ -352,7 +354,6 @@ SQLITE3_LIBS = @SQLITE3_LIBS@
STRIP = @STRIP@
SWIG = @SWIG@
SWIG_LIB = @SWIG_LIB@
SWIGfound = @SWIGfound@
VERSION = @VERSION@
VERSION_INFO = @VERSION_INFO@
WARN_CFLAGS = @WARN_CFLAGS@
......@@ -452,7 +453,28 @@ EXTRA_DIST = \
docker/docker-parser/Dockerfile \
docker/docker-python/Dockerfile \
docker/docker-server/Dockerfile \
m4/varcheckpoint.m4
m4/varcheckpoint.m4 \
msvc14/LGlib-features.props \
msvc14/LinkGrammarExe.vcxproj \
msvc14/LinkGrammarExe.vcxproj.filters \
msvc14/LinkGrammarJava.vcxproj \
msvc14/LinkGrammarJava.vcxproj.filters \
msvc14/LinkGrammar.sln \
msvc14/LinkGrammar.vcxproj \
msvc14/LinkGrammar.vcxproj.filters \
msvc14/Local.props \
msvc14/confvar.bat \
msvc14/MSVC-common.props \
msvc14/post-build.bat \
msvc14/Python2.vcxproj \
msvc14/Python2.vcxproj.filters \
msvc14/Python3.vcxproj.filters \
msvc14/README.md \
msvc14/make-check.py \
mingw/README-Cygwin.md \
mingw/README-MSYS.md \
mingw/README-MSYS2.md \
TODO
all: all-recursive
......@@ -948,26 +970,6 @@ uninstall-am: uninstall-pkgconfigDATA
.PRECIOUS: Makefile
msvc14/LGlib-features.props \
msvc14/LinkGrammarExe.vcxproj \
msvc14/LinkGrammarExe.vcxproj.filters \
msvc14/LinkGrammarJava.vcxproj \
msvc14/LinkGrammarJava.vcxproj.filters \
msvc14/LinkGrammar.sln \
msvc14/LinkGrammar.vcxproj \
msvc14/LinkGrammar.vcxproj.filters \
msvc14/Local.props \
msvc14/confvar.bat \
msvc14/MSVC-common.props \
msvc14/post-build.bat \
msvc14/Python2.vcxproj \
msvc14/Python2.vcxproj.filters \
msvc14/Python3.vcxproj.filters \
msvc14/README.md \
msvc14/make-check.py \
mingw/README.Cygwin \
mingw/README.MSYS \
TODO
# Tell versions [3.59,3.63) of GNU make to not export all variables.
# Otherwise a system limit (for SysV at least) may be exceeded.
......
[ANNOUNCE] Link-Grammar Version 5.4.4 is now available.
I'm pleased to announce that version 5.4.4 is now available. I don't
normally announce minor versions, but this one was almost named 5.5.0,
which suggests that there were some important changes. Dictionary
loading is now thread safe. Security vulnerabilities are fixed. Parsing
of Russian is now 2x faster than before. Connectors can be individually
given length limits - handy for morphology and phonetic agreement - and
the root reason for the Russian speedup. An assortment of fixes to the
English dictionary, including a reversal of some back-sliding in the
test corpus.
You can download link-grammar from
http://www.abisource.com/downloads/link-grammar/current/
The website is here:
https://www.abisource.com/projects/link-grammar/
WHAT IS LINK GRAMMAR?
The Link Grammar Parser is a syntactic parser of English (and other
languages as well), based on Link Grammar, an original theory of English
syntax. Given a sentence, the system assigns to it a syntactic structure,
which consists of a set of labelled links connecting pairs of words.
=================================================================
=================================================================
......
Link Grammar Parser
===================
***Version 5.4.4***
***Version 5.5.0***
The Link Grammar Parser implements the Sleator/Temperley/Lafferty
theory of natural language parsing. This version of the parser is
......@@ -186,7 +186,7 @@ corruption of the dataset during download, and to help ensure that
no malicious changes were made to the code internals by third
parties. The signatures can be checked with the gpg command:
`gpg --verify link-grammar-5.4.4.tar.gz.asc`
`gpg --verify link-grammar-5.5.0.tar.gz.asc`
which should generate output identical to (except for the date):
```
......@@ -201,7 +201,7 @@ verify the check-sums, issue `md5sum -c MD5SUM` at the command line.
Tags in `git` can be verified by performing the following:
```
gpg --recv-keys --keyserver keyserver.ubuntu.com EB6AA534E0C0651C
git tag -v link-grammar-5.4.4
git tag -v link-grammar-5.5.0
```
......@@ -1064,17 +1064,22 @@ and come to the conclusion that one should please some unstated
object, and then turn off the lights. (Perhaps one is pleasing
by turning off the lights?)
### Punctuation, zero-copula, zero-that:
Poorly punctuated sentences cause problems: for example:
```text
"Mike was not first, nor was he last."
"Mike was not first nor was he last."
```
The one without the comma currently fails to parse. How can we
deal with this in a simple, fast, elegant way? Similar questions
for zero-copula and zero-that sentences.
### Bad grammar:
When a sentence fails to parse, look for:
* confused words: its/it's, there/their/they're, to/too, your/you're ...
These could be added at high cost to the dicts.
* missing apostrophes in possessives: "the peoples desires"
* determiner agreement errors: "a books"
* aux verb agreement errors: "to be hooks up"
Poor agreement might be handled by giving a cost to mismatched
lower-case connector letters.
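The mismatched-subscript idea above can be sketched in a few lines. This is an illustrative toy, not link-grammar's actual connector-matching code; the connector strings and the cost value are assumptions for the sketch:

```python
# Toy sketch: score agreement between two connectors, where mismatched
# lower-case subscript letters incur a cost instead of a hard failure.
# Connector syntax is simplified (upper-case base + lower-case subscript).

def agreement_cost(left, right, mismatch_cost=1.0):
    def split(connector):
        base = "".join(ch for ch in connector if ch.isupper())
        sub = "".join(ch for ch in connector if ch.islower())
        return base, sub

    lbase, lsub = split(left)
    rbase, rsub = split(right)
    if lbase != rbase:
        return None  # different connector types can never link
    cost = 0.0
    # An absent subscript acts as a wildcard; only compared letters
    # that differ (e.g. "Ss" vs "Sp", singular vs plural) add cost.
    for a, b in zip(lsub, rsub):
        if a != b:
            cost += mismatch_cost
    return cost

cost = agreement_cost("Ss", "Sp")
```

Under this scheme, "a books" would still parse, but with a nonzero cost that pushes it below well-agreeing linkages in the ranking.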
### Zero/phantom words:
A common phenomenon in English is that some words that one might
expect to "properly" be present can disappear under various conditions.
Below is a sampling of these. Some possible solutions are given below.
Expressions such as "Looks good" have an implicit "it" (also called
a zero-it or phantom-it) in them; that is, the sentence should really
parse as "(it) looks good". The dictionary could be simplified by
......@@ -1098,6 +1103,34 @@ Some complex phantom constructions:
See also [github issue #224](https://github.com/opencog/link-grammar/issues/224).
#### Punctuation, zero-copula, zero-that:
Poorly punctuated sentences cause problems: for example:
```text
"Mike was not first, nor was he last."
"Mike was not first nor was he last."
```
The one without the comma currently fails to parse. How can we
deal with this in a simple, fast, elegant way? Similar questions
for zero-copula and zero-that sentences.
#### Context-dependent zero phrases.
Consider an argument between a professor and a dean, and the dean
wants the professor to write a brilliant review. At the end of the
argument, the dean exclaims: "I want the review brilliant!" This
is a predicative adjective; clearly it means "I want the review
[that you write to be] brilliant." However, taken out of context,
such a construction is ungrammatical, as the predicativeness is not
at all apparent, and it reads just as incorrectly as would
"*Hey Joe, can you hand me that review brilliant?"
#### Imperatives as phantoms:
```text
"Push button"
"Push button firmly"
```
The subject is a phantom; the subject is "you".
#### Handling zero/phantom words by explicitly inserting them:
One possible solution is to perform a one-point compactification.
The dictionary contains the phantom words, and their connectors.
Ordinary disjuncts can link to these, but should do so using
......@@ -1115,33 +1148,40 @@ else the linkage is invalid. After parsing, the phantom words can
be inserted into the sentence, with the location deduced from link
lengths.
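As a minimal illustration of the insertion step, the sketch below inserts a phantom subject into a token sequence. The detection heuristic and the bracketed "[you]" marker are placeholders invented for this sketch; in the scheme described above, detection would come from the dictionary's phantom-word connectors, not a word list:

```python
# Illustrative sketch: insert a phantom subject into a token sequence.
# Detection here is a stub; a real implementation would deduce the
# insertion point from the phantom word's link lengths after parsing.

IMPERATIVE_VERBS = {"push", "press", "pull"}  # toy word list (assumption)

def insert_phantom_subject(tokens):
    # Treat a sentence that opens with a bare verb as an imperative
    # and insert the phantom "you" before it.
    if tokens and tokens[0].lower() in IMPERATIVE_VERBS:
        return ["[you]"] + list(tokens)
    return list(tokens)

result = insert_phantom_subject(["Push", "button", "firmly"])
```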
### Context-dependent zero phrases.
Consider an argument between a professor and a dean, and the dean
wants the professor to write a brilliant review. At the end of the
argument, the dean exclaims: "I want the review brilliant!" This
is a predicative adjective; clearly it means "I want the review
[that you write to be] brilliant." However, taken out of context,
such a construction is ungrammatical, as the predicativeness is not
at all apparent, and it reads just as incorrectly as would
"*Hey Joe, can you hand me that review brilliant?"
### Imperatives:
```text
"Push button"
"Push button firmly"
```
The zero/phantom-word solution, described above, should help with this.
### Bad grammar:
When a sentence fails to parse, look for:
* confused words: its/it's, there/their/they're, to/too, your/you're ...
These could be added at high cost to the dicts.
* missing apostrophes in possessives: "the peoples desires"
* determiner agreement errors: "a books"
* aux verb agreement errors: "to be hooks up"
Poor agreement might be handled by giving a cost to mismatched
lower-case connector letters.
#### Handling zero/phantom words as re-write rules.
A more principled approach to fixing the phantom-word issue is to
borrow the idea of re-writing from the theory of
[operator grammar](https://en.wikipedia.org/wiki/Operator_grammar).
That is, certain phrases and constructions can be (should be)
re-written into their "proper form", prior to parsing. The re-writing
step would insert the missing words, then the parsing proceeds. One
appeal of such an approach is that re-writing can also handle other
"annoying" phenomena, such as typos (missing apostrophes, e.g. "lets"
vs. "let's", "its" vs. "it's") as well as multi-word rewrites (e.g.
"let's" vs. "let us", or "it's" vs. "it is").
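A minimal sketch of such a pre-parse rewrite pass, using regex substitutions. The two rules shown are illustrative placeholders, not actual link-grammar behavior, and a real pass would need context to avoid false rewrites (e.g. the verb "lets" in "she lets him go"):

```python
import re

# Toy pre-parse rewrite pass: each rule maps a surface form to its
# expanded "proper form" before the parser sees the sentence.
REWRITES = [
    (re.compile(r"\blets\b"), "let us"),    # missing apostrophe (naive)
    (re.compile(r"\bits a\b"), "it is a"),  # zero-copula-ish (naive)
]

def rewrite(sentence):
    for pattern, replacement in REWRITES:
        sentence = pattern.sub(replacement, sentence)
    return sentence

rewritten = rewrite("lets go home")
```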
Exactly how to implement this is unclear. However, it seems to open
the door to more abstract, semantic analysis. Thus, for example, in
Meaning-Text Theory (MTT), one must move between SSynt and DSynt
structures. Such changes require a graph re-write from the surface
syntax parse (e.g. provided by link-grammar) to the deep-syntactic
structure. By contrast, handling phantom words by graph re-writing
prior to parsing inverts the order of processing. This suggests that
a more holistic approach is needed to graph rewriting: it must somehow
be performed "during" parsing, so that parsing can both guide the
insertion of the phantom words, and, simultaneously guide the deep
syntactic rewrites.
Another interesting possibility arises with regards to tokenization.
The current tokenizer is clever, in that it splits not only on
whitespace, but can also strip off prefixes, suffixes, and perform
certain limited kinds of morphological splitting. That is, it currently
has the ability to re-write single-words into sequences of words. It
currently does so in a conservative manner; the letters that compose
a word are preserved, with a few exceptions, such as making spelling
correction suggestions. The above considerations suggest that the
boundary between tokenization and parsing needs to become both more
fluid, and more tightly coupled.
### Poor linkage choices:
Compare "she will be happier than before" to "she will be more happy
......@@ -1203,7 +1243,7 @@ factored results: i.e. the four plausible parses for the first half,
and the four plausible parses for the last half. This would ease
the burden on downstream users of link-grammar.
This approach has at psychological supprt. Humans take long sentences
This approach has at psychological support. Humans take long sentences
and split them into smaller chunks that "hang together" as phrase-
structures, viz compounded sentences. The most likely parse is the
one where each of the quasi sub-sentences is parsed correctly.
......@@ -1214,7 +1254,7 @@ arrives, use that context in place of the left-wall.
This somewhat resembles the application of construction grammar
ideas to the link-grammar dictionary. It also somewhat resembles
Viterbi parsing to some fixed depth. Viz. do a full backward-foreward
Viterbi parsing to some fixed depth. Viz. do a full backward-forward
parse for a phrase, and then, once this is done, take a Viterbi-step.
That is, once the phrase is done, keep only the dangling connectors
to the phrase, place a wall, and then step to the next part of the
......
......@@ -1344,6 +1344,43 @@ AC_DEFUN([AM_AUX_DIR_EXPAND],
am_aux_dir=`cd "$ac_aux_dir" && pwd`
])
# AM_COND_IF -*- Autoconf -*-
# Copyright (C) 2008-2014 Free Software Foundation, Inc.
#
# This file is free software; the Free Software Foundation
# gives unlimited permission to copy and/or distribute it,
# with or without modifications, as long as this notice is preserved.
# _AM_COND_IF
# _AM_COND_ELSE
# _AM_COND_ENDIF
# --------------
# These macros are only used for tracing.
m4_define([_AM_COND_IF])
m4_define([_AM_COND_ELSE])
m4_define([_AM_COND_ENDIF])
# AM_COND_IF(COND, [IF-TRUE], [IF-FALSE])
# ---------------------------------------
# If the shell condition COND is true, execute IF-TRUE, otherwise execute
# IF-FALSE. Allow automake to learn about conditional instantiating macros
# (the AC_CONFIG_FOOS).
AC_DEFUN([AM_COND_IF],
[m4_ifndef([_AM_COND_VALUE_$1],
[m4_fatal([$0: no such condition "$1"])])dnl
_AM_COND_IF([$1])dnl
if test -z "$$1_TRUE"; then :
m4_n([$2])[]dnl
m4_ifval([$3],
[_AM_COND_ELSE([$1])dnl
else
$3
])dnl
_AM_COND_ENDIF([$1])dnl
fi[]dnl
])
# AM_CONDITIONAL -*- Autoconf -*-
# Copyright (C) 1997-2014 Free Software Foundation, Inc.
......
......@@ -229,6 +229,7 @@ EGREP = @EGREP@
EXEEXT = @EXEEXT@
FGREP = @FGREP@
GREP = @GREP@
HOST_OS = @HOST_OS@
HUNSPELL_CFLAGS = @HUNSPELL_CFLAGS@
HUNSPELL_LIBS = @HUNSPELL_LIBS@
INSTALL = @INSTALL@
......@@ -243,6 +244,7 @@ LDFLAGS = @LDFLAGS@
LEX = @LEX@
LEXLIB = @LEXLIB@
LEX_OUTPUT_ROOT = @LEX_OUTPUT_ROOT@
LG_DEFS = @LG_DEFS@
LG_PYDIR = @LG_PYDIR@
LIBEDIT_CFLAGS = @LIBEDIT_CFLAGS@
LIBEDIT_LIBS = @LIBEDIT_LIBS@
......@@ -312,7 +314,6 @@ SQLITE3_LIBS = @SQLITE3_LIBS@
STRIP = @STRIP@
SWIG = @SWIG@
SWIG_LIB = @SWIG_LIB@
SWIGfound = @SWIGfound@
VERSION = @VERSION@
VERSION_INFO = @VERSION_INFO@
WARN_CFLAGS = @WARN_CFLAGS@
......
......@@ -258,6 +258,7 @@ EGREP = @EGREP@
EXEEXT = @EXEEXT@
FGREP = @FGREP@
GREP = @GREP@
HOST_OS = @HOST_OS@
HUNSPELL_CFLAGS = @HUNSPELL_CFLAGS@
HUNSPELL_LIBS = @HUNSPELL_LIBS@
INSTALL = @INSTALL@
......@@ -272,6 +273,7 @@ LDFLAGS = @LDFLAGS@
LEX = @LEX@
LEXLIB = @LEXLIB@
LEX_OUTPUT_ROOT = @LEX_OUTPUT_ROOT@
LG_DEFS = @LG_DEFS@
LG_PYDIR = @LG_PYDIR@
LIBEDIT_CFLAGS = @LIBEDIT_CFLAGS@
LIBEDIT_LIBS = @LIBEDIT_LIBS@
......@@ -341,7 +343,6 @@ SQLITE3_LIBS = @SQLITE3_LIBS@
STRIP = @STRIP@
SWIG = @SWIG@
SWIG_LIB = @SWIG_LIB@
SWIGfound = @SWIGfound@
VERSION = @VERSION@
VERSION_INFO = @VERSION_INFO@
WARN_CFLAGS = @WARN_CFLAGS@
......
......@@ -15,7 +15,7 @@
#include <stdatomic.h>
#endif /* HAVE_STDATOMIC_H */
#include <link-grammar/api-structures.h>
#include "link-grammar/api-structures.h"
#include "link-grammar/corpus/corpus.h"
#include "link-grammar/error.h"
#include "jni-client.h"
......
......@@ -190,6 +190,7 @@ EGREP = @EGREP@
EXEEXT = @EXEEXT@
FGREP = @FGREP@
GREP = @GREP@
HOST_OS = @HOST_OS@
HUNSPELL_CFLAGS = @HUNSPELL_CFLAGS@
HUNSPELL_LIBS = @HUNSPELL_LIBS@
INSTALL = @INSTALL@
......@@ -204,6 +205,7 @@ LDFLAGS = @LDFLAGS@
LEX = @LEX@
LEXLIB = @LEXLIB@
LEX_OUTPUT_ROOT = @LEX_OUTPUT_ROOT@
LG_DEFS = @LG_DEFS@
LG_PYDIR = @LG_PYDIR@
LIBEDIT_CFLAGS = @LIBEDIT_CFLAGS@
LIBEDIT_LIBS = @LIBEDIT_LIBS@
......@@ -273,7 +275,6 @@ SQLITE3_LIBS = @SQLITE3_LIBS@
STRIP = @STRIP@
SWIG = @SWIG@
SWIG_LIB = @SWIG_LIB@
SWIGfound = @SWIGfound@
VERSION = @VERSION@
VERSION_INFO = @VERSION_INFO@
WARN_CFLAGS = @WARN_CFLAGS@
......
......@@ -160,6 +160,7 @@ EGREP = @EGREP@
EXEEXT = @EXEEXT@
FGREP = @FGREP@
GREP = @GREP@
HOST_OS = @HOST_OS@
HUNSPELL_CFLAGS = @HUNSPELL_CFLAGS@
HUNSPELL_LIBS = @HUNSPELL_LIBS@
INSTALL = @INSTALL@
......@@ -174,6 +175,7 @@ LDFLAGS = @LDFLAGS@
LEX = @LEX@
LEXLIB = @LEXLIB@
LEX_OUTPUT_ROOT = @LEX_OUTPUT_ROOT@
LG_DEFS = @LG_DEFS@
LG_PYDIR = @LG_PYDIR@
LIBEDIT_CFLAGS = @LIBEDIT_CFLAGS@
LIBEDIT_LIBS = @LIBEDIT_LIBS@
......@@ -243,7 +245,6 @@ SQLITE3_LIBS = @SQLITE3_LIBS@
STRIP = @STRIP@
SWIG = @SWIG@
SWIG_LIB = @SWIG_LIB@
SWIGfound = @SWIGfound@
VERSION = @VERSION@
VERSION_INFO = @VERSION_INFO@
WARN_CFLAGS = @WARN_CFLAGS@
......
......@@ -256,6 +256,7 @@ EGREP = @EGREP@
EXEEXT = @EXEEXT@
FGREP = @FGREP@
GREP = @GREP@
HOST_OS = @HOST_OS@
HUNSPELL_CFLAGS = @HUNSPELL_CFLAGS@
HUNSPELL_LIBS = @HUNSPELL_LIBS@
INSTALL = @INSTALL@
......@@ -270,6 +271,7 @@ LDFLAGS = @LDFLAGS@
LEX = @LEX@
LEXLIB = @LEXLIB@
LEX_OUTPUT_ROOT = @LEX_OUTPUT_ROOT@
LG_DEFS = @LG_DEFS@
LG_PYDIR = @LG_PYDIR@
LIBEDIT_CFLAGS = @LIBEDIT_CFLAGS@
LIBEDIT_LIBS = @LIBEDIT_LIBS@
......@@ -339,7 +341,6 @@ SQLITE3_LIBS = @SQLITE3_LIBS@
STRIP = @STRIP@
SWIG = @SWIG@
SWIG_LIB = @SWIG_LIB@
SWIGfound = @SWIGfound@
VERSION = @VERSION@
VERSION_INFO = @VERSION_INFO@
WARN_CFLAGS = @WARN_CFLAGS@
......
......@@ -363,6 +363,7 @@ EGREP = @EGREP@
EXEEXT = @EXEEXT@
FGREP = @FGREP@
GREP = @GREP@
HOST_OS = @HOST_OS@
HUNSPELL_CFLAGS = @HUNSPELL_CFLAGS@
HUNSPELL_LIBS = @HUNSPELL_LIBS@
INSTALL = @INSTALL@
......@@ -377,6 +378,7 @@ LDFLAGS = @LDFLAGS@
LEX = @LEX@
LEXLIB = @LEXLIB@
LEX_OUTPUT_ROOT = @LEX_OUTPUT_ROOT@
LG_DEFS = @LG_DEFS@
LG_PYDIR = @LG_PYDIR@
LIBEDIT_CFLAGS = @LIBEDIT_CFLAGS@
LIBEDIT_LIBS = @LIBEDIT_LIBS@
......@@ -446,7 +448,6 @@ SQLITE3_LIBS = @SQLITE3_LIBS@
STRIP = @STRIP@
SWIG = @SWIG@
SWIG_LIB = @SWIG_LIB@
SWIGfound = @SWIGfound@
VERSION = @VERSION@
VERSION_INFO = @VERSION_INFO@
WARN_CFLAGS = @WARN_CFLAGS@
......
Python bindings for Link Grammar
================================
This directory contains an example program, and a unit test for the
python bindings to Link Grammar.
Description
-----------
A Link Grammar library test is implemented in `tests.py`.
An example program `example.py` is provided.
The example programs `example.py` and `sentence-check.py` illustrate
how to use the Link Grammar Python bindings.
A unit test for the Link Grammar Python bindings can be found
in `tests.py`.
Configuring (if needed)
-----------------------
### For Python2
$ configure --enable-python-bindings
### For Python3
$ configure --enable-python3-bindings
The python bindings will be built by default, if the required python
system libraries are detected on the build system. Thus, no special
configuration should be needed. However, configure can be forced with
the following commands.
### For Python2 and Python3
`$ ./configure --enable-python-bindings`
(This is the default if Python development packages are installed.)
### For Python2 or Python3 only
`$ ./configure --enable-python-bindings=2`
Or:<br>
`$ ./configure --enable-python-bindings=3`
### To disable the Python bindings
`$ ./configure --disable-python-bindings`
(This is the default if no Python is installed.)
How to use
----------
(See below under **Testing the installation** for directions on how to set
`PYTHONPATH` in case it is needed.)
The python bindings will be installed automatically into default system
locations, and no additional steps should be needed to use python.
However, in some cases, there might be a need to manually set the
`PYTHONPATH` environment variable. See the discussion below, in
the section **Testing the installation** .
Parsing simple sentences:
```
$ python
`$ python`
>>> from linkgrammar import Sentence, ParseOptions, Dictionary
>>> sent = Sentence("This is a simple sentence.", Dictionary(), ParseOptions())
......@@ -40,42 +58,44 @@ $ python
| | | | | | |
LEFT-WALL this.p is.v a simple.a sentence.n .
```
Additional examples can be found in `examples.py`.
Additional examples can be found in `examples.py` and `sentence-check.py`.
Testing
-------
The test collection `tests.py` should run 56 tests, none of them should fail.
However, 3 tests will get skipped if the library is not configured with a
speller, and one test will get skipped if the library is not configured with
the SAT solver (this is the status for now on native Windows).
The test collection `tests.py` should run 76 tests; none of them should
fail. However, 3 tests will be skipped if the library is not configured
with a spell guesser, and one test will be skipped if the library is not
configured with the SAT solver (this is currently the case for native
Windows builds).
The following shows how to issue the tests on systems other then natives
Windows/MinGW (for testing on native Windows see msvc14/README under
[Running Python programs](/msvc14/README.md#running-python-programs)
in `msvc14/README.md`).
Note: For less verbosity of the `make` command output you can use the `-s`
flag of make.
The test procedure is outlined below. For native Windows/MinGW, see
the `msvc14/README.md` file:
[Running Python programs in Windows](/msvc14/README.md#running-python-programs).
### Testing the build directory
The following is assumed:
**$SRC_DIR** - Link Grammar source directory.
**$BUILD_DIR** - Link Grammar build directory.
#### By `make`
#### Using `make`
The tests can be run using the `make` command, as follows:
```
$ cd $BUILD_DIR/bindings/python-examples
$ make [-s] check
```
The results of tests.py are in the current directory under in the file
The `make` command can be made less verbose by using the `-s` flag.
The test results are saved in the current directory, in the file
`tests.log`.
Note: To run also the tests in the **$SRC_DIR/tests/** directory, issue
`make check` directly from **$BUILD_DIR**.
To run the tests in the **$SRC_DIR/tests/** directory, issue `make check`
directly from **$BUILD_DIR**.
#### Manually
To run tests.py manually, or to run `example.py`, you have to set the
`PYTHONPATH` environment variable as follows:
To run `tests.py` manually, or to run `example.py`, without installing
the bindings, the `PYTHONPATH` environment variable must be set:
```
PYTHONPATH=$SRC_DIR/bindings/python:$BUILD_DIR/bindings/python:$BUILD_DIR/bindings/python/.libs
```
......@@ -88,7 +108,7 @@ $ python tests.py [-v]
### Testing the installation
This can be done only after `make install`.
#### By `make`
#### Using `make`
```
$ cd $BUILD_DIR/bindings/python-examples
$ make [-s] installcheck
......@@ -101,17 +121,18 @@ Set the `PYTHONPATH` environment variable to the location of the installed
Python's **linkgrammar** module, e.g.:
```
PYTHONPATH=/usr/local/lib/python2.7/site-packages
PYTHONPATH=/usr/local/lib/python2.7/dist-packages
```
(Export it, or prepend it to the `python` command.)
<br>
Note: This is not needed if the package has been configured to install to the
OS standard system locations.
**NOTE:** Make sure you invoke `tests.py` from a directory from which it cannot
find the `data` directory in **$SRCDIR/.** ! This will enforce it to use the
system-installed data directory. Two directory levels under **$SRCDIR**, as
shown below, is fine for that purpose.
Setting the `PYTHONPATH` is not needed if the default package
configuration is used. The default configuration installs the python
bindings into the standard operating system locations.
To correctly test the system installation, make sure that `tests.py` is
invoked from a directory from which the **$SRCDIR/data** directory
cannot be found. This is needed to ensure that the system-installed data
directory is used. For example:
```
$ cd $SRCDIR/bindings/python-examples
......
......@@ -68,9 +68,11 @@ def add_eqcost_linkage_order(original_class):
original_class.original_parse = original_class.parse
def parse(self):
def parse(self, parse_options=None):
"""A decoration for the original Sentence.parse"""
linkages = self.original_parse()
# parse() has an optional single argument for parse options. If it is not given,
# call original_parse() also without arguments in order to test it that way.
linkages = self.original_parse() if parse_options is None else self.original_parse(parse_options)
return eqcost_soretd_parse(linkages)
original_class.parse = parse
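The pattern in this hunk — save the original method, wrap it, reassign it on the class — can be shown in a self-contained form. The class and method names below are illustrative stand-ins, not the actual link-grammar test classes:

```python
# Self-contained sketch of the monkey-patching decoration pattern:
# keep the original method reachable, wrap it with extra behavior,
# and install the wrapper in its place on the class.

class Sentence:                      # illustrative stand-in class
    def parse(self, options=None):
        return ["linkage-b", "linkage-a"]

def add_sorted_order(cls):
    cls.original_parse = cls.parse

    def parse(self, options=None):
        # Forward the optional argument only when given, so the
        # no-argument call path is exercised as well.
        if options is None:
            linkages = self.original_parse()
        else:
            linkages = self.original_parse(options)
        return sorted(linkages)     # the added behavior

    cls.parse = parse
    return cls

add_sorted_order(Sentence)
result = Sentence().parse()
```

Callers keep using `parse()` unchanged, while tests can still reach the unwrapped behavior through `original_parse()`.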
......@@ -22,36 +22,50 @@ from __future__ import print_function
import sys
import re
import itertools
import argparse
from linkgrammar import (Sentence, ParseOptions, Dictionary,
LG_TimerExhausted, Clinkgrammar as clg)
print("Version:", clg.linkgrammar_get_version())
LG_Error, LG_TimerExhausted, Clinkgrammar as clg)
def nsuffix(q):
return '' if q == 1 else 's'
class Formatter(argparse.HelpFormatter):
""" Display the "lang" argument as a first one, as in link-parser. """