Now that you have PyNLPIR installed, let’s look at how to use it.
There are two ways to use PyNLPIR: directly using the
ctypes interface provided by PyNLPIR or using PyNLPIR’s helper functions. The
ctypes interface is more extensive, but more rigid. The helper functions are easy to use, but don’t provide access to every NLPIR function. You can also use a mixture of the two methods. First, let’s look at the helper functions.
PyNLPIR Helper Functions
The helper functions are located in PyNLPIR’s __init__.py file, so they are
accessible by importing pynlpir. Importing PyNLPIR loads the NLPIR API library automatically:
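For instance, a minimal sketch of importing and initializing the helpers with the open() and close() helper functions (the try/except guard is only there so the sketch degrades gracefully on machines without PyNLPIR or its licensed data files):

```python
# Sketch: import PyNLPIR and initialize NLPIR via the open() helper.
# Guarded so the example is harmless where PyNLPIR isn't installed.
try:
    import pynlpir

    pynlpir.open()   # initializes NLPIR using the bundled Data directory
    ready = True
    pynlpir.close()  # frees NLPIR's allocated memory when you're done
except Exception:    # ImportError, or an NLPIR licensing/data error
    ready = False
```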
By default, input is assumed to be unicode or UTF-8 encoded. If you’d like to use
a different encoding (e.g. GBK or Big5), use the encoding keyword argument
when calling open(). No matter what encoding you’ve specified, you can always pass
unicode strings to PyNLPIR’s helper functions, and they always return unicode strings.
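As a sketch, initializing the helpers for Big5-encoded input might look like this (open() with the encoding keyword, guarded for machines without PyNLPIR):

```python
# Sketch: ask the helper functions to expect BIG5-encoded byte strings.
try:
    import pynlpir

    pynlpir.open(encoding='big5')  # encoding keyword argument
    opened = True
    pynlpir.close()
except Exception:  # PyNLPIR missing, or NLPIR failed to initialize
    opened = False
```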
Once you’ve initialized NLPIR, you can start segmenting and analyzing text.
Let’s segment a lengthy sentence:
s = 'NLPIR分词系统前身为2000年发布的ICTCLAS词法分析系统，从2009年开始，为了和以前工作进行大的区隔，并推广NLPIR自然语言处理与信息检索共享平台，调整命名为NLPIR分词系统。'
pynlpir.segment(s)
# Sample output: [('NLPIR', 'noun'), ('分词', 'verb'), ('系统', 'noun'), ('前身', 'noun'), ('为', 'preposition'), ('2000年', 'time word'), ('发布', 'verb'), ...]
If you don’t want part of speech tagging, call segment() with
pos_tagging set to False:
pynlpir.segment(s, pos_tagging=False)
# Sample output: ['NLPIR', '分词', '系统', '前身', '为', '2000年', '发布', ...]
You can also customize how the part of speech tags are shown. By default,
only the most generic part of speech name is used, i.e. the parent (for example,
'noun' instead of 'transcribed toponym'). If you’d like the
most specific part of speech name instead, i.e. the child, set pos_names
to 'child':
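A sketch of requesting child-level tag names (guarded as above; the sample string is arbitrary):

```python
# Sketch: pos_names='child' returns the most specific tag names.
try:
    import pynlpir

    pynlpir.open()
    child_tags = pynlpir.segment('NLPIR分词系统', pos_names='child')
    pynlpir.close()
except Exception:  # PyNLPIR missing, or NLPIR failed to initialize
    child_tags = None
```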
If you want even more information about the part of speech tags, you can set
pos_names to 'all' and a part of speech hierarchy is returned (for example,
'noun:toponym:transcribed toponym'):
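A sketch of requesting the full tag hierarchy (same guard as before):

```python
# Sketch: pos_names='all' returns the full parent-to-child hierarchy
# for each part of speech tag.
try:
    import pynlpir

    pynlpir.open()
    full_tags = pynlpir.segment('NLPIR分词系统', pos_names='all')
    pynlpir.close()
except Exception:  # PyNLPIR missing, or NLPIR failed to initialize
    full_tags = None
```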
By default, part of speech tags are returned in English. If you’d like to see the Chinese
names instead (e.g. '名词' instead of
'noun'), set pos_english to False:
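A sketch of requesting Chinese tag names (same guard as before):

```python
# Sketch: pos_english=False returns the Chinese tag names, e.g. '名词'.
try:
    import pynlpir

    pynlpir.open()
    zh_tags = pynlpir.segment('NLPIR分词系统', pos_english=False)
    pynlpir.close()
except Exception:  # PyNLPIR missing, or NLPIR failed to initialize
    zh_tags = None
```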
Getting Key Words
Another useful function is get_key_words():
pynlpir.get_key_words(s, weighted=True)
# Sample output: [('NLPIR', 2.08), ('系统', 1.74)]
get_key_words() analyzes the given Chinese text string and returns
words that NLPIR considers key words. If weighted is True, then each key word’s
weight is also returned as a float.
pynlpir.nlpir provides access to NLPIR’s C functions via ctypes.
You can call them directly without bothering with the helper functions above.
These functions work almost exactly the same as their C counterparts.
pynlpir.nlpir includes the module-level constants exported by NLPIR that
are needed for calling many of its functions (e.g. encoding and part of speech
constants). See the API page on
pynlpir.nlpir for more information.
The sections below do not provide a comprehensive explanation of how to use NLPIR; NLPIR has its own documentation. They provide basic information about getting started with PyNLPIR, assuming you are already familiar with NLPIR. If you’re not, be sure to check out NLPIR’s own documentation first.
Initializing and Exiting the API
Before you can call any other NLPIR functions, you need to initialize the NLPIR API.
This is done by calling Init(). You have to specify where NLPIR’s
Data directory is. PyNLPIR ships with a copy in the
top level of the package directory, so you can use the module-level constant
PACKAGE_DIR when calling Init():
from pynlpir import nlpir
nlpir.Init(nlpir.PACKAGE_DIR)
NLPIR defaults to using GBK encoding. If you don’t plan on passing around GBK-encoded
strings, you’ll want to change the encoding when calling Init().
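For example, assuming the UTF8_CODE encoding constant exported by pynlpir.nlpir, initializing with UTF-8 encoding might look like this (a sketch, guarded for machines without PyNLPIR):

```python
# Sketch: pass an encoding constant as Init()'s second argument.
try:
    from pynlpir import nlpir

    initialized = nlpir.Init(nlpir.PACKAGE_DIR, nlpir.UTF8_CODE)
    nlpir.Exit()
except Exception:  # PyNLPIR missing, or NLPIR failed to initialize
    initialized = None
```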
Once NLPIR is initialized, you can begin using the rest of the NLPIR functions. When
you’re finished, it’s good to call
Exit() in order to exit the
NLPIR API and free the allocated memory:
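Putting it together, a sketch of a complete ctypes session that ends with Exit() (ParagraphProcess is NLPIR's C segmentation function; treating its input and output as UTF-8 byte strings here is an assumption about the wrapper):

```python
# Sketch: a full Init -> process -> Exit cycle via the ctypes interface.
try:
    from pynlpir import nlpir

    nlpir.Init(nlpir.PACKAGE_DIR, nlpir.UTF8_CODE)
    # NLPIR's C functions exchange byte strings, so encode the input:
    tagged = nlpir.ParagraphProcess('我爱北京。'.encode('utf-8'), True)
    nlpir.Exit()  # exit the NLPIR API and free the allocated memory
except Exception:  # PyNLPIR missing, or NLPIR failed to initialize
    tagged = None
```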
Now that you’ve finished the tutorial, you should be able to perform basic tasks using PyNLPIR. If you need more information regarding a module, constant, or function, be sure to check out the PyNLPIR API. If you need help, spot a bug, or have a feature request, then please visit PyNLPIR’s GitHub Issues page.