6. Modules

If you quit from the Python interpreter and enter it again, the definitions you have made (functions and variables) are lost. Therefore, if you want to write a somewhat longer program, you are better off using a text editor to prepare the input for the interpreter and running it with that file as input instead. This is known as creating a script. As your program gets longer, you may want to split it into several files for easier maintenance. You may also want to use a handy function that you've written in several programs without copying its definition into each program.

To support this, Python has a way to put definitions in a file and use them in a script or in an interactive instance of the interpreter. Such a file is called a module; definitions from a module can be imported into other modules or into the main module (the collection of variables that you have access to in a script executed at the top level and in calculator mode).

A module is a file containing Python definitions and statements. The file name is the module name with the suffix .py appended. Within a module, the module's name (as a string) is available as the value of the global variable __name__. For instance, use your favorite text editor to create a file called fibo.py in the current directory with the following contents:

    # Fibonacci numbers module

    def fib(n):    # write Fibonacci series up to n
        a, b = 0, 1
        while a < n:
            print(a, end=' ')
            a, b = b, a+b
        print()

    def fib2(n):   # return Fibonacci series up to n
        result = []
        a, b = 0, 1
        while a < n:
            result.append(a)
            a, b = b, a+b
        return result

Now enter the Python interpreter and import this module with the following command:

    >>> import fibo

This does not add the names of the functions defined in fibo directly to the current namespace (see Python Scopes and Namespaces for more details); it only adds the module name fibo there. Using the module name you can access the functions:

    >>> fibo.fib(1000)
    0 1 1 2 3 5 8 13 21 34 55 89 144 233 377 610 987
    >>> fibo.fib2(100)
    [0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89]
    >>> fibo.__name__
    'fibo'

If you intend to use a function often you can assign it to a local name:

    >>> fib = fibo.fib
    >>> fib(500)
    0 1 1 2 3 5 8 13 21 34 55 89 144 233 377

6.1. More on Modules

A module can contain executable statements as well as function definitions. These statements are intended to initialize the module. They are executed only the first time the module name is encountered in an import statement. [1] (They are also run if the file is executed as a script.)

Each module has its own private namespace, which is used as the global namespace by all functions defined in the module. Thus, the author of a module can use global variables in the module without worrying about accidental clashes with a user's global variables. On the other hand, if you know what you are doing you can touch a module's global variables with the same notation used to refer to its functions, modname.itemname.

Modules can import other modules. It is customary but not required to place all import statements at the beginning of a module (or script, for that matter). The imported module names, if placed at the top level of a module (outside any functions or classes), are added to the module's global namespace.

There is a variant of the import statement that imports names from a module directly into the importing module's namespace. For example:

    >>> from fibo import fib, fib2
    >>> fib(500)
    0 1 1 2 3 5 8 13 21 34 55 89 144 233 377

This does not introduce the module name from which the imports are taken in the local namespace (so in the example, fibo is not defined).

There is even a variant to import all names that a module defines:

    >>> from fibo import *
    >>> fib(500)
    0 1 1 2 3 5 8 13 21 34 55 89 144 233 377

This imports all names except those beginning with an underscore (_). In most cases Python programmers do not use this facility since it introduces an unknown set of names into the interpreter, possibly hiding some things you have already defined.

Note that in general the practice of importing * from a module or package is frowned upon, since it often causes poorly readable code. However, it is okay to use it to save typing in interactive sessions.
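As a concrete illustration of that warning, here is a minimal sketch of how a star import can silently hide a name you had already imported. The module shadowdemo and its contents are invented for this sketch; only fibo is from the tutorial.

    # shadowdemo.py  (hypothetical module, for illustration only)
    def fib(n):
        return "this is shadowdemo's fib, not fibo's"

    >>> from fibo import fib
    >>> fib(10)
    0 1 1 2 3 5 8
    >>> from shadowdemo import *      # silently rebinds the name fib
    >>> fib(10)
    "this is shadowdemo's fib, not fibo's"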
If the module name is followed by as, then the name following as is bound directly to the imported module:

    >>> import fibo as fib
    >>> fib.fib(500)
    0 1 1 2 3 5 8 13 21 34 55 89 144 233 377

This is effectively importing the module in the same way that import fibo will do, with the only difference of it being available as fib.

It can also be used when utilising from with similar effects:

    >>> from fibo import fib as fibonacci
    >>> fibonacci(500)
    0 1 1 2 3 5 8 13 21 34 55 89 144 233 377

Note: For efficiency reasons, each module is only imported once per interpreter session. Therefore, if you change your modules, you must restart the interpreter, or, if it's just one module you want to test interactively, use importlib.reload(), e.g. import importlib; importlib.reload(modulename).

6.1.1. Executing modules as scripts

When you run a Python module with

    python fibo.py <arguments>

the code in the module will be executed, just as if you imported it, but with the __name__ set to "__main__". That means that by adding this code at the end of your module:

    if __name__ == "__main__":
        import sys
        fib(int(sys.argv[1]))

you can make the file usable as a script as well as an importable module, because the code that parses the command line only runs if the module is executed as the "main" file:

    $ python fibo.py 50
    0 1 1 2 3 5 8 13 21 34

If the module is imported, the code is not run:

    >>> import fibo
    >>>

This is often used either to provide a convenient user interface to a module, or for testing purposes (running the module as a script executes a test suite).

6.1.2. The Module Search Path

When a module named spam is imported, the interpreter first searches for a built-in module with that name. These module names are listed in sys.builtin_module_names. If not found, it then searches for a file named spam.py in a list of directories given by the variable sys.path. sys.path is initialized from these locations:

- The directory containing the input script (or the current directory when no file is specified).
- PYTHONPATH (a list of directory names, with the same syntax as the shell variable PATH).
- The installation-dependent default (by convention including a site-packages directory, handled by the site module).

More details are at The initialization of the sys.path module search path.

Note: On file systems which support symlinks, the directory containing the input script is calculated after the symlink is followed. In other words the directory containing the symlink is not added to the module search path.

After initialization, Python programs can modify sys.path. The directory containing the script being run is placed at the beginning of the search path, ahead of the standard library path. This means that scripts in that directory will be loaded instead of modules of the same name in the library directory. This is an error unless the replacement is intended. See section Standard Modules for more information.
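A quick sketch of inspecting and extending the search path interactively; the extra directory used here is made up for illustration:

    >>> import sys
    >>> 'sys' in sys.builtin_module_names    # built-in modules are found before sys.path is searched
    True
    >>> sys.path.append('/home/user/python/libs')   # hypothetical directory; later imports will search it
    >>> for p in sys.path:                   # inspect the effective search path
    ...     print(p)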
6.1.3. "Compiled" Python files

To speed up loading modules, Python caches the compiled version of each module in the __pycache__ directory under the name module.version.pyc, where the version encodes the format of the compiled file; it generally contains the Python version number. For example, in CPython release 3.3 the compiled version of spam.py would be cached as __pycache__/spam.cpython-33.pyc. This naming convention allows compiled modules from different releases and different versions of Python to coexist.

Python checks the modification date of the source against the compiled version to see if it's out of date and needs to be recompiled. This is a completely automatic process. Also, the compiled modules are platform-independent, so the same library can be shared among systems with different architectures.

Python does not check the cache in two circumstances. First, it always recompiles and does not store the result for the module that's loaded directly from the command line. Second, it does not check the cache if there is no source module. To support a non-source (compiled only) distribution, the compiled module must be in the source directory, and there must not be a source module.

Some tips for experts: You can use the -O or -OO switches on the Python command to reduce the size of a compiled module. The -O switch removes assert stat
ements the OO switch removes both assert statements and __doc__ strings Since some programs may rely on having these available you should only use this option if you know what you re doing Optimized modules have an opt tag and are usually smaller Future releases may change the effects of optimization A program doesn t run any faster when it is read from a pyc file than when it is read from a py file the only thing that s faster about pyc files is the speed with which they are loaded The module compileall can create pyc files for all modules in a directory There is more detail on this process including a flow chart of the decisions in PEP 3147 6 2 Standard Modules Python comes with a library of standard modules described in a separate document the Python Library Reference Library Reference hereafter Some modules are built into the interpreter these provide access to operations that are not part of the core of the language but are nevertheless built in either for efficiency or to provide access to operating system primitives such as system calls The set of such modules is a configuration option which also depends on the underlying platform For example the winreg module is only provided on Windows systems One particular module deserves some attention sys which is built into every Python interpreter The variables sys ps1 and sys ps2 define the strings used as primary and secondary prompts import sys sys ps1 sys ps2 sys ps1 C C print Yuck Yuck C These two variables are only defined if the interpreter is in interactive mode The variable sys path is a list of strings that determines the interpreter s search path for modules It is initialized to a default path taken from the environment variable PYTHONPATH or from a built in default if PYTHONPATH is not set You can modify it using standard list operations import sys sys path append ufs guido lib python 6 3 The dir Function The built in function dir is used to find out which names a module defines It returns a sorted list of strings import fibo sys dir fibo __name__ fib fib2 dir sys __breakpointhook__ __displayhook__ __doc__ __excepthook__ __interactivehook__ __loader__ __name__ __package__ __spec__ __stderr__ __stdin__ __stdout__ __unraisablehook__ _clear_type_cache _current_frames _debugmallocstats _framework _getframe _git _home _xoptions abiflags addaudithook api_version argv audit base_exec_prefix base_prefix breakpointhook builtin_module_names byteorder call_tracing callstats copyright displayhook dont_write_bytecode exc_info excepthook exec_prefix executable exit flags float_info float_repr_style get_asyncgen_hooks get_coroutine_origin_tracking_depth getallocatedblocks getdefaultencoding getdlopenflags getfilesystemencodeerrors getfilesystemencoding getprofile getrecursionlimit getrefcount getsizeof getswitchinterval gettrace hash_info hexversion implementation int_info intern is_finalizing last_traceback last_type last_value maxsize maxunicode meta_path modules path path_hooks path_importer_cache platform prefix ps1 ps2 pycache_prefix set_asyncgen_hooks set_coroutine_origin_tracking_depth setdlopenflags setprofile setrecursionlimit setswitchinterval settrace stderr stdin stdout thread_info unraisablehook version version_info warnoptions Without arguments dir lists the names you have defined currently a 1 2 3 4 5 import fibo fib fibo fib dir __builtins__ __name__ a fib fibo sys Note that it lists all types of names variables modules functions etc dir does not list the names of built in functions and variables If you want a list of those they 
are defined in the standard module builtins import builtins dir builtins ArithmeticError AssertionError AttributeError BaseException BlockingIOError BrokenPipeError BufferError BytesWarning ChildProcessError ConnectionAbortedError ConnectionError ConnectionRefusedError ConnectionResetError DeprecationWarning EOFError Ellipsis EnvironmentError Exception False FileExistsError FileNotFoundError FloatingPointError FutureWarning GeneratorExit IOError ImportError ImportWarning IndentationError IndexError InterruptedError IsADirectoryError Ke
yError KeyboardInterrupt LookupError MemoryError NameError None NotADirectoryError NotImplemented NotImplementedError OSError OverflowError PendingDeprecationWarning PermissionError ProcessLookupError ReferenceError ResourceWarning RuntimeError RuntimeWarning StopIteration SyntaxError SyntaxWarning SystemError SystemExit TabError TimeoutError True TypeError UnboundLocalError UnicodeDecodeError UnicodeEncodeError UnicodeError UnicodeTranslateError UnicodeWarning UserWarning ValueError Warning ZeroDivisionError _ __build_class__ __debug__ __doc__ __import__ __name__ __package__ abs all any ascii bin bool bytearray bytes callable chr classmethod compile complex copyright credits delattr dict dir divmod enumerate eval exec exit filter float format frozenset getattr globals hasattr hash help hex id input int isinstance issubclass iter len license list locals map max memoryview min next object oct open ord pow print property quit range repr reversed round set setattr slice sorted staticmethod str sum super tuple type vars zip 6 4 Packages Packages are a way of structuring Python s module namespace by using dotted module names For example the module name A B designates a submodule named B in a package named A Just like the use of modules saves the authors of different modules from having to worry about each other s global variable names the use of dotted module names saves the authors of multi module packages like NumPy or Pillow from having to worry about each other s module names Suppose you want to design a collection of modules a package for the uniform handling of sound files and sound data There are many different sound file formats usually recognized by their extension for example wav aiff au so you may need to create and maintain a growing collection of modules for the conversion between the various file formats There are also many different operations you might want to perform on sound data such as mixing adding echo applying an equalizer function creating an artificial stereo effect so in addition you will be writing a never ending stream of modules to perform these operations Here s a possible structure for your package expressed in terms of a hierarchical filesystem sound Top level package __init__ py Initialize the sound package formats Subpackage for file format conversions __init__ py wavread py wavwrite py aiffread py aiffwrite py auread py auwrite py effects Subpackage for sound effects __init__ py echo py surround py reverse py filters Subpackage for filters __init__ py equalizer py vocoder py karaoke py When importing the package Python searches through the directories on sys path looking for the package subdirectory The __init__ py files are required to make Python treat directories containing the file as packages unless using a namespace package a relatively advanced feature This prevents directories with a common name such as string from unintentionally hiding valid modules that occur later on the module search path In the simplest case __init__ py can just be an empty file but it can also execute initialization code for the package or set the __all__ variable described later Users of the package can import individual modules from the package for example import sound effects echo This loads the submodule sound effects echo It must be referenced with its full name sound effects echo echofilter input output delay 0 7 atten 4 An alternative way of importing the submodule is from sound effects import echo This also loads the submodule echo and makes it available without its 
package prefix so it can be used as follows echo echofilter input output delay 0 7 atten 4 Yet another variation is to import the desired function or variable directly from sound effects echo import echofilter Again this loads the submodule echo but this makes its function echofilter directly available echofilter input output delay 0 7 atten 4 Note that when using from package import item the item can be either a submodule or subpackage of the package or some other name defined in the package like a function class or variable The import sta
tement first tests whether the item is defined in the package; if not, it assumes it is a module and attempts to load it. If it fails to find it, an ImportError exception is raised.

Contrarily, when using syntax like import item.subitem.subsubitem, each item except for the last must be a package; the last item can be a module or a package but can't be a class or function or variable defined in the previous item.

6.4.1. Importing * From a Package

Now what happens when the user writes from sound.effects import *? Ideally, one would hope that this somehow goes out to the filesystem, finds which submodules are present in the package, and imports them all. This could take a long time and importing sub-modules might have unwanted side effects that should only happen when the sub-module is explicitly imported.

The only solution is for the package author to provide an explicit index of the package. The import statement uses the following convention: if a package's __init__.py code defines a list named __all__, it is taken to be the list of module names that should be imported when from package import * is encountered. It is up to the package author to keep this list up-to-date when a new version of the package is released. Package authors may also decide not to support it, if they don't see a use for importing * from their package. For example, the file sound/effects/__init__.py could contain the following code:

    __all__ = ["echo", "surround", "reverse"]

This would mean that from sound.effects import * would import the three named submodules of the sound.effects package.

Be aware that submodules might become shadowed by locally defined names. For example, if you added a reverse function to the sound/effects/__init__.py file, the from sound.effects import * would only import the two submodules echo and surround, but not the reverse submodule, because it is shadowed by the locally defined reverse function:

    __all__ = [
        "echo",      # refers to the 'echo.py' file
        "surround",  # refers to the 'surround.py' file
        "reverse",   # !!! refers to the 'reverse' function now !!!
    ]

    def reverse(msg: str):  # <-- this name shadows the 'reverse.py' submodule
        return msg[::-1]    #     in the case of a 'from sound.effects import *'

If __all__ is not defined, the statement from sound.effects import * does not import all submodules from the package sound.effects into the current namespace; it only ensures that the package sound.effects has been imported (possibly running any initialization code in __init__.py) and then imports whatever names are defined in the package. This includes any names defined (and submodules explicitly loaded) by __init__.py. It also includes any submodules of the package that were explicitly loaded by previous import statements. Consider this code:

    import sound.effects.echo
    import sound.effects.surround
    from sound.effects import *

In this example, the echo and surround modules are imported in the current namespace because they are defined in the sound.effects package when the from...import statement is executed. (This also works when __all__ is defined.)

Although certain modules are designed to export only names that follow certain patterns when you use import *, it is still considered bad practice in production code.

Remember, there is nothing wrong with using from package import specific_submodule! In fact, this is the recommended notation unless the importing module needs to use submodules with the same name from different packages.
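To make the behaviour without __all__ concrete, here is a minimal hypothetical package; the names pkg, mod_a and mod_b are invented for this sketch and are not part of the sound example:

    # Layout (no __all__ defined anywhere):
    #   pkg/__init__.py     contains only:  greeting = "hello"
    #   pkg/mod_a.py
    #   pkg/mod_b.py

    >>> import pkg.mod_a               # explicitly load one submodule first
    >>> from pkg import *
    >>> greeting                       # names defined in __init__.py are bound
    'hello'
    >>> mod_a                          # submodules loaded by earlier imports are bound too
    <module 'pkg.mod_a' from ...>
    >>> mod_b                          # never imported, so * did not bring it in
    Traceback (most recent call last):
      ...
    NameError: name 'mod_b' is not defined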
6.4.2. Intra-package References

When packages are structured into subpackages (as with the sound package in the example), you can use absolute imports to refer to submodules of sibling packages. For example, if the module sound.filters.vocoder needs to use the echo module in the sound.effects package, it can use from sound.effects import echo.

You can also write relative imports, with the from module import name form of import statement. These imports use leading dots to indicate the current and parent packages involved in the relative import. From the surround module for example, you might use:

    from . import echo
    from .. import formats
    from ..filters import equalizer

Note that relative imports are based on the name of the current module.
Since the name of the main module is always "__main__", modules intended for use as the main module of a Python application must always use absolute imports.

6.4.3. Packages in Multiple Directories

Packages support one more special attribute, __path__. This is initialized to be a list containing the name of the directory holding the package's __init__.py before the code in that file is executed. This variable can be modified; doing so affects future searches for modules and subpackages contained in the package.

While this feature is not often needed, it can be used to extend the set of modules found in a package.

Footnotes

[1] In fact function definitions are also "statements" that are "executed"; the execution of a module-level function definition adds the function name to the module's global namespace.
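Section 6.4.3 describes __path__ without an example; the following is a minimal, hypothetical sketch of extending it (the package name plugins_pkg and the extra directory are invented for illustration):

    # plugins_pkg/__init__.py   (hypothetical package)
    import os

    # __path__ starts out as a list holding this package's own directory.
    # Appending another directory makes module files placed there importable
    # as plugins_pkg.<name>, even though they live outside the package directory.
    _extra = os.path.join(os.path.dirname(__file__), os.pardir, "extra_plugins")
    if os.path.isdir(_extra):
        __path__.append(_extra)

A module file dropped into extra_plugins/ can then be imported as a submodule of plugins_pkg without moving it.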
Generator Objects

Generator objects are what Python uses to implement generator iterators. They are normally created by iterating over a function that yields values, rather than explicitly calling PyGen_New or PyGen_NewWithQualName.

type PyGenObject
    The C structure used for generator objects.

PyTypeObject PyGen_Type
    The type object corresponding to generator objects.

int PyGen_Check(PyObject *ob)
    Return true if ob is a generator object; ob must not be NULL. This function always succeeds.

int PyGen_CheckExact(PyObject *ob)
    Return true if ob's type is PyGen_Type; ob must not be NULL. This function always succeeds.

PyObject *PyGen_New(PyFrameObject *frame)
    Return value: New reference.
    Create and return a new generator object based on the frame object. A reference to frame is stolen by this function. The argument must not be NULL.

PyObject *PyGen_NewWithQualName(PyFrameObject *frame, PyObject *name, PyObject *qualname)
    Return value: New reference.
    Create and return a new generator object based on the frame object, with __name__ and __qualname__ set to name and qualname. A reference to frame is stolen by this function. The frame argument must not be NULL.
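For readers coming from the Python side, a short plain-Python sketch of the objects this API manipulates (this is ordinary Python, not part of the C API):

    import inspect
    import types

    def countdown(n):
        while n:
            yield n
            n -= 1

    gen = countdown(3)                            # calling a generator function creates a generator object
    print(isinstance(gen, types.GeneratorType))   # True: the Python-level view of PyGen_Type
    print(inspect.isgenerator(gen))               # True
    print(list(gen))                              # [3, 2, 1]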
functools Higher order functions and operations on callable objects Source code Lib functools py The functools module is for higher order functions functions that act on or return other functions In general any callable object can be treated as a function for the purposes of this module The functools module defines the following functions functools cache user_function Simple lightweight unbounded function cache Sometimes called memoize Returns the same as lru_cache maxsize None creating a thin wrapper around a dictionary lookup for the function arguments Because it never needs to evict old values this is smaller and faster than lru_cache with a size limit For example cache def factorial n return n factorial n 1 if n else 1 factorial 10 no previously cached result makes 11 recursive calls 3628800 factorial 5 just looks up cached value result 120 factorial 12 makes two new recursive calls the other 10 are cached 479001600 The cache is threadsafe so that the wrapped function can be used in multiple threads This means that the underlying data structure will remain coherent during concurrent updates It is possible for the wrapped function to be called more than once if another thread makes an additional call before the initial call has been completed and cached New in version 3 9 functools cached_property func Transform a method of a class into a property whose value is computed once and then cached as a normal attribute for the life of the instance Similar to property with the addition of caching Useful for expensive computed properties of instances that are otherwise effectively immutable Example class DataSet def __init__ self sequence_of_numbers self _data tuple sequence_of_numbers cached_property def stdev self return statistics stdev self _data The mechanics of cached_property are somewhat different from property A regular property blocks attribute writes unless a setter is defined In contrast a cached_property allows writes The cached_property decorator only runs on lookups and only when an attribute of the same name doesn t exist When it does run the cached_property writes to the attribute with the same name Subsequent attribute reads and writes take precedence over the cached_property method and it works like a normal attribute The cached value can be cleared by deleting the attribute This allows the cached_property method to run again The cached_property does not prevent a possible race condition in multi threaded usage The getter function could run more than once on the same instance with the latest run setting the cached value If the cached property is idempotent or otherwise not harmful to run more than once on an instance this is fine If synchronization is needed implement the necessary locking inside the decorated getter function or around the cached property access Note this decorator interferes with the operation of PEP 412 key sharing dictionaries This means that instance dictionaries can take more space than usual Also this decorator requires that the __dict__ attribute on each instance be a mutable mapping This means it will not work with some types such as metaclasses since the __dict__ attributes on type instances are read only proxies for the class namespace and those that specify __slots__ without including __dict__ as one of the defined slots as such classes don t provide a __dict__ attribute at all If a mutable mapping is not available or if space efficient key sharing is desired an effect similar to cached_property can also be achieved by stacking property on top of 
lru_cache See How do I cache method calls for more details on how this differs from cached_property New in version 3 8 Changed in version 3 12 Prior to Python 3 12 cached_property included an undocumented lock to ensure that in multi threaded usage the getter function was guaranteed to run only once per instance However the lock was per property not per instance which could result in unacceptably high lock contention In Python 3 12 this locking is removed functools cmp_to_key func Transform an old style comparison function to a key funct
ion Used with tools that accept key functions such as sorted min max heapq nlargest heapq nsmallest itertools groupby This function is primarily used as a transition tool for programs being converted from Python 2 which supported the use of comparison functions A comparison function is any callable that accepts two arguments compares them and returns a negative number for less than zero for equality or a positive number for greater than A key function is a callable that accepts one argument and returns another value to be used as the sort key Example sorted iterable key cmp_to_key locale strcoll locale aware sort order For sorting examples and a brief sorting tutorial see Sorting Techniques New in version 3 2 functools lru_cache user_function functools lru_cache maxsize 128 typed False Decorator to wrap a function with a memoizing callable that saves up to the maxsize most recent calls It can save time when an expensive or I O bound function is periodically called with the same arguments The cache is threadsafe so that the wrapped function can be used in multiple threads This means that the underlying data structure will remain coherent during concurrent updates It is possible for the wrapped function to be called more than once if another thread makes an additional call before the initial call has been completed and cached Since a dictionary is used to cache results the positional and keyword arguments to the function must be hashable Distinct argument patterns may be considered to be distinct calls with separate cache entries For example f a 1 b 2 and f b 2 a 1 differ in their keyword argument order and may have two separate cache entries If user_function is specified it must be a callable This allows the lru_cache decorator to be applied directly to a user function leaving the maxsize at its default value of 128 lru_cache def count_vowels sentence return sum sentence count vowel for vowel in AEIOUaeiou If maxsize is set to None the LRU feature is disabled and the cache can grow without bound If typed is set to true function arguments of different types will be cached separately If typed is false the implementation will usually regard them as equivalent calls and only cache a single result Some types such as str and int may be cached separately even when typed is false Note type specificity applies only to the function s immediate arguments rather than their contents The scalar arguments Decimal 42 and Fraction 42 are be treated as distinct calls with distinct results In contrast the tuple arguments answer Decimal 42 and answer Fraction 42 are treated as equivalent The wrapped function is instrumented with a cache_parameters function that returns a new dict showing the values for maxsize and typed This is for information purposes only Mutating the values has no effect To help measure the effectiveness of the cache and tune the maxsize parameter the wrapped function is instrumented with a cache_info function that returns a named tuple showing hits misses maxsize and currsize The decorator also provides a cache_clear function for clearing or invalidating the cache The original underlying function is accessible through the __wrapped__ attribute This is useful for introspection for bypassing the cache or for rewrapping the function with a different cache The cache keeps references to the arguments and return values until they age out of the cache or until the cache is cleared If a method is cached the self instance argument is included in the cache See How do I cache method calls An LRU 
least recently used cache works best when the most recent calls are the best predictors of upcoming calls for example the most popular articles on a news server tend to change each day The cache s size limit assures that the cache does not grow without bound on long running processes such as web servers In general the LRU cache should only be used when you want to reuse previously computed values Accordingly it doesn t make sense to cache functions with side effects functions that need to create distinct mutable objects on each call such as
generators and async functions or impure functions such as time or random Example of an LRU cache for static web content lru_cache maxsize 32 def get_pep num Retrieve text of a Python Enhancement Proposal resource f https peps python org pep num 04d try with urllib request urlopen resource as s return s read except urllib error HTTPError return Not Found for n in 8 290 308 320 8 218 320 279 289 320 9991 pep get_pep n print n len pep get_pep cache_info CacheInfo hits 3 misses 8 maxsize 32 currsize 8 Example of efficiently computing Fibonacci numbers using a cache to implement a dynamic programming technique lru_cache maxsize None def fib n if n 2 return n return fib n 1 fib n 2 fib n for n in range 16 0 1 1 2 3 5 8 13 21 34 55 89 144 233 377 610 fib cache_info CacheInfo hits 28 misses 16 maxsize None currsize 16 New in version 3 2 Changed in version 3 3 Added the typed option Changed in version 3 8 Added the user_function option Changed in version 3 9 Added the function cache_parameters functools total_ordering Given a class defining one or more rich comparison ordering methods this class decorator supplies the rest This simplifies the effort involved in specifying all of the possible rich comparison operations The class must define one of __lt__ __le__ __gt__ or __ge__ In addition the class should supply an __eq__ method For example total_ordering class Student def _is_valid_operand self other return hasattr other lastname and hasattr other firstname def __eq__ self other if not self _is_valid_operand other return NotImplemented return self lastname lower self firstname lower other lastname lower other firstname lower def __lt__ self other if not self _is_valid_operand other return NotImplemented return self lastname lower self firstname lower other lastname lower other firstname lower Note While this decorator makes it easy to create well behaved totally ordered types it does come at the cost of slower execution and more complex stack traces for the derived comparison methods If performance benchmarking indicates this is a bottleneck for a given application implementing all six rich comparison methods instead is likely to provide an easy speed boost Note This decorator makes no attempt to override methods that have been declared in the class or its superclasses Meaning that if a superclass defines a comparison operator total_ordering will not implement it again even if the original method is abstract New in version 3 2 Changed in version 3 4 Returning NotImplemented from the underlying comparison function for unrecognised types is now supported functools partial func args keywords Return a new partial object which when called will behave like func called with the positional arguments args and keyword arguments keywords If more arguments are supplied to the call they are appended to args If additional keyword arguments are supplied they extend and override keywords Roughly equivalent to def partial func args keywords def newfunc fargs fkeywords newkeywords keywords fkeywords return func args fargs newkeywords newfunc func func newfunc args args newfunc keywords keywords return newfunc The partial is used for partial function application which freezes some portion of a function s arguments and or keywords resulting in a new object with a simplified signature For example partial can be used to create a callable that behaves like the int function where the base argument defaults to two from functools import partial basetwo partial int base 2 basetwo __doc__ Convert base 2 string to an int 
basetwo 10010 18 class functools partialmethod func args keywords Return a new partialmethod descriptor which behaves like partial except that it is designed to be used as a method definition rather than being directly callable func must be a descriptor or a callable objects which are both like normal functions are handled as descriptors When func is a descriptor such as a normal Python function classmethod staticmethod abstractmethod or another instance of partialmethod calls to __get__ are delegated to the underlying descriptor and an a
ppropriate partial object returned as the result When func is a non descriptor callable an appropriate bound method is created dynamically This behaves like a normal Python function when used as a method the self argument will be inserted as the first positional argument even before the args and keywords supplied to the partialmethod constructor Example class Cell def __init__ self self _alive False property def alive self return self _alive def set_state self state self _alive bool state set_alive partialmethod set_state True set_dead partialmethod set_state False c Cell c alive False c set_alive c alive True New in version 3 4 functools reduce function iterable initializer Apply function of two arguments cumulatively to the items of iterable from left to right so as to reduce the iterable to a single value For example reduce lambda x y x y 1 2 3 4 5 calculates 1 2 3 4 5 The left argument x is the accumulated value and the right argument y is the update value from the iterable If the optional initializer is present it is placed before the items of the iterable in the calculation and serves as a default when the iterable is empty If initializer is not given and iterable contains only one item the first item is returned Roughly equivalent to def reduce function iterable initializer None it iter iterable if initializer is None value next it else value initializer for element in it value function value element return value See itertools accumulate for an iterator that yields all intermediate values functools singledispatch Transform a function into a single dispatch generic function To define a generic function decorate it with the singledispatch decorator When defining a function using singledispatch note that the dispatch happens on the type of the first argument from functools import singledispatch singledispatch def fun arg verbose False if verbose print Let me just say end print arg To add overloaded implementations to the function use the register attribute of the generic function which can be used as a decorator For functions annotated with types the decorator will infer the type of the first argument automatically fun register def _ arg int verbose False if verbose print Strength in numbers eh end print arg fun register def _ arg list verbose False if verbose print Enumerate this for i elem in enumerate arg print i elem types UnionType and typing Union can also be used fun register def _ arg int float verbose False if verbose print Strength in numbers eh end print arg from typing import Union fun register def _ arg Union list set verbose False if verbose print Enumerate this for i elem in enumerate arg print i elem For code which doesn t use type annotations the appropriate type argument can be passed explicitly to the decorator itself fun register complex def _ arg verbose False if verbose print Better than complicated end print arg real arg imag To enable registering lambdas and pre existing functions the register attribute can also be used in a functional form def nothing arg verbose False print Nothing fun register type None nothing The register attribute returns the undecorated function This enables decorator stacking pickling and the creation of unit tests for each variant independently fun register float fun register Decimal def fun_num arg verbose False if verbose print Half of your number end print arg 2 fun_num is fun False When called the generic function dispatches on the type of the first argument fun Hello world Hello world fun test verbose True Let me just say test fun 
42 verbose True Strength in numbers eh 42 fun spam spam eggs spam verbose True Enumerate this 0 spam 1 spam 2 eggs 3 spam fun None Nothing fun 1 23 0 615 Where there is no registered implementation for a specific type its method resolution order is used to find a more generic implementation The original function decorated with singledispatch is registered for the base object type which means it is used if no better implementation is found If an implementation is registered to an abstract base class virtual subclasses of the base class w
ill be dispatched to that implementation from collections abc import Mapping fun register def _ arg Mapping verbose False if verbose print Keys Values for key value in arg items print key value fun a b a b To check which implementation the generic function will choose for a given type use the dispatch attribute fun dispatch float function fun_num at 0x1035a2840 fun dispatch dict note default implementation function fun at 0x103fe0000 To access all registered implementations use the read only registry attribute fun registry keys dict_keys class NoneType class int class object class decimal Decimal class list class float fun registry float function fun_num at 0x1035a2840 fun registry object function fun at 0x103fe0000 New in version 3 4 Changed in version 3 7 The register attribute now supports using type annotations Changed in version 3 11 The register attribute now supports types UnionType and typing Union as type annotations class functools singledispatchmethod func Transform a method into a single dispatch generic function To define a generic method decorate it with the singledispatchmethod decorator When defining a function using singledispatchmethod note that the dispatch happens on the type of the first non self or non cls argument class Negator singledispatchmethod def neg self arg raise NotImplementedError Cannot negate a neg register def _ self arg int return arg neg register def _ self arg bool return not arg singledispatchmethod supports nesting with other decorators such as classmethod Note that to allow for dispatcher register singledispatchmethod must be the outer most decorator Here is the Negator class with the neg methods bound to the class rather than an instance of the class class Negator singledispatchmethod classmethod def neg cls arg raise NotImplementedError Cannot negate a neg register classmethod def _ cls arg int return arg neg register classmethod def _ cls arg bool return not arg The same pattern can be used for other similar decorators staticmethod abstractmethod and others New in version 3 8 functools update_wrapper wrapper wrapped assigned WRAPPER_ASSIGNMENTS updated WRAPPER_UPDATES Update a wrapper function to look like the wrapped function The optional arguments are tuples to specify which attributes of the original function are assigned directly to the matching attributes on the wrapper function and which attributes of the wrapper function are updated with the corresponding attributes from the original function The default values for these arguments are the module level constants WRAPPER_ASSIGNMENTS which assigns to the wrapper function s __module__ __name__ __qualname__ __annotations__ and __doc__ the documentation string and WRAPPER_UPDATES which updates the wrapper function s __dict__ i e the instance dictionary To allow access to the original function for introspection and other purposes e g bypassing a caching decorator such as lru_cache this function automatically adds a __wrapped__ attribute to the wrapper that refers to the function being wrapped The main intended use for this function is in decorator functions which wrap the decorated function and return the wrapper If the wrapper function is not updated the metadata of the returned function will reflect the wrapper definition rather than the original function definition which is typically less than helpful update_wrapper may be used with callables other than functions Any attributes named in assigned or updated that are missing from the object being wrapped are ignored i e this function will not 
attempt to set them on the wrapper function AttributeError is still raised if the wrapper function itself is missing any attributes named in updated Changed in version 3 2 The __wrapped__ attribute is now automatically added The __annotations__ attribute is now copied by default Missing attributes no longer trigger an AttributeError Changed in version 3 4 The __wrapped__ attribute now always refers to the wrapped function even if that function defined a __wrapped__ attribute see bpo 17482 functools wraps wrapped assigned WRAPPER_ASSIGNME
NTS, updated=WRAPPER_UPDATES)

This is a convenience function for invoking update_wrapper() as a function decorator when defining a wrapper function. It is equivalent to partial(update_wrapper, wrapped=wrapped, assigned=assigned, updated=updated). For example:

    >>> from functools import wraps
    >>> def my_decorator(f):
    ...     @wraps(f)
    ...     def wrapper(*args, **kwds):
    ...         print('Calling decorated function')
    ...         return f(*args, **kwds)
    ...     return wrapper
    ...
    >>> @my_decorator
    ... def example():
    ...     """Docstring"""
    ...     print('Called example function')
    ...
    >>> example()
    Calling decorated function
    Called example function
    >>> example.__name__
    'example'
    >>> example.__doc__
    'Docstring'

Without the use of this decorator factory, the name of the example function would have been 'wrapper', and the docstring of the original example() would have been lost.

partial Objects

partial objects are callable objects created by partial(). They have three read-only attributes:

partial.func
    A callable object or function. Calls to the partial object will be forwarded to func with new arguments and keywords.

partial.args
    The leftmost positional arguments that will be prepended to the positional arguments provided to a partial object call.

partial.keywords
    The keyword arguments that will be supplied when the partial object is called.

partial objects are like function objects in that they are callable, weak referenceable, and can have attributes. There are some important differences. For instance, the __name__ and __doc__ attributes are not created automatically. Also, partial objects defined in classes behave like static methods and do not transform into bound methods during instance attribute look-up.
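A small sketch of the three read-only attributes described above, using only the standard library:

    from functools import partial

    basetwo = partial(int, base=2)
    print(basetwo.func)        # <class 'int'>
    print(basetwo.args)        # ()
    print(basetwo.keywords)    # {'base': 2}
    print(basetwo("10010"))    # 18

    # Unlike __name__ and __doc__, arbitrary attributes can still be attached by hand:
    basetwo.__doc__ = "Convert base-2 string to an int."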
Concrete Objects Layer The functions in this chapter are specific to certain Python object types Passing them an object of the wrong type is not a good idea if you receive an object from a Python program and you are not sure that it has the right type you must perform a type check first for example to check that an object is a dictionary use PyDict_Check The chapter is structured like the family tree of Python object types Warning While the functions described in this chapter carefully check the type of the objects which are passed in many of them do not check for NULL being passed instead of a valid object Allowing NULL to be passed in can cause memory access violations and immediate termination of the interpreter Fundamental Objects This section describes Python type objects and the singleton object None Type Objects Creating Heap Allocated Types The None Object Numeric Objects Integer Objects Boolean Objects Floating Point Objects Pack and Unpack functions Pack functions Unpack functions Complex Number Objects Complex Numbers as C Structures Complex Numbers as Python Objects Sequence Objects Generic operations on sequence objects were discussed in the previous chapter this section deals with the specific kinds of sequence objects that are intrinsic to the Python language Bytes Objects Byte Array Objects Type check macros Direct API functions Macros Unicode Objects and Codecs Unicode Objects Unicode Type Unicode Character Properties Creating and accessing Unicode strings Locale Encoding File System Encoding wchar_t Support Built in Codecs Generic Codecs UTF 8 Codecs UTF 32 Codecs UTF 16 Codecs UTF 7 Codecs Unicode Escape Codecs Raw Unicode Escape Codecs Latin 1 Codecs ASCII Codecs Character Map Codecs MBCS codecs for Windows Methods Slots Methods and Slot Functions Tuple Objects Struct Sequence Objects List Objects Container Objects Dictionary Objects Set Objects Function Objects Function Objects Instance Method Objects Method Objects Cell Objects Code Objects Extra information Other Objects File Objects Module Objects Initializing C modules Single phase initialization Multi phase initialization Low level module creation functions Support functions Module lookup Iterator Objects Descriptor Objects Slice Objects Ellipsis Object MemoryView objects Weak Reference Objects Capsules Frame Objects Internal Frames Generator Objects Coroutine Objects Context Variables Objects DateTime Objects Objects for Type Hinting
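The same discipline applies when working from Python rather than C: check the concrete type before using type-specific operations. A rough Python-level analogue of the PyDict_Check advice above (the helper name describe is invented for this sketch):

    def describe(obj):
        # analogous to calling PyDict_Check(obj) before using the dict-specific API
        if isinstance(obj, dict):
            return "dict with %d keys" % len(obj)
        return "not a dict: " + type(obj).__name__

    print(describe({"a": 1, "b": 2}))   # dict with 2 keys
    print(describe([1, 2, 3]))          # not a dict: list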
The concurrent package

Currently, there is only one module in this package:

- concurrent.futures: Launching parallel tasks
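As a quick illustration of what that module provides, a minimal sketch using only documented concurrent.futures APIs (the URLs are examples and network access is assumed):

    from concurrent.futures import ThreadPoolExecutor, as_completed
    import urllib.request

    def fetch_length(url):
        with urllib.request.urlopen(url, timeout=10) as resp:
            return url, len(resp.read())

    urls = ["https://docs.python.org", "https://peps.python.org"]
    with ThreadPoolExecutor(max_workers=4) as pool:
        futures = [pool.submit(fetch_length, u) for u in urls]
        for fut in as_completed(futures):     # results arrive as each download finishes
            url, size = fut.result()
            print(url, size)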
The initialization of the sys path module search path A module search path is initialized when Python starts This module search path may be accessed at sys path The first entry in the module search path is the directory that contains the input script if there is one Otherwise the first entry is the current directory which is the case when executing the interactive shell a c command or m module The PYTHONPATH environment variable is often used to add directories to the search path If this environment variable is found then the contents are added to the module search path Note PYTHONPATH will affect all installed Python versions environments Be wary of setting this in your shell profile or global environment variables The site module offers more nuanced techniques as mentioned below The next items added are the directories containing standard Python modules as well as any extension module s that these modules depend on Extension modules are pyd files on Windows and so files on other platforms The directory with the platform independent Python modules is called prefix The directory with the extension modules is called exec_prefix The PYTHONHOME environment variable may be used to set the prefix and exec_prefix locations Otherwise these directories are found by using the Python executable as a starting point and then looking for various landmark files and directories Note that any symbolic links are followed so the real Python executable location is used as the search starting point The Python executable location is called home Once home is determined the prefix directory is found by first looking for python majorversion minorversion zip python311 zip On Windows the zip archive is searched for in home and on Unix the archive is expected to be in lib Note that the expected zip archive location is added to the module search path even if the archive does not exist If no archive was found Python on Windows will continue the search for prefix by looking for Lib os py Python on Unix will look for lib python majorversion minorversion os py lib python3 11 os py On Windows prefix and exec_prefix are the same however on other platforms lib python majorversion minorversion lib dynload lib python3 11 lib dynload is searched for and used as an anchor for exec_prefix On some platforms lib may be lib64 or another value see sys platlibdir and PYTHONPLATLIBDIR Once found prefix and exec_prefix are available at sys prefix and sys exec_prefix respectively Finally the site module is processed and site packages directories are added to the module search path A common way to customize the search path is to create sitecustomize or usercustomize modules as described in the site module documentation Note Certain command line options may further affect path calculations See E I s and S for further details Virtual environments If Python is run in a virtual environment as described at Virtual Environments and Packages then prefix and exec_prefix are specific to the virtual environment If a pyvenv cfg file is found alongside the main executable or in the directory one level above the executable the following variations apply If home is an absolute path and PYTHONHOME is not set this path is used instead of the path to the main executable when deducing prefix and exec_prefix _pth files To completely override sys path create a _pth file with the same name as the shared library or executable python _pth or python311 _pth The shared library path is always known on Windows however it may not be available on other platforms In 
the _pth file specify one line for each path to add to sys path The file based on the shared library name overrides the one based on the executable which allows paths to be restricted for any program loading the runtime if desired When the file exists all registry and environment variables are ignored isolated mode is enabled and site is not imported unless one line in the file specifies import site Blank paths and lines starting with are ignored Each path may be absolute or relative to the location of the file Import statements other t
han to site are not permitted and arbitrary code cannot be specified Note that pth files without leading underscore will be processed normally by the site module when import site has been specified Embedded Python If Python is embedded within another application Py_InitializeFromConfig and the PyConfig structure can be used to initialize Python The path specific details are described at Python Path Configuration Alternatively the older Py_SetPath can be used to bypass the initialization of the module search path See also Finding modules for detailed Windows notes Using Python on Unix platforms for Unix details
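A short sketch for inspecting the values this initialization produces on a given installation (the output varies by platform, build options and environment):

    import sys

    print(sys.prefix)         # location of the platform-independent standard library
    print(sys.exec_prefix)    # location of the extension modules
    print(sys.platlibdir)     # usually 'lib'; may be 'lib64' or similar on some platforms
    for entry in sys.path:    # the fully initialized module search path
        print("   ", entry)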
2to3 Automated Python 2 to 3 code translation 2to3 is a Python program that reads Python 2 x source code and applies a series of fixers to transform it into valid Python 3 x code The standard library contains a rich set of fixers that will handle almost all code 2to3 supporting library lib2to3 is however a flexible and generic library so it is possible to write your own fixers for 2to3 Deprecated since version 3 11 will be removed in version 3 13 The lib2to3 module was marked pending for deprecation in Python 3 9 raising PendingDeprecationWarning on import and fully deprecated in Python 3 11 raising DeprecationWarning The 2to3 tool is part of that It will be removed in Python 3 13 Using 2to3 2to3 will usually be installed with the Python interpreter as a script It is also located in the Tools scripts directory of the Python root 2to3 s basic arguments are a list of files or directories to transform The directories are recursively traversed for Python sources Here is a sample Python 2 x source file example py def greet name print Hello 0 format name print What s your name name raw_input greet name It can be converted to Python 3 x code via 2to3 on the command line 2to3 example py A diff against the original source file is printed 2to3 can also write the needed modifications right back to the source file A backup of the original file is made unless n is also given Writing the changes back is enabled with the w flag 2to3 w example py After transformation example py looks like this def greet name print Hello 0 format name print What s your name name input greet name Comments and exact indentation are preserved throughout the translation process By default 2to3 runs a set of predefined fixers The l flag lists all available fixers An explicit set of fixers to run can be given with f Likewise the x explicitly disables a fixer The following example runs only the imports and has_key fixers 2to3 f imports f has_key example py This command runs every fixer except the apply fixer 2to3 x apply example py Some fixers are explicit meaning they aren t run by default and must be listed on the command line to be run Here in addition to the default fixers the idioms fixer is run 2to3 f all f idioms example py Notice how passing all enables all default fixers Sometimes 2to3 will find a place in your source code that needs to be changed but 2to3 cannot fix automatically In this case 2to3 will print a warning beneath the diff for a file You should address the warning in order to have compliant 3 x code 2to3 can also refactor doctests To enable this mode use the d flag Note that only doctests will be refactored This also doesn t require the module to be valid Python For example doctest like examples in a reST document could also be refactored with this option The v option enables output of more information on the translation process Since some print statements can be parsed as function calls or statements 2to3 cannot always read files containing the print function When 2to3 detects the presence of the from __future__ import print_function compiler directive it modifies its internal grammar to interpret print as a function This change can also be enabled manually with the p flag Use p to run fixers on code that already has had its print statements converted Also e can be used to make exec a function The o or output dir option allows specification of an alternate directory for processed output files to be written to The n flag is required when using this as backup files do not make sense when not overwriting the 
input files New in version 3 2 3 The o option was added The W or write unchanged files flag tells 2to3 to always write output files even if no changes were required to the file This is most useful with o so that an entire Python source tree is copied with translation from one directory to another This option implies the w flag as it would not make sense otherwise New in version 3 2 3 The W flag was added The add suffix option specifies a string to append to all output filenames The n flag is required when specifying this as backups are n
ot necessary when writing to different filenames Example 2to3 n W add suffix 3 example py Will cause a converted file named example py3 to be written New in version 3 2 3 The add suffix option was added To translate an entire project from one directory tree to another use 2to3 output dir python3 version mycode W n python2 version mycode Fixers Each step of transforming code is encapsulated in a fixer The command 2to3 l lists them As documented above each can be turned on and off individually They are described here in more detail apply Removes usage of apply For example apply function args kwargs is converted to function args kwargs asserts Replaces deprecated unittest method names with the correct ones From To failUnlessEqual a b assertEqual a b assertEquals a b assertEqual a b failIfEqual a b assertNotEqual a b assertNotEquals a b assertNotEqual a b failUnless a assertTrue a assert_ a assertTrue a failIf a assertFalse a failUnlessRaises exc cal assertRaises exc cal failUnlessAlmostEqual a b assertAlmostEqual a b assertAlmostEquals a b assertAlmostEqual a b failIfAlmostEqual a b assertNotAlmostEqual a b assertNotAlmostEquals a b assertNotAlmostEqual a b basestring Converts basestring to str buffer Converts buffer to memoryview This fixer is optional because the memoryview API is similar but not exactly the same as that of buffer dict Fixes dictionary iteration methods dict iteritems is converted to dict items dict iterkeys to dict keys and dict itervalues to dict values Similarly dict viewitems dict viewkeys and dict viewvalues are converted respectively to dict items dict keys and dict values It also wraps existing usages of dict items dict keys and dict values in a call to list except Converts except X T to except X as T exec Converts the exec statement to the exec function execfile Removes usage of execfile The argument to execfile is wrapped in calls to open compile and exec exitfunc Changes assignment of sys exitfunc to use of the atexit module filter Wraps filter usage in a list call funcattrs Fixes function attributes that have been renamed For example my_function func_closure is converted to my_function __closure__ future Removes from __future__ import new_feature statements getcwdu Renames os getcwdu to os getcwd has_key Changes dict has_key key to key in dict idioms This optional fixer performs several transformations that make Python code more idiomatic Type comparisons like type x is SomeClass and type x SomeClass are converted to isinstance x SomeClass while 1 becomes while True This fixer also tries to make use of sorted in appropriate places For example this block L list some_iterable L sort is changed to L sorted some_iterable import Detects sibling imports and converts them to relative imports imports Handles module renames in the standard library imports2 Handles other modules renames in the standard library It is separate from the imports fixer only because of technical limitations input Converts input prompt to eval input prompt intern Converts intern to sys intern isinstance Fixes duplicate types in the second argument of isinstance For example isinstance x int int is converted to isinstance x int and isinstance x int float int is converted to isinstance x int float itertools_imports Removes imports of itertools ifilter itertools izip and itertools imap Imports of itertools ifilterfalse are also changed to itertools filterfalse itertools Changes usage of itertools ifilter itertools izip and itertools imap to their built in equivalents itertools ifilterfalse is changed 
to itertools filterfalse long Renames long to int map Wraps map in a list call It also changes map None x to list x Using from future_builtins import map disables this fixer metaclass Converts the old metaclass syntax __metaclass__ Meta in the class body to the new class X metaclass Meta methodattrs Fixes old method attribute names For example meth im_func is converted to meth __func__ ne Converts the old not equal syntax <> to != next Converts the use of iterator s next methods to the next function It also renames next methods to __next__ n
onzero Renames definitions of methods called __nonzero__ to __bool__ numliterals Converts octal literals into the new syntax operator Converts calls to various functions in the operator module to other but equivalent function calls When needed the appropriate import statements are added e g import collections abc The following mapping are made From To operator isCallable obj callable obj operator sequenceIncludes obj operator contains obj operator isSequenceType obj isinstance obj collections abc Sequence operator isMappingType obj isinstance obj collections abc Mapping operator isNumberType obj isinstance obj numbers Number operator repeat obj n operator mul obj n operator irepeat obj n operator imul obj n paren Add extra parenthesis where they are required in list comprehensions For example x for x in 1 2 becomes x for x in 1 2 print Converts the print statement to the print function raise Converts raise E V to raise E V and raise E V T to raise E V with_traceback T If E is a tuple the translation will be incorrect because substituting tuples for exceptions has been removed in 3 0 raw_input Converts raw_input to input reduce Handles the move of reduce to functools reduce reload Converts reload to importlib reload renames Changes sys maxint to sys maxsize repr Replaces backtick repr with the repr function set_literal Replaces use of the set constructor with set literals This fixer is optional standarderror Renames StandardError to Exception sys_exc Changes the deprecated sys exc_value sys exc_type sys exc_traceback to use sys exc_info throw Fixes the API change in generator s throw method tuple_params Removes implicit tuple parameter unpacking This fixer inserts temporary variables types Fixes code broken from the removal of some members in the types module unicode Renames unicode to str urllib Handles the rename of urllib and urllib2 to the urllib package ws_comma Removes excess whitespace from comma separated items This fixer is optional xrange Renames xrange to range and wraps existing range calls with list xreadlines Changes for x in file xreadlines to for x in file zip Wraps zip usage in a list call This is disabled when from future_builtins import zip appears lib2to3 2to3 s library Source code Lib lib2to3 Deprecated since version 3 11 will be removed in version 3 13 Python 3 9 switched to a PEG parser see PEP 617 while lib2to3 is using a less flexible LL 1 parser Python 3 10 includes new language syntax that is not parsable by lib2to3 s LL 1 parser see PEP 634 The lib2to3 module was marked pending for deprecation in Python 3 9 raising PendingDeprecationWarning on import and fully deprecated in Python 3 11 raising DeprecationWarning It will be removed from the standard library in Python 3 13 Consider third party alternatives such as LibCST or parso Note The lib2to3 API should be considered unstable and may change drastically in the future
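As a rough illustration of several fixers described above (print, xrange, has_key, raw_input), here is a small Python 2 file and the output 2to3 would typically produce for it; the filename and the exact formatting of the result are illustrative, not taken from the documentation above.

example_py2.py before conversion:

print "Hello"
for i in xrange(3):
    print i
d = {"spam": 1}
if d.has_key("spam"):
    print "found"
name = raw_input("name? ")

After running 2to3 -w example_py2.py the file would typically read:

print("Hello")
for i in range(3):
    print(i)
d = {"spam": 1}
if "spam" in d:
    print("found")
name = input("name? ")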
importlib resources Package resource reading opening and access Source code Lib importlib resources __init__ py New in version 3 7 This module leverages Python s import system to provide access to resources within packages Resources are file like resources associated with a module or package in Python The resources may be contained directly in a package within a subdirectory contained in that package or adjacent to modules outside a package Resources may be text or binary As a result Python module sources py of a package and compilation artifacts pycache are technically de facto resources of that package In practice however resources are primarily those non Python artifacts exposed specifically by the package author Resources can be opened or read in either binary or text mode Resources are roughly akin to files inside directories though it s important to keep in mind that this is just a metaphor Resources and packages do not have to exist as physical files and directories on the file system for example a package and its resources can be imported from a zip file using zipimport Note This module provides functionality similar to pkg_resources Basic Resource Access without the performance overhead of that package This makes reading resources included in packages easier with more stable and consistent semantics The standalone backport of this module provides more information on using importlib resources and migrating from pkg_resources to importlib resources Loaders that wish to support resource reading should implement a get_resource_reader fullname method as specified by importlib resources abc ResourceReader class importlib resources Anchor Represents an anchor for resources either a module object or a module name as a string Defined as Union str ModuleType importlib resources files anchor Anchor None None Returns a Traversable object representing the resource container think directory and its resources think files A Traversable may contain other containers think subdirectories anchor is an optional Anchor If the anchor is a package resources are resolved from that package If a module resources are resolved adjacent to that module in the same package or the package root If the anchor is omitted the caller s module is used New in version 3 9 Changed in version 3 12 package parameter was renamed to anchor anchor can now be a non package module and if omitted will default to the caller s module package is still accepted for compatibility but will raise a DeprecationWarning Consider passing the anchor positionally or using importlib_resources 5 10 for a compatible interface on older Pythons importlib resources as_file traversable Given a Traversable object representing a file or directory typically from importlib resources files return a context manager for use in a with statement The context manager provides a pathlib Path object Exiting the context manager cleans up any temporary file or directory created when the resource was extracted from e g a zip file Use as_file when the Traversable methods read_text etc are insufficient and an actual file or directory on the file system is required New in version 3 9 Changed in version 3 12 Added support for traversable representing a directory Deprecated functions An older deprecated set of functions is still available but is scheduled for removal in a future version of Python The main drawback of these functions is that they do not support directories they assume all resources are located directly within a package importlib resources Package 
Whenever a function accepts a Package argument you can pass in either a module object or a module name as a string You can only pass module objects whose __spec__ submodule_search_locations is not None The Package type is defined as Union str ModuleType Deprecated since version 3 12 importlib resources Resource For resource arguments of the functions below you can pass in the name of a resource as a string or a path like object The Resource type is defined as Union str os PathLike importlib resources open_binary package resource Open for binar
y reading the resource within package package is either a name or a module object which conforms to the Package requirements resource is the name of the resource to open within package it may not contain path separators and it may not have sub resources i e it cannot be a directory This function returns a typing BinaryIO instance a binary I O stream open for reading Deprecated since version 3 11 Calls to this function can be replaced by files package joinpath resource open rb importlib resources open_text package resource encoding utf 8 errors strict Open for text reading the resource within package By default the resource is opened for reading as UTF 8 package is either a name or a module object which conforms to the Package requirements resource is the name of the resource to open within package it may not contain path separators and it may not have sub resources i e it cannot be a directory encoding and errors have the same meaning as with built in open This function returns a typing TextIO instance a text I O stream open for reading Deprecated since version 3 11 Calls to this function can be replaced by files package joinpath resource open r encoding encoding importlib resources read_binary package resource Read and return the contents of the resource within package as bytes package is either a name or a module object which conforms to the Package requirements resource is the name of the resource to open within package it may not contain path separators and it may not have sub resources i e it cannot be a directory This function returns the contents of the resource as bytes Deprecated since version 3 11 Calls to this function can be replaced by files package joinpath resource read_bytes importlib resources read_text package resource encoding utf 8 errors strict Read and return the contents of resource within package as a str By default the contents are read as strict UTF 8 package is either a name or a module object which conforms to the Package requirements resource is the name of the resource to open within package it may not contain path separators and it may not have sub resources i e it cannot be a directory encoding and errors have the same meaning as with built in open This function returns the contents of the resource as str Deprecated since version 3 11 Calls to this function can be replaced by files package joinpath resource read_text encoding encoding importlib resources path package resource Return the path to the resource as an actual file system path This function returns a context manager for use in a with statement The context manager provides a pathlib Path object Exiting the context manager cleans up any temporary file created when the resource needs to be extracted from e g a zip file package is either a name or a module object which conforms to the Package requirements resource is the name of the resource to open within package it may not contain path separators and it may not have sub resources i e it cannot be a directory Deprecated since version 3 11 Calls to this function can be replaced using as_file as_file files package joinpath resource importlib resources is_resource package name Return True if there is a resource named name in the package otherwise False This function does not consider directories to be resources package is either a name or a module object which conforms to the Package requirements Deprecated since version 3 11 Calls to this function can be replaced by files package joinpath resource is_file importlib resources contents package Return an 
iterable over the named items within the package The iterable returns str resources e g files and non resources e g directories The iterable does not recurse into subdirectories package is either a name or a module object which conforms to the Package requirements Deprecated since version 3 11 Calls to this function can be replaced by resource name for resource in files package iterdir if resource is_file
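The following is a minimal sketch of the files and as_file APIs described above, assuming a hypothetical installed package named mypkg that ships a resource called data.txt; both names are illustrative.

import importlib.resources as resources

# Read a bundled text resource via the Traversable API.
text = resources.files("mypkg").joinpath("data.txt").read_text(encoding="utf-8")

# List the file resources directly inside the package (no recursion).
names = [entry.name for entry in resources.files("mypkg").iterdir() if entry.is_file()]

# Obtain a concrete filesystem path when one is required; any temporary
# file extracted from e.g. a zip archive is cleaned up on exiting the block.
with resources.as_file(resources.files("mypkg").joinpath("data.txt")) as path:
    data = path.read_bytes()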
xml sax Support for SAX2 parsers Source code Lib xml sax __init__ py The xml sax package provides a number of modules which implement the Simple API for XML SAX interface for Python The package itself provides the SAX exceptions and the convenience functions which will be most used by users of the SAX API Warning The xml sax module is not secure against maliciously constructed data If you need to parse untrusted or unauthenticated data see XML vulnerabilities Changed in version 3 7 1 The SAX parser no longer processes general external entities by default to increase security Before the parser created network connections to fetch remote files or loaded local files from the file system for DTD and entities The feature can be enabled again with method setFeature on the parser object and argument feature_external_ges The convenience functions are xml sax make_parser parser_list Create and return a SAX XMLReader object The first parser found will be used If parser_list is provided it must be an iterable of strings which name modules that have a function named create_parser Modules listed in parser_list will be used before modules in the default list of parsers Changed in version 3 8 The parser_list argument can be any iterable not just a list xml sax parse filename_or_stream handler error_handler handler ErrorHandler Create a SAX parser and use it to parse a document The document passed in as filename_or_stream can be a filename or a file object The handler parameter needs to be a SAX ContentHandler instance If error_handler is given it must be a SAX ErrorHandler instance if omitted SAXParseException will be raised on all errors There is no return value all work must be done by the handler passed in xml sax parseString string handler error_handler handler ErrorHandler Similar to parse but parses from a buffer string received as a parameter string must be a str instance or a bytes like object Changed in version 3 5 Added support of str instances A typical SAX application uses three kinds of objects readers handlers and input sources Reader in this context is another term for parser i e some piece of code that reads the bytes or characters from the input source and produces a sequence of events The events then get distributed to the handler objects i e the reader invokes a method on the handler A SAX application must therefore obtain a reader object create or open the input sources create the handlers and connect these objects all together As the final step of preparation the reader is called to parse the input During parsing methods on the handler objects are called based on structural and syntactic events from the input data For these objects only the interfaces are relevant they are normally not instantiated by the application itself Since Python does not have an explicit notion of interface they are formally introduced as classes but applications may use implementations which do not inherit from the provided classes The InputSource Locator Attributes AttributesNS and XMLReader interfaces are defined in the module xml sax xmlreader The handler interfaces are defined in xml sax handler For convenience InputSource which is often instantiated directly and the handler classes are also available from xml sax These interfaces are described below In addition to these classes xml sax provides the following exception classes exception xml sax SAXException msg exception None Encapsulate an XML error or warning This class can contain basic error or warning information from either the XML parser or the 
application it can be subclassed to provide additional functionality or to add localization Note that although the handlers defined in the ErrorHandler interface receive instances of this exception it is not required to actually raise the exception it is also useful as a container for information When instantiated msg should be a human readable description of the error The optional exception parameter if given should be None or an exception that was caught by the parsing code and is being passed along as information This is the base class
for the other SAX exception classes exception xml sax SAXParseException msg exception locator Subclass of SAXException raised on parse errors Instances of this class are passed to the methods of the SAX ErrorHandler interface to provide information about the parse error This class supports the SAX Locator interface as well as the SAXException interface exception xml sax SAXNotRecognizedException msg exception None Subclass of SAXException raised when a SAX XMLReader is confronted with an unrecognized feature or property SAX applications and extensions may use this class for similar purposes exception xml sax SAXNotSupportedException msg exception None Subclass of SAXException raised when a SAX XMLReader is asked to enable a feature that is not supported or to set a property to a value that the implementation does not support SAX applications and extensions may use this class for similar purposes See also SAX The Simple API for XML This site is the focal point for the definition of the SAX API It provides a Java implementation and online documentation Links to implementations and historical information are also available Module xml sax handler Definitions of the interfaces for application provided objects Module xml sax saxutils Convenience functions for use in SAX applications Module xml sax xmlreader Definitions of the interfaces for parser provided objects SAXException Objects The SAXException exception class supports the following methods SAXException getMessage Return a human readable message describing the error condition SAXException getException Return an encapsulated exception object or None
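A minimal sketch of the reader and handler pattern described above, using the parseString convenience function with a ContentHandler subclass; the XML document and the element names are made up for illustration.

import xml.sax

class TitleHandler(xml.sax.ContentHandler):
    """Collect the character data of every <title> element."""
    def __init__(self):
        super().__init__()
        self.in_title = False
        self.titles = []

    def startElement(self, name, attrs):
        if name == "title":
            self.in_title = True
            self.titles.append("")

    def characters(self, content):
        if self.in_title:
            self.titles[-1] += content

    def endElement(self, name):
        if name == "title":
            self.in_title = False

handler = TitleHandler()
xml.sax.parseString(b"<library><title>SAX</title><title>XML</title></library>", handler)
print(handler.titles)  # ['SAX', 'XML']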
venv Creation of virtual environments New in version 3 3 Source code Lib venv The venv module supports creating lightweight virtual environments each with their own independent set of Python packages installed in their site directories A virtual environment is created on top of an existing Python installation known as the virtual environment s base Python and may optionally be isolated from the packages in the base environment so only those explicitly installed in the virtual environment are available When used from within a virtual environment common installation tools such as pip will install Python packages into a virtual environment without needing to be told to do so explicitly A virtual environment is amongst other things Used to contain a specific Python interpreter and software libraries and binaries which are needed to support a project library or application These are by default isolated from software in other virtual environments and Python interpreters and libraries installed in the operating system Contained in a directory conventionally either named venv or venv in the project directory or under a container directory for lots of virtual environments such as virtualenvs Not checked into source control systems such as Git Considered as disposable it should be simple to delete and recreate it from scratch You don t place any project code in the environment Not considered as movable or copyable you just recreate the same environment in the target location See PEP 405 for more background on Python virtual environments See also Python Packaging User Guide Creating and using virtual environments Availability not Emscripten not WASI This module does not work or is not available on WebAssembly platforms wasm32 emscripten and wasm32 wasi See WebAssembly platforms for more information Creating virtual environments Creation of virtual environments is done by executing the command venv python m venv path to new virtual environment Running this command creates the target directory creating any parent directories that don t exist already and places a pyvenv cfg file in it with a home key pointing to the Python installation from which the command was run a common name for the target directory is venv It also creates a bin or Scripts on Windows subdirectory containing a copy symlink of the Python binary binaries as appropriate for the platform or arguments used at environment creation time It also creates an initially empty lib pythonX Y site packages subdirectory on Windows this is Lib site packages If an existing directory is specified it will be re used Changed in version 3 5 The use of venv is now recommended for creating virtual environments Deprecated since version 3 6 pyvenv was the recommended tool for creating virtual environments for Python 3 3 and 3 4 and is deprecated in Python 3 6 On Windows invoke the venv command as follows c Python35 python m venv c path to myenv Alternatively if you configured the PATH and PATHEXT variables for your Python installation c python m venv c path to myenv The command if run with h will show the available options usage venv h system site packages symlinks copies clear upgrade without pip prompt PROMPT upgrade deps ENV_DIR ENV_DIR Creates virtual Python environments in one or more target directories positional arguments ENV_DIR A directory to create the environment in optional arguments h help show this help message and exit system site packages Give the virtual environment access to the system site packages dir symlinks Try to use symlinks rather 
than copies when symlinks are not the default for the platform copies Try to use copies rather than symlinks even when symlinks are the default for the platform clear Delete the contents of the environment directory if it already exists before environment creation upgrade Upgrade the environment directory to use this version of Python assuming Python has been upgraded in place without pip Skips installing or upgrading pip in the virtual environment pip is bootstrapped by default prompt PROMPT Provides an alternative prompt prefix for thi
s environment upgrade deps Upgrade core dependencies pip to the latest version in PyPI Once an environment has been created you may wish to activate it e g by sourcing an activate script in its bin directory Changed in version 3 12 setuptools is no longer a core venv dependency Changed in version 3 9 Add upgrade deps option to upgrade pip setuptools to the latest on PyPI Changed in version 3 4 Installs pip by default added the without pip and copies options Changed in version 3 4 In earlier versions if the target directory already existed an error was raised unless the clear or upgrade option was provided Note While symlinks are supported on Windows they are not recommended Of particular note is that double clicking python exe in File Explorer will resolve the symlink eagerly and ignore the virtual environment Note On Microsoft Windows it may be required to enable the Activate ps1 script by setting the execution policy for the user You can do this by issuing the following PowerShell command PS C Set ExecutionPolicy ExecutionPolicy RemoteSigned Scope CurrentUserSee About Execution Policies for more information The created pyvenv cfg file also includes the include system site packages key set to true if venv is run with the system site packages option false otherwise Unless the without pip option is given ensurepip will be invoked to bootstrap pip into the virtual environment Multiple paths can be given to venv in which case an identical virtual environment will be created according to the given options at each provided path How venvs work When a Python interpreter is running from a virtual environment sys prefix and sys exec_prefix point to the directories of the virtual environment whereas sys base_prefix and sys base_exec_prefix point to those of the base Python used to create the environment It is sufficient to check sys prefix sys base_prefix to determine if the current interpreter is running from a virtual environment A virtual environment may be activated using a script in its binary directory bin on POSIX Scripts on Windows This will prepend that directory to your PATH so that running python will invoke the environment s Python interpreter and you can run installed scripts without having to use their full path The invocation of the activation script is platform specific venv must be replaced by the path to the directory containing the virtual environment Platform Shell Command to activate virtual environment POSIX bash zsh source venv bin activate fish source venv bin activate fish csh tcsh source venv bin activate csh PowerShell venv bin Activate ps1 Windows cmd exe C venv Scripts activate bat PowerShell PS C venv Scripts Activate ps1 New in version 3 4 fish and csh activation scripts New in version 3 8 PowerShell activation scripts installed under POSIX for PowerShell Core support You don t specifically need to activate a virtual environment as you can just specify the full path to that environment s Python interpreter when invoking Python Furthermore all scripts installed in the environment should be runnable without activating it In order to achieve this scripts installed into virtual environments have a shebang line which points to the environment s Python interpreter i e path to venv bin python This means that the script will run with that interpreter regardless of the value of PATH On Windows shebang line processing is supported if you have the Python Launcher for Windows installed Thus double clicking an installed script in a Windows Explorer window should run it with the 
correct interpreter without the environment needing to be activated or on the PATH When a virtual environment has been activated the VIRTUAL_ENV environment variable is set to the path of the environment Since explicitly activating a virtual environment is not required to use it VIRTUAL_ENV cannot be relied upon to determine whether a virtual environment is being used Warning Because scripts installed in environments should not expect the environment to be activated their shebang lines contain the absolute paths to their environment s inte
rpreters Because of this environments are inherently non portable in the general case You should always have a simple means of recreating an environment for example if you have a requirements file requirements txt you can invoke pip install r requirements txt using the environment s pip to install all of the packages needed by the environment If for any reason you need to move the environment to a new location you should recreate it at the desired location and delete the one at the old location If you move an environment because you moved a parent directory of it you should recreate the environment in its new location Otherwise software installed into the environment may not work as expected You can deactivate a virtual environment by typing deactivate in your shell The exact mechanism is platform specific and is an internal implementation detail typically a script or shell function will be used API The high level method described above makes use of a simple API which provides mechanisms for third party virtual environment creators to customize environment creation according to their needs the EnvBuilder class class venv EnvBuilder system_site_packages False clear False symlinks False upgrade False with_pip False prompt None upgrade_deps False The EnvBuilder class accepts the following keyword arguments on instantiation system_site_packages a Boolean value indicating that the system Python site packages should be available to the environment defaults to False clear a Boolean value which if true will delete the contents of any existing target directory before creating the environment symlinks a Boolean value indicating whether to attempt to symlink the Python binary rather than copying upgrade a Boolean value which if true will upgrade an existing environment with the running Python for use when that Python has been upgraded in place defaults to False with_pip a Boolean value which if true ensures pip is installed in the virtual environment This uses ensurepip with the default pip option prompt a String to be used after virtual environment is activated defaults to None which means directory name of the environment would be used If the special string is provided the basename of the current directory is used as the prompt upgrade_deps Update the base venv modules to the latest on PyPI Changed in version 3 4 Added the with_pip parameter Changed in version 3 6 Added the prompt parameter Changed in version 3 9 Added the upgrade_deps parameter Creators of third party virtual environment tools will be free to use the provided EnvBuilder class as a base class The returned env builder is an object which has a method create create env_dir Create a virtual environment by specifying the target directory absolute or relative to the current directory which is to contain the virtual environment The create method will either create the environment in the specified directory or raise an appropriate exception The create method of the EnvBuilder class illustrates the hooks available for subclass customization def create self env_dir Create a virtualized Python environment in a directory env_dir is the target directory to create an environment in env_dir os path abspath env_dir context self ensure_directories env_dir self create_configuration context self setup_python context self setup_scripts context self post_setup context Each of the methods ensure_directories create_configuration setup_python setup_scripts and post_setup can be overridden ensure_directories env_dir Creates the environment directory and 
all necessary subdirectories that don t already exist and returns a context object This context object is just a holder for attributes such as paths for use by the other methods If the EnvBuilder is created with the arg clear True contents of the environment directory will be cleared and then all necessary subdirectories will be recreated The returned context object is a types SimpleNamespace with the following attributes env_dir The location of the virtual environment Used for __VENV_DIR__ in activation scripts see install_scripts env_na
me The name of the virtual environment Used for __VENV_NAME__ in activation scripts see install_scripts prompt The prompt to be used by the activation scripts Used for __VENV_PROMPT__ in activation scripts see install_scripts executable The underlying Python executable used by the virtual environment This takes into account the case where a virtual environment is created from another virtual environment inc_path The include path for the virtual environment lib_path The purelib path for the virtual environment bin_path The script path for the virtual environment bin_name The name of the script path relative to the virtual environment location Used for __VENV_BIN_NAME__ in activation scripts see install_scripts env_exe The name of the Python interpreter in the virtual environment Used for __VENV_PYTHON__ in activation scripts see install_scripts env_exec_cmd The name of the Python interpreter taking into account filesystem redirections This can be used to run Python in the virtual environment Changed in version 3 11 The venv sysconfig installation scheme is used to construct the paths of the created directories Changed in version 3 12 The attribute lib_path was added to the context and the context object was documented create_configuration context Creates the pyvenv cfg configuration file in the environment setup_python context Creates a copy or symlink to the Python executable in the environment On POSIX systems if a specific executable python3 x was used symlinks to python and python3 will be created pointing to that executable unless files with those names already exist setup_scripts context Installs activation scripts appropriate to the platform into the virtual environment upgrade_dependencies context Upgrades the core venv dependency packages currently pip in the environment This is done by shelling out to the pip executable in the environment New in version 3 9 Changed in version 3 12 setuptools is no longer a core venv dependency post_setup context A placeholder method which can be overridden in third party implementations to pre install packages in the virtual environment or perform other post creation steps Changed in version 3 7 2 Windows now uses redirector scripts for python w exe instead of copying the actual binaries In 3 7 2 only setup_python does nothing unless running from a build in the source tree Changed in version 3 7 3 Windows copies the redirector scripts as part of setup_python instead of setup_scripts This was not the case in 3 7 2 When using symlinks the original executables will be linked In addition EnvBuilder provides this utility method that can be called from setup_scripts or post_setup in subclasses to assist in installing custom scripts into the virtual environment install_scripts context path path is the path to a directory that should contain subdirectories common posix nt each containing scripts destined for the bin directory in the environment The contents of common and the directory corresponding to os name are copied after some text replacement of placeholders __VENV_DIR__ is replaced with the absolute path of the environment directory __VENV_NAME__ is replaced with the environment name final path segment of environment directory __VENV_PROMPT__ is replaced with the prompt the environment name surrounded by parentheses and with a following space __VENV_BIN_NAME__ is replaced with the name of the bin directory either bin or Scripts __VENV_PYTHON__ is replaced with the absolute path of the environment s executable The directories are allowed to exist for 
when an existing environment is being upgraded There is also a module level convenience function venv create env_dir system_site_packages False clear False symlinks False with_pip False prompt None upgrade_deps False Create an EnvBuilder with the given keyword arguments and call its create method with the env_dir argument New in version 3 3 Changed in version 3 4 Added the with_pip parameter Changed in version 3 6 Added the prompt parameter Changed in version 3 9 Added the upgrade_deps parameter An example of extending EnvBuilder The f
ollowing script shows how to extend EnvBuilder by implementing a subclass which installs setuptools and pip into a created virtual environment import os import os path from subprocess import Popen PIPE import sys from threading import Thread from urllib parse import urlparse from urllib request import urlretrieve import venv class ExtendedEnvBuilder venv EnvBuilder This builder installs setuptools and pip so that you can pip or easy_install other packages into the created virtual environment param nodist If true setuptools and pip are not installed into the created virtual environment param nopip If true pip is not installed into the created virtual environment param progress If setuptools or pip are installed the progress of the installation can be monitored by passing a progress callable If specified it is called with two arguments a string indicating some progress and a context indicating where the string is coming from The context argument can have one of three values main indicating that it is called from virtualize itself and stdout and stderr which are obtained by reading lines from the output streams of a subprocess which is used to install the app If a callable is not specified default progress information is output to sys stderr def __init__ self args kwargs self nodist kwargs pop nodist False self nopip kwargs pop nopip False self progress kwargs pop progress None self verbose kwargs pop verbose False super __init__ args kwargs def post_setup self context Set up any packages which need to be pre installed into the virtual environment being created param context The information for the virtual environment creation request being processed os environ VIRTUAL_ENV context env_dir if not self nodist self install_setuptools context Can t install pip without setuptools if not self nopip and not self nodist self install_pip context def reader self stream context Read lines from a subprocess output stream and either pass to a progress callable if specified or write progress information to sys stderr progress self progress while True s stream readline if not s break if progress is not None progress s context else if not self verbose sys stderr write else sys stderr write s decode utf 8 sys stderr flush stream close def install_script self context name url _ _ path _ _ _ urlparse url fn os path split path 1 binpath context bin_path distpath os path join binpath fn Download script into the virtual environment s binaries folder urlretrieve url distpath progress self progress if self verbose term n else term if progress is not None progress Installing s s name term main else sys stderr write Installing s s name term sys stderr flush Install in the virtual environment args context env_exe fn p Popen args stdout PIPE stderr PIPE cwd binpath t1 Thread target self reader args p stdout stdout t1 start t2 Thread target self reader args p stderr stderr t2 start p wait t1 join t2 join if progress is not None progress done main else sys stderr write done n Clean up no longer needed os unlink distpath def install_setuptools self context Install setuptools in the virtual environment param context The information for the virtual environment creation request being processed url https bootstrap pypa io ez_setup py self install_script context setuptools url clear up the setuptools archive which gets downloaded pred lambda o o startswith setuptools and o endswith tar gz files filter pred os listdir context bin_path for f in files f os path join context bin_path f os unlink f def install_pip self context 
Install pip in the virtual environment param context The information for the virtual environment creation request being processed url https bootstrap pypa io get pip py self install_script context pip url def main args None import argparse parser argparse ArgumentParser prog __name__ description Creates virtual Python environments in one or more target directories parser add_argument dirs metavar ENV_DIR nargs help A directory in which to create the virtual environment parser add_argument no setuptools default False action store_true dest no
dist help Don t install setuptools or pip in the virtual environment parser add_argument no pip default False action store_true dest nopip help Don t install pip in the virtual environment parser add_argument system site packages default False action store_true dest system_site help Give the virtual environment access to the system site packages dir if os name nt use_symlinks False else use_symlinks True parser add_argument symlinks default use_symlinks action store_true dest symlinks help Try to use symlinks rather than copies when symlinks are not the default for the platform parser add_argument clear default False action store_true dest clear help Delete the contents of the virtual environment directory if it already exists before virtual environment creation parser add_argument upgrade default False action store_true dest upgrade help Upgrade the virtual environment directory to use this version of Python assuming Python has been upgraded in place parser add_argument verbose default False action store_true dest verbose help Display the output from the scripts which install setuptools and pip options parser parse_args args if options upgrade and options clear raise ValueError you cannot supply upgrade and clear together builder ExtendedEnvBuilder system_site_packages options system_site clear options clear symlinks options symlinks upgrade options upgrade nodist options nodist nopip options nopip verbose options verbose for d in options dirs builder create d if __name__ __main__ rc 1 try main rc 0 except Exception as e print Error s e file sys stderr sys exit rc This script is also available for download online
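A much shorter sketch of programmatic use is shown below, relying only on the module level venv.create convenience function and the sys.prefix check described earlier in this section; the target directory name and prompt are illustrative.

import sys
import venv

# Create an environment with pip bootstrapped and a custom prompt.
venv.create("demo-venv", with_pip=True, prompt="demo")

# Detect whether the current interpreter itself is running inside a
# virtual environment by comparing sys.prefix with sys.base_prefix.
if sys.prefix != sys.base_prefix:
    print("running inside a virtual environment")
else:
    print("running from the base Python installation")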
Module Objects PyTypeObject PyModule_Type Part of the Stable ABI This instance of PyTypeObject represents the Python module type This is exposed to Python programs as types ModuleType int PyModule_Check PyObject p Return true if p is a module object or a subtype of a module object This function always succeeds int PyModule_CheckExact PyObject p Return true if p is a module object but not a subtype of PyModule_Type This function always succeeds PyObject PyModule_NewObject PyObject name Return value New reference Part of the Stable ABI since version 3 7 Return a new module object with the __name__ attribute set to name The module s __name__ __doc__ __package__ and __loader__ attributes are filled in all but __name__ are set to None the caller is responsible for providing a __file__ attribute New in version 3 3 Changed in version 3 4 __package__ and __loader__ are set to None PyObject PyModule_New const char name Return value New reference Part of the Stable ABI Similar to PyModule_NewObject but the name is a UTF 8 encoded string instead of a Unicode object PyObject PyModule_GetDict PyObject module Return value Borrowed reference Part of the Stable ABI Return the dictionary object that implements module s namespace this object is the same as the __dict__ attribute of the module object If module is not a module object or a subtype of a module object SystemError is raised and NULL is returned It is recommended extensions use other PyModule_ and PyObject_ functions rather than directly manipulate a module s __dict__ PyObject PyModule_GetNameObject PyObject module Return value New reference Part of the Stable ABI since version 3 7 Return module s __name__ value If the module does not provide one or if it is not a string SystemError is raised and NULL is returned New in version 3 3 const char PyModule_GetName PyObject module Part of the Stable ABI Similar to PyModule_GetNameObject but return the name encoded to utf 8 void PyModule_GetState PyObject module Part of the Stable ABI Return the state of the module that is a pointer to the block of memory allocated at module creation time or NULL See PyModuleDef m_size PyModuleDef PyModule_GetDef PyObject module Part of the Stable ABI Return a pointer to the PyModuleDef struct from which the module was created or NULL if the module wasn t created from a definition PyObject PyModule_GetFilenameObject PyObject module Return value New reference Part of the Stable ABI Return the name of the file from which module was loaded using module s __file__ attribute If this is not defined or if it is not a unicode string raise SystemError and return NULL otherwise return a reference to a Unicode object New in version 3 2 const char PyModule_GetFilename PyObject module Part of the Stable ABI Similar to PyModule_GetFilenameObject but return the filename encoded to utf 8 Deprecated since version 3 2 PyModule_GetFilename raises UnicodeEncodeError on unencodable filenames use PyModule_GetFilenameObject instead Initializing C modules Modules objects are usually created from extension modules shared libraries which export an initialization function or compiled in modules where the initialization function is added using PyImport_AppendInittab See Building C and C Extensions or Extending Embedded Python for details The initialization function can either pass a module definition instance to PyModule_Create and return the resulting module object or request multi phase initialization by returning the definition struct itself type PyModuleDef Part of the Stable ABI including all 
members The module definition struct which holds all information needed to create a module object There is usually only one statically initialized variable of this type for each module PyModuleDef_Base m_base Always initialize this member to PyModuleDef_HEAD_INIT const char m_name Name for the new module const char m_doc Docstring for the module usually a docstring variable created with PyDoc_STRVAR is used Py_ssize_t m_size Module state may be kept in a per module memory area that can be retrieved with PyModule_GetState rather than in
static globals This makes modules safe for use in multiple sub interpreters This memory area is allocated based on m_size on module creation and freed when the module object is deallocated after the m_free function has been called if present Setting m_size to 1 means that the module does not support sub interpreters because it has global state Setting it to a non negative value means that the module can be re initialized and specifies the additional amount of memory it requires for its state Non negative m_size is required for multi phase initialization See PEP 3121 for more details PyMethodDef m_methods A pointer to a table of module level functions described by PyMethodDef values Can be NULL if no functions are present PyModuleDef_Slot m_slots An array of slot definitions for multi phase initialization terminated by a 0 NULL entry When using single phase initialization m_slots must be NULL Changed in version 3 5 Prior to version 3 5 this member was always set to NULL and was defined as inquiry m_reload traverseproc m_traverse A traversal function to call during GC traversal of the module object or NULL if not needed This function is not called if the module state was requested but is not allocated yet This is the case immediately after the module is created and before the module is executed Py_mod_exec function More precisely this function is not called if m_size is greater than 0 and the module state as returned by PyModule_GetState is NULL Changed in version 3 9 No longer called before the module state is allocated inquiry m_clear A clear function to call during GC clearing of the module object or NULL if not needed This function is not called if the module state was requested but is not allocated yet This is the case immediately after the module is created and before the module is executed Py_mod_exec function More precisely this function is not called if m_size is greater than 0 and the module state as returned by PyModule_GetState is NULL Like PyTypeObject tp_clear this function is not always called before a module is deallocated For example when reference counting is enough to determine that an object is no longer used the cyclic garbage collector is not involved and m_free is called directly Changed in version 3 9 No longer called before the module state is allocated freefunc m_free A function to call during deallocation of the module object or NULL if not needed This function is not called if the module state was requested but is not allocated yet This is the case immediately after the module is created and before the module is executed Py_mod_exec function More precisely this function is not called if m_size is greater than 0 and the module state as returned by PyModule_GetState is NULL Changed in version 3 9 No longer called before the module state is allocated Single phase initialization The module initialization function may create and return the module object directly This is referred to as single phase initialization and uses one of the following two module creation functions PyObject PyModule_Create PyModuleDef def Return value New reference Create a new module object given the definition in def This behaves like PyModule_Create2 with module_api_version set to PYTHON_API_VERSION PyObject PyModule_Create2 PyModuleDef def int module_api_version Return value New reference Part of the Stable ABI Create a new module object given the definition in def assuming the API version module_api_version If that version does not match the version of the running interpreter a 
RuntimeWarning is emitted Note Most uses of this function should be using PyModule_Create instead only use this if you are sure you need it Before it is returned from in the initialization function the resulting module object is typically populated using functions like PyModule_AddObjectRef Multi phase initialization An alternate way to specify extensions is to request multi phase initialization Extension modules created this way behave more like Python modules the initialization is split between the creation phase when the module object is created
and the execution phase when it is populated The distinction is similar to the __new__ and __init__ methods of classes Unlike modules created using single phase initialization these modules are not singletons if the sys modules entry is removed and the module is re imported a new module object is created and the old module is subject to normal garbage collection as with Python modules By default multiple modules created from the same definition should be independent changes to one should not affect the others This means that all state should be specific to the module object using e g using PyModule_GetState or its contents such as the module s __dict__ or individual classes created with PyType_FromSpec All modules created using multi phase initialization are expected to support sub interpreters Making sure multiple modules are independent is typically enough to achieve this To request multi phase initialization the initialization function PyInit_modulename returns a PyModuleDef instance with non empty m_slots Before it is returned the PyModuleDef instance must be initialized with the following function PyObject PyModuleDef_Init PyModuleDef def Return value Borrowed reference Part of the Stable ABI since version 3 5 Ensures a module definition is a properly initialized Python object that correctly reports its type and reference count Returns def cast to PyObject or NULL if an error occurred New in version 3 5 The m_slots member of the module definition must point to an array of PyModuleDef_Slot structures type PyModuleDef_Slot int slot A slot ID chosen from the available values explained below void value Value of the slot whose meaning depends on the slot ID New in version 3 5 The m_slots array must be terminated by a slot with id 0 The available slot types are Py_mod_create Specifies a function that is called to create the module object itself The value pointer of this slot must point to a function of the signature PyObject create_module PyObject spec PyModuleDef def The function receives a ModuleSpec instance as defined in PEP 451 and the module definition It should return a new module object or set an error and return NULL This function should be kept minimal In particular it should not call arbitrary Python code as trying to import the same module again may result in an infinite loop Multiple Py_mod_create slots may not be specified in one module definition If Py_mod_create is not specified the import machinery will create a normal module object using PyModule_New The name is taken from spec not the definition to allow extension modules to dynamically adjust to their place in the module hierarchy and be imported under different names through symlinks all while sharing a single module definition There is no requirement for the returned object to be an instance of PyModule_Type Any type can be used as long as it supports setting and getting import related attributes However only PyModule_Type instances may be returned if the PyModuleDef has non NULL m_traverse m_clear m_free non zero m_size or slots other than Py_mod_create Py_mod_exec Specifies a function that is called to execute the module This is equivalent to executing the code of a Python module typically this function adds classes and constants to the module The signature of the function is int exec_module PyObject module If multiple Py_mod_exec slots are specified they are processed in the order they appear in the m_slots array Py_mod_multiple_interpreters Specifies one of the following values 
Py_MOD_MULTIPLE_INTERPRETERS_NOT_SUPPORTED The module does not support being imported in subinterpreters Py_MOD_MULTIPLE_INTERPRETERS_SUPPORTED The module supports being imported in subinterpreters but only when they share the main interpreter s GIL See Isolating Extension Modules Py_MOD_PER_INTERPRETER_GIL_SUPPORTED The module supports being imported in subinterpreters even when they have their own GIL See Isolating Extension Modules This slot determines whether or not importing this module in a subinterpreter will fail Multiple Py_mod_multiple_interpreters slots may not
be specified in one module definition If Py_mod_multiple_interpreters is not specified the import machinery defaults to Py_MOD_MULTIPLE_INTERPRETERS_NOT_SUPPORTED New in version 3 12 See PEP 489 for more details on multi phase initialization Low level module creation functions The following functions are called under the hood when using multi phase initialization They can be used directly for example when creating module objects dynamically Note that both PyModule_FromDefAndSpec and PyModule_ExecDef must be called to fully initialize a module PyObject PyModule_FromDefAndSpec PyModuleDef def PyObject spec Return value New reference Create a new module object given the definition in def and the ModuleSpec spec This behaves like PyModule_FromDefAndSpec2 with module_api_version set to PYTHON_API_VERSION New in version 3 5 PyObject PyModule_FromDefAndSpec2 PyModuleDef def PyObject spec int module_api_version Return value New reference Part of the Stable ABI since version 3 7 Create a new module object given the definition in def and the ModuleSpec spec assuming the API version module_api_version If that version does not match the version of the running interpreter a RuntimeWarning is emitted Note Most uses of this function should be using PyModule_FromDefAndSpec instead only use this if you are sure you need it New in version 3 5 int PyModule_ExecDef PyObject module PyModuleDef def Part of the Stable ABI since version 3 7 Process any execution slots Py_mod_exec given in def New in version 3 5 int PyModule_SetDocString PyObject module const char docstring Part of the Stable ABI since version 3 7 Set the docstring for module to docstring This function is called automatically when creating a module from PyModuleDef using either PyModule_Create or PyModule_FromDefAndSpec New in version 3 5 int PyModule_AddFunctions PyObject module PyMethodDef functions Part of the Stable ABI since version 3 7 Add the functions from the NULL terminated functions array to module Refer to the PyMethodDef documentation for details on individual entries due to the lack of a shared module namespace module level functions implemented in C typically receive the module as their first parameter making them similar to instance methods on Python classes This function is called automatically when creating a module from PyModuleDef using either PyModule_Create or PyModule_FromDefAndSpec New in version 3 5 Support functions The module initialization function if using single phase initialization or a function called from a module execution slot if using multi phase initialization can use the following functions to help initialize the module state int PyModule_AddObjectRef PyObject module const char name PyObject value Part of the Stable ABI since version 3 10 Add an object to module as name This is a convenience function which can be used from the module s initialization function On success return 0 On error raise an exception and return 1 Return NULL if value is NULL It must be called with an exception raised in this case Example usage static int add_spam PyObject module int value PyObject obj PyLong_FromLong value if obj NULL return 1 int res PyModule_AddObjectRef module spam obj Py_DECREF obj return res The example can also be written without checking explicitly if obj is NULL static int add_spam PyObject module int value PyObject obj PyLong_FromLong value int res PyModule_AddObjectRef module spam obj Py_XDECREF obj return res Note that Py_XDECREF should be used instead of Py_DECREF in this case since obj can be NULL New in 
version 3 10 int PyModule_AddObject PyObject module const char name PyObject value Part of the Stable ABI Similar to PyModule_AddObjectRef but steals a reference to value on success if it returns 0 The new PyModule_AddObjectRef function is recommended since it is easy to introduce reference leaks by misusing the PyModule_AddObject function Note Unlike other functions that steal references PyModule_AddObject only releases the reference to value on success This means that its return value must be checked and calling code must Py_DECREF value
manually on error Example usage static int add_spam PyObject module int value PyObject obj PyLong_FromLong value if obj NULL return 1 if PyModule_AddObject module spam obj 0 Py_DECREF obj return 1 PyModule_AddObject stole a reference to obj Py_DECREF obj is not needed here return 0 The example can also be written without checking explicitly if obj is NULL static int add_spam PyObject module int value PyObject obj PyLong_FromLong value if PyModule_AddObject module spam obj 0 Py_XDECREF obj return 1 PyModule_AddObject stole a reference to obj Py_DECREF obj is not needed here return 0 Note that Py_XDECREF should be used instead of Py_DECREF in this case since obj can be NULL int PyModule_AddIntConstant PyObject module const char name long value Part of the Stable ABI Add an integer constant to module as name This convenience function can be used from the module s initialization function Return 1 on error 0 on success int PyModule_AddStringConstant PyObject module const char name const char value Part of the Stable ABI Add a string constant to module as name This convenience function can be used from the module s initialization function The string value must be NULL terminated Return 1 on error 0 on success PyModule_AddIntMacro module macro Add an int constant to module The name and the value are taken from macro For example PyModule_AddIntMacro module AF_INET adds the int constant AF_INET with the value of AF_INET to module Return 1 on error 0 on success PyModule_AddStringMacro module macro Add a string constant to module int PyModule_AddType PyObject module PyTypeObject type Part of the Stable ABI since version 3 10 Add a type object to module The type object is finalized by calling internally PyType_Ready The name of the type object is taken from the last component of tp_name after dot Return 1 on error 0 on success New in version 3 9 Module lookup Single phase initialization creates singleton modules that can be looked up in the context of the current interpreter This allows the module object to be retrieved later with only a reference to the module definition These functions will not work on modules created using multi phase initialization since multiple such modules can be created from a single definition PyObject PyState_FindModule PyModuleDef def Return value Borrowed reference Part of the Stable ABI Returns the module object that was created from def for the current interpreter This method requires that the module object has been attached to the interpreter state with PyState_AddModule beforehand In case the corresponding module object is not found or has not been attached to the interpreter state yet it returns NULL int PyState_AddModule PyObject module PyModuleDef def Part of the Stable ABI since version 3 3 Attaches the module object passed to the function to the interpreter state This allows the module object to be accessible via PyState_FindModule Only effective on modules created using single phase initialization Python calls PyState_AddModule automatically after importing a module so it is unnecessary but harmless to call it from module initialization code An explicit call is needed only if the module s own init code subsequently calls PyState_FindModule The function is mainly intended for implementing alternative import mechanisms either by calling it directly or by referring to its implementation for details of the required state updates The caller must hold the GIL Return 0 on success or 1 on failure New in version 3 3 int PyState_RemoveModule PyModuleDef def Part of the 
Stable ABI since version 3.3. Removes the module object created from def from the interpreter state. Return 0 on success or -1 on failure. The caller must hold the GIL. New in version 3.3.
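As a hedged illustration of the convenience functions described in this section, the sketch below shows a module execution function for a hypothetical extension module named spam. The names spam_exec and SpamType and the constant values are invented for this example; they are not part of any real module.

#define PY_SSIZE_T_CLEAN
#include <Python.h>

/* A minimal static type; only the fields needed for PyModule_AddType. */
static PyTypeObject SpamType = {
    PyVarObject_HEAD_INIT(NULL, 0)
    .tp_name = "spam.Spam",
    .tp_basicsize = sizeof(PyObject),
    .tp_flags = Py_TPFLAGS_DEFAULT,
};

static int
spam_exec(PyObject *module)
{
    /* Each helper returns 0 on success and -1 on error with an exception set. */
    if (PyModule_AddIntConstant(module, "ANSWER", 42) < 0) {
        return -1;
    }
    if (PyModule_AddStringConstant(module, "VERSION", "1.0") < 0) {
        return -1;
    }
    /* PyModule_AddIntMacro(module, AF_INET) would add AF_INET under its own name. */
    if (PyModule_AddType(module, &SpamType) < 0) {
        /* PyType_Ready is called internally by PyModule_AddType. */
        return -1;
    }
    return 0;
}

Such a function would typically be called from the module's initialization function (single-phase initialization) or listed as a Py_mod_exec slot (multi-phase initialization).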
Parsing arguments and building values These functions are useful when creating your own extensions functions and methods Additional information and examples are available in Extending and Embedding the Python Interpreter The first three of these functions described PyArg_ParseTuple PyArg_ParseTupleAndKeywords and PyArg_Parse all use format strings which are used to tell the function about the expected arguments The format strings use the same syntax for each of these functions Parsing arguments A format string consists of zero or more format units A format unit describes one Python object it is usually a single character or a parenthesized sequence of format units With a few exceptions a format unit that is not a parenthesized sequence normally corresponds to a single address argument to these functions In the following description the quoted form is the format unit the entry in round parentheses is the Python object type that matches the format unit and the entry in square brackets is the type of the C variable s whose address should be passed Strings and buffers These formats allow accessing an object as a contiguous chunk of memory You don t have to provide raw storage for the returned unicode or bytes area Unless otherwise stated buffers are not NUL terminated There are three ways strings and buffers can be converted to C Formats such as y and s fill a Py_buffer structure This locks the underlying buffer so that the caller can subsequently use the buffer even inside a Py_BEGIN_ALLOW_THREADS block without the risk of mutable data being resized or destroyed As a result you have to call PyBuffer_Release after you have finished processing the data or in any early abort case The es es et and et formats allocate the result buffer You have to call PyMem_Free after you have finished processing the data or in any early abort case Other formats take a str or a read only bytes like object such as bytes and provide a const char pointer to its buffer In this case the buffer is borrowed it is managed by the corresponding Python object and shares the lifetime of this object You won t have to release any memory yourself To ensure that the underlying buffer may be safely borrowed the object s PyBufferProcs bf_releasebuffer field must be NULL This disallows common mutable objects such as bytearray but also some read only objects such as memoryview of bytes Besides this bf_releasebuffer requirement there is no check to verify whether the input object is immutable e g whether it would honor a request for a writable buffer or whether another thread can mutate the data Note For all variants of formats s y etc the macro PY_SSIZE_T_CLEAN must be defined before including Python h On Python 3 9 and older the type of the length argument is Py_ssize_t if the PY_SSIZE_T_CLEAN macro is defined or int otherwise s str const char Convert a Unicode object to a C pointer to a character string A pointer to an existing string is stored in the character pointer variable whose address you pass The C string is NUL terminated The Python string must not contain embedded null code points if it does a ValueError exception is raised Unicode objects are converted to C strings using utf 8 encoding If this conversion fails a UnicodeError is raised Note This format does not accept bytes like objects If you want to accept filesystem paths and convert them to C character strings it is preferable to use the O format with PyUnicode_FSConverter as converter Changed in version 3 5 Previously TypeError was raised when embedded null code 
points were encountered in the Python string s str or bytes like object Py_buffer This format accepts Unicode objects as well as bytes like objects It fills a Py_buffer structure provided by the caller In this case the resulting C string may contain embedded NUL bytes Unicode objects are converted to C strings using utf 8 encoding s str read only bytes like object const char Py_ssize_t Like s except that it provides a borrowed buffer The result is stored into two C variables the first one a pointer to a C string the second one its length T
he string may contain embedded null bytes Unicode objects are converted to C strings using utf 8 encoding z str or None const char Like s but the Python object may also be None in which case the C pointer is set to NULL z str bytes like object or None Py_buffer Like s but the Python object may also be None in which case the buf member of the Py_buffer structure is set to NULL z str read only bytes like object or None const char Py_ssize_t Like s but the Python object may also be None in which case the C pointer is set to NULL y read only bytes like object const char This format converts a bytes like object to a C pointer to a borrowed character string it does not accept Unicode objects The bytes buffer must not contain embedded null bytes if it does a ValueError exception is raised Changed in version 3 5 Previously TypeError was raised when embedded null bytes were encountered in the bytes buffer y bytes like object Py_buffer This variant on s doesn t accept Unicode objects only bytes like objects This is the recommended way to accept binary data y read only bytes like object const char Py_ssize_t This variant on s doesn t accept Unicode objects only bytes like objects S bytes PyBytesObject Requires that the Python object is a bytes object without attempting any conversion Raises TypeError if the object is not a bytes object The C variable may also be declared as PyObject Y bytearray PyByteArrayObject Requires that the Python object is a bytearray object without attempting any conversion Raises TypeError if the object is not a bytearray object The C variable may also be declared as PyObject U str PyObject Requires that the Python object is a Unicode object without attempting any conversion Raises TypeError if the object is not a Unicode object The C variable may also be declared as PyObject w read write bytes like object Py_buffer This format accepts any object which implements the read write buffer interface It fills a Py_buffer structure provided by the caller The buffer may contain embedded null bytes The caller have to call PyBuffer_Release when it is done with the buffer es str const char encoding char buffer This variant on s is used for encoding Unicode into a character buffer It only works for encoded data without embedded NUL bytes This format requires two arguments The first is only used as input and must be a const char which points to the name of an encoding as a NUL terminated string or NULL in which case utf 8 encoding is used An exception is raised if the named encoding is not known to Python The second argument must be a char the value of the pointer it references will be set to a buffer with the contents of the argument text The text will be encoded in the encoding specified by the first argument PyArg_ParseTuple will allocate a buffer of the needed size copy the encoded data into this buffer and adjust buffer to reference the newly allocated storage The caller is responsible for calling PyMem_Free to free the allocated buffer after use et str bytes or bytearray const char encoding char buffer Same as es except that byte string objects are passed through without recoding them Instead the implementation assumes that the byte string object uses the encoding passed in as parameter es str const char encoding char buffer Py_ssize_t buffer_length This variant on s is used for encoding Unicode into a character buffer Unlike the es format this variant allows input data which contains NUL characters It requires three arguments The first is only used as input and must be a const 
char which points to the name of an encoding as a NUL terminated string or NULL in which case utf 8 encoding is used An exception is raised if the named encoding is not known to Python The second argument must be a char the value of the pointer it references will be set to a buffer with the contents of the argument text The text will be encoded in the encoding specified by the first argument The third argument must be a pointer to an integer the referenced integer will be set to the number of bytes in the output buffer There are two modes
of operation If buffer points a NULL pointer the function will allocate a buffer of the needed size copy the encoded data into this buffer and set buffer to reference the newly allocated storage The caller is responsible for calling PyMem_Free to free the allocated buffer after usage If buffer points to a non NULL pointer an already allocated buffer PyArg_ParseTuple will use this location as the buffer and interpret the initial value of buffer_length as the buffer size It will then copy the encoded data into the buffer and NUL terminate it If the buffer is not large enough a ValueError will be set In both cases buffer_length is set to the length of the encoded data without the trailing NUL byte et str bytes or bytearray const char encoding char buffer Py_ssize_t buffer_length Same as es except that byte string objects are passed through without recoding them Instead the implementation assumes that the byte string object uses the encoding passed in as parameter Changed in version 3 12 u u Z and Z are removed because they used a legacy Py_UNICODE representation Numbers b int unsigned char Convert a nonnegative Python integer to an unsigned tiny int stored in a C unsigned char B int unsigned char Convert a Python integer to a tiny int without overflow checking stored in a C unsigned char h int short int Convert a Python integer to a C short int H int unsigned short int Convert a Python integer to a C unsigned short int without overflow checking i int int Convert a Python integer to a plain C int I int unsigned int Convert a Python integer to a C unsigned int without overflow checking l int long int Convert a Python integer to a C long int k int unsigned long Convert a Python integer to a C unsigned long without overflow checking L int long long Convert a Python integer to a C long long K int unsigned long long Convert a Python integer to a C unsigned long long without overflow checking n int Py_ssize_t Convert a Python integer to a C Py_ssize_t c bytes or bytearray of length 1 char Convert a Python byte represented as a bytes or bytearray object of length 1 to a C char Changed in version 3 3 Allow bytearray objects C str of length 1 int Convert a Python character represented as a str object of length 1 to a C int f float float Convert a Python floating point number to a C float d float double Convert a Python floating point number to a C double D complex Py_complex Convert a Python complex number to a C Py_complex structure Other objects O object PyObject Store a Python object without any conversion in a C object pointer The C program thus receives the actual object that was passed A new strong reference to the object is not created i e its reference count is not increased The pointer stored is not NULL O object typeobject PyObject Store a Python object in a C object pointer This is similar to O but takes two C arguments the first is the address of a Python type object the second is the address of the C variable of type PyObject into which the object pointer is stored If the Python object does not have the required type TypeError is raised O object converter anything Convert a Python object to a C variable through a converter function This takes two arguments the first is a function the second is the address of a C variable of arbitrary type converted to void The converter function in turn is called as follows status converter object address where object is the Python object to be converted and address is the void argument that was passed to the PyArg_Parse function The returned status should 
be 1 for a successful conversion and 0 if the conversion has failed When the conversion fails the converter function should raise an exception and leave the content of address unmodified If the converter returns Py_CLEANUP_SUPPORTED it may get called a second time if the argument parsing eventually fails giving the converter a chance to release any memory that it had already allocated In this second call the object parameter will be NULL address will have the same value as in the original call Changed in version 3 1 Py_CLEANUP_SUPPORTE
D was added p bool int Tests the value passed in for truth a boolean p redicate and converts the result to its equivalent C true false integer value Sets the int to 1 if the expression was true and 0 if it was false This accepts any valid Python value See Truth Value Testing for more information about how Python tests values for truth New in version 3 3 items tuple matching items The object must be a Python sequence whose length is the number of format units in items The C arguments must correspond to the individual format units in items Format units for sequences may be nested It is possible to pass long integers integers whose value exceeds the platform s LONG_MAX however no proper range checking is done the most significant bits are silently truncated when the receiving field is too small to receive the value actually the semantics are inherited from downcasts in C your mileage may vary A few other characters have a meaning in a format string These may not occur inside nested parentheses They are Indicates that the remaining arguments in the Python argument list are optional The C variables corresponding to optional arguments should be initialized to their default value when an optional argument is not specified PyArg_ParseTuple does not touch the contents of the corresponding C variable s PyArg_ParseTupleAndKeywords only Indicates that the remaining arguments in the Python argument list are keyword only Currently all keyword only arguments must also be optional arguments so must always be specified before in the format string New in version 3 3 The list of format units ends here the string after the colon is used as the function name in error messages the associated value of the exception that PyArg_ParseTuple raises The list of format units ends here the string after the semicolon is used as the error message instead of the default error message and mutually exclude each other Note that any Python object references which are provided to the caller are borrowed references do not release them i e do not decrement their reference count Additional arguments passed to these functions must be addresses of variables whose type is determined by the format string these are used to store values from the input tuple There are a few cases as described in the list of format units above where these parameters are used as input values they should match what is specified for the corresponding format unit in that case For the conversion to succeed the arg object must match the format and the format must be exhausted On success the PyArg_Parse functions return true otherwise they return false and raise an appropriate exception When the PyArg_Parse functions fail due to conversion failure in one of the format units the variables at the addresses corresponding to that and the following format units are left untouched API Functions int PyArg_ParseTuple PyObject args const char format Part of the Stable ABI Parse the parameters of a function that takes only positional parameters into local variables Returns true on success on failure it returns false and raises the appropriate exception int PyArg_VaParse PyObject args const char format va_list vargs Part of the Stable ABI Identical to PyArg_ParseTuple except that it accepts a va_list rather than a variable number of arguments int PyArg_ParseTupleAndKeywords PyObject args PyObject kw const char format char keywords Part of the Stable ABI Parse the parameters of a function that takes both positional and keyword parameters into local variables The keywords 
argument is a NULL terminated array of keyword parameter names Empty names denote positional only parameters Returns true on success on failure it returns false and raises the appropriate exception Changed in version 3 6 Added support for positional only parameters int PyArg_VaParseTupleAndKeywords PyObject args PyObject kw const char format char keywords va_list vargs Part of the Stable ABI Identical to PyArg_ParseTupleAndKeywords except that it accepts a va_list rather than a variable number of arguments int PyArg_ValidateKeywordArgumen
ts PyObject Part of the Stable ABI Ensure that the keys in the keywords argument dictionary are strings This is only needed if PyArg_ParseTupleAndKeywords is not used since the latter already does this check New in version 3 2 int PyArg_Parse PyObject args const char format Part of the Stable ABI Function used to deconstruct the argument lists of old style functions these are functions which use the METH_OLDARGS parameter parsing method which has been removed in Python 3 This is not recommended for use in parameter parsing in new code and most code in the standard interpreter has been modified to no longer use this for that purpose It does remain a convenient way to decompose other tuples however and may continue to be used for that purpose int PyArg_UnpackTuple PyObject args const char name Py_ssize_t min Py_ssize_t max Part of the Stable ABI A simpler form of parameter retrieval which does not use a format string to specify the types of the arguments Functions which use this method to retrieve their parameters should be declared as METH_VARARGS in function or method tables The tuple containing the actual parameters should be passed as args it must actually be a tuple The length of the tuple must be at least min and no more than max min and max may be equal Additional arguments must be passed to the function each of which should be a pointer to a PyObject variable these will be filled in with the values from args they will contain borrowed references The variables which correspond to optional parameters not given by args will not be filled in these should be initialized by the caller This function returns true on success and false if args is not a tuple or contains the wrong number of elements an exception will be set if there was a failure This is an example of the use of this function taken from the sources for the _weakref helper module for weak references static PyObject weakref_ref PyObject self PyObject args PyObject object PyObject callback NULL PyObject result NULL if PyArg_UnpackTuple args ref 1 2 object callback result PyWeakref_NewRef object callback return result The call to PyArg_UnpackTuple in this example is entirely equivalent to this call to PyArg_ParseTuple PyArg_ParseTuple args O O ref object callback Building values PyObject Py_BuildValue const char format Return value New reference Part of the Stable ABI Create a new value based on a format string similar to those accepted by the PyArg_Parse family of functions and a sequence of values Returns the value or NULL in the case of an error an exception will be raised if NULL is returned Py_BuildValue does not always build a tuple It builds a tuple only if its format string contains two or more format units If the format string is empty it returns None if it contains exactly one format unit it returns whatever object is described by that format unit To force it to return a tuple of size 0 or one parenthesize the format string When memory buffers are passed as parameters to supply data to build objects as for the s and s formats the required data is copied Buffers provided by the caller are never referenced by the objects created by Py_BuildValue In other words if your code invokes malloc and passes the allocated memory to Py_BuildValue your code is responsible for calling free for that memory once Py_BuildValue returns In the following description the quoted form is the format unit the entry in round parentheses is the Python object type that the format unit will return and the entry in square brackets is the type of the C 
value s to be passed The characters space tab colon and comma are ignored in format strings but not within format units such as s This can be used to make long format strings a tad more readable s str or None const char Convert a null terminated C string to a Python str object using utf 8 encoding If the C string pointer is NULL None is used s str or None const char Py_ssize_t Convert a C string and its length to a Python str object using utf 8 encoding If the C string pointer is NULL the length is ignored and None is returned y bytes c
onst char This converts a C string to a Python bytes object If the C string pointer is NULL None is returned y bytes const char Py_ssize_t This converts a C string and its lengths to a Python object If the C string pointer is NULL None is returned z str or None const char Same as s z str or None const char Py_ssize_t Same as s u str const wchar_t Convert a null terminated wchar_t buffer of Unicode UTF 16 or UCS 4 data to a Python Unicode object If the Unicode buffer pointer is NULL None is returned u str const wchar_t Py_ssize_t Convert a Unicode UTF 16 or UCS 4 data buffer and its length to a Python Unicode object If the Unicode buffer pointer is NULL the length is ignored and None is returned U str or None const char Same as s U str or None const char Py_ssize_t Same as s i int int Convert a plain C int to a Python integer object b int char Convert a plain C char to a Python integer object h int short int Convert a plain C short int to a Python integer object l int long int Convert a C long int to a Python integer object B int unsigned char Convert a C unsigned char to a Python integer object H int unsigned short int Convert a C unsigned short int to a Python integer object I int unsigned int Convert a C unsigned int to a Python integer object k int unsigned long Convert a C unsigned long to a Python integer object L int long long Convert a C long long to a Python integer object K int unsigned long long Convert a C unsigned long long to a Python integer object n int Py_ssize_t Convert a C Py_ssize_t to a Python integer c bytes of length 1 char Convert a C int representing a byte to a Python bytes object of length 1 C str of length 1 int Convert a C int representing a character to Python str object of length 1 d float double Convert a C double to a Python floating point number f float float Convert a C float to a Python floating point number D complex Py_complex Convert a C Py_complex structure to a Python complex number O object PyObject Pass a Python object untouched but create a new strong reference to it i e its reference count is incremented by one If the object passed in is a NULL pointer it is assumed that this was caused because the call producing the argument found an error and set an exception Therefore Py_BuildValue will return NULL but won t raise an exception If no exception has been raised yet SystemError is set S object PyObject Same as O N object PyObject Same as O except it doesn t create a new strong reference Useful when the object is created by a call to an object constructor in the argument list O object converter anything Convert anything to a Python object through a converter function The function is called with anything which should be compatible with void as its argument and should return a new Python object or NULL if an error occurred items tuple matching items Convert a sequence of C values to a Python tuple with the same number of items items list matching items Convert a sequence of C values to a Python list with the same number of items items dict matching items Convert a sequence of C values to a Python dictionary Each pair of consecutive C values adds one item to the dictionary serving as key and value respectively If there is an error in the format string the SystemError exception is set and NULL returned PyObject Py_VaBuildValue const char format va_list vargs Return value New reference Part of the Stable ABI Identical to Py_BuildValue except that it accepts a va_list rather than a variable number of arguments
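The two halves of this chapter fit together naturally: a C function parses its Python arguments with one format string and builds its return value with another. The following sketch is a hypothetical METH_VARARGS | METH_KEYWORDS function; the name repeat_stats and its parameters are invented for illustration, and it assumes PY_SSIZE_T_CLEAN is defined so that s# produces a Py_ssize_t length.

#define PY_SSIZE_T_CLEAN
#include <Python.h>

static PyObject *
repeat_stats(PyObject *self, PyObject *args, PyObject *kwargs)
{
    static char *keywords[] = {"text", "count", NULL};
    const char *text;
    Py_ssize_t length;
    int count = 1;               /* default used when the optional argument is omitted */

    /* "s#" -> const char* plus Py_ssize_t length; "|" starts the optional
       arguments; "i" -> C int; ":repeat_stats" names the function in errors. */
    if (!PyArg_ParseTupleAndKeywords(args, kwargs, "s#|i:repeat_stats",
                                     keywords, &text, &length, &count)) {
        return NULL;             /* exception already set by the parser */
    }
    if (count < 0) {
        PyErr_SetString(PyExc_ValueError, "count must be non-negative");
        return NULL;
    }

    /* Two format units, so Py_BuildValue returns a 2-tuple:
       (UTF-8 length of text, length * count). */
    return Py_BuildValue("(nn)", length, length * (Py_ssize_t)count);
}

From Python such a function could be called as repeat_stats("abc", count=3) and would return (3, 9).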
1 Introduction This reference manual describes the Python programming language It is not intended as a tutorial While I am trying to be as precise as possible I chose to use English rather than formal specifications for everything except syntax and lexical analysis This should make the document more understandable to the average reader but will leave room for ambiguities Consequently if you were coming from Mars and tried to re implement Python from this document alone you might have to guess things and in fact you would probably end up implementing quite a different language On the other hand if you are using Python and wonder what the precise rules about a particular area of the language are you should definitely be able to find them here If you would like to see a more formal definition of the language maybe you could volunteer your time or invent a cloning machine It is dangerous to add too many implementation details to a language reference document the implementation may change and other implementations of the same language may work differently On the other hand CPython is the one Python implementation in widespread use although alternate implementations continue to gain support and its particular quirks are sometimes worth being mentioned especially where the implementation imposes additional limitations Therefore you ll find short implementation notes sprinkled throughout the text Every Python implementation comes with a number of built in and standard modules These are documented in The Python Standard Library A few built in modules are mentioned when they interact in a significant way with the language definition 1 1 Alternate Implementations Though there is one Python implementation which is by far the most popular there are some alternate implementations which are of particular interest to different audiences Known implementations include CPython This is the original and most maintained implementation of Python written in C New language features generally appear here first Jython Python implemented in Java This implementation can be used as a scripting language for Java applications or can be used to create applications using the Java class libraries It is also often used to create tests for Java libraries More information can be found at the Jython website Python for NET This implementation actually uses the CPython implementation but is a managed NET application and makes NET libraries available It was created by Brian Lloyd For more information see the Python for NET home page IronPython An alternate Python for NET Unlike Python NET this is a complete Python implementation that generates IL and compiles Python code directly to NET assemblies It was created by Jim Hugunin the original creator of Jython For more information see the IronPython website PyPy An implementation of Python written completely in Python It supports several advanced features not found in other implementations like stackless support and a Just in Time compiler One of the goals of the project is to encourage experimentation with the language itself by making it easier to modify the interpreter since it is written in Python Additional information is available on the PyPy project s home page Each of these implementations varies in some way from the language as documented in this manual or introduces specific information beyond what s covered in the standard Python documentation Please refer to the implementation specific documentation to determine what else you need to know about the specific implementation 
you're using. 1.2 Notation The descriptions of lexical analysis and syntax use a modified Backus-Naur form (BNF) grammar notation. This uses the following style of definition:

name      ::=  lc_letter (lc_letter | "_")*
lc_letter ::=  "a"..."z"

The first line says that a name is an lc_letter followed by a sequence of zero or more lc_letters and underscores. An lc_letter in turn is any of the single characters 'a' through 'z'. (This rule is actually adhered to for the names defined in lexical and grammar rules in this document.) Each rule begins with a name (which is the name
defined by the rule) and ::=. A vertical bar (|) is used to separate alternatives; it is the least binding operator in this notation. A star (*) means zero or more repetitions of the preceding item; likewise, a plus (+) means one or more repetitions, and a phrase enclosed in square brackets ([ ]) means zero or one occurrences (in other words, the enclosed phrase is optional). The * and + operators bind as tightly as possible; parentheses are used for grouping. Literal strings are enclosed in quotes. White space is only meaningful to separate tokens. Rules are normally contained on a single line; rules with many alternatives may be formatted alternatively, with each line after the first beginning with a vertical bar. In lexical definitions (as the example above), two more conventions are used: Two literal characters separated by three dots mean a choice of any single character in the given (inclusive) range of ASCII characters. A phrase between angular brackets (<...>) gives an informal description of the symbol defined; e.g., this could be used to describe the notion of 'control character' if needed. Even though the notation used is almost the same, there is a big difference between the meaning of lexical and syntactic definitions: a lexical definition operates on the individual characters of the input source, while a syntax definition operates on the stream of tokens generated by the lexical analysis. All uses of BNF in the next chapter (Lexical Analysis) are lexical definitions; uses in subsequent chapters are syntactic definitions.
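To make the conventions concrete, here is a small made-up lexical rule (not part of Python's actual grammar) that uses a character range, alternation, an optional phrase, and repetition:

digit   ::=  "0"..."9"
sign    ::=  "+" | "-"
integer ::=  [sign] digit+

Read it as: an integer is an optional sign followed by one or more digits, where a digit is any single character from "0" through "9".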
String conversion and formatting Functions for number conversion and formatted string output int PyOS_snprintf char str size_t size const char format Part of the Stable ABI Output not more than size bytes to str according to the format string format and the extra arguments See the Unix man page snprintf 3 int PyOS_vsnprintf char str size_t size const char format va_list va Part of the Stable ABI Output not more than size bytes to str according to the format string format and the variable argument list va Unix man page vsnprintf 3 PyOS_snprintf and PyOS_vsnprintf wrap the Standard C library functions snprintf and vsnprintf Their purpose is to guarantee consistent behavior in corner cases which the Standard C functions do not The wrappers ensure that str size 1 is always 0 upon return They never write more than size bytes including the trailing 0 into str Both functions require that str NULL size 0 format NULL and size INT_MAX Note that this means there is no equivalent to the C99 n snprintf NULL 0 which would determine the necessary buffer size The return value rv for these functions should be interpreted as follows When 0 rv size the output conversion was successful and rv characters were written to str excluding the trailing 0 byte at str rv When rv size the output conversion was truncated and a buffer with rv 1 bytes would have been needed to succeed str size 1 is 0 in this case When rv 0 something bad happened str size 1 is 0 in this case too but the rest of str is undefined The exact cause of the error depends on the underlying platform The following functions provide locale independent string to number conversions unsigned long PyOS_strtoul const char str char ptr int base Part of the Stable ABI Convert the initial part of the string in str to an unsigned long value according to the given base which must be between 2 and 36 inclusive or be the special value 0 Leading white space and case of characters are ignored If base is zero it looks for a leading 0b 0o or 0x to tell which base If these are absent it defaults to 10 Base must be 0 or between 2 and 36 inclusive If ptr is non NULL it will contain a pointer to the end of the scan If the converted value falls out of range of corresponding return type range error occurs errno is set to ERANGE and ULONG_MAX is returned If no conversion can be performed 0 is returned See also the Unix man page strtoul 3 New in version 3 2 long PyOS_strtol const char str char ptr int base Part of the Stable ABI Convert the initial part of the string in str to an long value according to the given base which must be between 2 and 36 inclusive or be the special value 0 Same as PyOS_strtoul but return a long value instead and LONG_MAX on overflows See also the Unix man page strtol 3 New in version 3 2 double PyOS_string_to_double const char s char endptr PyObject overflow_exception Part of the Stable ABI Convert a string s to a double raising a Python exception on failure The set of accepted strings corresponds to the set of strings accepted by Python s float constructor except that s must not have leading or trailing whitespace The conversion is independent of the current locale If endptr is NULL convert the whole string Raise ValueError and return 1 0 if the string is not a valid representation of a floating point number If endptr is not NULL convert as much of the string as possible and set endptr to point to the first unconverted character If no initial segment of the string is the valid representation of a floating point number set endptr to point to the 
beginning of the string raise ValueError and return 1 0 If s represents a value that is too large to store in a float for example 1e500 is such a string on many platforms then if overflow_exception is NULL return Py_HUGE_VAL with an appropriate sign and don t set any exception Otherwise overflow_exception must point to a Python exception object raise that exception and return 1 0 In both cases set endptr to point to the first character after the converted value If any other error occurs during the conversion for example an out of memory
error set the appropriate Python exception and return 1 0 New in version 3 1 char PyOS_double_to_string double val char format_code int precision int flags int ptype Part of the Stable ABI Convert a double val to a string using supplied format_code precision and flags format_code must be one of e E f F g G or r For r the supplied precision must be 0 and is ignored The r format code specifies the standard repr format flags can be zero or more of the values Py_DTSF_SIGN Py_DTSF_ADD_DOT_0 or Py_DTSF_ALT or ed together Py_DTSF_SIGN means to always precede the returned string with a sign character even if val is non negative Py_DTSF_ADD_DOT_0 means to ensure that the returned string will not look like an integer Py_DTSF_ALT means to apply alternate formatting rules See the documentation for the PyOS_snprintf specifier for details If ptype is non NULL then the value it points to will be set to one of Py_DTST_FINITE Py_DTST_INFINITE or Py_DTST_NAN signifying that val is a finite number an infinite number or not a number respectively The return value is a pointer to buffer with the converted string or NULL if the conversion failed The caller is responsible for freeing the returned string by calling PyMem_Free New in version 3 1 int PyOS_stricmp const char s1 const char s2 Case insensitive comparison of strings The function works almost identically to strcmp except that it ignores the case int PyOS_strnicmp const char s1 const char s2 Py_ssize_t size Case insensitive comparison of strings The function works almost identically to strncmp except that it ignores the case
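The following sketch strings several of these helpers together. It assumes the interpreter has already been initialized; the buffer size, the function name demo_number_formatting, and the literal input strings are arbitrary choices for illustration.

#define PY_SSIZE_T_CLEAN
#include <Python.h>
#include <stdio.h>

static void
demo_number_formatting(void)
{
    char buf[64];

    /* PyOS_snprintf never writes more than sizeof(buf) bytes and always
       NUL-terminates; rv >= sizeof(buf) signals truncation. */
    int rv = PyOS_snprintf(buf, sizeof(buf), "value=%d", 123);
    if (rv < 0 || (size_t)rv >= sizeof(buf)) {
        return;
    }

    /* Base 0: a leading 0x, 0o or 0b selects the base, otherwise decimal. */
    char *end = NULL;
    unsigned long flags = PyOS_strtoul("0x1f", &end, 0);

    /* Locale-independent parse; returns -1.0 and sets ValueError on failure. */
    double d = PyOS_string_to_double("2.5e-3", NULL, NULL);
    if (d == -1.0 && PyErr_Occurred()) {
        PyErr_Clear();
        return;
    }

    /* 'r' is the repr format; precision must be 0 and is ignored.  The
       result is owned by the caller and must be released with PyMem_Free. */
    char *repr = PyOS_double_to_string(d, 'r', 0, 0, NULL);
    if (repr != NULL) {
        printf("%s %s %lu\n", buf, repr, flags);
        PyMem_Free(repr);
    }
}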
Descriptor Guide Author Raymond Hettinger Contact python at rcn dot com Contents Descriptor Guide Primer Simple example A descriptor that returns a constant Dynamic lookups Managed attributes Customized names Closing thoughts Complete Practical Example Validator class Custom validators Practical application Technical Tutorial Abstract Definition and introduction Descriptor protocol Overview of descriptor invocation Invocation from an instance Invocation from a class Invocation from super Summary of invocation logic Automatic name notification ORM example Pure Python Equivalents Properties Functions and methods Kinds of methods Static methods Class methods Member objects and __slots__ Descriptors let objects customize attribute lookup storage and deletion This guide has four major sections 1 The primer gives a basic overview moving gently from simple examples adding one feature at a time Start here if you re new to descriptors 2 The second section shows a complete practical descriptor example If you already know the basics start there 3 The third section provides a more technical tutorial that goes into the detailed mechanics of how descriptors work Most people don t need this level of detail 4 The last section has pure Python equivalents for built in descriptors that are written in C Read this if you re curious about how functions turn into bound methods or about the implementation of common tools like classmethod staticmethod property and __slots__ Primer In this primer we start with the most basic possible example and then we ll add new capabilities one by one Simple example A descriptor that returns a constant The Ten class is a descriptor whose __get__ method always returns the constant 10 class Ten def __get__ self obj objtype None return 10 To use the descriptor it must be stored as a class variable in another class class A x 5 Regular class attribute y Ten Descriptor instance An interactive session shows the difference between normal attribute lookup and descriptor lookup a A Make an instance of class A a x Normal attribute lookup 5 a y Descriptor lookup 10 In the a x attribute lookup the dot operator finds x 5 in the class dictionary In the a y lookup the dot operator finds a descriptor instance recognized by its __get__ method Calling that method returns 10 Note that the value 10 is not stored in either the class dictionary or the instance dictionary Instead the value 10 is computed on demand This example shows how a simple descriptor works but it isn t very useful For retrieving constants normal attribute lookup would be better In the next section we ll create something more useful a dynamic lookup Dynamic lookups Interesting descriptors typically run computations instead of returning constants import os class DirectorySize def __get__ self obj objtype None return len os listdir obj dirname class Directory size DirectorySize Descriptor instance def __init__ self dirname self dirname dirname Regular instance attribute An interactive session shows that the lookup is dynamic it computes different updated answers each time s Directory songs g Directory games s size The songs directory has twenty files 20 g size The games directory has three files 3 os remove games chess Delete a game g size File count is automatically updated 2 Besides showing how descriptors can run computations this example also reveals the purpose of the parameters to __get__ The self parameter is size an instance of DirectorySize The obj parameter is either g or s an instance of Directory It is the obj parameter 
that lets the __get__ method learn the target directory The objtype parameter is the class Directory Managed attributes A popular use for descriptors is managing access to instance data The descriptor is assigned to a public attribute in the class dictionary while the actual data is stored as a private attribute in the instance dictionary The descriptor s __get__ and __set__ methods are triggered when the public attribute is accessed In the following example age is the public attribute and _age is the private attribute When the public a
ttribute is accessed the descriptor logs the lookup or update import logging logging basicConfig level logging INFO class LoggedAgeAccess def __get__ self obj objtype None value obj _age logging info Accessing r giving r age value return value def __set__ self obj value logging info Updating r to r age value obj _age value class Person age LoggedAgeAccess Descriptor instance def __init__ self name age self name name Regular instance attribute self age age Calls __set__ def birthday self self age 1 Calls both __get__ and __set__ An interactive session shows that all access to the managed attribute age is logged but that the regular attribute name is not logged mary Person Mary M 30 The initial age update is logged INFO root Updating age to 30 dave Person David D 40 INFO root Updating age to 40 vars mary The actual data is in a private attribute name Mary M _age 30 vars dave name David D _age 40 mary age Access the data and log the lookup INFO root Accessing age giving 30 30 mary birthday Updates are logged as well INFO root Accessing age giving 30 INFO root Updating age to 31 dave name Regular attribute lookup isn t logged David D dave age Only the managed attribute is logged INFO root Accessing age giving 40 40 One major issue with this example is that the private name _age is hardwired in the LoggedAgeAccess class That means that each instance can only have one logged attribute and that its name is unchangeable In the next example we ll fix that problem Customized names When a class uses descriptors it can inform each descriptor about which variable name was used In this example the Person class has two descriptor instances name and age When the Person class is defined it makes a callback to __set_name__ in LoggedAccess so that the field names can be recorded giving each descriptor its own public_name and private_name import logging logging basicConfig level logging INFO class LoggedAccess def __set_name__ self owner name self public_name name self private_name _ name def __get__ self obj objtype None value getattr obj self private_name logging info Accessing r giving r self public_name value return value def __set__ self obj value logging info Updating r to r self public_name value setattr obj self private_name value class Person name LoggedAccess First descriptor instance age LoggedAccess Second descriptor instance def __init__ self name age self name name Calls the first descriptor self age age Calls the second descriptor def birthday self self age 1 An interactive session shows that the Person class has called __set_name__ so that the field names would be recorded Here we call vars to look up the descriptor without triggering it vars vars Person name public_name name private_name _name vars vars Person age public_name age private_name _age The new class now logs access to both name and age pete Person Peter P 10 INFO root Updating name to Peter P INFO root Updating age to 10 kate Person Catherine C 20 INFO root Updating name to Catherine C INFO root Updating age to 20 The two Person instances contain only the private names vars pete _name Peter P _age 10 vars kate _name Catherine C _age 20 Closing thoughts A descriptor is what we call any object that defines __get__ __set__ or __delete__ Optionally descriptors can have a __set_name__ method This is only used in cases where a descriptor needs to know either the class where it was created or the name of class variable it was assigned to This method if present is called even if the class is not a descriptor Descriptors get invoked by the 
dot operator during attribute lookup If a descriptor is accessed indirectly with vars some_class descriptor_name the descriptor instance is returned without invoking it Descriptors only work when used as class variables When put in instances they have no effect The main motivation for descriptors is to provide a hook allowing objects stored in class variables to control what happens during attribute lookup Traditionally the calling class controls what happens during lookup Descriptors invert that relationship and allow the data being l
ooked up to have a say in the matter Descriptors are used throughout the language It is how functions turn into bound methods Common tools like classmethod staticmethod property and functools cached_property are all implemented as descriptors Complete Practical Example In this example we create a practical and powerful tool for locating notoriously hard to find data corruption bugs Validator class A validator is a descriptor for managed attribute access Prior to storing any data it verifies that the new value meets various type and range restrictions If those restrictions aren t met it raises an exception to prevent data corruption at its source This Validator class is both an abstract base class and a managed attribute descriptor from abc import ABC abstractmethod class Validator ABC def __set_name__ self owner name self private_name _ name def __get__ self obj objtype None return getattr obj self private_name def __set__ self obj value self validate value setattr obj self private_name value abstractmethod def validate self value pass Custom validators need to inherit from Validator and must supply a validate method to test various restrictions as needed Custom validators Here are three practical data validation utilities 1 OneOf verifies that a value is one of a restricted set of options 2 Number verifies that a value is either an int or float Optionally it verifies that a value is between a given minimum or maximum 3 String verifies that a value is a str Optionally it validates a given minimum or maximum length It can validate a user defined predicate as well class OneOf Validator def __init__ self options self options set options def validate self value if value not in self options raise ValueError f Expected value r to be one of self options r class Number Validator def __init__ self minvalue None maxvalue None self minvalue minvalue self maxvalue maxvalue def validate self value if not isinstance value int float raise TypeError f Expected value r to be an int or float if self minvalue is not None and value self minvalue raise ValueError f Expected value r to be at least self minvalue r if self maxvalue is not None and value self maxvalue raise ValueError f Expected value r to be no more than self maxvalue r class String Validator def __init__ self minsize None maxsize None predicate None self minsize minsize self maxsize maxsize self predicate predicate def validate self value if not isinstance value str raise TypeError f Expected value r to be an str if self minsize is not None and len value self minsize raise ValueError f Expected value r to be no smaller than self minsize r if self maxsize is not None and len value self maxsize raise ValueError f Expected value r to be no bigger than self maxsize r if self predicate is not None and not self predicate value raise ValueError f Expected self predicate to be true for value r Practical application Here s how the data validators can be used in a real class class Component name String minsize 3 maxsize 10 predicate str isupper kind OneOf wood metal plastic quantity Number minvalue 0 def __init__ self name kind quantity self name name self kind kind self quantity quantity The descriptors prevent invalid instances from being created Component Widget metal 5 Blocked Widget is not all uppercase Traceback most recent call last ValueError Expected method isupper of str objects to be true for Widget Component WIDGET metle 5 Blocked metle is misspelled Traceback most recent call last ValueError Expected metle to be one of metal plastic wood 
Component WIDGET metal 5 Blocked 5 is negative Traceback most recent call last ValueError Expected 5 to be at least 0 Component WIDGET metal V Blocked V isn t a number Traceback most recent call last TypeError Expected V to be an int or float c Component WIDGET metal 5 Allowed The inputs are valid Technical Tutorial What follows is a more technical tutorial for the mechanics and details of how descriptors work Abstract Defines descriptors summarizes the protocol and shows how descriptors are called Provides an example showing how object rel
ational mappings work Learning about descriptors not only provides access to a larger toolset it creates a deeper understanding of how Python works Definition and introduction In general a descriptor is an attribute value that has one of the methods in the descriptor protocol Those methods are __get__ __set__ and __delete__ If any of those methods are defined for an attribute it is said to be a descriptor The default behavior for attribute access is to get set or delete the attribute from an object s dictionary For instance a x has a lookup chain starting with a __dict__ x then type a __dict__ x and continuing through the method resolution order of type a If the looked up value is an object defining one of the descriptor methods then Python may override the default behavior and invoke the descriptor method instead Where this occurs in the precedence chain depends on which descriptor methods were defined Descriptors are a powerful general purpose protocol They are the mechanism behind properties methods static methods class methods and super They are used throughout Python itself Descriptors simplify the underlying C code and offer a flexible set of new tools for everyday Python programs Descriptor protocol descr __get__ self obj type None descr __set__ self obj value descr __delete__ self obj That is all there is to it Define any of these methods and an object is considered a descriptor and can override default behavior upon being looked up as an attribute If an object defines __set__ or __delete__ it is considered a data descriptor Descriptors that only define __get__ are called non data descriptors they are often used for methods but other uses are possible Data and non data descriptors differ in how overrides are calculated with respect to entries in an instance s dictionary If an instance s dictionary has an entry with the same name as a data descriptor the data descriptor takes precedence If an instance s dictionary has an entry with the same name as a non data descriptor the dictionary entry takes precedence To make a read only data descriptor define both __get__ and __set__ with the __set__ raising an AttributeError when called Defining the __set__ method with an exception raising placeholder is enough to make it a data descriptor Overview of descriptor invocation A descriptor can be called directly with desc __get__ obj or desc __get__ None cls But it is more common for a descriptor to be invoked automatically from attribute access The expression obj x looks up the attribute x in the chain of namespaces for obj If the search finds a descriptor outside of the instance __dict__ its __get__ method is invoked according to the precedence rules listed below The details of invocation depend on whether obj is an object class or instance of super Invocation from an instance Instance lookup scans through a chain of namespaces giving data descriptors the highest priority followed by instance variables then non data descriptors then class variables and lastly __getattr__ if it is provided If a descriptor is found for a x then it is invoked with desc __get__ a type a The logic for a dotted lookup is in object __getattribute__ Here is a pure Python equivalent def find_name_in_mro cls name default Emulate _PyType_Lookup in Objects typeobject c for base in cls __mro__ if name in vars base return vars base name return default def object_getattribute obj name Emulate PyObject_GenericGetAttr in Objects object c null object objtype type obj cls_var find_name_in_mro objtype name null descr_get getattr 
type cls_var __get__ null if descr_get is not null if hasattr type cls_var __set__ or hasattr type cls_var __delete__ return descr_get cls_var obj objtype data descriptor if hasattr obj __dict__ and name in vars obj return vars obj name instance variable if descr_get is not null return descr_get cls_var obj objtype non data descriptor if cls_var is not null return cls_var class variable raise AttributeError name Note there is no __getattr__ hook in the __getattribute__ code That is why calling __getattribute__ directly or with super __g
etattribute__ will bypass __getattr__ entirely Instead it is the dot operator and the getattr function that are responsible for invoking __getattr__ whenever __getattribute__ raises an AttributeError Their logic is encapsulated in a helper function def getattr_hook obj name Emulate slot_tp_getattr_hook in Objects typeobject c try return obj __getattribute__ name except AttributeError if not hasattr type obj __getattr__ raise return type obj __getattr__ obj name __getattr__ Invocation from a class The logic for a dotted lookup such as A x is in type __getattribute__ The steps are similar to those for object __getattribute__ but the instance dictionary lookup is replaced by a search through the class s method resolution order If a descriptor is found it is invoked with desc __get__ None A The full C implementation can be found in type_getattro and _PyType_Lookup in Objects typeobject c Invocation from super The logic for super s dotted lookup is in the __getattribute__ method for object returned by super A dotted lookup such as super A obj m searches obj __class__ __mro__ for the base class B immediately following A and then returns B __dict__ m __get__ obj A If not a descriptor m is returned unchanged The full C implementation can be found in super_getattro in Objects typeobject c A pure Python equivalent can be found in Guido s Tutorial Summary of invocation logic The mechanism for descriptors is embedded in the __getattribute__ methods for object type and super The important points to remember are Descriptors are invoked by the __getattribute__ method Classes inherit this machinery from object type or super Overriding __getattribute__ prevents automatic descriptor calls because all the descriptor logic is in that method object __getattribute__ and type __getattribute__ make different calls to __get__ The first includes the instance and may include the class The second puts in None for the instance and always includes the class Data descriptors always override instance dictionaries Non data descriptors may be overridden by instance dictionaries Automatic name notification Sometimes it is desirable for a descriptor to know what class variable name it was assigned to When a new class is created the type metaclass scans the dictionary of the new class If any of the entries are descriptors and if they define __set_name__ that method is called with two arguments The owner is the class where the descriptor is used and the name is the class variable the descriptor was assigned to The implementation details are in type_new and set_names in Objects typeobject c Since the update logic is in type __new__ notifications only take place at the time of class creation If descriptors are added to the class afterwards __set_name__ will need to be called manually ORM example The following code is a simplified skeleton showing how data descriptors could be used to implement an object relational mapping The essential idea is that the data is stored in an external database The Python instances only hold keys to the database s tables Descriptors take care of lookups or updates class Field def __set_name__ self owner name self fetch f SELECT name FROM owner table WHERE owner key self store f UPDATE owner table SET name WHERE owner key def __get__ self obj objtype None return conn execute self fetch obj key fetchone 0 def __set__ self obj value conn execute self store value obj key conn commit We can use the Field class to define models that describe the schema for each table in a database class Movie table Movies 
Table name key title Primary key director Field year Field def __init__ self key self key key class Song table Music key title artist Field year Field genre Field def __init__ self key self key key To use the models first connect to the database import sqlite3 conn sqlite3 connect entertainment db An interactive session shows how data is retrieved from the database and how it can be updated Movie Star Wars director George Lucas jaws Movie Jaws f Released in jaws year by jaws director Released in 1975 by Steven Spielberg Song Country Ro
ads artist John Denver Movie Star Wars director J J Abrams Movie Star Wars director J J Abrams Pure Python Equivalents The descriptor protocol is simple and offers exciting possibilities Several use cases are so common that they have been prepackaged into built in tools Properties bound methods static methods class methods and __slots__ are all based on the descriptor protocol Properties Calling property is a succinct way of building a data descriptor that triggers a function call upon access to an attribute Its signature is property fget None fset None fdel None doc None property The documentation shows a typical use to define a managed attribute x class C def getx self return self __x def setx self value self __x value def delx self del self __x x property getx setx delx I m the x property To see how property is implemented in terms of the descriptor protocol here is a pure Python equivalent class Property Emulate PyProperty_Type in Objects descrobject c def __init__ self fget None fset None fdel None doc None self fget fget self fset fset self fdel fdel if doc is None and fget is not None doc fget __doc__ self __doc__ doc self _name def __set_name__ self owner name self _name name def __get__ self obj objtype None if obj is None return self if self fget is None raise AttributeError f property self _name r of type obj __name__ r object has no getter return self fget obj def __set__ self obj value if self fset is None raise AttributeError f property self _name r of type obj __name__ r object has no setter self fset obj value def __delete__ self obj if self fdel is None raise AttributeError f property self _name r of type obj __name__ r object has no deleter self fdel obj def getter self fget prop type self fget self fset self fdel self __doc__ prop _name self _name return prop def setter self fset prop type self self fget fset self fdel self __doc__ prop _name self _name return prop def deleter self fdel prop type self self fget self fset fdel self __doc__ prop _name self _name return prop The property builtin helps whenever a user interface has granted attribute access and then subsequent changes require the intervention of a method For instance a spreadsheet class may grant access to a cell value through Cell b10 value Subsequent improvements to the program require the cell to be recalculated on every access however the programmer does not want to affect existing client code accessing the attribute directly The solution is to wrap access to the value attribute in a property data descriptor class Cell property def value self Recalculate the cell before returning value self recalc return self _value Either the built in property or our Property equivalent would work in this example Functions and methods Python s object oriented features are built upon a function based environment Using non data descriptors the two are merged seamlessly Functions stored in class dictionaries get turned into methods when invoked Methods only differ from regular functions in that the object instance is prepended to the other arguments By convention the instance is called self but could be called this or any other variable name Methods can be created manually with types MethodType which is roughly equivalent to class MethodType Emulate PyMethod_Type in Objects classobject c def __init__ self func obj self __func__ func self __self__ obj def __call__ self args kwargs func self __func__ obj self __self__ return func obj args kwargs To support automatic creation of methods functions include the __get__ method for 
binding methods during attribute access This means that functions are non data descriptors that return bound methods during dotted lookup from an instance Here s how it works class Function def __get__ self obj objtype None Simulate func_descr_get in Objects funcobject c if obj is None return self return MethodType self obj Running the following class in the interpreter shows how the function descriptor works in practice class D def f self x return x The function has a qualified name attribute to support introspection D f __qualname__
D f Accessing the function through the class dictionary does not invoke __get__ Instead it just returns the underlying function object D __dict__ f function D f at 0x00C45070 Dotted access from a class calls __get__ which just returns the underlying function unchanged D f function D f at 0x00C45070 The interesting behavior occurs during dotted access from an instance The dotted lookup calls __get__ which returns a bound method object d D d f bound method D f of __main__ D object at 0x00B18C90 Internally the bound method stores the underlying function and the bound instance d f __func__ function D f at 0x00C45070 d f __self__ __main__ D object at 0x00B18C90 If you have ever wondered where self comes from in regular methods or where cls comes from in class methods this is it Kinds of methods Non data descriptors provide a simple mechanism for variations on the usual patterns of binding functions into methods To recap functions have a __get__ method so that they can be converted to a method when accessed as attributes The non data descriptor transforms an obj f args call into f obj args Calling cls f args becomes f args This chart summarizes the binding and its two most useful variants Transformation Called from an object Called from a class function f obj args f args staticmethod f args f args classmethod f type obj args f cls args Static methods Static methods return the underlying function without changes Calling either c f or C f is the equivalent of a direct lookup into object __getattribute__ c f or object __getattribute__ C f As a result the function becomes identically accessible from either an object or a class Good candidates for static methods are methods that do not reference the self variable For instance a statistics package may include a container class for experimental data The class provides normal methods for computing the average mean median and other descriptive statistics that depend on the data However there may be useful functions which are conceptually related but do not depend on the data For instance erf x is handy conversion routine that comes up in statistical work but does not directly depend on a particular dataset It can be called either from an object or the class s erf 1 5 9332 or Sample erf 1 5 9332 Since static methods return the underlying function with no changes the example calls are unexciting class E staticmethod def f x return x 10 E f 3 30 E f 3 30 Using the non data descriptor protocol a pure Python version of staticmethod would look like this import functools class StaticMethod Emulate PyStaticMethod_Type in Objects funcobject c def __init__ self f self f f functools update_wrapper self f def __get__ self obj objtype None return self f def __call__ self args kwds return self f args kwds The functools update_wrapper call adds a __wrapped__ attribute that refers to the underlying function Also it carries forward the attributes necessary to make the wrapper look like the wrapped function __name__ __qualname__ __doc__ and __annotations__ Class methods Unlike static methods class methods prepend the class reference to the argument list before calling the function This format is the same for whether the caller is an object or a class class F classmethod def f cls x return cls __name__ x F f 3 F 3 F f 3 F 3 This behavior is useful whenever the method only needs to have a class reference and does not rely on data stored in a specific instance One use for class methods is to create alternate class constructors For example the classmethod dict fromkeys 
creates a new dictionary from a list of keys The pure Python equivalent is class Dict dict classmethod def fromkeys cls iterable value None Emulate dict_fromkeys in Objects dictobject c d cls for key in iterable d key value return d Now a new dictionary of unique keys can be constructed like this d Dict fromkeys abracadabra type d is Dict True d a None b None r None c None d None Using the non data descriptor protocol a pure Python version of classmethod would look like this import functools class ClassMethod Emulate PyClassMethod_Type in Obj
ects funcobject c def __init__ self f self f f functools update_wrapper self f def __get__ self obj cls None if cls is None cls type obj if hasattr type self f __get__ This code path was added in Python 3 9 and was deprecated in Python 3 11 return self f __get__ cls cls return MethodType self f cls The code path for hasattr type self f __get__ was added in Python 3 9 and makes it possible for classmethod to support chained decorators For example a classmethod and property could be chained together In Python 3 11 this functionality was deprecated class G classmethod property def __doc__ cls return f A doc for cls __name__ r G __doc__ A doc for G The functools update_wrapper call in ClassMethod adds a __wrapped__ attribute that refers to the underlying function Also it carries forward the attributes necessary to make the wrapper look like the wrapped function __name__ __qualname__ __doc__ and __annotations__ Member objects and __slots__ When a class defines __slots__ it replaces instance dictionaries with a fixed length array of slot values From a user point of view that has several effects 1 Provides immediate detection of bugs due to misspelled attribute assignments Only attribute names specified in __slots__ are allowed class Vehicle __slots__ id_number make model auto Vehicle auto id_nubmer VYE483814LQEX Traceback most recent call last AttributeError Vehicle object has no attribute id_nubmer 2 Helps create immutable objects where descriptors manage access to private attributes stored in __slots__ class Immutable __slots__ _dept _name Replace the instance dictionary def __init__ self dept name self _dept dept Store to private attribute self _name name Store to private attribute property Read only descriptor def dept self return self _dept property def name self Read only descriptor return self _name mark Immutable Botany Mark Watney mark dept Botany mark dept Space Pirate Traceback most recent call last AttributeError property dept of Immutable object has no setter mark location Mars Traceback most recent call last AttributeError Immutable object has no attribute location 3 Saves memory On a 64 bit Linux build an instance with two attributes takes 48 bytes with __slots__ and 152 bytes without This flyweight design pattern likely only matters when a large number of instances are going to be created 4 Improves speed Reading instance variables is 35 faster with __slots__ as measured with Python 3 10 on an Apple M1 processor 5 Blocks tools like functools cached_property which require an instance dictionary to function correctly from functools import cached_property class CP __slots__ Eliminates the instance dict cached_property Requires an instance dict def pi self return 4 sum 1 0 n 2 0 n 1 0 for n in reversed range 100_000 CP pi Traceback most recent call last TypeError No __dict__ attribute on CP instance to cache pi property It is not possible to create an exact drop in pure Python version of __slots__ because it requires direct access to C structures and control over object memory allocation However we can build a mostly faithful simulation where the actual C structure for slots is emulated by a private _slotvalues list Reads and writes to that private structure are managed by member descriptors null object class Member def __init__ self name clsname offset Emulate PyMemberDef in Include structmember h Also see descr_new in Objects descrobject c self name name self clsname clsname self offset offset def __get__ self obj objtype None Emulate member_get in Objects descrobject c Also see 
PyMember_GetOne in Python structmember c if obj is None return self value obj _slotvalues self offset if value is null raise AttributeError self name return value def __set__ self obj value Emulate member_set in Objects descrobject c obj _slotvalues self offset value def __delete__ self obj Emulate member_delete in Objects descrobject c value obj _slotvalues self offset if value is null raise AttributeError self name obj _slotvalues self offset null def __repr__ self Emulate member_repr in Objects descrobject c return f Member self name r
of self clsname r The type __new__ method takes care of adding member objects to class variables class Type type Simulate how the type metaclass adds member objects for slots def __new__ mcls clsname bases mapping kwargs Emulate type_new in Objects typeobject c type_new calls PyTypeReady which calls add_methods slot_names mapping get slot_names for offset name in enumerate slot_names mapping name Member name clsname offset return type __new__ mcls clsname bases mapping kwargs The object __new__ method takes care of creating instances that have slots instead of an instance dictionary Here is a rough simulation in pure Python class Object Simulate how object __new__ allocates memory for __slots__ def __new__ cls args kwargs Emulate object_new in Objects typeobject c inst super __new__ cls if hasattr cls slot_names empty_slots null len cls slot_names object __setattr__ inst _slotvalues empty_slots return inst def __setattr__ self name value Emulate _PyObject_GenericSetAttrWithDict Objects object c cls type self if hasattr cls slot_names and name not in cls slot_names raise AttributeError f cls __name__ r object has no attribute name r super __setattr__ name value def __delattr__ self name Emulate _PyObject_GenericSetAttrWithDict Objects object c cls type self if hasattr cls slot_names and name not in cls slot_names raise AttributeError f cls __name__ r object has no attribute name r super __delattr__ name To use the simulation in a real class just inherit from Object and set the metaclass to Type class H Object metaclass Type Instance variables stored in slots slot_names x y def __init__ self x y self x x self y y At this point the metaclass has loaded member objects for x and y from pprint import pp pp dict vars H __module__ __main__ __doc__ Instance variables stored in slots slot_names x y __init__ function H __init__ at 0x7fb5d302f9d0 x Member x of H y Member y of H When instances are created they have a slot_values list where the attributes are stored h H 10 20 vars h _slotvalues 10 20 h x 55 vars h _slotvalues 55 20 Misspelled or unassigned attributes will raise an exception h xz Traceback most recent call last AttributeError H object has no attribute xz
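For comparison with the simulation above, the real machinery can be observed directly: when a class actually defines __slots__, CPython itself places member_descriptor objects in the class dictionary, and they play the same role as the Member class sketched here. A minimal check, reusing the Vehicle class from the misspelling example above (the comments show the results CPython produces):

class Vehicle:
    __slots__ = ('id_number', 'make', 'model')

v = Vehicle()
v.make = 'Honda'

type(Vehicle.__dict__['make'])                # <class 'member_descriptor'>
Vehicle.__dict__['make'].__get__(v, Vehicle)  # 'Honda' -- the descriptor reads the slot
Vehicle.__dict__['make'].__delete__(v)        # clears the slot; v.make now raises AttributeError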
Python C API Reference Manual This manual documents the API used by C and C programmers who want to write extension modules or embed Python It is a companion to Extending and Embedding the Python Interpreter which describes the general principles of extension writing but does not document the API functions in detail Introduction Coding standards Include Files Useful macros Objects Types and Reference Counts Exceptions Embedding Python Debugging Builds C API Stability Unstable C API Stable Application Binary Interface Platform Considerations Contents of Limited API The Very High Level Layer Reference Counting Exception Handling Printing and clearing Raising exceptions Issuing warnings Querying the error indicator Signal Handling Exception Classes Exception Objects Unicode Exception Objects Recursion Control Standard Exceptions Standard Warning Categories Utilities Operating System Utilities System Functions Process Control Importing Modules Data marshalling support Parsing arguments and building values String conversion and formatting PyHash API Reflection Codec registry and support functions Support for Perf Maps Abstract Objects Layer Object Protocol Call Protocol Number Protocol Sequence Protocol Mapping Protocol Iterator Protocol Buffer Protocol Old Buffer Protocol Concrete Objects Layer Fundamental Objects Numeric Objects Sequence Objects Container Objects Function Objects Other Objects Initialization Finalization and Threads Before Python Initialization Global configuration variables Initializing and finalizing the interpreter Process wide parameters Thread State and the Global Interpreter Lock Sub interpreter support Asynchronous Notifications Profiling and Tracing Advanced Debugger Support Thread Local Storage Support Python Initialization Configuration Example PyWideStringList PyStatus PyPreConfig Preinitialize Python with PyPreConfig PyConfig Initialization with PyConfig Isolated Configuration Python Configuration Python Path Configuration Py_RunMain Py_GetArgcArgv Multi Phase Initialization Private Provisional API Memory Management Overview Allocator Domains Raw Memory Interface Memory Interface Object allocators Default Memory Allocators Customize Memory Allocators Debug hooks on the Python memory allocators The pymalloc allocator tracemalloc C API Examples Object Implementation Support Allocating Objects on the Heap Common Object Structures Type Objects Number Object Structures Mapping Object Structures Sequence Object Structures Buffer Object Structures Async Object Structures Slot Type typedefs Examples Supporting Cyclic Garbage Collection API and ABI Versioning
Built in Functions The Python interpreter has a number of functions and types built into it that are always available They are listed here in alphabetical order Built in Functions A abs aiter E enumerate L len list R range repr all anext any eval exec locals M reversed round ascii B bin F filter map max S set setattr bool breakpoint float format memoryview min slice sorted bytearray bytes frozenset G N next O staticmethod str C callable getattr globals object oct sum super T chr classmethod H hasattr open ord P tuple type V compile complex hash help pow print vars Z zip D delattr hex I id property _ __import__ dict dir input int divmod isinstance issubclass iter abs x Return the absolute value of a number The argument may be an integer a floating point number or an object implementing __abs__ If the argument is a complex number its magnitude is returned aiter async_iterable Return an asynchronous iterator for an asynchronous iterable Equivalent to calling x __aiter__ Note Unlike iter aiter has no 2 argument variant New in version 3 10 all iterable Return True if all elements of the iterable are true or if the iterable is empty Equivalent to def all iterable for element in iterable if not element return False return True awaitable anext async_iterator awaitable anext async_iterator default When awaited return the next item from the given asynchronous iterator or default if given and the iterator is exhausted This is the async variant of the next builtin and behaves similarly This calls the __anext__ method of async_iterator returning an awaitable Awaiting this returns the next value of the iterator If default is given it is returned if the iterator is exhausted otherwise StopAsyncIteration is raised New in version 3 10 any iterable Return True if any element of the iterable is true If the iterable is empty return False Equivalent to def any iterable for element in iterable if element return True return False ascii object As repr return a string containing a printable representation of an object but escape the non ASCII characters in the string returned by repr using x u or U escapes This generates a string similar to that returned by repr in Python 2 bin x Convert an integer number to a binary string prefixed with 0b The result is a valid Python expression If x is not a Python int object it has to define an __index__ method that returns an integer Some examples bin 3 0b11 bin 10 0b1010 If the prefix 0b is desired or not you can use either of the following ways format 14 b format 14 b 0b1110 1110 f 14 b f 14 b 0b1110 1110 See also format for more information class bool x False Return a Boolean value i e one of True or False x is converted using the standard truth testing procedure If x is false or omitted this returns False otherwise it returns True The bool class is a subclass of int see Numeric Types int float complex It cannot be subclassed further Its only instances are False and True see Boolean Type bool Changed in version 3 7 x is now a positional only parameter breakpoint args kws This function drops you into the debugger at the call site Specifically it calls sys breakpointhook passing args and kws straight through By default sys breakpointhook calls pdb set_trace expecting no arguments In this case it is purely a convenience function so you don t have to explicitly import pdb or type as much code to enter the debugger However sys breakpointhook can be set to some other function and breakpoint will automatically call that allowing you to drop into the debugger of choice If sys 
breakpointhook is not accessible this function will raise RuntimeError By default the behavior of breakpoint can be changed with the PYTHONBREAKPOINT environment variable See sys breakpointhook for usage details Note that this is not guaranteed if sys breakpointhook has been replaced Raises an auditing event builtins breakpoint with argument breakpointhook New in version 3 7 class bytearray source b class bytearray source encoding class bytearray source encoding errors Return a new array of bytes The bytearray class is a mutable sequence of
integers in the range 0 x 256 It has most of the usual methods of mutable sequences described in Mutable Sequence Types as well as most methods that the bytes type has see Bytes and Bytearray Operations The optional source parameter can be used to initialize the array in a few different ways If it is a string you must also give the encoding and optionally errors parameters bytearray then converts the string to bytes using str encode If it is an integer the array will have that size and will be initialized with null bytes If it is an object conforming to the buffer interface a read only buffer of the object will be used to initialize the bytes array If it is an iterable it must be an iterable of integers in the range 0 x 256 which are used as the initial contents of the array Without an argument an array of size 0 is created See also Binary Sequence Types bytes bytearray memoryview and Bytearray Objects class bytes source b class bytes source encoding class bytes source encoding errors Return a new bytes object which is an immutable sequence of integers in the range 0 x 256 bytes is an immutable version of bytearray it has the same non mutating methods and the same indexing and slicing behavior Accordingly constructor arguments are interpreted as for bytearray Bytes objects can also be created with literals see String and Bytes literals See also Binary Sequence Types bytes bytearray memoryview Bytes Objects and Bytes and Bytearray Operations callable object Return True if the object argument appears callable False if not If this returns True it is still possible that a call fails but if it is False calling object will never succeed Note that classes are callable calling a class returns a new instance instances are callable if their class has a __call__ method New in version 3 2 This function was first removed in Python 3 0 and then brought back in Python 3 2 chr i Return the string representing a character whose Unicode code point is the integer i For example chr 97 returns the string a while chr 8364 returns the string This is the inverse of ord The valid range for the argument is from 0 through 1 114 111 0x10FFFF in base 16 ValueError will be raised if i is outside that range classmethod Transform a method into a class method A class method receives the class as an implicit first argument just like an instance method receives the instance To declare a class method use this idiom class C classmethod def f cls arg1 arg2 The classmethod form is a function decorator see Function definitions for details A class method can be called either on the class such as C f or on an instance such as C f The instance is ignored except for its class If a class method is called for a derived class the derived class object is passed as the implied first argument Class methods are different than C or Java static methods If you want those see staticmethod in this section For more information on class methods see The standard type hierarchy Changed in version 3 9 Class methods can now wrap other descriptors such as property Changed in version 3 10 Class methods now inherit the method attributes __module__ __name__ __qualname__ __doc__ and __annotations__ and have a new __wrapped__ attribute Changed in version 3 11 Class methods can no longer wrap other descriptors such as property compile source filename mode flags 0 dont_inherit False optimize 1 Compile the source into a code or AST object Code objects can be executed by exec or eval source can either be a normal string a byte string or an AST object Refer to 
the ast module documentation for information on how to work with AST objects The filename argument should give the file from which the code was read pass some recognizable value if it wasn t read from a file string is commonly used The mode argument specifies what kind of code must be compiled it can be exec if source consists of a sequence of statements eval if it consists of a single expression or single if it consists of a single interactive statement in the latter case expression statements that evaluate to something other than None
will be printed The optional arguments flags and dont_inherit control which compiler options should be activated and which future features should be allowed If neither is present or both are zero the code is compiled with the same flags that affect the code that is calling compile If the flags argument is given and dont_inherit is not or is zero then the compiler options and the future statements specified by the flags argument are used in addition to those that would be used anyway If dont_inherit is a non zero integer then the flags argument is it the flags future features and compiler options in the surrounding code are ignored Compiler options and future statements are specified by bits which can be bitwise ORed together to specify multiple options The bitfield required to specify a given future feature can be found as the compiler_flag attribute on the _Feature instance in the __future__ module Compiler flags can be found in ast module with PyCF_ prefix The argument optimize specifies the optimization level of the compiler the default value of 1 selects the optimization level of the interpreter as given by O options Explicit levels are 0 no optimization __debug__ is true 1 asserts are removed __debug__ is false or 2 docstrings are removed too This function raises SyntaxError if the compiled source is invalid and ValueError if the source contains null bytes If you want to parse Python code into its AST representation see ast parse Raises an auditing event compile with arguments source and filename This event may also be raised by implicit compilation Note When compiling a string with multi line code in single or eval mode input must be terminated by at least one newline character This is to facilitate detection of incomplete and complete statements in the code module Warning It is possible to crash the Python interpreter with a sufficiently large complex string when compiling to an AST object due to stack depth limitations in Python s AST compiler Changed in version 3 2 Allowed use of Windows and Mac newlines Also input in exec mode does not have to end in a newline anymore Added the optimize parameter Changed in version 3 5 Previously TypeError was raised when null bytes were encountered in source New in version 3 8 ast PyCF_ALLOW_TOP_LEVEL_AWAIT can now be passed in flags to enable support for top level await async for and async with class complex real 0 imag 0 class complex string Return a complex number with the value real imag 1j or convert a string or number to a complex number If the first parameter is a string it will be interpreted as a complex number and the function must be called without a second parameter The second parameter can never be a string Each argument may be any numeric type including complex If imag is omitted it defaults to zero and the constructor serves as a numeric conversion like int and float If both arguments are omitted returns 0j For a general Python object x complex x delegates to x __complex__ If __complex__ is not defined then it falls back to __float__ If __float__ is not defined then it falls back to __index__ Note When converting from a string the string must not contain whitespace around the central or operator For example complex 1 2j is fine but complex 1 2j raises ValueError The complex type is described in Numeric Types int float complex Changed in version 3 6 Grouping digits with underscores as in code literals is allowed Changed in version 3 8 Falls back to __index__ if __complex__ and __float__ are not defined delattr object name This is a 
relative of setattr The arguments are an object and a string The string must be the name of one of the object s attributes The function deletes the named attribute provided the object allows it For example delattr x foobar is equivalent to del x foobar name need not be a Python identifier see setattr class dict kwarg class dict mapping kwarg class dict iterable kwarg Create a new dictionary The dict object is the dictionary class See dict and Mapping Types dict for documentation about this class For other containers see the built in l
ist set and tuple classes as well as the collections module dir dir object Without arguments return the list of names in the current local scope With an argument attempt to return a list of valid attributes for that object If the object has a method named __dir__ this method will be called and must return the list of attributes This allows objects that implement a custom __getattr__ or __getattribute__ function to customize the way dir reports their attributes If the object does not provide __dir__ the function tries its best to gather information from the object s __dict__ attribute if defined and from its type object The resulting list is not necessarily complete and may be inaccurate when the object has a custom __getattr__ The default dir mechanism behaves differently with different types of objects as it attempts to produce the most relevant rather than complete information If the object is a module object the list contains the names of the module s attributes If the object is a type or class object the list contains the names of its attributes and recursively of the attributes of its bases Otherwise the list contains the object s attributes names the names of its class s attributes and recursively of the attributes of its class s base classes The resulting list is sorted alphabetically For example import struct dir show the names in the module namespace __builtins__ __name__ struct dir struct show the names in the struct module Struct __all__ __builtins__ __cached__ __doc__ __file__ __initializing__ __loader__ __name__ __package__ _clearcache calcsize error pack pack_into unpack unpack_from class Shape def __dir__ self return area perimeter location s Shape dir s area location perimeter Note Because dir is supplied primarily as a convenience for use at an interactive prompt it tries to supply an interesting set of names more than it tries to supply a rigorously or consistently defined set of names and its detailed behavior may change across releases For example metaclass attributes are not in the result list when the argument is a class divmod a b Take two non complex numbers as arguments and return a pair of numbers consisting of their quotient and remainder when using integer division With mixed operand types the rules for binary arithmetic operators apply For integers the result is the same as a b a b For floating point numbers the result is q a b where q is usually math floor a b but may be 1 less than that In any case q b a b is very close to a if a b is non zero it has the same sign as b and 0 abs a b abs b enumerate iterable start 0 Return an enumerate object iterable must be a sequence an iterator or some other object which supports iteration The __next__ method of the iterator returned by enumerate returns a tuple containing a count from start which defaults to 0 and the values obtained from iterating over iterable seasons Spring Summer Fall Winter list enumerate seasons 0 Spring 1 Summer 2 Fall 3 Winter list enumerate seasons start 1 1 Spring 2 Summer 3 Fall 4 Winter Equivalent to def enumerate iterable start 0 n start for elem in iterable yield n elem n 1 eval expression globals None locals None The arguments are a string and optional globals and locals If provided globals must be a dictionary If provided locals can be any mapping object The expression argument is parsed and evaluated as a Python expression technically speaking a condition list using the globals and locals dictionaries as global and local namespace If the globals dictionary is present and does not contain 
a value for the key __builtins__ a reference to the dictionary of the built in module builtins is inserted under that key before expression is parsed That way you can control what builtins are available to the executed code by inserting your own __builtins__ dictionary into globals before passing it to eval If the locals dictionary is omitted it defaults to the globals dictionary If both dictionaries are omitted the expression is executed with the globals and locals in the environment where eval is called Note eval does not have access
to the nested scopes non locals in the enclosing environment The return value is the result of the evaluated expression Syntax errors are reported as exceptions Example x 1 eval x 1 2 This function can also be used to execute arbitrary code objects such as those created by compile In this case pass a code object instead of a string If the code object has been compiled with exec as the mode argument eval s return value will be None Hints dynamic execution of statements is supported by the exec function The globals and locals functions return the current global and local dictionary respectively which may be useful to pass around for use by eval or exec If the given source is a string then leading and trailing spaces and tabs are stripped See ast literal_eval for a function that can safely evaluate strings with expressions containing only literals Raises an auditing event exec with the code object as the argument Code compilation events may also be raised exec object globals None locals None closure None This function supports dynamic execution of Python code object must be either a string or a code object If it is a string the string is parsed as a suite of Python statements which is then executed unless a syntax error occurs 1 If it is a code object it is simply executed In all cases the code that s executed is expected to be valid as file input see the section File input in the Reference Manual Be aware that the nonlocal yield and return statements may not be used outside of function definitions even within the context of code passed to the exec function The return value is None In all cases if the optional parts are omitted the code is executed in the current scope If only globals is provided it must be a dictionary and not a subclass of dictionary which will be used for both the global and the local variables If globals and locals are given they are used for the global and local variables respectively If provided locals can be any mapping object Remember that at the module level globals and locals are the same dictionary If exec gets two separate objects as globals and locals the code will be executed as if it were embedded in a class definition If the globals dictionary does not contain a value for the key __builtins__ a reference to the dictionary of the built in module builtins is inserted under that key That way you can control what builtins are available to the executed code by inserting your own __builtins__ dictionary into globals before passing it to exec The closure argument specifies a closure a tuple of cellvars It s only valid when the object is a code object containing free variables The length of the tuple must exactly match the number of free variables referenced by the code object Raises an auditing event exec with the code object as the argument Code compilation events may also be raised Note The built in functions globals and locals return the current global and local dictionary respectively which may be useful to pass around for use as the second and third argument to exec Note The default locals act as described for function locals below modifications to the default locals dictionary should not be attempted Pass an explicit locals dictionary if you need to see effects of the code on locals after function exec returns Changed in version 3 11 Added the closure parameter filter function iterable Construct an iterator from those elements of iterable for which function is true iterable may be either a sequence a container which supports iteration or an iterator If function 
is None the identity function is assumed that is all elements of iterable that are false are removed Note that filter function iterable is equivalent to the generator expression item for item in iterable if function item if function is not None and item for item in iterable if item if function is None See itertools filterfalse for the complementary function that returns elements of iterable for which function is false class float x 0 0 Return a floating point number constructed from a number or string x If the argument is a string it s
hould contain a decimal number optionally preceded by a sign and optionally embedded in whitespace The optional sign may be or a sign has no effect on the value produced The argument may also be a string representing a NaN not a number or positive or negative infinity More precisely the input must conform to the floatvalue production rule in the following grammar after leading and trailing whitespace characters are removed sign infinity Infinity inf nan nan digit a Unicode decimal digit i e characters in Unicode general category Nd digitpart digit _ digit number digitpart digitpart digitpart exponent e E digitpart floatnumber number exponent floatvalue sign floatnumber infinity nan Case is not significant so for example inf Inf INFINITY and iNfINity are all acceptable spellings for positive infinity Otherwise if the argument is an integer or a floating point number a floating point number with the same value within Python s floating point precision is returned If the argument is outside the range of a Python float an OverflowError will be raised For a general Python object x float x delegates to x __float__ If __float__ is not defined then it falls back to __index__ If no argument is given 0 0 is returned Examples float 1 23 1 23 float 12345 n 12345 0 float 1e 003 0 001 float 1E6 1000000 0 float Infinity inf The float type is described in Numeric Types int float complex Changed in version 3 6 Grouping digits with underscores as in code literals is allowed Changed in version 3 7 x is now a positional only parameter Changed in version 3 8 Falls back to __index__ if __float__ is not defined format value format_spec Convert a value to a formatted representation as controlled by format_spec The interpretation of format_spec will depend on the type of the value argument however there is a standard formatting syntax that is used by most built in types Format Specification Mini Language The default format_spec is an empty string which usually gives the same effect as calling str value A call to format value format_spec is translated to type value __format__ value format_spec which bypasses the instance dictionary when searching for the value s __format__ method A TypeError exception is raised if the method search reaches object and the format_spec is non empty or if either the format_spec or the return value are not strings Changed in version 3 4 object __format__ format_spec raises TypeError if format_spec is not an empty string class frozenset iterable set Return a new frozenset object optionally with elements taken from iterable frozenset is a built in class See frozenset and Set Types set frozenset for documentation about this class For other containers see the built in set list tuple and dict classes as well as the collections module getattr object name getattr object name default Return the value of the named attribute of object name must be a string If the string is the name of one of the object s attributes the result is the value of that attribute For example getattr x foobar is equivalent to x foobar If the named attribute does not exist default is returned if provided otherwise AttributeError is raised name need not be a Python identifier see setattr Note Since private name mangling happens at compilation time one must manually mangle a private attribute s attributes with two leading underscores name in order to retrieve it with getattr globals Return the dictionary implementing the current module namespace For code within functions this is set when the function is defined and remains 
the same regardless of where the function is called hasattr object name The arguments are an object and a string The result is True if the string is the name of one of the object s attributes False if not This is implemented by calling getattr object name and seeing whether it raises an AttributeError or not hash object Return the hash value of the object if it has one Hash values are integers They are used to quickly compare dictionary keys during a dictionary lookup Numeric values that compare equal have the same hash value even if the
y are of different types as is the case for 1 and 1 0 Note For objects with custom __hash__ methods note that hash truncates the return value based on the bit width of the host machine help help request Invoke the built in help system This function is intended for interactive use If no argument is given the interactive help system starts on the interpreter console If the argument is a string then the string is looked up as the name of a module function class method keyword or documentation topic and a help page is printed on the console If the argument is any other kind of object a help page on the object is generated Note that if a slash appears in the parameter list of a function when invoking help it means that the parameters prior to the slash are positional only For more info see the FAQ entry on positional only parameters This function is added to the built in namespace by the site module Changed in version 3 4 Changes to pydoc and inspect mean that the reported signatures for callables are now more comprehensive and consistent hex x Convert an integer number to a lowercase hexadecimal string prefixed with 0x If x is not a Python int object it has to define an __index__ method that returns an integer Some examples hex 255 0xff hex 42 0x2a If you want to convert an integer number to an uppercase or lower hexadecimal string with prefix or not you can use either of the following ways x 255 x 255 X 255 0xff ff FF format 255 x format 255 x format 255 X 0xff ff FF f 255 x f 255 x f 255 X 0xff ff FF See also format for more information See also int for converting a hexadecimal string to an integer using a base of 16 Note To obtain a hexadecimal string representation for a float use the float hex method id object Return the identity of an object This is an integer which is guaranteed to be unique and constant for this object during its lifetime Two objects with non overlapping lifetimes may have the same id value CPython implementation detail This is the address of the object in memory Raises an auditing event builtins id with argument id input input prompt If the prompt argument is present it is written to standard output without a trailing newline The function then reads a line from input converts it to a string stripping a trailing newline and returns that When EOF is read EOFError is raised Example s input Monty Python s Flying Circus s Monty Python s Flying Circus If the readline module was loaded then input will use it to provide elaborate line editing and history features Raises an auditing event builtins input with argument prompt before reading input Raises an auditing event builtins input result with the result after successfully reading input class int x 0 class int x base 10 Return an integer object constructed from a number or string x or return 0 if no arguments are given If x defines __int__ int x returns x __int__ If x defines __index__ it returns x __index__ If x defines __trunc__ it returns x __trunc__ For floating point numbers this truncates towards zero If x is not a number or if base is given then x must be a string bytes or bytearray instance representing an integer in radix base Optionally the string can be preceded by or with no space in between have leading zeros be surrounded by whitespace and have single underscores interspersed between digits A base n integer string contains digits each representing a value from 0 to n 1 The values 0 9 can be represented by any Unicode decimal digit The values 10 35 can be represented by a to z or A to Z The default base is 10 
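A few illustrative conversions, using only the behavior just described (surrounding whitespace, a sign, underscores between digits, and letter digits); the results in the comments are what CPython returns:

int("  -37_000  ")   # -37000: whitespace, the sign, and the underscore are all accepted
int("ff", 16)        # 255: the letters a through f stand for the digit values 10 through 15
int("z", 36)         # 35: the largest digit value usable in base 36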
The allowed bases are 0 and 2 36 Base 2 8 and 16 strings can be optionally prefixed with 0b 0B 0o 0O or 0x 0X as with integer literals in code For base 0 the string is interpreted in a similar way to an integer literal in code in that the actual base is 2 8 10 or 16 as determined by the prefix Base 0 also disallows leading zeros int 010 0 is not legal while int 010 and int 010 8 are The integer type is described in Numeric Types int float complex Changed in version 3 4 If base is not an instance of int and the base object has a base __ind
ex__ method that method is called to obtain an integer for the base Previous versions used base __int__ instead of base __index__ Changed in version 3 6 Grouping digits with underscores as in code literals is allowed Changed in version 3 7 x is now a positional only parameter Changed in version 3 8 Falls back to __index__ if __int__ is not defined Changed in version 3 11 The delegation to __trunc__ is deprecated Changed in version 3 11 int string inputs and string representations can be limited to help avoid denial of service attacks A ValueError is raised when the limit is exceeded while converting a string x to an int or when converting an int into a string would exceed the limit See the integer string conversion length limitation documentation isinstance object classinfo Return True if the object argument is an instance of the classinfo argument or of a direct indirect or virtual subclass thereof If object is not an object of the given type the function always returns False If classinfo is a tuple of type objects or recursively other such tuples or a Union Type of multiple types return True if object is an instance of any of the types If classinfo is not a type or tuple of types and such tuples a TypeError exception is raised TypeError may not be raised for an invalid type if an earlier check succeeds Changed in version 3 10 classinfo can be a Union Type issubclass class classinfo Return True if class is a subclass direct indirect or virtual of classinfo A class is considered a subclass of itself classinfo may be a tuple of class objects or recursively other such tuples or a Union Type in which case return True if class is a subclass of any entry in classinfo In any other case a TypeError exception is raised Changed in version 3 10 classinfo can be a Union Type iter object iter object sentinel Return an iterator object The first argument is interpreted very differently depending on the presence of the second argument Without a second argument object must be a collection object which supports the iterable protocol the __iter__ method or it must support the sequence protocol the __getitem__ method with integer arguments starting at 0 If it does not support either of those protocols TypeError is raised If the second argument sentinel is given then object must be a callable object The iterator created in this case will call object with no arguments for each call to its __next__ method if the value returned is equal to sentinel StopIteration will be raised otherwise the value will be returned See also Iterator Types One useful application of the second form of iter is to build a block reader For example reading fixed width blocks from a binary database file until the end of file is reached from functools import partial with open mydata db rb as f for block in iter partial f read 64 b process_block block len s Return the length the number of items of an object The argument may be a sequence such as a string bytes tuple list or range or a collection such as a dictionary set or frozen set CPython implementation detail len raises OverflowError on lengths larger than sys maxsize such as range 2 100 class list class list iterable Rather than being a function list is actually a mutable sequence type as documented in Lists and Sequence Types list tuple range locals Update and return a dictionary representing the current local symbol table Free variables are returned by locals when it is called in function blocks but not in class blocks Note that at the module level locals and globals are the same 
dictionary Note The contents of this dictionary should not be modified changes may not affect the values of local and free variables used by the interpreter map function iterable iterables Return an iterator that applies function to every item of iterable yielding the results If additional iterables arguments are passed function must take that many arguments and is applied to the items from all iterables in parallel With multiple iterables the iterator stops when the shortest iterable is exhausted For cases where the function inputs are a
lready arranged into argument tuples see itertools starmap max iterable key None max iterable default key None max arg1 arg2 args key None Return the largest item in an iterable or the largest of two or more arguments If one positional argument is provided it should be an iterable The largest item in the iterable is returned If two or more positional arguments are provided the largest of the positional arguments is returned There are two optional keyword only arguments The key argument specifies a one argument ordering function like that used for list sort The default argument specifies an object to return if the provided iterable is empty If the iterable is empty and default is not provided a ValueError is raised If multiple items are maximal the function returns the first one encountered This is consistent with other sort stability preserving tools such as sorted iterable key keyfunc reverse True 0 and heapq nlargest 1 iterable key keyfunc Changed in version 3 4 Added the default keyword only parameter Changed in version 3 8 The key can be None class memoryview object Return a memory view object created from the given argument See Memory Views for more information min iterable key None min iterable default key None min arg1 arg2 args key None Return the smallest item in an iterable or the smallest of two or more arguments If one positional argument is provided it should be an iterable The smallest item in the iterable is returned If two or more positional arguments are provided the smallest of the positional arguments is returned There are two optional keyword only arguments The key argument specifies a one argument ordering function like that used for list sort The default argument specifies an object to return if the provided iterable is empty If the iterable is empty and default is not provided a ValueError is raised If multiple items are minimal the function returns the first one encountered This is consistent with other sort stability preserving tools such as sorted iterable key keyfunc 0 and heapq nsmallest 1 iterable key keyfunc Changed in version 3 4 Added the default keyword only parameter Changed in version 3 8 The key can be None next iterator next iterator default Retrieve the next item from the iterator by calling its __next__ method If default is given it is returned if the iterator is exhausted otherwise StopIteration is raised class object Return a new featureless object object is a base for all classes It has methods that are common to all instances of Python classes This function does not accept any arguments Note object does not have a __dict__ so you can t assign arbitrary attributes to an instance of the object class oct x Convert an integer number to an octal string prefixed with 0o The result is a valid Python expression If x is not a Python int object it has to define an __index__ method that returns an integer For example oct 8 0o10 oct 56 0o70 If you want to convert an integer number to an octal string either with the prefix 0o or not you can use either of the following ways o 10 o 10 0o12 12 format 10 o format 10 o 0o12 12 f 10 o f 10 o 0o12 12 See also format for more information open file mode r buffering 1 encoding None errors None newline None closefd True opener None Open file and return a corresponding file object If the file cannot be opened an OSError is raised See Reading and Writing Files for more examples of how to use this function file is a path like object giving the pathname absolute or relative to the current working directory of the file to be 
opened or an integer file descriptor of the file to be wrapped If a file descriptor is given it is closed when the returned I O object is closed unless closefd is set to False mode is an optional string that specifies the mode in which the file is opened It defaults to r which means open for reading in text mode Other common values are w for writing truncating the file if it already exists x for exclusive creation and a for appending which on some Unix systems means that all writes append to the end of the file regardless of the current
seek position In text mode if encoding is not specified the encoding used is platform dependent locale getencoding is called to get the current locale encoding For reading and writing raw bytes use binary mode and leave encoding unspecified The available modes are Character Meaning r open for reading default w open for writing truncating the file first x open for exclusive creation failing if the file already exists a open for writing appending to the end of file if it exists b binary mode t text mode default open for updating reading and writing The default mode is r open for reading text a synonym of rt Modes w and w b open and truncate the file Modes r and r b open the file with no truncation As mentioned in the Overview Python distinguishes between binary and text I O Files opened in binary mode including b in the mode argument return contents as bytes objects without any decoding In text mode the default or when t is included in the mode argument the contents of the file are returned as str the bytes having been first decoded using a platform dependent encoding or using the specified encoding if given Note Python doesn t depend on the underlying operating system s notion of text files all the processing is done by Python itself and is therefore platform independent buffering is an optional integer used to set the buffering policy Pass 0 to switch buffering off only allowed in binary mode 1 to select line buffering only usable when writing in text mode and an integer 1 to indicate the size in bytes of a fixed size chunk buffer Note that specifying a buffer size this way applies for binary buffered I O but TextIOWrapper i e files opened with mode r would have another buffering To disable buffering in TextIOWrapper consider using the write_through flag for io TextIOWrapper reconfigure When no buffering argument is given the default buffering policy works as follows Binary files are buffered in fixed size chunks the size of the buffer is chosen using a heuristic trying to determine the underlying device s block size and falling back on io DEFAULT_BUFFER_SIZE On many systems the buffer will typically be 4096 or 8192 bytes long Interactive text files files for which isatty returns True use line buffering Other text files use the policy described above for binary files encoding is the name of the encoding used to decode or encode the file This should only be used in text mode The default encoding is platform dependent whatever locale getencoding returns but any text encoding supported by Python can be used See the codecs module for the list of supported encodings errors is an optional string that specifies how encoding and decoding errors are to be handled this cannot be used in binary mode A variety of standard error handlers are available listed under Error Handlers though any error handling name that has been registered with codecs register_error is also valid The standard names include strict to raise a ValueError exception if there is an encoding error The default value of None has the same effect ignore ignores errors Note that ignoring encoding errors can lead to data loss replace causes a replacement marker such as to be inserted where there is malformed data surrogateescape will represent any incorrect bytes as low surrogate code units ranging from U DC80 to U DCFF These surrogate code units will then be turned back into the same bytes when the surrogateescape error handler is used when writing data This is useful for processing files in an unknown encoding xmlcharrefreplace is only 
supported when writing to a file Characters not supported by the encoding are replaced with the appropriate XML character reference nnn backslashreplace replaces malformed data by Python s backslashed escape sequences namereplace also only supported when writing replaces unsupported characters with N escape sequences newline determines how to parse newline characters from the stream It can be None n r and r n It works as follows When reading input from the stream if newline is None universal newlines mode is enabled Lines in the input
can end in n r or r n and these are translated into n before being returned to the caller If it is universal newlines mode is enabled but line endings are returned to the caller untranslated If it has any of the other legal values input lines are only terminated by the given string and the line ending is returned to the caller untranslated When writing output to the stream if newline is None any n characters written are translated to the system default line separator os linesep If newline is or n no translation takes place If newline is any of the other legal values any n characters written are translated to the given string If closefd is False and a file descriptor rather than a filename was given the underlying file descriptor will be kept open when the file is closed If a filename is given closefd must be True the default otherwise an error will be raised A custom opener can be used by passing a callable as opener The underlying file descriptor for the file object is then obtained by calling opener with file flags opener must return an open file descriptor passing os open as opener results in functionality similar to passing None The newly created file is non inheritable The following example uses the dir_fd parameter of the os open function to open a file relative to a given directory import os dir_fd os open somedir os O_RDONLY def opener path flags return os open path flags dir_fd dir_fd with open spamspam txt w opener opener as f print This will be written to somedir spamspam txt file f os close dir_fd don t leak a file descriptor The type of file object returned by the open function depends on the mode When open is used to open a file in a text mode w r wt rt etc it returns a subclass of io TextIOBase specifically io TextIOWrapper When used to open a file in a binary mode with buffering the returned class is a subclass of io BufferedIOBase The exact class varies in read binary mode it returns an io BufferedReader in write binary and append binary modes it returns an io BufferedWriter and in read write mode it returns an io BufferedRandom When buffering is disabled the raw stream a subclass of io RawIOBase io FileIO is returned See also the file handling modules such as fileinput io where open is declared os os path tempfile and shutil Raises an auditing event open with arguments file mode flags The mode and flags arguments may have been modified or inferred from the original call Changed in version 3 3 The opener parameter was added The x mode was added IOError used to be raised it is now an alias of OSError FileExistsError is now raised if the file opened in exclusive creation mode x already exists Changed in version 3 4 The file is now non inheritable Changed in version 3 5 If the system call is interrupted and the signal handler does not raise an exception the function now retries the system call instead of raising an InterruptedError exception see PEP 475 for the rationale The namereplace error handler was added Changed in version 3 6 Support added to accept objects implementing os PathLike On Windows opening a console buffer may return a subclass of io RawIOBase other than io FileIO Changed in version 3 11 The U mode has been removed ord c Given a string representing one Unicode character return an integer representing the Unicode code point of that character For example ord a returns the integer 97 and ord Euro sign returns 8364 This is the inverse of chr pow base exp mod None Return base to the power exp if mod is present return base to the power exp modulo mod computed more 
efficiently than pow(base, exp) % mod. The two-argument form pow(base, exp) is equivalent to using the power operator: base ** exp. The arguments must have numeric types. With mixed operand types, the coercion rules for binary arithmetic operators apply. For int operands, the result has the same type as the operands (after coercion) unless the second argument is negative; in that case, all arguments are converted to float and a float result is delivered. For example, pow(10, 2) returns 100, but pow(10, -2) returns 0.01. For a negative base of type int or float an
d a non integral exponent a complex result is delivered For example pow 9 0 5 returns a value close to 3j For int operands base and exp if mod is present mod must also be of integer type and mod must be nonzero If mod is present and exp is negative base must be relatively prime to mod In that case pow inv_base exp mod is returned where inv_base is an inverse to base modulo mod Here s an example of computing an inverse for 38 modulo 97 pow 38 1 mod 97 23 23 38 97 1 True Changed in version 3 8 For int operands the three argument form of pow now allows the second argument to be negative permitting computation of modular inverses Changed in version 3 8 Allow keyword arguments Formerly only positional arguments were supported print objects sep end n file None flush False Print objects to the text stream file separated by sep and followed by end sep end file and flush if present must be given as keyword arguments All non keyword arguments are converted to strings like str does and written to the stream separated by sep and followed by end Both sep and end must be strings they can also be None which means to use the default values If no objects are given print will just write end The file argument must be an object with a write string method if it is not present or None sys stdout will be used Since printed arguments are converted to text strings print cannot be used with binary mode file objects For these use file write instead Output buffering is usually determined by file However if flush is true the stream is forcibly flushed Changed in version 3 3 Added the flush keyword argument class property fget None fset None fdel None doc None Return a property attribute fget is a function for getting an attribute value fset is a function for setting an attribute value fdel is a function for deleting an attribute value And doc creates a docstring for the attribute A typical use is to define a managed attribute x class C def __init__ self self _x None def getx self return self _x def setx self value self _x value def delx self del self _x x property getx setx delx I m the x property If c is an instance of C c x will invoke the getter c x value will invoke the setter and del c x the deleter If given doc will be the docstring of the property attribute Otherwise the property will copy fget s docstring if it exists This makes it possible to create read only properties easily using property as a decorator class Parrot def __init__ self self _voltage 100000 property def voltage self Get the current voltage return self _voltage The property decorator turns the voltage method into a getter for a read only attribute with the same name and it sets the docstring for voltage to Get the current voltage getter setter deleter A property object has getter setter and deleter methods usable as decorators that create a copy of the property with the corresponding accessor function set to the decorated function This is best explained with an example class C def __init__ self self _x None property def x self I m the x property return self _x x setter def x self value self _x value x deleter def x self del self _x This code is exactly equivalent to the first example Be sure to give the additional functions the same name as the original property x in this case The returned property object also has the attributes fget fset and fdel corresponding to the constructor arguments Changed in version 3 5 The docstrings of property objects are now writeable class range stop class range start stop step 1 Rather than being a function 
range is actually an immutable sequence type as documented in Ranges and Sequence Types list tuple range repr object Return a string containing a printable representation of an object For many types this function makes an attempt to return a string that would yield an object with the same value when passed to eval otherwise the representation is a string enclosed in angle brackets that contains the name of the type of the object together with additional information often including the name and address of the object A class can control what
this function returns for its instances by defining a __repr__ method If sys displayhook is not accessible this function will raise RuntimeError This class has a custom representation that can be evaluated class Person def __init__ self name age self name name self age age def __repr__ self return f Person self name self age reversed seq Return a reverse iterator seq must be an object which has a __reversed__ method or supports the sequence protocol the __len__ method and the __getitem__ method with integer arguments starting at 0 round number ndigits None Return number rounded to ndigits precision after the decimal point If ndigits is omitted or is None it returns the nearest integer to its input For the built in types supporting round values are rounded to the closest multiple of 10 to the power minus ndigits if two multiples are equally close rounding is done toward the even choice so for example both round 0 5 and round 0 5 are 0 and round 1 5 is 2 Any integer value is valid for ndigits positive zero or negative The return value is an integer if ndigits is omitted or None Otherwise the return value has the same type as number For a general Python object number round delegates to number __round__ Note The behavior of round for floats can be surprising for example round 2 675 2 gives 2 67 instead of the expected 2 68 This is not a bug it s a result of the fact that most decimal fractions can t be represented exactly as a float See Floating Point Arithmetic Issues and Limitations for more information class set class set iterable Return a new set object optionally with elements taken from iterable set is a built in class See set and Set Types set frozenset for documentation about this class For other containers see the built in frozenset list tuple and dict classes as well as the collections module setattr object name value This is the counterpart of getattr The arguments are an object a string and an arbitrary value The string may name an existing attribute or a new attribute The function assigns the value to the attribute provided the object allows it For example setattr x foobar 123 is equivalent to x foobar 123 name need not be a Python identifier as defined in Identifiers and keywords unless the object chooses to enforce that for example in a custom __getattribute__ or via __slots__ An attribute whose name is not an identifier will not be accessible using the dot notation but is accessible through getattr etc Note Since private name mangling happens at compilation time one must manually mangle a private attribute s attributes with two leading underscores name in order to set it with setattr class slice stop class slice start stop step None Return a slice object representing the set of indices specified by range start stop step The start and step arguments default to None start stop step Slice objects have read only data attributes start stop and step which merely return the argument values or their default They have no other explicit functionality however they are used by NumPy and other third party packages Slice objects are also generated when extended indexing syntax is used For example a start stop step or a start stop i See itertools islice for an alternate version that returns an iterator Changed in version 3 12 Slice objects are now hashable provided start stop and step are hashable sorted iterable key None reverse False Return a new sorted list from the items in iterable Has two optional arguments which must be specified as keyword arguments key specifies a function of one 
argument that is used to extract a comparison key from each element in iterable for example key str lower The default value is None compare the elements directly reverse is a boolean value If set to True then the list elements are sorted as if each comparison were reversed Use functools cmp_to_key to convert an old style cmp function to a key function The built in sorted function is guaranteed to be stable A sort is stable if it guarantees not to change the relative order of elements that compare equal this is helpful for sorting in multip
le passes for example sort by department then by salary grade The sort algorithm uses only comparisons between items While defining an __lt__ method will suffice for sorting PEP 8 recommends that all six rich comparisons be implemented This will help avoid bugs when using the same data with other ordering tools such as max that rely on a different underlying method Implementing all six comparisons also helps avoid confusion for mixed type comparisons which can call reflected the __gt__ method For sorting examples and a brief sorting tutorial see Sorting Techniques staticmethod Transform a method into a static method A static method does not receive an implicit first argument To declare a static method use this idiom class C staticmethod def f arg1 arg2 argN The staticmethod form is a function decorator see Function definitions for details A static method can be called either on the class such as C f or on an instance such as C f Moreover they can be called as regular functions such as f Static methods in Python are similar to those found in Java or C Also see classmethod for a variant that is useful for creating alternate class constructors Like all decorators it is also possible to call staticmethod as a regular function and do something with its result This is needed in some cases where you need a reference to a function from a class body and you want to avoid the automatic transformation to instance method For these cases use this idiom def regular_function class C method staticmethod regular_function For more information on static methods see The standard type hierarchy Changed in version 3 10 Static methods now inherit the method attributes __module__ __name__ __qualname__ __doc__ and __annotations__ have a new __wrapped__ attribute and are now callable as regular functions class str object class str object b encoding utf 8 errors strict Return a str version of object See str for details str is the built in string class For general information about strings see Text Sequence Type str sum iterable start 0 Sums start and the items of an iterable from left to right and returns the total The iterable s items are normally numbers and the start value is not allowed to be a string For some use cases there are good alternatives to sum The preferred fast way to concatenate a sequence of strings is by calling join sequence To add floating point values with extended precision see math fsum To concatenate a series of iterables consider using itertools chain Changed in version 3 8 The start parameter can be specified as a keyword argument Changed in version 3 12 Summation of floats switched to an algorithm that gives higher accuracy on most builds class super class super type object_or_type None Return a proxy object that delegates method calls to a parent or sibling class of type This is useful for accessing inherited methods that have been overridden in a class The object_or_type determines the method resolution order to be searched The search starts from the class right after the type For example if __mro__ of object_or_type is D B C A object and the value of type is B then super searches C A object The __mro__ attribute of the object_or_type lists the method resolution search order used by both getattr and super The attribute is dynamic and can change whenever the inheritance hierarchy is updated If the second argument is omitted the super object returned is unbound If the second argument is an object isinstance obj type must be true If the second argument is a type issubclass type2 type must 
be true this is useful for classmethods There are two typical use cases for super In a class hierarchy with single inheritance super can be used to refer to parent classes without naming them explicitly thus making the code more maintainable This use closely parallels the use of super in other programming languages The second use case is to support cooperative multiple inheritance in a dynamic execution environment This use case is unique to Python and is not found in statically compiled languages or languages that only support single i
nheritance This makes it possible to implement diamond diagrams where multiple base classes implement the same method Good design dictates that such implementations have the same calling signature in every case because the order of calls is determined at runtime because that order adapts to changes in the class hierarchy and because that order can include sibling classes that are unknown prior to runtime For both use cases a typical superclass call looks like this class C B def method self arg super method arg This does the same thing as super C self method arg In addition to method lookups super also works for attribute lookups One possible use case for this is calling descriptors in a parent or sibling class Note that super is implemented as part of the binding process for explicit dotted attribute lookups such as super __getitem__ name It does so by implementing its own __getattribute__ method for searching classes in a predictable order that supports cooperative multiple inheritance Accordingly super is undefined for implicit lookups using statements or operators such as super name Also note that aside from the zero argument form super is not limited to use inside methods The two argument form specifies the arguments exactly and makes the appropriate references The zero argument form only works inside a class definition as the compiler fills in the necessary details to correctly retrieve the class being defined as well as accessing the current instance for ordinary methods For practical suggestions on how to design cooperative classes using super see guide to using super class tuple class tuple iterable Rather than being a function tuple is actually an immutable sequence type as documented in Tuples and Sequence Types list tuple range class type object class type name bases dict kwds With one argument return the type of an object The return value is a type object and generally the same object as returned by object __class__ The isinstance built in function is recommended for testing the type of an object because it takes subclasses into account With three arguments return a new type object This is essentially a dynamic form of the class statement The name string is the class name and becomes the __name__ attribute The bases tuple contains the base classes and becomes the __bases__ attribute if empty object the ultimate base of all classes is added The dict dictionary contains attribute and method definitions for the class body it may be copied or wrapped before becoming the __dict__ attribute The following two statements create identical type objects class X a 1 X type X dict a 1 See also Type Objects Keyword arguments provided to the three argument form are passed to the appropriate metaclass machinery usually __init_subclass__ in the same way that keywords in a class definition besides metaclass would See also Customizing class creation Changed in version 3 6 Subclasses of type which don t override type __new__ may no longer use the one argument form to get the type of an object vars vars object Return the __dict__ attribute for a module class instance or any other object with a __dict__ attribute Objects such as modules and instances have an updateable __dict__ attribute however other objects may have write restrictions on their __dict__ attributes for example classes use a types MappingProxyType to prevent direct dictionary updates Without an argument vars acts like locals Note the locals dictionary is only useful for reads since updates to the locals dictionary are ignored A 
TypeError exception is raised if an object is specified but it doesn t have a __dict__ attribute for example if its class defines the __slots__ attribute zip iterables strict False Iterate over several iterables in parallel producing tuples with an item from each one Example for item in zip 1 2 3 sugar spice everything nice print item 1 sugar 2 spice 3 everything nice More formally zip returns an iterator of tuples where the i th tuple contains the i th element from each of the argument iterables Another way to think of zip is that it turns
rows into columns and columns into rows This is similar to transposing a matrix zip is lazy The elements won t be processed until the iterable is iterated on e g by a for loop or by wrapping in a list One thing to consider is that the iterables passed to zip could have different lengths sometimes by design and sometimes because of a bug in the code that prepared these iterables Python offers three different approaches to dealing with this issue By default zip stops when the shortest iterable is exhausted It will ignore the remaining items in the longer iterables cutting off the result to the length of the shortest iterable list zip range 3 fee fi fo fum 0 fee 1 fi 2 fo zip is often used in cases where the iterables are assumed to be of equal length In such cases it s recommended to use the strict True option Its output is the same as regular zip list zip a b c 1 2 3 strict True a 1 b 2 c 3 Unlike the default behavior it raises a ValueError if one iterable is exhausted before the others for item in zip range 3 fee fi fo fum strict True print item 0 fee 1 fi 2 fo Traceback most recent call last ValueError zip argument 2 is longer than argument 1 Without the strict True argument any bug that results in iterables of different lengths will be silenced possibly manifesting as a hard to find bug in another part of the program Shorter iterables can be padded with a constant value to make all the iterables have the same length This is done by itertools zip_longest Edge cases With a single iterable argument zip returns an iterator of 1 tuples With no arguments it returns an empty iterator Tips and tricks The left to right evaluation order of the iterables is guaranteed This makes possible an idiom for clustering a data series into n length groups using zip iter s n strict True This repeats the same iterator n times so that each output tuple has the result of n calls to the iterator This has the effect of dividing the input into n length chunks zip in conjunction with the operator can be used to unzip a list x 1 2 3 y 4 5 6 list zip x y 1 4 2 5 3 6 x2 y2 zip zip x y x list x2 and y list y2 True Changed in version 3 10 Added the strict argument __import__ name globals None locals None fromlist level 0 Note This is an advanced function that is not needed in everyday Python programming unlike importlib import_module This function is invoked by the import statement It can be replaced by importing the builtins module and assigning to builtins __import__ in order to change semantics of the import statement but doing so is strongly discouraged as it is usually simpler to use import hooks see PEP 302 to attain the same goals and does not cause issues with code which assumes the default import implementation is in use Direct use of __import__ is also discouraged in favor of importlib import_module The function imports the module name potentially using the given globals and locals to determine how to interpret the name in a package context The fromlist gives the names of objects or submodules that should be imported from the module given by name The standard implementation does not use its locals argument at all and uses its globals only to determine the package context of the import statement level specifies whether to use absolute or relative imports 0 the default means only perform absolute imports Positive values for level indicate the number of parent directories to search relative to the directory of the module calling __import__ see PEP 328 for the details When the name variable is of the form package 
module normally the top level package the name up till the first dot is returned not the module named by name However when a non empty fromlist argument is given the module named by name is returned For example the statement import spam results in bytecode resembling the following code spam __import__ spam globals locals 0 The statement import spam ham results in this call spam __import__ spam ham globals locals 0 Note how __import__ returns the toplevel module here because this is the object that is bound to a name by the import stateme
nt On the other hand the statement from spam ham import eggs sausage as saus results in _temp __import__ spam ham globals locals eggs sausage 0 eggs _temp eggs saus _temp sausage Here the spam ham module is returned from __import__ From this object the names to import are retrieved and assigned to their respective names If you simply want to import a module potentially within a package by name use importlib import_module Changed in version 3 3 Negative values for level are no longer supported which also changes the default value to 0 Changed in version 3 9 When the command line options E or I are being used the environment variable PYTHONCASEOK is now ignored Footnotes 1 Note that the parser only accepts the Unix style end of line convention If you are reading the code from a file make sure to use newline conversion mode to convert Windows or Mac style newlines
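To illustrate the recommendation above about importlib.import_module, here is a minimal sketch; the module name "json" is just an arbitrary example, chosen because it is in the standard library:

import importlib

# Equivalent in effect to "import json", but the name is supplied as a string,
# which is useful when the module to load is only known at run time.
mod = importlib.import_module("json")
print(mod.dumps({"answer": 42}))

The same call also accepts dotted names for modules inside packages, for example importlib.import_module("email.mime.text").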
Futures Source code Lib asyncio futures py Lib asyncio base_futures py Future objects are used to bridge low level callback based code with high level async await code Future Functions asyncio isfuture obj Return True if obj is either of an instance of asyncio Future an instance of asyncio Task a Future like object with a _asyncio_future_blocking attribute New in version 3 5 asyncio ensure_future obj loop None Return obj argument as is if obj is a Future a Task or a Future like object isfuture is used for the test a Task object wrapping obj if obj is a coroutine iscoroutine is used for the test in this case the coroutine will be scheduled by ensure_future a Task object that would await on obj if obj is an awaitable inspect isawaitable is used for the test If obj is neither of the above a TypeError is raised Important See also the create_task function which is the preferred way for creating new Tasks Save a reference to the result of this function to avoid a task disappearing mid execution Changed in version 3 5 1 The function accepts any awaitable object Deprecated since version 3 10 Deprecation warning is emitted if obj is not a Future like object and loop is not specified and there is no running event loop asyncio wrap_future future loop None Wrap a concurrent futures Future object in a asyncio Future object Deprecated since version 3 10 Deprecation warning is emitted if future is not a Future like object and loop is not specified and there is no running event loop Future Object class asyncio Future loop None A Future represents an eventual result of an asynchronous operation Not thread safe Future is an awaitable object Coroutines can await on Future objects until they either have a result or an exception set or until they are cancelled A Future can be awaited multiple times and the result is same Typically Futures are used to enable low level callback based code e g in protocols implemented using asyncio transports to interoperate with high level async await code The rule of thumb is to never expose Future objects in user facing APIs and the recommended way to create a Future object is to call loop create_future This way alternative event loop implementations can inject their own optimized implementations of a Future object Changed in version 3 7 Added support for the contextvars module Deprecated since version 3 10 Deprecation warning is emitted if loop is not specified and there is no running event loop result Return the result of the Future If the Future is done and has a result set by the set_result method the result value is returned If the Future is done and has an exception set by the set_exception method this method raises the exception If the Future has been cancelled this method raises a CancelledError exception If the Future s result isn t yet available this method raises a InvalidStateError exception set_result result Mark the Future as done and set its result Raises a InvalidStateError error if the Future is already done set_exception exception Mark the Future as done and set an exception Raises a InvalidStateError error if the Future is already done done Return True if the Future is done A Future is done if it was cancelled or if it has a result or an exception set with set_result or set_exception calls cancelled Return True if the Future was cancelled The method is usually used to check if a Future is not cancelled before setting a result or an exception for it if not fut cancelled fut set_result 42 add_done_callback callback context None Add a callback to be run when 
the Future is done The callback is called with the Future object as its only argument If the Future is already done when this method is called the callback is scheduled with loop call_soon An optional keyword only context argument allows specifying a custom contextvars Context for the callback to run in The current context is used when no context is provided functools partial can be used to pass parameters to the callback e g Call print Future fut when fut is done fut add_done_callback functools partial print Future Changed in version 3 7
The context keyword only parameter was added See PEP 567 for more details remove_done_callback callback Remove callback from the callbacks list Returns the number of callbacks removed which is typically 1 unless a callback was added more than once cancel msg None Cancel the Future and schedule callbacks If the Future is already done or cancelled return False Otherwise change the Future s state to cancelled schedule the callbacks and return True Changed in version 3 9 Added the msg parameter exception Return the exception that was set on this Future The exception or None if no exception was set is returned only if the Future is done If the Future has been cancelled this method raises a CancelledError exception If the Future isn t done yet this method raises an InvalidStateError exception get_loop Return the event loop the Future object is bound to New in version 3 7 This example creates a Future object creates and schedules an asynchronous Task to set result for the Future and waits until the Future has a result async def set_after fut delay value Sleep for delay seconds await asyncio sleep delay Set value as a result of fut Future fut set_result value async def main Get the current event loop loop asyncio get_running_loop Create a new Future object fut loop create_future Run set_after coroutine in a parallel Task We are using the low level loop create_task API here because we already have a reference to the event loop at hand Otherwise we could have just used asyncio create_task loop create_task set_after fut 1 world print hello Wait until fut has a result 1 second and print it print await fut asyncio run main Important The Future object was designed to mimic concurrent futures Future Key differences include unlike asyncio Futures concurrent futures Future instances cannot be awaited asyncio Future result and asyncio Future exception do not accept the timeout argument asyncio Future result and asyncio Future exception raise an InvalidStateError exception when the Future is not done Callbacks registered with asyncio Future add_done_callback are not called immediately They are scheduled with loop call_soon instead asyncio Future is not compatible with the concurrent futures wait and concurrent futures as_completed functions asyncio Future cancel accepts an optional msg argument but concurrent futures Future cancel does not
Curses Programming with Python Author A M Kuchling Eric S Raymond Release 2 04 Abstract This document describes how to use the curses extension module to control text mode displays What is curses The curses library supplies a terminal independent screen painting and keyboard handling facility for text based terminals such terminals include VT100s the Linux console and the simulated terminal provided by various programs Display terminals support various control codes to perform common operations such as moving the cursor scrolling the screen and erasing areas Different terminals use widely differing codes and often have their own minor quirks In a world of graphical displays one might ask why bother It s true that character cell display terminals are an obsolete technology but there are niches in which being able to do fancy things with them are still valuable One niche is on small footprint or embedded Unixes that don t run an X server Another is tools such as OS installers and kernel configurators that may have to run before any graphical support is available The curses library provides fairly basic functionality providing the programmer with an abstraction of a display containing multiple non overlapping windows of text The contents of a window can be changed in various ways adding text erasing it changing its appearance and the curses library will figure out what control codes need to be sent to the terminal to produce the right output curses doesn t provide many user interface concepts such as buttons checkboxes or dialogs if you need such features consider a user interface library such as Urwid The curses library was originally written for BSD Unix the later System V versions of Unix from AT T added many enhancements and new functions BSD curses is no longer maintained having been replaced by ncurses which is an open source implementation of the AT T interface If you re using an open source Unix such as Linux or FreeBSD your system almost certainly uses ncurses Since most current commercial Unix versions are based on System V code all the functions described here will probably be available The older versions of curses carried by some proprietary Unixes may not support everything though The Windows version of Python doesn t include the curses module A ported version called UniCurses is available The Python curses module The Python module is a fairly simple wrapper over the C functions provided by curses if you re already familiar with curses programming in C it s really easy to transfer that knowledge to Python The biggest difference is that the Python interface makes things simpler by merging different C functions such as addstr mvaddstr and mvwaddstr into a single addstr method You ll see this covered in more detail later This HOWTO is an introduction to writing text mode programs with curses and Python It doesn t attempt to be a complete guide to the curses API for that see the Python library guide s section on ncurses and the C manual pages for ncurses It will however give you the basic ideas Starting and ending a curses application Before doing anything curses must be initialized This is done by calling the initscr function which will determine the terminal type send any required setup codes to the terminal and create various internal data structures If successful initscr returns a window object representing the entire screen this is usually called stdscr after the name of the corresponding C variable import curses stdscr curses initscr Usually curses applications turn off 
automatic echoing of keys to the screen in order to be able to read keys and only display them under certain circumstances This requires calling the noecho function curses noecho Applications will also commonly need to react to keys instantly without requiring the Enter key to be pressed this is called cbreak mode as opposed to the usual buffered input mode curses cbreak Terminals usually return special keys such as the cursor keys or navigation keys such as Page Up and Home as a multibyte escape sequence While you could write your application
to expect such sequences and process them accordingly curses can do it for you returning a special value such as curses KEY_LEFT To get curses to do the job you ll have to enable keypad mode stdscr keypad True Terminating a curses application is much easier than starting one You ll need to call curses nocbreak stdscr keypad False curses echo to reverse the curses friendly terminal settings Then call the endwin function to restore the terminal to its original operating mode curses endwin A common problem when debugging a curses application is to get your terminal messed up when the application dies without restoring the terminal to its previous state In Python this commonly happens when your code is buggy and raises an uncaught exception Keys are no longer echoed to the screen when you type them for example which makes using the shell difficult In Python you can avoid these complications and make debugging much easier by importing the curses wrapper function and using it like this from curses import wrapper def main stdscr Clear screen stdscr clear This raises ZeroDivisionError when i 10 for i in range 0 11 v i 10 stdscr addstr i 0 10 divided by is format v 10 v stdscr refresh stdscr getkey wrapper main The wrapper function takes a callable object and does the initializations described above also initializing colors if color support is present wrapper then runs your provided callable Once the callable returns wrapper will restore the original state of the terminal The callable is called inside a try except that catches exceptions restores the state of the terminal and then re raises the exception Therefore your terminal won t be left in a funny state on exception and you ll be able to read the exception s message and traceback Windows and Pads Windows are the basic abstraction in curses A window object represents a rectangular area of the screen and supports methods to display text erase it allow the user to input strings and so forth The stdscr object returned by the initscr function is a window object that covers the entire screen Many programs may need only this single window but you might wish to divide the screen into smaller windows in order to redraw or clear them separately The newwin function creates a new window of a given size returning the new window object begin_x 20 begin_y 7 height 5 width 40 win curses newwin height width begin_y begin_x Note that the coordinate system used in curses is unusual Coordinates are always passed in the order y x and the top left corner of a window is coordinate 0 0 This breaks the normal convention for handling coordinates where the x coordinate comes first This is an unfortunate difference from most other computer applications but it s been part of curses since it was first written and it s too late to change things now Your application can determine the size of the screen by using the curses LINES and curses COLS variables to obtain the y and x sizes Legal coordinates will then extend from 0 0 to curses LINES 1 curses COLS 1 When you call a method to display or erase text the effect doesn t immediately show up on the display Instead you must call the refresh method of window objects to update the screen This is because curses was originally written with slow 300 baud terminal connections in mind with these terminals minimizing the time required to redraw the screen was very important Instead curses accumulates changes to the screen and displays them in the most efficient manner when you call refresh For example if your program displays some 
text in a window and then clears the window there s no need to send the original text because they re never visible In practice explicitly telling curses to redraw a window doesn t really complicate programming with curses much Most programs go into a flurry of activity and then pause waiting for a keypress or some other action on the part of the user All you have to do is to be sure that the screen has been redrawn before pausing to wait for user input by first calling stdscr refresh or the refresh method of some other relevant window A
pad is a special case of a window it can be larger than the actual display screen and only a portion of the pad displayed at a time Creating a pad requires the pad s height and width while refreshing a pad requires giving the coordinates of the on screen area where a subsection of the pad will be displayed pad curses newpad 100 100 These loops fill the pad with letters addch is explained in the next section for y in range 0 99 for x in range 0 99 pad addch y x ord a x x y y 26 Displays a section of the pad in the middle of the screen 0 0 coordinate of upper left corner of pad area to display 5 5 coordinate of upper left corner of window area to be filled with pad content 20 75 coordinate of lower right corner of window area to be filled with pad content pad refresh 0 0 5 5 20 75 The refresh call displays a section of the pad in the rectangle extending from coordinate 5 5 to coordinate 20 75 on the screen the upper left corner of the displayed section is coordinate 0 0 on the pad Beyond that difference pads are exactly like ordinary windows and support the same methods If you have multiple windows and pads on screen there is a more efficient way to update the screen and prevent annoying screen flicker as each part of the screen gets updated refresh actually does two things 1 Calls the noutrefresh method of each window to update an underlying data structure representing the desired state of the screen 2 Calls the function doupdate function to change the physical screen to match the desired state recorded in the data structure Instead you can call noutrefresh on a number of windows to update the data structure and then call doupdate to update the screen Displaying Text From a C programmer s point of view curses may sometimes look like a twisty maze of functions all subtly different For example addstr displays a string at the current cursor location in the stdscr window while mvaddstr moves to a given y x coordinate first before displaying the string waddstr is just like addstr but allows specifying a window to use instead of using stdscr by default mvwaddstr allows specifying both a window and a coordinate Fortunately the Python interface hides all these details stdscr is a window object like any other and methods such as addstr accept multiple argument forms Usually there are four different forms Form Description str or ch Display the string str or character ch at the current position str or ch attr Display the string str or character ch using attribute attr at the current position y x str or ch Move to position y x within the window and display str or ch y x str or ch attr Move to position y x within the window and display str or ch using attribute attr Attributes allow displaying text in highlighted forms such as boldface underline reverse code or in color They ll be explained in more detail in the next subsection The addstr method takes a Python string or bytestring as the value to be displayed The contents of bytestrings are sent to the terminal as is Strings are encoded to bytes using the value of the window s encoding attribute this defaults to the default system encoding as returned by locale getencoding The addch methods take a character which can be either a string of length 1 a bytestring of length 1 or an integer Constants are provided for extension characters these constants are integers greater than 255 For example ACS_PLMINUS is a symbol and ACS_ULCORNER is the upper left corner of a box handy for drawing borders You can also use the appropriate Unicode character Windows 
remember where the cursor was left after the last operation so if you leave out the y x coordinates the string or character will be displayed wherever the last operation left off You can also move the cursor with the move y x method Because some terminals always display a flashing cursor you may want to ensure that the cursor is positioned in some location where it won t be distracting it can be confusing to have the cursor blinking at some apparently random location If your application doesn t need a blinking cursor at all you can call curs_
set False to make it invisible For compatibility with older curses versions there s a leaveok bool function that s a synonym for curs_set When bool is true the curses library will attempt to suppress the flashing cursor and you won t need to worry about leaving it in odd locations Attributes and Color Characters can be displayed in different ways Status lines in a text based application are commonly shown in reverse video or a text viewer may need to highlight certain words curses supports this by allowing you to specify an attribute for each cell on the screen An attribute is an integer each bit representing a different attribute You can try to display text with multiple attribute bits set but curses doesn t guarantee that all the possible combinations are available or that they re all visually distinct That depends on the ability of the terminal being used so it s safest to stick to the most commonly available attributes listed here Attribute Description A_BLINK Blinking text A_BOLD Extra bright or bold text A_DIM Half bright text A_REVERSE Reverse video text A_STANDOUT The best highlighting mode available A_UNDERLINE Underlined text So to display a reverse video status line on the top line of the screen you could code stdscr addstr 0 0 Current mode Typing mode curses A_REVERSE stdscr refresh The curses library also supports color on those terminals that provide it The most common such terminal is probably the Linux console followed by color xterms To use color you must call the start_color function soon after calling initscr to initialize the default color set the curses wrapper function does this automatically Once that s done the has_colors function returns TRUE if the terminal in use can actually display color Note curses uses the American spelling color instead of the Canadian British spelling colour If you re used to the British spelling you ll have to resign yourself to misspelling it for the sake of these functions The curses library maintains a finite number of color pairs containing a foreground or text color and a background color You can get the attribute value corresponding to a color pair with the color_pair function this can be bitwise OR ed with other attributes such as A_REVERSE but again such combinations are not guaranteed to work on all terminals An example which displays a line of text using color pair 1 stdscr addstr Pretty text curses color_pair 1 stdscr refresh As I said before a color pair consists of a foreground and background color The init_pair n f b function changes the definition of color pair n to foreground color f and background color b Color pair 0 is hard wired to white on black and cannot be changed Colors are numbered and start_color initializes 8 basic colors when it activates color mode They are 0 black 1 red 2 green 3 yellow 4 blue 5 magenta 6 cyan and 7 white The curses module defines named constants for each of these colors curses COLOR_BLACK curses COLOR_RED and so forth Let s put all this together To change color 1 to red text on a white background you would call curses init_pair 1 curses COLOR_RED curses COLOR_WHITE When you change a color pair any text already displayed using that color pair will change to the new colors You can also display new text in this color with stdscr addstr 0 0 RED ALERT curses color_pair 1 Very fancy terminals can change the definitions of the actual colors to a given RGB value This lets you change color 1 which is usually red to purple or blue or any other color you like Unfortunately the Linux console doesn t 
support this so I m unable to try it out and can t provide any examples You can check if your terminal can do this by calling can_change_color which returns True if the capability is there If you re lucky enough to have such a talented terminal consult your system s man pages for more information User Input The C curses library offers only very simple input mechanisms Python s curses module adds a basic text input widget Other libraries such as Urwid have more extensive collections of widgets There are two methods for getting input from a
window getch refreshes the screen and then waits for the user to hit a key displaying the key if echo has been called earlier You can optionally specify a coordinate to which the cursor should be moved before pausing getkey does the same thing but converts the integer to a string Individual characters are returned as 1 character strings and special keys such as function keys return longer strings containing a key name such as KEY_UP or G It s possible to not wait for the user using the nodelay window method After nodelay True getch and getkey for the window become non blocking To signal that no input is ready getch returns curses ERR a value of 1 and getkey raises an exception There s also a halfdelay function which can be used to in effect set a timer on each getch if no input becomes available within a specified delay measured in tenths of a second curses raises an exception The getch method returns an integer if it s between 0 and 255 it represents the ASCII code of the key pressed Values greater than 255 are special keys such as Page Up Home or the cursor keys You can compare the value returned to constants such as curses KEY_PPAGE curses KEY_HOME or curses KEY_LEFT The main loop of your program may look something like this while True c stdscr getch if c ord p PrintDocument elif c ord q break Exit the while loop elif c curses KEY_HOME x y 0 The curses ascii module supplies ASCII class membership functions that take either integer or 1 character string arguments these may be useful in writing more readable tests for such loops It also supplies conversion functions that take either integer or 1 character string arguments and return the same type For example curses ascii ctrl returns the control character corresponding to its argument There s also a method to retrieve an entire string getstr It isn t used very often because its functionality is quite limited the only editing keys available are the backspace key and the Enter key which terminates the string It can optionally be limited to a fixed number of characters curses echo Enable echoing of characters Get a 15 character string with the cursor on the top line s stdscr getstr 0 0 15 The curses textpad module supplies a text box that supports an Emacs like set of keybindings Various methods of the Textbox class support editing with input validation and gathering the edit results either with or without trailing spaces Here s an example import curses from curses textpad import Textbox rectangle def main stdscr stdscr addstr 0 0 Enter IM message hit Ctrl G to send editwin curses newwin 5 30 2 1 rectangle stdscr 1 0 1 5 1 1 30 1 stdscr refresh box Textbox editwin Let the user edit until Ctrl G is struck box edit Get resulting contents message box gather See the library documentation on curses textpad for more details For More Information This HOWTO doesn t cover some advanced topics such as reading the contents of the screen or capturing mouse events from an xterm instance but the Python library page for the curses module is now reasonably complete You should browse it next If you re in doubt about the detailed behavior of the curses functions consult the manual pages for your curses implementation whether it s ncurses or a proprietary Unix vendor s The manual pages will document any quirks and provide complete lists of all the functions attributes and ACS_ characters available to you Because the curses API is so large some functions aren t supported in the Python interface Often this isn t because they re difficult to implement but because 
no one has needed them yet Also Python doesn t yet support the menu library associated with ncurses Patches adding support for these would be welcome see the Python Developer s Guide to learn more about submitting patches to Python Writing Programs with NCURSES a lengthy tutorial for C programmers The ncurses man page The ncurses FAQ Use curses don t swear video of a PyCon 2013 talk on controlling terminals using curses or Urwid Console Applications with Urwid video of a PyCon CA 2012 talk demonstrating some applications written using
Urwid
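As a capstone for the HOWTO, here is a minimal, self-contained sketch that ties together wrapper, color pairs, attributes and getch. The message text and the red-on-white color pair are arbitrary illustrative choices; run it in a real terminal:

import curses

def main(stdscr):
    # wrapper() has already called initscr(), noecho(), cbreak(),
    # stdscr.keypad(True) and, if the terminal supports it, start_color().
    curses.init_pair(1, curses.COLOR_RED, curses.COLOR_WHITE)
    stdscr.clear()
    stdscr.addstr(0, 0, "Press 'q' to quit", curses.A_REVERSE)
    stdscr.addstr(2, 2, "Hello from curses", curses.color_pair(1))
    stdscr.refresh()
    while True:
        c = stdscr.getch()
        if c == ord('q'):
            break

curses.wrapper(main)

Because wrapper restores the terminal even when an exception is raised inside main, this structure is also a convenient starting point for experimentation.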
The None Object. Note that the PyTypeObject for None is not directly exposed in the Python/C API. Since None is a singleton, testing for object identity (using == in C) is sufficient. There is no PyNone_Check() function for the same reason. PyObject *Py_None: The Python None object, denoting lack of value. This object has no methods and is immortal. Changed in version 3.12: Py_None is immortal. Py_RETURN_NONE: Return Py_None from a function.
pydoc Documentation generator and online help system Source code Lib pydoc py The pydoc module automatically generates documentation from Python modules The documentation can be presented as pages of text on the console served to a web browser or saved to HTML files For modules classes functions and methods the displayed documentation is derived from the docstring i e the __doc__ attribute of the object and recursively of its documentable members If there is no docstring pydoc tries to obtain a description from the block of comment lines just above the definition of the class function or method in the source file or at the top of the module see inspect getcomments The built in function help invokes the online help system in the interactive interpreter which uses pydoc to generate its documentation as text on the console The same text documentation can also be viewed from outside the Python interpreter by running pydoc as a script at the operating system s command prompt For example running python m pydoc sys at a shell prompt will display documentation on the sys module in a style similar to the manual pages shown by the Unix man command The argument to pydoc can be the name of a function module or package or a dotted reference to a class method or function within a module or module in a package If the argument to pydoc looks like a path that is it contains the path separator for your operating system such as a slash in Unix and refers to an existing Python source file then documentation is produced for that file Note In order to find objects and their documentation pydoc imports the module s to be documented Therefore any code on module level will be executed on that occasion Use an if __name__ __main__ guard to only execute code when a file is invoked as a script and not just imported When printing output to the console pydoc attempts to paginate the output for easier reading If the PAGER environment variable is set pydoc will use its value as a pagination program Specifying a w flag before the argument will cause HTML documentation to be written out to a file in the current directory instead of displaying text on the console Specifying a k flag before the argument will search the synopsis lines of all available modules for the keyword given as the argument again in a manner similar to the Unix man command The synopsis line of a module is the first line of its documentation string You can also use pydoc to start an HTTP server on the local machine that will serve documentation to visiting web browsers python m pydoc p 1234 will start a HTTP server on port 1234 allowing you to browse the documentation at http localhost 1234 in your preferred web browser Specifying 0 as the port number will select an arbitrary unused port python m pydoc n hostname will start the server listening at the given hostname By default the hostname is localhost but if you want the server to be reached from other machines you may want to change the host name that the server responds to During development this is especially useful if you want to run pydoc from within a container python m pydoc b will start the server and additionally open a web browser to a module index page Each served page has a navigation bar at the top where you can Get help on an individual item Search all modules with a keyword in their synopsis line and go to the Module index Topics and Keywords pages When pydoc generates documentation it uses the current environment and path to locate modules Thus invoking pydoc spam documents precisely the 
version of the module you would get if you started the Python interpreter and typed import spam Module docs for core modules are assumed to reside in https docs python org X Y library where X and Y are the major and minor version numbers of the Python interpreter This can be overridden by setting the PYTHONDOCS environment variable to a different URL or to a local directory containing the Library Reference Manual pages Changed in version 3 2 Added the b option Changed in version 3 3 The g command line option was removed Changed in versi
on 3 4 pydoc now uses inspect signature rather than inspect getfullargspec to extract signature information from callables Changed in version 3 7 Added the n option
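As a small illustration of the __main__ guard mentioned in the note above, a module written like this hypothetical mycalc.py can be documented safely, because the guarded code does not run when pydoc imports it:

"""mycalc -- a tiny example module."""

def add(a, b):
    """Return the sum of a and b."""
    return a + b

if __name__ == "__main__":
    # Runs only when executed as a script, not when pydoc imports the module.
    print(add(2, 3))

Running python -m pydoc mycalc (with the file on the module search path) would then show the module and function docstrings as console text, and python -m pydoc -w mycalc would write them to an HTML file instead.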
Dictionary Objects type PyDictObject This subtype of PyObject represents a Python dictionary object PyTypeObject PyDict_Type Part of the Stable ABI This instance of PyTypeObject represents the Python dictionary type This is the same object as dict in the Python layer int PyDict_Check PyObject p Return true if p is a dict object or an instance of a subtype of the dict type This function always succeeds int PyDict_CheckExact PyObject p Return true if p is a dict object but not an instance of a subtype of the dict type This function always succeeds PyObject PyDict_New Return value New reference Part of the Stable ABI Return a new empty dictionary or NULL on failure PyObject PyDictProxy_New PyObject mapping Return value New reference Part of the Stable ABI Return a types MappingProxyType object for a mapping which enforces read only behavior This is normally used to create a view to prevent modification of the dictionary for non dynamic class types void PyDict_Clear PyObject p Part of the Stable ABI Empty an existing dictionary of all key value pairs int PyDict_Contains PyObject p PyObject key Part of the Stable ABI Determine if dictionary p contains key If an item in p is matches key return 1 otherwise return 0 On error return 1 This is equivalent to the Python expression key in p PyObject PyDict_Copy PyObject p Return value New reference Part of the Stable ABI Return a new dictionary that contains the same key value pairs as p int PyDict_SetItem PyObject p PyObject key PyObject val Part of the Stable ABI Insert val into the dictionary p with a key of key key must be hashable if it isn t TypeError will be raised Return 0 on success or 1 on failure This function does not steal a reference to val int PyDict_SetItemString PyObject p const char key PyObject val Part of the Stable ABI This is the same as PyDict_SetItem but key is specified as a const char UTF 8 encoded bytes string rather than a PyObject int PyDict_DelItem PyObject p PyObject key Part of the Stable ABI Remove the entry in dictionary p with key key key must be hashable if it isn t TypeError is raised If key is not in the dictionary KeyError is raised Return 0 on success or 1 on failure int PyDict_DelItemString PyObject p const char key Part of the Stable ABI This is the same as PyDict_DelItem but key is specified as a const char UTF 8 encoded bytes string rather than a PyObject PyObject PyDict_GetItem PyObject p PyObject key Return value Borrowed reference Part of the Stable ABI Return the object from dictionary p which has a key key Return NULL if the key key is not present but without setting an exception Note Exceptions that occur while this calls __hash__ and __eq__ methods are silently ignored Prefer the PyDict_GetItemWithError function instead Changed in version 3 10 Calling this API without GIL held had been allowed for historical reason It is no longer allowed PyObject PyDict_GetItemWithError PyObject p PyObject key Return value Borrowed reference Part of the Stable ABI Variant of PyDict_GetItem that does not suppress exceptions Return NULL with an exception set if an exception occurred Return NULL without an exception set if the key wasn t present PyObject PyDict_GetItemString PyObject p const char key Return value Borrowed reference Part of the Stable ABI This is the same as PyDict_GetItem but key is specified as a const char UTF 8 encoded bytes string rather than a PyObject Note Exceptions that occur while this calls __hash__ and __eq__ methods or while creating the temporary str object are silently ignored Prefer using 
the PyDict_GetItemWithError function with your own PyUnicode_FromString key instead PyObject PyDict_SetDefault PyObject p PyObject key PyObject defaultobj Return value Borrowed reference This is the same as the Python level dict setdefault If present it returns the value corresponding to key from the dictionary p If the key is not in the dict it is inserted with value defaultobj and defaultobj is returned This function evaluates the hash function of key only once instead of evaluating it independently for the lookup and the insertion Ne
w in version 3 4 PyObject PyDict_Items PyObject p Return value New reference Part of the Stable ABI Return a PyListObject containing all the items from the dictionary PyObject PyDict_Keys PyObject p Return value New reference Part of the Stable ABI Return a PyListObject containing all the keys from the dictionary PyObject PyDict_Values PyObject p Return value New reference Part of the Stable ABI Return a PyListObject containing all the values from the dictionary p Py_ssize_t PyDict_Size PyObject p Part of the Stable ABI Return the number of items in the dictionary This is equivalent to len p on a dictionary int PyDict_Next PyObject p Py_ssize_t ppos PyObject pkey PyObject pvalue Part of the Stable ABI Iterate over all key value pairs in the dictionary p The Py_ssize_t referred to by ppos must be initialized to 0 prior to the first call to this function to start the iteration the function returns true for each pair in the dictionary and false once all pairs have been reported The parameters pkey and pvalue should either point to PyObject variables that will be filled in with each key and value respectively or may be NULL Any references returned through them are borrowed ppos should not be altered during iteration Its value represents offsets within the internal dictionary structure and since the structure is sparse the offsets are not consecutive For example PyObject key value Py_ssize_t pos 0 while PyDict_Next self dict pos key value do something interesting with the values The dictionary p should not be mutated during iteration It is safe to modify the values of the keys as you iterate over the dictionary but only so long as the set of keys does not change For example PyObject key value Py_ssize_t pos 0 while PyDict_Next self dict pos key value long i PyLong_AsLong value if i 1 PyErr_Occurred return 1 PyObject o PyLong_FromLong i 1 if o NULL return 1 if PyDict_SetItem self dict key o 0 Py_DECREF o return 1 Py_DECREF o int PyDict_Merge PyObject a PyObject b int override Part of the Stable ABI Iterate over mapping object b adding key value pairs to dictionary a b may be a dictionary or any object supporting PyMapping_Keys and PyObject_GetItem If override is true existing pairs in a will be replaced if a matching key is found in b otherwise pairs will only be added if there is not a matching key in a Return 0 on success or 1 if an exception was raised int PyDict_Update PyObject a PyObject b Part of the Stable ABI This is the same as PyDict_Merge a b 1 in C and is similar to a update b in Python except that PyDict_Update doesn t fall back to the iterating over a sequence of key value pairs if the second argument has no keys attribute Return 0 on success or 1 if an exception was raised int PyDict_MergeFromSeq2 PyObject a PyObject seq2 int override Part of the Stable ABI Update or merge into dictionary a from the key value pairs in seq2 seq2 must be an iterable object producing iterable objects of length 2 viewed as key value pairs In case of duplicate keys the last wins if override is true else the first wins Return 0 on success or 1 if an exception was raised Equivalent Python except for the return value def PyDict_MergeFromSeq2 a seq2 override for key value in seq2 if override or key not in a a key value int PyDict_AddWatcher PyDict_WatchCallback callback Register callback as a dictionary watcher Return a non negative integer id which must be passed to future calls to PyDict_Watch In case of error e g no more watcher IDs available return 1 and set an exception New in version 3 12 int 
PyDict_ClearWatcher int watcher_id Clear watcher identified by watcher_id previously returned from PyDict_AddWatcher Return 0 on success 1 on error e g if the given watcher_id was never registered New in version 3 12 int PyDict_Watch int watcher_id PyObject dict Mark dictionary dict as watched The callback granted watcher_id by PyDict_AddWatcher will be called when dict is modified or deallocated Return 0 on success or 1 on error New in version 3 12 int PyDict_Unwatch int watcher_id PyObject dict Mark dictionary dict as no longer watched The c
allback granted watcher_id by PyDict_AddWatcher will no longer be called when dict is modified or deallocated The dict must previously have been watched by this watcher Return 0 on success or 1 on error New in version 3 12 type PyDict_WatchEvent Enumeration of possible dictionary watcher events PyDict_EVENT_ADDED PyDict_EVENT_MODIFIED PyDict_EVENT_DELETED PyDict_EVENT_CLONED PyDict_EVENT_CLEARED or PyDict_EVENT_DEALLOCATED New in version 3 12 typedef int PyDict_WatchCallback PyDict_WatchEvent event PyObject dict PyObject key PyObject new_value Type of a dict watcher callback function If event is PyDict_EVENT_CLEARED or PyDict_EVENT_DEALLOCATED both key and new_value will be NULL If event is PyDict_EVENT_ADDED or PyDict_EVENT_MODIFIED new_value will be the new value for key If event is PyDict_EVENT_DELETED key is being deleted from the dictionary and new_value will be NULL PyDict_EVENT_CLONED occurs when dict was previously empty and another dict is merged into it To maintain efficiency of this operation per key PyDict_EVENT_ADDED events are not issued in this case instead a single PyDict_EVENT_CLONED is issued and key will be the source dictionary The callback may inspect but must not modify dict doing so could have unpredictable effects including infinite recursion Do not trigger Python code execution in the callback as it could modify the dict as a side effect If event is PyDict_EVENT_DEALLOCATED taking a new reference in the callback to the about to be destroyed dictionary will resurrect it and prevent it from being freed at this time When the resurrected object is destroyed later any watcher callbacks active at that time will be called again Callbacks occur before the notified modification to dict takes place so the prior state of dict can be inspected If the callback sets an exception it must return 1 this exception will be printed as an unraisable exception using PyErr_WriteUnraisable Otherwise it should return 0 There may already be a pending exception set on entry to the callback In this case the callback should return 0 with the same exception still set This means the callback may not call any other API that can set an exception unless it saves and clears the exception state first and restores it before returning New in version 3 12
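The C functions above largely mirror operations that are already familiar at the Python level. As a rough cross-reference only (this is ordinary Python, not the C API, and the variable names are purely illustrative), the corresponding behaviors look like this:

    d = {}                             # PyDict_New
    d["spam"] = 1                      # PyDict_SetItem / PyDict_SetItemString
    "spam" in d                        # PyDict_Contains -> True
    d.setdefault("eggs", 0)            # PyDict_SetDefault: insert if missing, return the value
    d.update({"spam": 2, "ham": 3})    # PyDict_Update / PyDict_Merge with override true
    list(d.items())                    # PyDict_Items
    len(d)                             # PyDict_Size
    for key, value in d.items():       # PyDict_Next-style iteration
        pass                           # values may be replaced while iterating, but the set of keys must not change

Note that the watcher functions (PyDict_AddWatcher and friends) have no Python-level counterpart; they are available only through the C API.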
xml dom The Document Object Model API Source code Lib xml dom __init__ py The Document Object Model or DOM is a cross language API from the World Wide Web Consortium W3C for accessing and modifying XML documents A DOM implementation presents an XML document as a tree structure or allows client code to build such a structure from scratch It then gives access to the structure through a set of objects which provided well known interfaces The DOM is extremely useful for random access applications SAX only allows you a view of one bit of the document at a time If you are looking at one SAX element you have no access to another If you are looking at a text node you have no access to a containing element When you write a SAX application you need to keep track of your program s position in the document somewhere in your own code SAX does not do it for you Also if you need to look ahead in the XML document you are just out of luck Some applications are simply impossible in an event driven model with no access to a tree Of course you could build some sort of tree yourself in SAX events but the DOM allows you to avoid writing that code The DOM is a standard tree representation for XML data The Document Object Model is being defined by the W3C in stages or levels in their terminology The Python mapping of the API is substantially based on the DOM Level 2 recommendation DOM applications typically start by parsing some XML into a DOM How this is accomplished is not covered at all by DOM Level 1 and Level 2 provides only limited improvements There is a DOMImplementation object class which provides access to Document creation methods but no way to access an XML reader parser Document builder in an implementation independent way There is also no well defined way to access these methods without an existing Document object In Python each DOM implementation will provide a function getDOMImplementation DOM Level 3 adds a Load Store specification which defines an interface to the reader but this is not yet available in the Python standard library Once you have a DOM document object you can access the parts of your XML document through its properties and methods These properties are defined in the DOM specification this portion of the reference manual describes the interpretation of the specification in Python The specification provided by the W3C defines the DOM API for Java ECMAScript and OMG IDL The Python mapping defined here is based in large part on the IDL version of the specification but strict compliance is not required though implementations are free to support the strict mapping from IDL See section Conformance for a detailed discussion of mapping requirements See also Document Object Model DOM Level 2 Specification The W3C recommendation upon which the Python DOM API is based Document Object Model DOM Level 1 Specification The W3C recommendation for the DOM supported by xml dom minidom Python Language Mapping Specification This specifies the mapping from OMG IDL to Python Module Contents The xml dom contains the following functions xml dom registerDOMImplementation name factory Register the factory function with the name name The factory function should return an object which implements the DOMImplementation interface The factory function can return the same object every time or a new one for each call as appropriate for the specific implementation e g if that implementation supports some customization xml dom getDOMImplementation name None features Return a suitable DOM implementation The name is 
either well known the module name of a DOM implementation or None If it is not None imports the corresponding module and returns a DOMImplementation object if the import succeeds If no name is given and if the environment variable PYTHON_DOM is set this variable is used to find the implementation If name is not given this examines the available implementations to find one with the required feature set If no implementation can be found raise an ImportError The features list must be a sequence of feature version pairs which are passed to th
e hasFeature method on available DOMImplementation objects Some convenience constants are also provided xml dom EMPTY_NAMESPACE The value used to indicate that no namespace is associated with a node in the DOM This is typically found as the namespaceURI of a node or used as the namespaceURI parameter to a namespaces specific method xml dom XML_NAMESPACE The namespace URI associated with the reserved prefix xml as defined by Namespaces in XML section 4 xml dom XMLNS_NAMESPACE The namespace URI for namespace declarations as defined by Document Object Model DOM Level 2 Core Specification section 1 1 8 xml dom XHTML_NAMESPACE The URI of the XHTML namespace as defined by XHTML 1 0 The Extensible HyperText Markup Language section 3 1 1 In addition xml dom contains a base Node class and the DOM exception classes The Node class provided by this module does not implement any of the methods or attributes defined by the DOM specification concrete DOM implementations must provide those The Node class provided as part of this module does provide the constants used for the nodeType attribute on concrete Node objects they are located within the class rather than at the module level to conform with the DOM specifications Objects in the DOM The definitive documentation for the DOM is the DOM specification from the W3C Note that DOM attributes may also be manipulated as nodes instead of as simple strings It is fairly rare that you must do this however so this usage is not yet documented Interface Section Purpose DOMImplementation DOMImplementation Objects Interface to the underlying implementation Node Node Objects Base interface for most objects in a document NodeList NodeList Objects Interface for a sequence of nodes DocumentType DocumentType Objects Information about the declarations needed to process a document Document Document Objects Object which represents an entire document Element Element Objects Element nodes in the document hierarchy Attr Attr Objects Attribute value nodes on element nodes Comment Comment Objects Representation of comments in the source document Text Text and CDATASection Objects Nodes containing textual content from the document ProcessingInstruction ProcessingInstruction Objects Processing instruction representation An additional section describes the exceptions defined for working with the DOM in Python DOMImplementation Objects The DOMImplementation interface provides a way for applications to determine the availability of particular features in the DOM they are using DOM Level 2 added the ability to create new Document and DocumentType objects using the DOMImplementation as well DOMImplementation hasFeature feature version Return True if the feature identified by the pair of strings feature and version is implemented DOMImplementation createDocument namespaceUri qualifiedName doctype Return a new Document object the root of the DOM with a child Element object having the given namespaceUri and qualifiedName The doctype must be a DocumentType object created by createDocumentType or None In the Python DOM API the first two arguments can also be None in order to indicate that no Element child is to be created DOMImplementation createDocumentType qualifiedName publicId systemId Return a new DocumentType object that encapsulates the given qualifiedName publicId and systemId strings representing the information contained in an XML document type declaration Node Objects All of the components of an XML document are subclasses of Node Node nodeType An integer representing the node 
type Symbolic constants for the types are on the Node object ELEMENT_NODE ATTRIBUTE_NODE TEXT_NODE CDATA_SECTION_NODE ENTITY_NODE PROCESSING_INSTRUCTION_NODE COMMENT_NODE DOCUMENT_NODE DOCUMENT_TYPE_NODE NOTATION_NODE This is a read only attribute Node parentNode The parent of the current node or None for the document node The value is always a Node object or None For Element nodes this will be the parent element except for the root element in which case it will be the Document object For Attr nodes this is always None This is a read only
attribute Node attributes A NamedNodeMap of attribute objects Only elements have actual values for this others provide None for this attribute This is a read only attribute Node previousSibling The node that immediately precedes this one with the same parent For instance the element with an end tag that comes just before the self element s start tag Of course XML documents are made up of more than just elements so the previous sibling could be text a comment or something else If this node is the first child of the parent this attribute will be None This is a read only attribute Node nextSibling The node that immediately follows this one with the same parent See also previousSibling If this is the last child of the parent this attribute will be None This is a read only attribute Node childNodes A list of nodes contained within this node This is a read only attribute Node firstChild The first child of the node if there are any or None This is a read only attribute Node lastChild The last child of the node if there are any or None This is a read only attribute Node localName The part of the tagName following the colon if there is one else the entire tagName The value is a string Node prefix The part of the tagName preceding the colon if there is one else the empty string The value is a string or None Node namespaceURI The namespace associated with the element name This will be a string or None This is a read only attribute Node nodeName This has a different meaning for each node type see the DOM specification for details You can always get the information you would get here from another property such as the tagName property for elements or the name property for attributes For all node types the value of this attribute will be either a string or None This is a read only attribute Node nodeValue This has a different meaning for each node type see the DOM specification for details The situation is similar to that with nodeName The value is a string or None Node hasAttributes Return True if the node has any attributes Node hasChildNodes Return True if the node has any child nodes Node isSameNode other Return True if other refers to the same node as this node This is especially useful for DOM implementations which use any sort of proxy architecture because more than one object can refer to the same node Note This is based on a proposed DOM Level 3 API which is still in the working draft stage but this particular interface appears uncontroversial Changes from the W3C will not necessarily affect this method in the Python DOM interface though any new W3C API for this would also be supported Node appendChild newChild Add a new child node to this node at the end of the list of children returning newChild If the node was already in the tree it is removed first Node insertBefore newChild refChild Insert a new child node before an existing child It must be the case that refChild is a child of this node if not ValueError is raised newChild is returned If refChild is None it inserts newChild at the end of the children s list Node removeChild oldChild Remove a child node oldChild must be a child of this node if not ValueError is raised oldChild is returned on success If oldChild will not be used further its unlink method should be called Node replaceChild newChild oldChild Replace an existing node with a new node It must be the case that oldChild is a child of this node if not ValueError is raised Node normalize Join adjacent text nodes so that all stretches of text are stored as single Text instances This 
simplifies processing text from a DOM tree for many applications Node cloneNode deep Clone this node Setting deep means to clone all child nodes as well This returns the clone NodeList Objects A NodeList represents a sequence of nodes These objects are used in two ways in the DOM Core recommendation an Element object provides one as its list of child nodes and the getElementsByTagName and getElementsByTagNameNS methods of Node return objects with this interface to represent query results The DOM Level 2 recommendation defines one method
and one attribute for these objects NodeList item i Return the i th item from the sequence if there is one or None The index i is not allowed to be less than zero or greater than or equal to the length of the sequence NodeList length The number of nodes in the sequence In addition the Python DOM interface requires that some additional support is provided to allow NodeList objects to be used as Python sequences All NodeList implementations must include support for __len__ and __getitem__ this allows iteration over the NodeList in for statements and proper support for the len built in function If a DOM implementation supports modification of the document the NodeList implementation must also support the __setitem__ and __delitem__ methods DocumentType Objects Information about the notations and entities declared by a document including the external subset if the parser uses it and can provide the information is available from a DocumentType object The DocumentType for a document is available from the Document object s doctype attribute if there is no DOCTYPE declaration for the document the document s doctype attribute will be set to None instead of an instance of this interface DocumentType is a specialization of Node and adds the following attributes DocumentType publicId The public identifier for the external subset of the document type definition This will be a string or None DocumentType systemId The system identifier for the external subset of the document type definition This will be a URI as a string or None DocumentType internalSubset A string giving the complete internal subset from the document This does not include the brackets which enclose the subset If the document has no internal subset this should be None DocumentType name The name of the root element as given in the DOCTYPE declaration if present DocumentType entities This is a NamedNodeMap giving the definitions of external entities For entity names defined more than once only the first definition is provided others are ignored as required by the XML recommendation This may be None if the information is not provided by the parser or if no entities are defined DocumentType notations This is a NamedNodeMap giving the definitions of notations For notation names defined more than once only the first definition is provided others are ignored as required by the XML recommendation This may be None if the information is not provided by the parser or if no notations are defined Document Objects A Document represents an entire XML document including its constituent elements attributes processing instructions comments etc Remember that it inherits properties from Node Document documentElement The one and only root element of the document Document createElement tagName Create and return a new element node The element is not inserted into the document when it is created You need to explicitly insert it with one of the other methods such as insertBefore or appendChild Document createElementNS namespaceURI tagName Create and return a new element with a namespace The tagName may have a prefix The element is not inserted into the document when it is created You need to explicitly insert it with one of the other methods such as insertBefore or appendChild Document createTextNode data Create and return a text node containing the data passed as a parameter As with the other creation methods this one does not insert the node into the tree Document createComment data Create and return a comment node containing the data passed as a parameter As 
with the other creation methods this one does not insert the node into the tree Document createProcessingInstruction target data Create and return a processing instruction node containing the target and data passed as parameters As with the other creation methods this one does not insert the node into the tree Document createAttribute name Create and return an attribute node This method does not associate the attribute node with any particular element You must use setAttributeNode on the appropriate Element object to use the newly crea
ted attribute instance Document createAttributeNS namespaceURI qualifiedName Create and return an attribute node with a namespace The tagName may have a prefix This method does not associate the attribute node with any particular element You must use setAttributeNode on the appropriate Element object to use the newly created attribute instance Document getElementsByTagName tagName Search for all descendants direct children children s children etc with a particular element type name Document getElementsByTagNameNS namespaceURI localName Search for all descendants direct children children s children etc with a particular namespace URI and localname The localname is the part of the namespace after the prefix Element Objects Element is a subclass of Node so inherits all the attributes of that class Element tagName The element type name In a namespace using document it may have colons in it The value is a string Element getElementsByTagName tagName Same as equivalent method in the Document class Element getElementsByTagNameNS namespaceURI localName Same as equivalent method in the Document class Element hasAttribute name Return True if the element has an attribute named by name Element hasAttributeNS namespaceURI localName Return True if the element has an attribute named by namespaceURI and localName Element getAttribute name Return the value of the attribute named by name as a string If no such attribute exists an empty string is returned as if the attribute had no value Element getAttributeNode attrname Return the Attr node for the attribute named by attrname Element getAttributeNS namespaceURI localName Return the value of the attribute named by namespaceURI and localName as a string If no such attribute exists an empty string is returned as if the attribute had no value Element getAttributeNodeNS namespaceURI localName Return an attribute value as a node given a namespaceURI and localName Element removeAttribute name Remove an attribute by name If there is no matching attribute a NotFoundErr is raised Element removeAttributeNode oldAttr Remove and return oldAttr from the attribute list if present If oldAttr is not present NotFoundErr is raised Element removeAttributeNS namespaceURI localName Remove an attribute by name Note that it uses a localName not a qname No exception is raised if there is no matching attribute Element setAttribute name value Set an attribute value from a string Element setAttributeNode newAttr Add a new attribute node to the element replacing an existing attribute if necessary if the name attribute matches If a replacement occurs the old attribute node will be returned If newAttr is already in use InuseAttributeErr will be raised Element setAttributeNodeNS newAttr Add a new attribute node to the element replacing an existing attribute if necessary if the namespaceURI and localName attributes match If a replacement occurs the old attribute node will be returned If newAttr is already in use InuseAttributeErr will be raised Element setAttributeNS namespaceURI qname value Set an attribute value from a string given a namespaceURI and a qname Note that a qname is the whole attribute name This is different than above Attr Objects Attr inherits from Node so inherits all its attributes Attr name The attribute name In a namespace using document it may include a colon Attr localName The part of the name following the colon if there is one else the entire name This is a read only attribute Attr prefix The part of the name preceding the colon if there is one else the empty string 
Attr value The text value of the attribute This is a synonym for the nodeValue attribute NamedNodeMap Objects NamedNodeMap does not inherit from Node NamedNodeMap length The length of the attribute list NamedNodeMap item index Return an attribute with a particular index The order you get the attributes in is arbitrary but will be consistent for the life of a DOM Each item is an attribute node Get its value with the value attribute There are also experimental methods that give this class more mapping behavior You can use them or you can
use the standardized getAttribute family of methods on the Element objects Comment Objects Comment represents a comment in the XML document It is a subclass of Node but cannot have child nodes Comment data The content of the comment as a string The attribute contains all characters between the leading and trailing but does not include them Text and CDATASection Objects The Text interface represents text in the XML document If the parser and DOM implementation support the DOM s XML extension portions of the text enclosed in CDATA marked sections are stored in CDATASection objects These two interfaces are identical but provide different values for the nodeType attribute These interfaces extend the Node interface They cannot have child nodes Text data The content of the text node as a string Note The use of a CDATASection node does not indicate that the node represents a complete CDATA marked section only that the content of the node was part of a CDATA section A single CDATA section may be represented by more than one node in the document tree There is no way to determine whether two adjacent CDATASection nodes represent different CDATA marked sections ProcessingInstruction Objects Represents a processing instruction in the XML document this inherits from the Node interface and cannot have child nodes ProcessingInstruction target The content of the processing instruction up to the first whitespace character This is a read only attribute ProcessingInstruction data The content of the processing instruction following the first whitespace character Exceptions The DOM Level 2 recommendation defines a single exception DOMException and a number of constants that allow applications to determine what sort of error occurred DOMException instances carry a code attribute that provides the appropriate value for the specific exception The Python DOM interface provides the constants but also expands the set of exceptions so that a specific exception exists for each of the exception codes defined by the DOM The implementations must raise the appropriate specific exception each of which carries the appropriate value for the code attribute exception xml dom DOMException Base exception class used for all specific DOM exceptions This exception class cannot be directly instantiated exception xml dom DomstringSizeErr Raised when a specified range of text does not fit into a string This is not known to be used in the Python DOM implementations but may be received from DOM implementations not written in Python exception xml dom HierarchyRequestErr Raised when an attempt is made to insert a node where the node type is not allowed exception xml dom IndexSizeErr Raised when an index or size parameter to a method is negative or exceeds the allowed values exception xml dom InuseAttributeErr Raised when an attempt is made to insert an Attr node that is already present elsewhere in the document exception xml dom InvalidAccessErr Raised if a parameter or an operation is not supported on the underlying object exception xml dom InvalidCharacterErr This exception is raised when a string parameter contains a character that is not permitted in the context it s being used in by the XML 1 0 recommendation For example attempting to create an Element node with a space in the element type name will cause this error to be raised exception xml dom InvalidModificationErr Raised when an attempt is made to modify the type of a node exception xml dom InvalidStateErr Raised when an attempt is made to use an object that is not defined or is 
no longer usable exception xml dom NamespaceErr If an attempt is made to change any object in a way that is not permitted with regard to the Namespaces in XML recommendation this exception is raised exception xml dom NotFoundErr Exception when a node does not exist in the referenced context For example NamedNodeMap removeNamedItem will raise this if the node passed in does not exist in the map exception xml dom NotSupportedErr Raised when the implementation does not support the requested type of object or operation exception xml dom N
oDataAllowedErr This is raised if data is specified for a node which does not support data exception xml dom NoModificationAllowedErr Raised on attempts to modify an object where modifications are not allowed such as for read only nodes exception xml dom SyntaxErr Raised when an invalid or illegal string is specified exception xml dom WrongDocumentErr Raised when a node is inserted in a different document than it currently belongs to and the implementation does not support migrating the node from one document to the other The exception codes defined in the DOM recommendation map to the exceptions described above according to this table Constant Exception DOMSTRING_SIZE_ERR DomstringSizeErr HIERARCHY_REQUEST_ERR HierarchyRequestErr INDEX_SIZE_ERR IndexSizeErr INUSE_ATTRIBUTE_ERR InuseAttributeErr INVALID_ACCESS_ERR InvalidAccessErr INVALID_CHARACTER_ERR InvalidCharacterErr INVALID_MODIFICATION_ERR InvalidModificationErr INVALID_STATE_ERR InvalidStateErr NAMESPACE_ERR NamespaceErr NOT_FOUND_ERR NotFoundErr NOT_SUPPORTED_ERR NotSupportedErr NO_DATA_ALLOWED_ERR NoDataAllowedErr NO_MODIFICATION_ALLOWED_ERR NoModificationAllowedErr SYNTAX_ERR SyntaxErr WRONG_DOCUMENT_ERR WrongDocumentErr Conformance This section describes the conformance requirements and relationships between the Python DOM API the W3C DOM recommendations and the OMG IDL mapping for Python Type Mapping The IDL types used in the DOM specification are mapped to Python types according to the following table IDL Type Python Type boolean bool or int int int long int int unsigned int int DOMString str or bytes null None Accessor Methods The mapping from OMG IDL to Python defines accessor functions for IDL attribute declarations in much the way the Java mapping does Mapping the IDL declarations readonly attribute string someValue attribute string anotherValue yields three accessor functions a get method for someValue _get_someValue and get and set methods for anotherValue _get_anotherValue and _set_anotherValue The mapping in particular does not require that the IDL attributes are accessible as normal Python attributes object someValue is not required to work and may raise an AttributeError The Python DOM API however does require that normal attribute access work This means that the typical surrogates generated by Python IDL compilers are not likely to work and wrapper objects may be needed on the client if the DOM objects are accessed via CORBA While this does require some additional consideration for CORBA DOM clients the implementers with experience using DOM over CORBA from Python do not consider this a problem Attributes that are declared readonly may not restrict write access in all DOM implementations In the Python DOM API accessor functions are not required If provided they should take the form defined by the Python IDL mapping but these methods are considered unnecessary since the attributes are accessible directly from Python Set accessors should never be provided for readonly attributes The IDL definitions do not fully embody the requirements of the W3C DOM API such as the notion of certain objects such as the return value of getElementsByTagName being live The Python DOM API does not require implementations to enforce such requirements
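As a small end-to-end sketch of the interfaces described above (assuming the xml.dom.minidom implementation from the standard library as the concrete DOM; the element and attribute names are purely illustrative):

    from xml.dom import getDOMImplementation

    impl = getDOMImplementation()
    doc = impl.createDocument(None, "library", None)    # DOMImplementation.createDocument
    root = doc.documentElement                          # Document.documentElement

    book = doc.createElement("book")                    # nodes are created detached from the tree
    book.setAttribute("id", "b1")                       # Element.setAttribute
    title = doc.createElement("title")
    title.appendChild(doc.createTextNode("Example"))    # Document.createTextNode
    book.appendChild(title)
    root.appendChild(book)                              # and must be inserted explicitly

    for elem in doc.getElementsByTagName("title"):      # Document.getElementsByTagName
        print(elem.tagName, elem.firstChild.data)       # prints: title Example

This builds a document whose root element is library with a single book child; serializing it (for example with minidom's toxml method) is left to the concrete implementation.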
nis Interface to Sun s NIS Yellow Pages Deprecated since version 3 11 will be removed in version 3 13 The nis module is deprecated see PEP 594 for details The nis module gives a thin wrapper around the NIS library useful for central administration of several hosts Because NIS exists only on Unix systems this module is only available for Unix Availability not Emscripten not WASI This module does not work or is not available on WebAssembly platforms wasm32 emscripten and wasm32 wasi See WebAssembly platforms for more information The nis module defines the following functions nis match key mapname domain default_domain Return the match for key in map mapname or raise an error nis error if there is none Both should be strings key is 8 bit clean Return value is an arbitrary array of bytes may contain NULL and other joys Note that mapname is first checked if it is an alias to another name The domain argument allows overriding the NIS domain used for the lookup If unspecified lookup is in the default NIS domain nis cat mapname domain default_domain Return a dictionary mapping key to value such that match key mapname value Note that both keys and values of the dictionary are arbitrary arrays of bytes Note that mapname is first checked if it is an alias to another name The domain argument allows overriding the NIS domain used for the lookup If unspecified lookup is in the default NIS domain nis maps domain default_domain Return a list of all valid maps The domain argument allows overriding the NIS domain used for the lookup If unspecified lookup is in the default NIS domain nis get_default_domain Return the system default NIS domain The nis module defines the following exception exception nis error An error raised when a NIS function returns an error code
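Because NIS is host-specific, any example is necessarily illustrative: it requires a Unix machine with NIS configured, and the key and map name used below ("alice", "passwd.byname") are conventional examples rather than values guaranteed to exist. A minimal hedged sketch:

    import nis

    try:
        print(nis.get_default_domain())              # domain used when none is passed explicitly
        entry = nis.match("alice", "passwd.byname")  # hypothetical key and map name
        print(entry)
        for key, value in nis.cat("passwd.byname").items():
            print(key, value)                        # keys and values may be raw byte data
    except nis.error as exc:
        print("NIS lookup failed:", exc)

On hosts without a working NIS setup the calls simply raise nis.error, which the sketch catches.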
collections Container datatypes Source code Lib collections __init__ py This module implements specialized container datatypes providing alternatives to Python s general purpose built in containers dict list set and tuple namedtuple factory function for creating tuple subclasses with named fields deque list like container with fast appends and pops on either end ChainMap dict like class for creating a single view of multiple mappings Counter dict subclass for counting hashable objects OrderedDict dict subclass that remembers the order entries were added defaultdict dict subclass that calls a factory function to supply missing values UserDict wrapper around dictionary objects for easier dict subclassing UserList wrapper around list objects for easier list subclassing UserString wrapper around string objects for easier string subclassing ChainMap objects New in version 3 3 A ChainMap class is provided for quickly linking a number of mappings so they can be treated as a single unit It is often much faster than creating a new dictionary and running multiple update calls The class can be used to simulate nested scopes and is useful in templating class collections ChainMap maps A ChainMap groups multiple dicts or other mappings together to create a single updateable view If no maps are specified a single empty dictionary is provided so that a new chain always has at least one mapping The underlying mappings are stored in a list That list is public and can be accessed or updated using the maps attribute There is no other state Lookups search the underlying mappings successively until a key is found In contrast writes updates and deletions only operate on the first mapping A ChainMap incorporates the underlying mappings by reference So if one of the underlying mappings gets updated those changes will be reflected in ChainMap All of the usual dictionary methods are supported In addition there is a maps attribute a method for creating new subcontexts and a property for accessing all but the first mapping maps A user updateable list of mappings The list is ordered from first searched to last searched It is the only stored state and can be modified to change which mappings are searched The list should always contain at least one mapping new_child m None kwargs Returns a new ChainMap containing a new map followed by all of the maps in the current instance If m is specified it becomes the new map at the front of the list of mappings if not specified an empty dict is used so that a call to d new_child is equivalent to ChainMap d maps If any keyword arguments are specified they update passed map or new empty dict This method is used for creating subcontexts that can be updated without altering values in any of the parent mappings Changed in version 3 4 The optional m parameter was added Changed in version 3 10 Keyword arguments support was added parents Property returning a new ChainMap containing all of the maps in the current instance except the first one This is useful for skipping the first map in the search Use cases are similar to those for the nonlocal keyword used in nested scopes The use cases also parallel those for the built in super function A reference to d parents is equivalent to ChainMap d maps 1 Note the iteration order of a ChainMap is determined by scanning the mappings last to first baseline music bach art rembrandt adjustments art van gogh opera carmen list ChainMap adjustments baseline music art opera This gives the same ordering as a series of dict update calls starting with the 
last mapping combined baseline copy combined update adjustments list combined music art opera Changed in version 3 9 Added support for and operators specified in PEP 584 See also The MultiContext class in the Enthought CodeTools package has options to support writing to any mapping in the chain Django s Context class for templating is a read only chain of mappings It also features pushing and popping of contexts similar to the new_child method and the parents property The Nested Contexts recipe has options to control whether writes and oth
er mutations apply only to the first mapping or to any mapping in the chain A greatly simplified read only version of Chainmap ChainMap Examples and Recipes This section shows various approaches to working with chained maps Example of simulating Python s internal lookup chain import builtins pylookup ChainMap locals globals vars builtins Example of letting user specified command line arguments take precedence over environment variables which in turn take precedence over default values import os argparse defaults color red user guest parser argparse ArgumentParser parser add_argument u user parser add_argument c color namespace parser parse_args command_line_args k v for k v in vars namespace items if v is not None combined ChainMap command_line_args os environ defaults print combined color print combined user Example patterns for using the ChainMap class to simulate nested contexts c ChainMap Create root context d c new_child Create nested child context e c new_child Child of c independent from d e maps 0 Current context dictionary like Python s locals e maps 1 Root context like Python s globals e parents Enclosing context chain like Python s nonlocals d x 1 Set value in current context d x Get first key in the chain of contexts del d x Delete from current context list d All nested values k in d Check all nested values len d Number of nested values d items All nested items dict d Flatten into a regular dictionary The ChainMap class only makes updates writes and deletions to the first mapping in the chain while lookups will search the full chain However if deep writes and deletions are desired it is easy to make a subclass that updates keys found deeper in the chain class DeepChainMap ChainMap Variant of ChainMap that allows direct updates to inner scopes def __setitem__ self key value for mapping in self maps if key in mapping mapping key value return self maps 0 key value def __delitem__ self key for mapping in self maps if key in mapping del mapping key return raise KeyError key d DeepChainMap zebra black elephant blue lion yellow d lion orange update an existing key two levels down d snake red new keys get added to the topmost dict del d elephant remove an existing key one level down d display result DeepChainMap zebra black snake red lion orange Counter objects A counter tool is provided to support convenient and rapid tallies For example Tally occurrences of words in a list cnt Counter for word in red blue red green blue blue cnt word 1 cnt Counter blue 3 red 2 green 1 Find the ten most common words in Hamlet import re words re findall r w open hamlet txt read lower Counter words most_common 10 the 1143 and 966 to 762 of 669 i 631 you 554 a 546 my 514 hamlet 471 in 451 class collections Counter iterable or mapping A Counter is a dict subclass for counting hashable objects It is a collection where elements are stored as dictionary keys and their counts are stored as dictionary values Counts are allowed to be any integer value including zero or negative counts The Counter class is similar to bags or multisets in other languages Elements are counted from an iterable or initialized from another mapping or counter c Counter a new empty counter c Counter gallahad a new counter from an iterable c Counter red 4 blue 2 a new counter from a mapping c Counter cats 4 dogs 8 a new counter from keyword args Counter objects have a dictionary interface except that they return a zero count for missing items instead of raising a KeyError c Counter eggs ham c bacon count of a missing element is zero 0 
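Pulling those pieces together, a compact sketch of the Counter behaviors just described (the word list here is purely illustrative):

    from collections import Counter

    words = "the quick brown fox jumps over the lazy dog the end".split()
    counts = Counter(words)           # counts built directly from an iterable
    print(counts["the"])              # 3
    print(counts["missing"])          # 0 -- missing keys report a zero count, no KeyError
    print(counts.most_common(2))      # [('the', 3), ('quick', 1)] -- ties keep first-encountered order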
Setting a count to zero does not remove an element from a counter Use del to remove it entirely c sausage 0 counter entry with a zero count del c sausage del actually removes the entry New in version 3 1 Changed in version 3 7 As a dict subclass Counter inherited the capability to remember insertion order Math operations on Counter objects also preserve order Results are ordered according to when an element is first encountered in the left operand and then by the order encountered in the right operand Counter objects support additional m
ethods beyond those available for all dictionaries elements Return an iterator over elements repeating each as many times as its count Elements are returned in the order first encountered If an element s count is less than one elements will ignore it c Counter a 4 b 2 c 0 d 2 sorted c elements a a a a b b most_common n Return a list of the n most common elements and their counts from the most common to the least If n is omitted or None most_common returns all elements in the counter Elements with equal counts are ordered in the order first encountered Counter abracadabra most_common 3 a 5 b 2 r 2 subtract iterable or mapping Elements are subtracted from an iterable or from another mapping or counter Like dict update but subtracts counts instead of replacing them Both inputs and outputs may be zero or negative c Counter a 4 b 2 c 0 d 2 d Counter a 1 b 2 c 3 d 4 c subtract d c Counter a 3 b 0 c 3 d 6 New in version 3 2 total Compute the sum of the counts c Counter a 10 b 5 c 0 c total 15 New in version 3 10 The usual dictionary methods are available for Counter objects except for two which work differently for counters fromkeys iterable This class method is not implemented for Counter objects update iterable or mapping Elements are counted from an iterable or added in from another mapping or counter Like dict update but adds counts instead of replacing them Also the iterable is expected to be a sequence of elements not a sequence of key value pairs Counters support rich comparison operators for equality subset and superset relationships All of those tests treat missing elements as having zero counts so that Counter a 1 Counter a 1 b 0 returns true Changed in version 3 10 Rich comparison operations were added Changed in version 3 10 In equality tests missing elements are treated as having zero counts Formerly Counter a 3 and Counter a 3 b 0 were considered distinct Common patterns for working with Counter objects c total total of all counts c clear reset all counts list c list unique elements set c convert to a set dict c convert to a regular dictionary c items convert to a list of elem cnt pairs Counter dict list_of_pairs convert from a list of elem cnt pairs c most_common n 1 1 n least common elements c remove zero and negative counts Several mathematical operations are provided for combining Counter objects to produce multisets counters that have counts greater than zero Addition and subtraction combine counters by adding or subtracting the counts of corresponding elements Intersection and union return the minimum and maximum of corresponding counts Equality and inclusion compare corresponding counts Each operation can accept inputs with signed counts but the output will exclude results with counts of zero or less c Counter a 3 b 1 d Counter a 1 b 2 c d add two counters together c x d x Counter a 4 b 3 c d subtract keeping only positive counts Counter a 2 c d intersection min c x d x Counter a 1 b 1 c d union max c x d x Counter a 3 b 2 c d equality c x d x False c d inclusion c x d x False Unary addition and subtraction are shortcuts for adding an empty counter or subtracting from an empty counter c Counter a 2 b 4 c Counter a 2 c Counter b 4 New in version 3 3 Added support for unary plus unary minus and in place multiset operations Note Counters were primarily designed to work with positive integers to represent running counts however care was taken to not unnecessarily preclude use cases needing other types or negative values To help with those use cases this section documents the 
minimum range and type restrictions The Counter class itself is a dictionary subclass with no restrictions on its keys and values The values are intended to be numbers representing counts but you could store anything in the value field The most_common method requires only that the values be orderable For in place operations such as c key 1 the value type need only support addition and subtraction So fractions floats and decimals would work and negative values are supported The same is also true for update and subtract which allow negative a
nd zero values for both inputs and outputs The multiset methods are designed only for use cases with positive values The inputs may be negative or zero but only outputs with positive values are created There are no type restrictions but the value type needs to support addition subtraction and comparison The elements method requires integer counts It ignores zero and negative counts See also Bag class in Smalltalk Wikipedia entry for Multisets C multisets tutorial with examples For mathematical operations on multisets and their use cases see Knuth Donald The Art of Computer Programming Volume II Section 4 6 3 Exercise 19 To enumerate all distinct multisets of a given size over a given set of elements see itertools combinations_with_replacement map Counter combinations_with_replacement ABC 2 AA AB AC BB BC CC deque objects class collections deque iterable maxlen Returns a new deque object initialized left to right using append with data from iterable If iterable is not specified the new deque is empty Deques are a generalization of stacks and queues the name is pronounced deck and is short for double ended queue Deques support thread safe memory efficient appends and pops from either side of the deque with approximately the same O 1 performance in either direction Though list objects support similar operations they are optimized for fast fixed length operations and incur O n memory movement costs for pop 0 and insert 0 v operations which change both the size and position of the underlying data representation If maxlen is not specified or is None deques may grow to an arbitrary length Otherwise the deque is bounded to the specified maximum length Once a bounded length deque is full when new items are added a corresponding number of items are discarded from the opposite end Bounded length deques provide functionality similar to the tail filter in Unix They are also useful for tracking transactions and other pools of data where only the most recent activity is of interest Deque objects support the following methods append x Add x to the right side of the deque appendleft x Add x to the left side of the deque clear Remove all elements from the deque leaving it with length 0 copy Create a shallow copy of the deque New in version 3 5 count x Count the number of deque elements equal to x New in version 3 2 extend iterable Extend the right side of the deque by appending elements from the iterable argument extendleft iterable Extend the left side of the deque by appending elements from iterable Note the series of left appends results in reversing the order of elements in the iterable argument index x start stop Return the position of x in the deque at or after index start and before index stop Returns the first match or raises ValueError if not found New in version 3 5 insert i x Insert x into the deque at position i If the insertion would cause a bounded deque to grow beyond maxlen an IndexError is raised New in version 3 5 pop Remove and return an element from the right side of the deque If no elements are present raises an IndexError popleft Remove and return an element from the left side of the deque If no elements are present raises an IndexError remove value Remove the first occurrence of value If not found raises a ValueError reverse Reverse the elements of the deque in place and then return None New in version 3 2 rotate n 1 Rotate the deque n steps to the right If n is negative rotate to the left When the deque is not empty rotating one step to the right is equivalent to d appendleft d pop 
and rotating one step to the left is equivalent to d append d popleft Deque objects also provide one read only attribute maxlen Maximum size of a deque or None if unbounded New in version 3 1 In addition to the above deques support iteration pickling len d reversed d copy copy d copy deepcopy d membership testing with the in operator and subscript references such as d 0 to access the first element Indexed access is O 1 at both ends but slows to O n in the middle For fast random access use lists instead Starting in version 3 5 deques supp
ort __add__ __mul__ and __imul__ Example from collections import deque d deque ghi make a new deque with three items for elem in d iterate over the deque s elements print elem upper G H I d append j add a new entry to the right side d appendleft f add a new entry to the left side d show the representation of the deque deque f g h i j d pop return and remove the rightmost item j d popleft return and remove the leftmost item f list d list the contents of the deque g h i d 0 peek at leftmost item g d 1 peek at rightmost item i list reversed d list the contents of a deque in reverse i h g h in d search the deque True d extend jkl add multiple elements at once d deque g h i j k l d rotate 1 right rotation d deque l g h i j k d rotate 1 left rotation d deque g h i j k l deque reversed d make a new deque in reverse order deque l k j i h g d clear empty the deque d pop cannot pop from an empty deque Traceback most recent call last File pyshell 6 line 1 in toplevel d pop IndexError pop from an empty deque d extendleft abc extendleft reverses the input order d deque c b a deque Recipes This section shows various approaches to working with deques Bounded length deques provide functionality similar to the tail filter in Unix def tail filename n 10 Return the last n lines of a file with open filename as f return deque f n Another approach to using deques is to maintain a sequence of recently added elements by appending to the right and popping to the left def moving_average iterable n 3 moving_average 40 30 50 46 39 44 40 0 42 0 45 0 43 0 https en wikipedia org wiki Moving_average it iter iterable d deque itertools islice it n 1 d appendleft 0 s sum d for elem in it s elem d popleft d append elem yield s n A round robin scheduler can be implemented with input iterators stored in a deque Values are yielded from the active iterator in position zero If that iterator is exhausted it can be removed with popleft otherwise it can be cycled back to the end with the rotate method def roundrobin iterables roundrobin ABC D EF A D E B F C iterators deque map iter iterables while iterators try while True yield next iterators 0 iterators rotate 1 except StopIteration Remove an exhausted iterator iterators popleft The rotate method provides a way to implement deque slicing and deletion For example a pure Python implementation of del d n relies on the rotate method to position elements to be popped def delete_nth d n d rotate n d popleft d rotate n To implement deque slicing use a similar approach applying rotate to bring a target element to the left side of the deque Remove old entries with popleft add new entries with extend and then reverse the rotation With minor variations on that approach it is easy to implement Forth style stack manipulations such as dup drop swap over pick rot and roll defaultdict objects class collections defaultdict default_factory None Return a new dictionary like object defaultdict is a subclass of the built in dict class It overrides one method and adds one writable instance variable The remaining functionality is the same as for the dict class and is not documented here The first argument provides the initial value for the default_factory attribute it defaults to None All remaining arguments are treated the same as if they were passed to the dict constructor including keyword arguments defaultdict objects support the following method in addition to the standard dict operations __missing__ key If the default_factory attribute is None this raises a KeyError exception with the key as 
argument If default_factory is not None it is called without arguments to provide a default value for the given key this value is inserted in the dictionary for the key and returned If calling default_factory raises an exception this exception is propagated unchanged This method is called by the __getitem__ method of the dict class when the requested key is not found whatever it returns or raises is then returned or raised by __getitem__ Note that __missing__ is not called for any operations besides __getitem__ This means that get will like n
defaultdict objects support the following instance variable:

default_factory

This attribute is used by the __missing__() method; it is initialized from the first argument to the constructor, if present, or to None, if absent.

Changed in version 3.9: Added merge (|) and update (|=) operators, specified in PEP 584.

defaultdict Examples

Using list as the default_factory, it is easy to group a sequence of key-value pairs into a dictionary of lists:

>>> s = [('yellow', 1), ('blue', 2), ('yellow', 3), ('blue', 4), ('red', 1)]
>>> d = defaultdict(list)
>>> for k, v in s:
...     d[k].append(v)
...
>>> sorted(d.items())
[('blue', [2, 4]), ('red', [1]), ('yellow', [1, 3])]

When each key is encountered for the first time, it is not already in the mapping; so an entry is automatically created using the default_factory function which returns an empty list. The list.append() operation then attaches the value to the new list. When keys are encountered again, the look-up proceeds normally (returning the list for that key) and the list.append() operation adds another value to the list. This technique is simpler and faster than an equivalent technique using dict.setdefault():

>>> d = {}
>>> for k, v in s:
...     d.setdefault(k, []).append(v)
...
>>> sorted(d.items())
[('blue', [2, 4]), ('red', [1]), ('yellow', [1, 3])]

Setting the default_factory to int makes the defaultdict useful for counting (like a bag or multiset in other languages):

>>> s = 'mississippi'
>>> d = defaultdict(int)
>>> for k in s:
...     d[k] += 1
...
>>> sorted(d.items())
[('i', 4), ('m', 1), ('p', 2), ('s', 4)]

When a letter is first encountered, it is missing from the mapping, so the default_factory function calls int() to supply a default count of zero. The increment operation then builds up the count for each letter.

The function int() which always returns zero is just a special case of constant functions. A faster and more flexible way to create constant functions is to use a lambda function which can supply any constant value (not just zero):

>>> def constant_factory(value):
...     return lambda: value
...
>>> d = defaultdict(constant_factory('<missing>'))
>>> d.update(name='John', action='ran')
>>> '%(name)s %(action)s to %(object)s' % d
'John ran to <missing>'

Setting the default_factory to set makes the defaultdict useful for building a dictionary of sets:

>>> s = [('red', 1), ('blue', 2), ('red', 3), ('blue', 4), ('red', 1), ('blue', 4)]
>>> d = defaultdict(set)
>>> for k, v in s:
...     d[k].add(v)
...
>>> sorted(d.items())
[('blue', {2, 4}), ('red', {1, 3})]

namedtuple() Factory Function for Tuples with Named Fields

Named tuples assign meaning to each position in a tuple and allow for more readable, self-documenting code. They can be used wherever regular tuples are used, and they add the ability to access fields by name instead of position index.

collections.namedtuple(typename, field_names, *, rename=False, defaults=None, module=None)

Returns a new tuple subclass named typename. The new subclass is used to create tuple-like objects that have fields accessible by attribute lookup as well as being indexable and iterable. Instances of the subclass also have a helpful docstring (with typename and field_names) and a helpful __repr__() method which lists the tuple contents in a name=value format.

The field_names are a sequence of strings such as ['x', 'y']. Alternatively, field_names can be a single string with each fieldname separated by whitespace and/or commas, for example 'x y' or 'x, y'.

Any valid Python identifier may be used for a fieldname except for names starting with an underscore. Valid identifiers consist of letters, digits, and underscores but do not start with a digit or underscore and cannot be a keyword such as class, for, return, global, pass, or raise.

If rename is true, invalid fieldnames are automatically replaced with positional names. For example, ['abc', 'def', 'ghi', 'abc'] is converted to ['abc', '_1', 'ghi', '_3'], eliminating the keyword def and the duplicate fieldname abc.
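A short session sketch confirming the rename behaviour described above (the class name T is arbitrary, used here only for illustration):

>>> from collections import namedtuple
>>> T = namedtuple('T', ['abc', 'def', 'ghi', 'abc'], rename=True)
>>> T._fields                  # invalid and duplicate names become positional _1, _3
('abc', '_1', 'ghi', '_3')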
defaults can be None or an iterable of default values. Since fields with a default value must come after any fields without a default, the defaults are applied to the rightmost parameters. For example, if the fieldnames are ['x', 'y', 'z'] and the defaults are (1, 2), then x will be a required argument, y will default to 1, and z will default to 2.

If module is defined, the __module__ attribute of the named tuple is set to that value.

Named tuple instances do not have per-instance dictionaries, so they are lightw