__notes__
The __notes__ value from the original exception, or None if the exception does not have any notes. If it is not None, it is formatted in the traceback after the exception string. New in version 3.11.

stack
A StackSummary representing the traceback.

exc_type
The class of the original traceback.

filename
For syntax errors, the file name where the error occurred.

lineno
For syntax errors, the line number where the error occurred.

end_lineno
For syntax errors, the end line number where the error occurred. Can be None if not present. New in version 3.10.

text
For syntax errors, the text where the error occurred.

offset
For syntax errors, the offset into the text where the error occurred.

end_offset
For syntax errors, the end offset into the text where the error occurred. Can be None if not present. New in version 3.10.

msg
For syntax errors, the compiler error message.

classmethod from_exception(exc, limit=None, lookup_lines=True, capture_locals=False)
Capture an exception for later rendering. limit, lookup_lines and capture_locals are as for the StackSummary class. Note that when locals are captured, they are also shown in the traceback.

print(file=None, chain=True)
Print to file (default sys.stderr) the exception information returned by format(). New in version 3.11.

format(chain=True)
Format the exception. If chain is not True, __cause__ and __context__ will not be formatted. The return value is a generator of strings, each ending in a newline and some containing internal newlines. print_exception() is a wrapper around this method which just prints the lines to a file.

format_exception_only()
Format the exception part of the traceback. The return value is a generator of strings, each ending in a newline. The generator emits the exception's message followed by its notes (if it has any). The exception message is normally a single string; however, for SyntaxError exceptions, it consists of several lines that (when printed) display detailed information about where the syntax error occurred. Changed in version 3.11: The exception's notes are now included in the output.

StackSummary Objects

New in version 3.5.

StackSummary objects represent a call stack ready for formatting.

class traceback.StackSummary

classmethod extract(frame_gen, limit=None, lookup_lines=True, capture_locals=False)
Construct a StackSummary object from a frame generator (such as is returned by walk_stack() or walk_tb()). If limit is supplied, only this many frames are taken from frame_gen. If lookup_lines is False, the returned FrameSummary objects will not have read their lines in yet, making the cost of creating the StackSummary cheaper (which may be valuable if it may not actually get formatted). If capture_locals is True, the local variables in each FrameSummary are captured as object representations. Changed in version 3.12: Exceptions raised from repr() on a local variable (when capture_locals is True) are no longer propagated to the caller.

classmethod from_list(a_list)
Construct a StackSummary object from a supplied list of FrameSummary objects or old-style list of tuples. Each tuple should be a 4-tuple with filename, lineno, name, line as the elements.

format()
Returns a list of strings ready for printing. Each string in the resulting list corresponds to a single frame from the stack. Each string ends in a newline; the strings may contain internal newlines as well, for those items with source text lines. For long sequences of the same frame and line, the first few repetitions are shown, followed by a summary line stating the exact number of further repetitions. Changed in version 3.6: Long sequences of repeated frames are now abbreviated.
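A minimal sketch of the extract()/format() workflow just described, assuming a small three-function call chain invented purely for illustration: the current stack is walked with walk_stack(), turned into a StackSummary, and printed.

import traceback

def grandparent():
    parent()

def parent():
    child()

def child():
    # walk_stack(None) yields (frame, lineno) pairs from the current stack;
    # extract() turns them into FrameSummary objects ready for formatting.
    summary = traceback.StackSummary.extract(traceback.walk_stack(None), limit=5)
    for line in summary.format():
        print(line, end="")

grandparent()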
format_frame_summary(frame_summary)
Returns a string for printing one of the frames involved in the stack. This method is called for each FrameSummary object to be printed by StackSummary.format(). If it returns None, the frame is omitted from the output. New in version 3.11.

FrameSummary Objects

New in version 3.5.

A FrameSummary object represents a single frame in a traceback.

class traceback.FrameSummary(filename, lineno, name, lookup_line=True, locals=None, line=None)
Represents a single frame in the traceback or stack that is being formatted or printed.
It may optionally have a stringified version of the frame's locals included in it. If lookup_line is False, the source code is not looked up until the FrameSummary has the line attribute accessed (which also happens when casting it to a tuple). line may be directly provided, and will prevent line lookups happening at all. locals is an optional local variable dictionary, and if supplied the variable representations are stored in the summary for later display.

FrameSummary instances have the following attributes:

filename
The filename of the source code for this frame. Equivalent to accessing f.f_code.co_filename on a frame object f.

lineno
The line number of the source code for this frame.

name
Equivalent to accessing f.f_code.co_name on a frame object f.

line
A string representing the source code for this frame, with leading and trailing whitespace stripped. If the source is not available, it is None.

Traceback Examples

This simple example implements a basic read-eval-print loop, similar to (but less useful than) the standard Python interactive interpreter loop. For a more complete implementation of the interpreter loop, refer to the code module.

import sys, traceback

def run_user_code(envdir):
    source = input(">>> ")
    try:
        exec(source, envdir)
    except Exception:
        print("Exception in user code:")
        print("-" * 60)
        traceback.print_exc(file=sys.stdout)
        print("-" * 60)

envdir = {}
while True:
    run_user_code(envdir)

The following example demonstrates the different ways to print and format the exception and traceback:

import sys, traceback

def lumberjack():
    bright_side_of_life()

def bright_side_of_life():
    return tuple()[0]

try:
    lumberjack()
except IndexError:
    exc = sys.exception()
    print("print_tb:")
    traceback.print_tb(exc.__traceback__, limit=1, file=sys.stdout)
    print("print_exception:")
    traceback.print_exception(exc, limit=2, file=sys.stdout)
    print("print_exc:")
    traceback.print_exc(limit=2, file=sys.stdout)
    print("format_exc, first and last line:")
    formatted_lines = traceback.format_exc().splitlines()
    print(formatted_lines[0])
    print(formatted_lines[-1])
    print("format_exception:")
    print(repr(traceback.format_exception(exc)))
    print("extract_tb:")
    print(repr(traceback.extract_tb(exc.__traceback__)))
    print("format_tb:")
    print(repr(traceback.format_tb(exc.__traceback__)))
    print("tb_lineno:", exc.__traceback__.tb_lineno)

The output for the example would look similar to this:

print_tb:
  File "<doctest...>", line 10, in <module>
    lumberjack()
print_exception:
Traceback (most recent call last):
  File "<doctest...>", line 10, in <module>
    lumberjack()
  File "<doctest...>", line 4, in lumberjack
    bright_side_of_life()
IndexError: tuple index out of range
print_exc:
Traceback (most recent call last):
  File "<doctest...>", line 10, in <module>
    lumberjack()
  File "<doctest...>", line 4, in lumberjack
    bright_side_of_life()
IndexError: tuple index out of range
format_exc, first and last line:
Traceback (most recent call last):
IndexError: tuple index out of range
format_exception:
['Traceback (most recent call last):\n', '  File "<doctest default[0]>", line 10, in <module>\n    lumberjack()\n', '  File "<doctest default[0]>", line 4, in lumberjack\n    bright_side_of_life()\n', '  File "<doctest default[0]>", line 7, in bright_side_of_life\n    return tuple()[0]\n', 'IndexError: tuple index out of range\n']
extract_tb:
[<FrameSummary file <doctest...>, line 10 in <module>>, <FrameSummary file <doctest...>, line 4 in lumberjack>, <FrameSummary file <doctest...>, line 7 in bright_side_of_life>]
format_tb:
['  File "<doctest default[0]>", line 10, in <module>\n    lumberjack()\n', '  File "<doctest default[0]>", line 4, in lumberjack\n    bright_side_of_life()\n', '  File "<doctest default[0]>", line 7, in bright_side_of_life\n    return tuple()[0]\n']
tb_lineno: 10

The following example shows the different ways to print and format the stack:

>>> import traceback
>>> def another_function():
...     lumberstack()
...
>>> def lumberstack():
...     traceback.print_stack()
...     print(repr(traceback.extract_stack()))
...     print(repr(traceback.format_stack()))
...
>>> another_function()
  File "<doctest>", line 10, in <module>
    another_function()
  File "<doctest>", line 3, in another_function
    lumberstack()
  File "<doctest>", line 6, in lumberstack
    traceback.print_stack()
[('<doctest>', 10, '<module>', 'another_function()'), ('<doctest>', 3, 'another_function', 'lumberstack()'), ('<doctest>', 7, 'lumberstack', 'print(repr(traceback.extract_stack()))')]
['  File "<doctest>", line 10, in <module>\n    another_function()\n', '  File "<doctest>", line 3, in another_function\n    lumberstack()\n', '  File "<doctest>", line 8, in lumberstack\n    print(repr(traceback.format_stack()))\n']

This last example demonstrates the final few formatting functions:

>>> import traceback
>>> traceback.format_list([('spam.py', 3, '<module>', 'spam.eggs()'),
...                        ('eggs.py', 42, 'eggs', 'return "bacon"')])
['  File "spam.py", line 3, in <module>\n    spam.eggs()\n', '  File "eggs.py", line 42, in eggs\n    return "bacon"\n']
>>> an_error = IndexError('tuple index out of range')
>>> traceback.format_exception_only(type(an_error), an_error)
['IndexError: tuple index out of range\n']
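As a further, hedged illustration (not part of the original examples), the TracebackException API described earlier can capture an exception once and render it later; the division-by-zero trigger below is only a placeholder.

import traceback

try:
    1 / 0
except ZeroDivisionError as exc:
    # Snapshot the exception now; the live frames are not kept around.
    te = traceback.TracebackException.from_exception(exc, capture_locals=True)

# format() yields the full report; format_exception_only() yields just the
# final "ExceptionType: message" part (plus any notes).
print("".join(te.format()), end="")
print("".join(te.format_exception_only()), end="")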
Data Persistence The modules described in this chapter support storing Python data in a persistent form on disk The pickle and marshal modules can turn many Python data types into a stream of bytes and then recreate the objects from the bytes The various DBM related modules support a family of hash based file formats that store a mapping of strings to other strings The list of modules described in this chapter is pickle Python object serialization Relationship to other Python modules Comparison with marshal Comparison with json Data stream format Module Interface What can be pickled and unpickled Pickling Class Instances Persistence of External Objects Dispatch Tables Handling Stateful Objects Custom Reduction for Types Functions and Other Objects Out of band Buffers Provider API Consumer API Example Restricting Globals Performance Examples copyreg Register pickle support functions Example shelve Python object persistence Restrictions Example marshal Internal Python object serialization dbm Interfaces to Unix databases dbm gnu GNU database manager dbm ndbm New Database Manager dbm dumb Portable DBM implementation sqlite3 DB API 2 0 interface for SQLite databases Tutorial Reference Module functions Module constants Connection objects Cursor objects Row objects Blob objects PrepareProtocol objects Exceptions SQLite and Python types Default adapters and converters deprecated Command line interface How to guides How to use placeholders to bind values in SQL queries How to adapt custom Python types to SQLite values How to write adaptable objects How to register adapter callables How to convert SQLite values to custom Python types Adapter and converter recipes How to use connection shortcut methods How to use the connection context manager How to work with SQLite URIs How to create and use row factories How to handle non UTF 8 text encodings Explanation Transaction control Transaction control via the autocommit attribute Transaction control via the isolation_level attribute
Streams Source code Lib asyncio streams py Streams are high level async await ready primitives to work with network connections Streams allow sending and receiving data without using callbacks or low level protocols and transports Here is an example of a TCP echo client written using asyncio streams import asyncio async def tcp_echo_client message reader writer await asyncio open_connection 127 0 0 1 8888 print f Send message r writer write message encode await writer drain data await reader read 100 print f Received data decode r print Close the connection writer close await writer wait_closed asyncio run tcp_echo_client Hello World See also the Examples section below Stream Functions The following top level asyncio functions can be used to create and work with streams coroutine asyncio open_connection host None port None limit None ssl None family 0 proto 0 flags 0 sock None local_addr None server_hostname None ssl_handshake_timeout None ssl_shutdown_timeout None happy_eyeballs_delay None interleave None Establish a network connection and return a pair of reader writer objects The returned reader and writer objects are instances of StreamReader and StreamWriter classes limit determines the buffer size limit used by the returned StreamReader instance By default the limit is set to 64 KiB The rest of the arguments are passed directly to loop create_connection Note The sock argument transfers ownership of the socket to the StreamWriter created To close the socket call its close method Changed in version 3 7 Added the ssl_handshake_timeout parameter Changed in version 3 8 Added the happy_eyeballs_delay and interleave parameters Changed in version 3 10 Removed the loop parameter Changed in version 3 11 Added the ssl_shutdown_timeout parameter coroutine asyncio start_server client_connected_cb host None port None limit None family socket AF_UNSPEC flags socket AI_PASSIVE sock None backlog 100 ssl None reuse_address None reuse_port None ssl_handshake_timeout None ssl_shutdown_timeout None start_serving True Start a socket server The client_connected_cb callback is called whenever a new client connection is established It receives a reader writer pair as two arguments instances of the StreamReader and StreamWriter classes client_connected_cb can be a plain callable or a coroutine function if it is a coroutine function it will be automatically scheduled as a Task limit determines the buffer size limit used by the returned StreamReader instance By default the limit is set to 64 KiB The rest of the arguments are passed directly to loop create_server Note The sock argument transfers ownership of the socket to the server created To close the socket call the server s close method Changed in version 3 7 Added the ssl_handshake_timeout and start_serving parameters Changed in version 3 10 Removed the loop parameter Changed in version 3 11 Added the ssl_shutdown_timeout parameter Unix Sockets coroutine asyncio open_unix_connection path None limit None ssl None sock None server_hostname None ssl_handshake_timeout None ssl_shutdown_timeout None Establish a Unix socket connection and return a pair of reader writer Similar to open_connection but operates on Unix sockets See also the documentation of loop create_unix_connection Note The sock argument transfers ownership of the socket to the StreamWriter created To close the socket call its close method Availability Unix Changed in version 3 7 Added the ssl_handshake_timeout parameter The path parameter can now be a path like object Changed in version 3 10 
Removed the loop parameter. Changed in version 3.11: Added the ssl_shutdown_timeout parameter.

coroutine asyncio.start_unix_server(client_connected_cb, path=None, limit=None, sock=None, backlog=100, ssl=None, ssl_handshake_timeout=None, ssl_shutdown_timeout=None, start_serving=True)
Start a Unix socket server. Similar to start_server() but works with Unix sockets. See also the documentation of loop.create_unix_server(). Note: The sock argument transfers ownership of the socket to the server created. To close the socket, call the server's close() method. Availability: Unix.
Changed in version 3.7: Added the ssl_handshake_timeout and start_serving parameters. The path parameter can now be a path-like object. Changed in version 3.10: Removed the loop parameter. Changed in version 3.11: Added the ssl_shutdown_timeout parameter.

StreamReader

class asyncio.StreamReader
Represents a reader object that provides APIs to read data from the IO stream. As an asynchronous iterable, the object supports the async for statement. It is not recommended to instantiate StreamReader objects directly; use open_connection() and start_server() instead.

feed_eof()
Acknowledge the EOF.

coroutine read(n=-1)
Read up to n bytes from the stream. If n is not provided or set to -1, read until EOF, then return all read bytes. If EOF was received and the internal buffer is empty, return an empty bytes object. If n is 0, return an empty bytes object immediately. If n is positive, return at most n available bytes as soon as at least 1 byte is available in the internal buffer. If EOF is received before any byte is read, return an empty bytes object.

coroutine readline()
Read one line, where "line" is a sequence of bytes ending with \n. If EOF is received and \n was not found, the method returns partially read data. If EOF is received and the internal buffer is empty, return an empty bytes object.

coroutine readexactly(n)
Read exactly n bytes. Raise an IncompleteReadError if EOF is reached before n can be read. Use the IncompleteReadError.partial attribute to get the partially read data.

coroutine readuntil(separator=b'\n')
Read data from the stream until separator is found. On success, the data and separator will be removed from the internal buffer (consumed). Returned data will include the separator at the end. If the amount of data read exceeds the configured stream limit, a LimitOverrunError exception is raised, and the data is left in the internal buffer and can be read again. If EOF is reached before the complete separator is found, an IncompleteReadError exception is raised, and the internal buffer is reset. The IncompleteReadError.partial attribute may contain a portion of the separator. New in version 3.5.2.

at_eof()
Return True if the buffer is empty and feed_eof() was called.

StreamWriter

class asyncio.StreamWriter
Represents a writer object that provides APIs to write data to the IO stream. It is not recommended to instantiate StreamWriter objects directly; use open_connection() and start_server() instead.

write(data)
The method attempts to write the data to the underlying socket immediately. If that fails, the data is queued in an internal write buffer until it can be sent. The method should be used along with the drain() method:

stream.write(data)
await stream.drain()

writelines(data)
The method writes a list (or any iterable) of bytes to the underlying socket immediately. If that fails, the data is queued in an internal write buffer until it can be sent. The method should be used along with the drain() method:

stream.writelines(lines)
await stream.drain()

close()
The method closes the stream and the underlying socket. The method should be used, though not mandatory, along with the wait_closed() method:

stream.close()
await stream.wait_closed()

can_write_eof()
Return True if the underlying transport supports the write_eof() method, False otherwise.

write_eof()
Close the write end of the stream after the buffered write data is flushed.

transport
Return the underlying asyncio transport.

get_extra_info(name, default=None)
Access optional transport information; see BaseTransport.get_extra_info() for details.

coroutine drain()
Wait until it is appropriate to resume writing to the stream. Example:

writer.write(data)
await writer.drain()
This is a flow control method that interacts with the underlying IO write buffer. When the size of the buffer reaches the high watermark, drain() blocks until the size of the buffer is drained down to the low watermark and writing can be resumed. When there is nothing to wait for, the drain() returns immediately.

coroutine start_tls(sslcontext, server_hostname=None, ssl_handshake_timeout=None, ssl_shutdown_timeout=None)
Upgrade an existing stream-based connection to TLS. Parameters: sslcontext: a configured instance of SSLContext.
t server_hostname sets or overrides the host name that the target server s certificate will be matched against ssl_handshake_timeout is the time in seconds to wait for the TLS handshake to complete before aborting the connection 60 0 seconds if None default ssl_shutdown_timeout is the time in seconds to wait for the SSL shutdown to complete before aborting the connection 30 0 seconds if None default New in version 3 11 Changed in version 3 12 Added the ssl_shutdown_timeout parameter is_closing Return True if the stream is closed or in the process of being closed New in version 3 7 coroutine wait_closed Wait until the stream is closed Should be called after close to wait until the underlying connection is closed ensuring that all data has been flushed before e g exiting the program New in version 3 7 Examples TCP echo client using streams TCP echo client using the asyncio open_connection function import asyncio async def tcp_echo_client message reader writer await asyncio open_connection 127 0 0 1 8888 print f Send message r writer write message encode await writer drain data await reader read 100 print f Received data decode r print Close the connection writer close await writer wait_closed asyncio run tcp_echo_client Hello World See also The TCP echo client protocol example uses the low level loop create_connection method TCP echo server using streams TCP echo server using the asyncio start_server function import asyncio async def handle_echo reader writer data await reader read 100 message data decode addr writer get_extra_info peername print f Received message r from addr r print f Send message r writer write data await writer drain print Close the connection writer close await writer wait_closed async def main server await asyncio start_server handle_echo 127 0 0 1 8888 addrs join str sock getsockname for sock in server sockets print f Serving on addrs async with server await server serve_forever asyncio run main See also The TCP echo server protocol example uses the loop create_server method Get HTTP headers Simple example querying HTTP headers of the URL passed on the command line import asyncio import urllib parse import sys async def print_http_headers url url urllib parse urlsplit url if url scheme https reader writer await asyncio open_connection url hostname 443 ssl True else reader writer await asyncio open_connection url hostname 80 query f HEAD url path or HTTP 1 0 r n f Host url hostname r n f r n writer write query encode latin 1 while True line await reader readline if not line break line line decode latin1 rstrip if line print f HTTP header line Ignore the body close the socket writer close await writer wait_closed url sys argv 1 asyncio run print_http_headers url Usage python example py http example com path page html or with HTTPS python example py https example com path page html Register an open socket to wait for data using streams Coroutine waiting until a socket receives data using the open_connection function import asyncio import socket async def wait_for_data Get a reference to the current event loop because we want to access low level APIs loop asyncio get_running_loop Create a pair of connected sockets rsock wsock socket socketpair Register the open socket to wait for data reader writer await asyncio open_connection sock rsock Simulate the reception of data from the network loop call_soon wsock send abc encode Wait for data data await reader read 100 Got data we are done close the socket print Received data decode writer close await writer wait_closed Close 
the second socket wsock close asyncio run wait_for_data See also The register an open socket to wait for data using a protocol example uses a low level protocol and the loop create_connection method The watch a file descriptor for read events example uses the low level loop add_reader method to watch a file descriptor
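The stream API above can also be exercised end to end in a single script. The following sketch is not from the original page; the host, port and newline framing are assumptions chosen for illustration. It starts an echo server with start_server() and talks to it with open_connection(), using readuntil() for line-delimited messages.

import asyncio

async def handle(reader, writer):
    # readuntil() returns the data up to and including the separator.
    request = await reader.readuntil(b"\n")
    writer.write(b"echo: " + request)
    await writer.drain()
    writer.close()
    await writer.wait_closed()

async def main():
    # 127.0.0.1:8890 is an arbitrary choice for this sketch.
    server = await asyncio.start_server(handle, "127.0.0.1", 8890)
    async with server:
        reader, writer = await asyncio.open_connection("127.0.0.1", 8890)
        writer.write(b"hello\n")
        await writer.drain()
        reply = await reader.readuntil(b"\n")
        print(reply.decode().rstrip())  # -> echo: hello
        writer.close()
        await writer.wait_closed()

asyncio.run(main())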
netrc netrc file processing Source code Lib netrc py The netrc class parses and encapsulates the netrc file format used by the Unix ftp program and other FTP clients class netrc netrc file A netrc instance or subclass instance encapsulates data from a netrc file The initialization argument if present specifies the file to parse If no argument is given the file netrc in the user s home directory as determined by os path expanduser will be read Otherwise a FileNotFoundError exception will be raised Parse errors will raise NetrcParseError with diagnostic information including the file name line number and terminating token If no argument is specified on a POSIX system the presence of passwords in the netrc file will raise a NetrcParseError if the file ownership or permissions are insecure owned by a user other than the user running the process or accessible for read or write by any other user This implements security behavior equivalent to that of ftp and other programs that use netrc Changed in version 3 4 Added the POSIX permission check Changed in version 3 7 os path expanduser is used to find the location of the netrc file when file is not passed as argument Changed in version 3 10 netrc try UTF 8 encoding before using locale specific encoding The entry in the netrc file no longer needs to contain all tokens The missing tokens value default to an empty string All the tokens and their values now can contain arbitrary characters like whitespace and non ASCII characters If the login name is anonymous it won t trigger the security check exception netrc NetrcParseError Exception raised by the netrc class when syntactical errors are encountered in source text Instances of this exception provide three interesting attributes msg Textual explanation of the error filename The name of the source file lineno The line number on which the error was found netrc Objects A netrc instance has the following methods netrc authenticators host Return a 3 tuple login account password of authenticators for host If the netrc file did not contain an entry for the given host return the tuple associated with the default entry If neither matching host nor default entry is available return None netrc __repr__ Dump the class data as a string in the format of a netrc file This discards comments and may reorder the entries Instances of netrc have public instance variables netrc hosts Dictionary mapping host names to login account password tuples The default entry if any is represented as a pseudo host by that name netrc macros Dictionary mapping macro names to string lists
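A short, hedged sketch of typical use of the class just described; the host name is an invented example, and the file is only read if a ~/.netrc is actually present.

import netrc

try:
    # With no argument, ~/.netrc is parsed (subject to the POSIX
    # permission check described above).
    auth = netrc.netrc()
except (FileNotFoundError, netrc.NetrcParseError) as err:
    auth = None
    print("could not use ~/.netrc:", err)

if auth is not None:
    # The host name is an assumption for the example.
    entry = auth.authenticators("ftp.example.com")
    print(entry)       # (login, account, password), or None if no match and no default
    print(auth.hosts)  # mapping of host -> (login, account, password)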
email header Internationalized headers Source code Lib email header py This module is part of the legacy Compat32 email API In the current API encoding and decoding of headers is handled transparently by the dictionary like API of the EmailMessage class In addition to uses in legacy code this module can be useful in applications that need to completely control the character sets used when encoding headers The remaining text in this section is the original documentation of the module RFC 2822 is the base standard that describes the format of email messages It derives from the older RFC 822 standard which came into widespread use at a time when most email was composed of ASCII characters only RFC 2822 is a specification written assuming email contains only 7 bit ASCII characters Of course as email has been deployed worldwide it has become internationalized such that language specific character sets can now be used in email messages The base standard still requires email messages to be transferred using only 7 bit ASCII characters so a slew of RFCs have been written describing how to encode email containing non ASCII characters into RFC 2822 compliant format These RFCs include RFC 2045 RFC 2046 RFC 2047 and RFC 2231 The email package supports these standards in its email header and email charset modules If you want to include non ASCII characters in your email headers say in the Subject or To fields you should use the Header class and assign the field in the Message object to an instance of Header instead of using a string for the header value Import the Header class from the email header module For example from email message import Message from email header import Header msg Message h Header p xf6stal iso 8859 1 msg Subject h msg as_string Subject iso 8859 1 q p F6stal n n Notice here how we wanted the Subject field to contain a non ASCII character We did this by creating a Header instance and passing in the character set that the byte string was encoded in When the subsequent Message instance was flattened the Subject field was properly RFC 2047 encoded MIME aware mail readers would show this header using the embedded ISO 8859 1 character Here is the Header class description class email header Header s None charset None maxlinelen None header_name None continuation_ws errors strict Create a MIME compliant header that can contain strings in different character sets Optional s is the initial header value If None the default the initial header value is not set You can later append to the header with append method calls s may be an instance of bytes or str but see the append documentation for semantics Optional charset serves two purposes it has the same meaning as the charset argument to the append method It also sets the default character set for all subsequent append calls that omit the charset argument If charset is not provided in the constructor the default the us ascii character set is used both as s s initial charset and as the default for subsequent append calls The maximum line length can be specified explicitly via maxlinelen For splitting the first line to a shorter value to account for the field header which isn t included in s e g Subject pass in the name of the field in header_name The default maxlinelen is 76 and the default value for header_name is None meaning it is not taken into account for the first line of a long split header Optional continuation_ws must be RFC 2822 compliant folding whitespace and is usually either a space or a hard tab character This character will be 
prepended to continuation lines. continuation_ws defaults to a single space character. Optional errors is passed straight through to the append() method.

append(s, charset=None, errors='strict')
Append the string s to the MIME header. Optional charset, if given, should be a Charset instance (see email.charset) or the name of a character set, which will be converted to a Charset instance. A value of None (the default) means that the charset given in the constructor is used. s may be an instance of bytes or str. If it is an instance of bytes, then charset is the
encoding of that byte string, and a UnicodeError will be raised if the string cannot be decoded with that character set. If s is an instance of str, then charset is a hint specifying the character set of the characters in the string. In either case, when producing an RFC 2822 compliant header using RFC 2047 rules, the string will be encoded using the output codec of the charset. If the string cannot be encoded using the output codec, a UnicodeError will be raised. Optional errors is passed as the errors argument to the decode call if s is a byte string.

encode(splitchars=';, \t', maxlinelen=None, linesep='\n')
Encode a message header into an RFC-compliant format, possibly wrapping long lines and encapsulating non-ASCII parts in base64 or quoted-printable encodings. Optional splitchars is a string containing characters which should be given extra weight by the splitting algorithm during normal header wrapping. This is in very rough support of RFC 2822's higher-level syntactic breaks: split points preceded by a splitchar are preferred during line splitting, with the characters preferred in the order in which they appear in the string. Space and tab may be included in the string to indicate whether preference should be given to one over the other as a split point when other split chars do not appear in the line being split. Splitchars does not affect RFC 2047 encoded lines. maxlinelen, if given, overrides the instance's value for the maximum line length. linesep specifies the characters used to separate the lines of the folded header. It defaults to the most useful value for Python application code (\n), but \r\n can be specified in order to produce headers with RFC-compliant line separators. Changed in version 3.2: Added the linesep argument.

The Header class also provides a number of methods to support standard operators and built-in functions.

__str__()
Returns an approximation of the Header as a string, using an unlimited line length. All pieces are converted to unicode using the specified encoding and joined together appropriately. Any pieces with a charset of unknown-8bit are decoded as ASCII using the replace error handler. Changed in version 3.2: Added handling for the unknown-8bit charset.

__eq__(other)
This method allows you to compare two Header instances for equality.

__ne__(other)
This method allows you to compare two Header instances for inequality.

The email.header module also provides the following convenient functions.

email.header.decode_header(header)
Decode a message header value without converting the character set. The header value is in header. This function returns a list of (decoded_string, charset) pairs containing each of the decoded parts of the header. charset is None for non-encoded parts of the header, otherwise a lower case string containing the name of the character set specified in the encoded string. Here's an example:

>>> from email.header import decode_header
>>> decode_header('=?iso-8859-1?q?p=F6stal?=')
[(b'p\xf6stal', 'iso-8859-1')]

email.header.make_header(decoded_seq, maxlinelen=None, header_name=None, continuation_ws=' ')
Create a Header instance from a sequence of pairs as returned by decode_header(). decode_header() takes a header value string and returns a sequence of pairs of the format (decoded_string, charset), where charset is the name of the character set. This function takes one of those sequences of pairs and returns a Header instance. Optional maxlinelen, header_name, and continuation_ws are as in the Header constructor.
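To tie the pieces together, here is a small hedged sketch (the subject text is an invented example) that encodes a non-ASCII header with Header.encode(), then round-trips it through decode_header() and make_header().

from email.header import Header, decode_header, make_header

# The subject text below is an illustrative assumption, not from the original page.
h = Header("pöstal greetings", "iso-8859-1")
encoded = h.encode()          # RFC 2047 encoded-word form
print(encoded)

# decode_header() returns (decoded_bytes, charset) pairs; make_header()
# turns such a sequence back into a Header instance.
pairs = decode_header(encoded)
print(pairs)
print(str(make_header(pairs)))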
File Formats The modules described in this chapter parse various miscellaneous file formats that aren t markup languages and are not related to e mail csv CSV File Reading and Writing Module Contents Dialects and Formatting Parameters Reader Objects Writer Objects Examples configparser Configuration file parser Quick Start Supported Datatypes Fallback Values Supported INI File Structure Interpolation of values Mapping Protocol Access Customizing Parser Behaviour Legacy API Examples ConfigParser Objects RawConfigParser Objects Exceptions tomllib Parse TOML files Examples Conversion Table netrc netrc file processing netrc Objects plistlib Generate and parse Apple plist files Examples
urllib error Exception classes raised by urllib request Source code Lib urllib error py The urllib error module defines the exception classes for exceptions raised by urllib request The base exception class is URLError The following exceptions are raised by urllib error as appropriate exception urllib error URLError The handlers raise this exception or derived exceptions when they run into a problem It is a subclass of OSError reason The reason for this error It can be a message string or another exception instance Changed in version 3 3 URLError used to be a subtype of IOError which is now an alias of OSError exception urllib error HTTPError url code msg hdrs fp Though being an exception a subclass of URLError an HTTPError can also function as a non exceptional file like return value the same thing that urlopen returns This is useful when handling exotic HTTP errors such as requests for authentication url Contains the request URL An alias for filename attribute code An HTTP status code as defined in RFC 2616 This numeric value corresponds to a value found in the dictionary of codes as found in http server BaseHTTPRequestHandler responses reason This is usually a string explaining the reason for this error An alias for msg attribute headers The HTTP response headers for the HTTP request that caused the HTTPError An alias for hdrs attribute New in version 3 4 fp A file like object where the HTTP error body can be read from exception urllib error ContentTooShortError msg content This exception is raised when the urlretrieve function detects that the amount of the downloaded data is less than the expected amount given by the Content Length header content The downloaded and supposedly truncated data
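A short, hedged sketch of the usual handling pattern for these exceptions (the URL is a placeholder): HTTPError is caught first because it is a subclass of URLError, and it can be read like the response urlopen() would have returned.

import urllib.error
import urllib.request

url = "https://example.com/missing"   # placeholder URL for the example
try:
    with urllib.request.urlopen(url) as resp:
        body = resp.read()
except urllib.error.HTTPError as err:
    # HTTPError doubles as a file-like response: code, reason, headers, body.
    print(err.code, err.reason)
    print(err.headers.get("Content-Type"))
    detail = err.read()
except urllib.error.URLError as err:
    # No HTTP response at all (DNS failure, refused connection, ...).
    print("failed to reach server:", err.reason)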
Frame Objects type PyFrameObject Part of the Limited API as an opaque struct The C structure of the objects used to describe frame objects There are no public members in this structure Changed in version 3 11 The members of this structure were removed from the public C API Refer to the What s New entry for details The PyEval_GetFrame and PyThreadState_GetFrame functions can be used to get a frame object See also Reflection PyTypeObject PyFrame_Type The type of frame objects It is the same object as types FrameType in the Python layer Changed in version 3 11 Previously this type was only available after including frameobject h int PyFrame_Check PyObject obj Return non zero if obj is a frame object Changed in version 3 11 Previously this function was only available after including frameobject h PyFrameObject PyFrame_GetBack PyFrameObject frame Get the frame next outer frame Return a strong reference or NULL if frame has no outer frame New in version 3 9 PyObject PyFrame_GetBuiltins PyFrameObject frame Get the frame s f_builtins attribute Return a strong reference The result cannot be NULL New in version 3 11 PyCodeObject PyFrame_GetCode PyFrameObject frame Part of the Stable ABI since version 3 10 Get the frame code Return a strong reference The result frame code cannot be NULL New in version 3 9 PyObject PyFrame_GetGenerator PyFrameObject frame Get the generator coroutine or async generator that owns this frame or NULL if this frame is not owned by a generator Does not raise an exception even if the return value is NULL Return a strong reference or NULL New in version 3 11 PyObject PyFrame_GetGlobals PyFrameObject frame Get the frame s f_globals attribute Return a strong reference The result cannot be NULL New in version 3 11 int PyFrame_GetLasti PyFrameObject frame Get the frame s f_lasti attribute Returns 1 if frame f_lasti is None New in version 3 11 PyObject PyFrame_GetVar PyFrameObject frame PyObject name Get the variable name of frame Return a strong reference to the variable value on success Raise NameError and return NULL if the variable does not exist Raise an exception and return NULL on error name type must be a str New in version 3 12 PyObject PyFrame_GetVarString PyFrameObject frame const char name Similar to PyFrame_GetVar but the variable name is a C string encoded in UTF 8 New in version 3 12 PyObject PyFrame_GetLocals PyFrameObject frame Get the frame s f_locals attribute dict Return a strong reference New in version 3 11 int PyFrame_GetLineNumber PyFrameObject frame Part of the Stable ABI since version 3 10 Return the line number that frame is currently executing Internal Frames Unless using PEP 523 you will not need this struct _PyInterpreterFrame The interpreter s internal frame representation New in version 3 11 PyObject PyUnstable_InterpreterFrame_GetCode struct _PyInterpreterFrame frame This is Unstable API It may change without warning in minor releases Return a strong reference to the code object for the frame New in version 3 12 int PyUnstable_InterpreterFrame_GetLasti struct _PyInterpreterFrame frame This is Unstable API It may change without warning in minor releases Return the byte offset into the last executed instruction New in version 3 12 int PyUnstable_InterpreterFrame_GetLine struct _PyInterpreterFrame frame This is Unstable API It may change without warning in minor releases Return the currently executing line number or 1 if there is no line number New in version 3 12
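The C accessors above surface data that is also visible from Python (the documentation notes that PyFrame_Type is types.FrameType in the Python layer). The following sketch is Python rather than C and is only meant to show which values those getters expose; the helper functions are invented for illustration.

import inspect
import types

def outer():
    inner()

def inner():
    frame = inspect.currentframe()
    # Each attribute below corresponds to one of the C getters documented above.
    assert isinstance(frame, types.FrameType)   # PyFrame_Check / PyFrame_Type
    print(frame.f_code.co_name)                 # PyFrame_GetCode -> 'inner'
    print(frame.f_back.f_code.co_name)          # PyFrame_GetBack -> 'outer'
    print(frame.f_lineno)                       # PyFrame_GetLineNumber
    print(frame.f_lasti)                        # PyFrame_GetLasti
    print(sorted(frame.f_locals))               # PyFrame_GetLocals
    del frame                                   # break the reference cycle

outer()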
What s New In Python 3 1 Author Raymond Hettinger This article explains the new features in Python 3 1 compared to 3 0 Python 3 1 was released on June 27 2009 PEP 372 Ordered Dictionaries Regular Python dictionaries iterate over key value pairs in arbitrary order Over the years a number of authors have written alternative implementations that remember the order that the keys were originally inserted Based on the experiences from those implementations a new collections OrderedDict class has been introduced The OrderedDict API is substantially the same as regular dictionaries but will iterate over keys and values in a guaranteed order depending on when a key was first inserted If a new entry overwrites an existing entry the original insertion position is left unchanged Deleting an entry and reinserting it will move it to the end The standard library now supports use of ordered dictionaries in several modules The configparser module uses them by default This lets configuration files be read modified and then written back in their original order The _asdict method for collections namedtuple now returns an ordered dictionary with the values appearing in the same order as the underlying tuple indices The json module is being built out with an object_pairs_hook to allow OrderedDicts to be built by the decoder Support was also added for third party tools like PyYAML See also PEP 372 Ordered Dictionaries PEP written by Armin Ronacher and Raymond Hettinger Implementation written by Raymond Hettinger Since an ordered dictionary remembers its insertion order it can be used in conjunction with sorting to make a sorted dictionary regular unsorted dictionary d banana 3 apple 4 pear 1 orange 2 dictionary sorted by key OrderedDict sorted d items key lambda t t 0 OrderedDict apple 4 banana 3 orange 2 pear 1 dictionary sorted by value OrderedDict sorted d items key lambda t t 1 OrderedDict pear 1 orange 2 banana 3 apple 4 dictionary sorted by length of the key string OrderedDict sorted d items key lambda t len t 0 OrderedDict pear 1 apple 4 orange 2 banana 3 The new sorted dictionaries maintain their sort order when entries are deleted But when new keys are added the keys are appended to the end and the sort is not maintained PEP 378 Format Specifier for Thousands Separator The built in format function and the str format method use a mini language that now includes a simple non locale aware way to format a number with a thousands separator That provides a way to humanize a program s output improving its professional appearance and readability format 1234567 d 1 234 567 format 1234567 89 2f 1 234 567 89 format 12345 6 8901234 12j f 12 345 600000 8 901 234 120000j format Decimal 1234567 89 f 1 234 567 89 The supported types are int float complex and decimal Decimal Discussions are underway about how to specify alternative separators like dots spaces apostrophes or underscores Locale aware applications should use the existing n format specifier which already has some support for thousands separators See also PEP 378 Format Specifier for Thousands Separator PEP written by Raymond Hettinger and implemented by Eric Smith and Mark Dickinson Other Language Changes Some smaller changes made to the core Python language are Directories and zip archives containing a __main__ py file can now be executed directly by passing their name to the interpreter The directory zipfile is automatically inserted as the first entry in sys path Suggestion and initial patch by Andy Chu revised patch by Phillip J Eby and Nick Coghlan bpo 
1739468 The int type gained a bit_length method that returns the number of bits necessary to represent its argument in binary n 37 bin 37 0b100101 n bit_length 6 n 2 123 1 n bit_length 123 n 1 bit_length 124 Contributed by Fredrik Johansson Victor Stinner Raymond Hettinger and Mark Dickinson bpo 3439 The fields in format strings can now be automatically numbered Sir of format Gallahad Camelot Sir Gallahad of Camelot Formerly the string would have required numbered fields such as Sir 0 of 1 Contributed by Eric Smith bpo 5237 The string
maketrans function is deprecated and is replaced by new static methods bytes maketrans and bytearray maketrans This change solves the confusion around which types were supported by the string module Now str bytes and bytearray each have their own maketrans and translate methods with intermediate translation tables of the appropriate type Contributed by Georg Brandl bpo 5675 The syntax of the with statement now allows multiple context managers in a single statement with open mylog txt as infile open a out w as outfile for line in infile if critical in line outfile write line With the new syntax the contextlib nested function is no longer needed and is now deprecated Contributed by Georg Brandl and Mattias Brändström appspot issue 53094 round x n now returns an integer if x is an integer Previously it returned a float round 1123 2 1100 Contributed by Mark Dickinson bpo 4707 Python now uses David Gay s algorithm for finding the shortest floating point representation that doesn t change its value This should help mitigate some of the confusion surrounding binary floating point numbers The significance is easily seen with a number like 1 1 which does not have an exact equivalent in binary floating point Since there is no exact equivalent an expression like float 1 1 evaluates to the nearest representable value which is 0x1 199999999999ap 0 in hex or 1 100000000000000088817841970012523233890533447265625 in decimal That nearest value was and still is used in subsequent floating point calculations What is new is how the number gets displayed Formerly Python used a simple approach The value of repr 1 1 was computed as format 1 1 17g which evaluated to 1 1000000000000001 The advantage of using 17 digits was that it relied on IEEE 754 guarantees to assure that eval repr 1 1 would round trip exactly to its original value The disadvantage is that many people found the output to be confusing mistaking intrinsic limitations of binary floating point representation as being a problem with Python itself The new algorithm for repr 1 1 is smarter and returns 1 1 Effectively it searches all equivalent string representations ones that get stored with the same underlying float value and returns the shortest representation The new algorithm tends to emit cleaner representations when possible but it does not change the underlying values So it is still the case that 1 1 2 2 3 3 even though the representations may suggest otherwise The new algorithm depends on certain features in the underlying floating point implementation If the required features are not found the old algorithm will continue to be used Also the text pickle protocols assure cross platform portability by using the old algorithm Contributed by Eric Smith and Mark Dickinson bpo 1580 New Improved and Deprecated Modules Added a collections Counter class to support convenient counting of unique items in a sequence or iterable Counter red blue red green blue blue Counter blue 3 red 2 green 1 Contributed by Raymond Hettinger bpo 1696199 Added a new module tkinter ttk for access to the Tk themed widget set The basic idea of ttk is to separate to the extent possible the code implementing a widget s behavior from the code implementing its appearance Contributed by Guilherme Polo bpo 2983 The gzip GzipFile and bz2 BZ2File classes now support the context management protocol Automatically close file after writing with gzip GzipFile filename wb as f f write b xxx Contributed by Antoine Pitrou The decimal module now supports methods for creating a decimal object 
from a binary float The conversion is exact but can sometimes be surprising Decimal from_float 1 1 Decimal 1 100000000000000088817841970012523233890533447265625 The long decimal result shows the actual binary fraction being stored for 1 1 The fraction has many digits because 1 1 cannot be exactly represented in binary Contributed by Raymond Hettinger and Mark Dickinson The itertools module grew two new functions The itertools combinations_with_replacement function is one of four for generating combinatorics including permutations and Car
tesian products The itertools compress function mimics its namesake from APL Also the existing itertools count function now has an optional step argument and can accept any type of counting sequence including fractions Fraction and decimal Decimal p q for p q in combinations_with_replacement LOVE 2 LL LO LV LE OO OV OE VV VE EE list compress data range 10 selectors 0 0 1 1 0 1 0 1 0 0 2 3 5 7 c count start Fraction 1 2 step Fraction 1 6 next c next c next c next c Fraction 1 2 Fraction 2 3 Fraction 5 6 Fraction 1 1 Contributed by Raymond Hettinger collections namedtuple now supports a keyword argument rename which lets invalid fieldnames be automatically converted to positional names in the form _0 _1 etc This is useful when the field names are being created by an external source such as a CSV header SQL field list or user input query input SELECT region dept count FROM main GROUPBY region dept cursor execute query query_fields desc 0 for desc in cursor description UserQuery namedtuple UserQuery query_fields rename True pprint pprint UserQuery row for row in cursor UserQuery region South dept Shipping _2 185 UserQuery region North dept Accounting _2 37 UserQuery region West dept Sales _2 419 Contributed by Raymond Hettinger bpo 1818 The re sub re subn and re split functions now accept a flags parameter Contributed by Gregory Smith The logging module now implements a simple logging NullHandler class for applications that are not using logging but are calling library code that does Setting up a null handler will suppress spurious warnings such as No handlers could be found for logger foo h logging NullHandler logging getLogger foo addHandler h Contributed by Vinay Sajip bpo 4384 The runpy module which supports the m command line switch now supports the execution of packages by looking for and executing a __main__ submodule when a package name is supplied Contributed by Andi Vajda bpo 4195 The pdb module can now access and display source code loaded via zipimport or any other conformant PEP 302 loader Contributed by Alexander Belopolsky bpo 4201 functools partial objects can now be pickled Suggested by Antoine Pitrou and Jesse Noller Implemented by Jack Diederich bpo 5228 Add pydoc help topics for symbols so that help works as expected in the interactive environment Contributed by David Laban bpo 4739 The unittest module now supports skipping individual tests or classes of tests And it supports marking a test as an expected failure a test that is known to be broken but shouldn t be counted as a failure on a TestResult class TestGizmo unittest TestCase unittest skipUnless sys platform startswith win requires Windows def test_gizmo_on_windows self unittest expectedFailure def test_gimzo_without_required_library self Also tests for exceptions have been builtout to work with context managers using the with statement def test_division_by_zero self with self assertRaises ZeroDivisionError x 0 In addition several new assertion methods were added including assertSetEqual assertDictEqual assertDictContainsSubset assertListEqual assertTupleEqual assertSequenceEqual assertRaisesRegexp assertIsNone and assertIsNotNone Contributed by Benjamin Peterson and Antoine Pitrou The io module has three new constants for the seek method SEEK_SET SEEK_CUR and SEEK_END The sys version_info tuple is now a named tuple sys version_info sys version_info major 3 minor 1 micro 0 releaselevel alpha serial 2 Contributed by Ross Light bpo 4285 The nntplib and imaplib modules now support IPv6 Contributed by Derek Morr bpo 1655 
and bpo 1664 The pickle module has been adapted for better interoperability with Python 2 x when used with protocol 2 or lower The reorganization of the standard library changed the formal reference for many objects For example __builtin__ set in Python 2 is called builtins set in Python 3 This change confounded efforts to share data between different versions of Python But now when protocol 2 or lower is selected the pickler will automatically use the old Python 2 names for both loading and dumping This remapping is turned on by defau
lt but can be disabled with the fix_imports option s 1 2 3 pickle dumps s protocol 0 b c__builtin__ nset np0 n lp1 nL1L naL2L naL3L natp2 nRp3 n pickle dumps s protocol 0 fix_imports False b cbuiltins nset np0 n lp1 nL1L naL2L naL3L natp2 nRp3 n An unfortunate but unavoidable side effect of this change is that protocol 2 pickles produced by Python 3 1 won t be readable with Python 3 0 The latest pickle protocol protocol 3 should be used when migrating data between Python 3 x implementations as it doesn t attempt to remain compatible with Python 2 x Contributed by Alexandre Vassalotti and Antoine Pitrou bpo 6137 A new module importlib was added It provides a complete portable pure Python reference implementation of the import statement and its counterpart the __import__ function It represents a substantial step forward in documenting and defining the actions that take place during imports Contributed by Brett Cannon Optimizations Major performance enhancements have been added The new I O library as defined in PEP 3116 was mostly written in Python and quickly proved to be a problematic bottleneck in Python 3 0 In Python 3 1 the I O library has been entirely rewritten in C and is 2 to 20 times faster depending on the task at hand The pure Python version is still available for experimentation purposes through the _pyio module Contributed by Amaury Forgeot d Arc and Antoine Pitrou Added a heuristic so that tuples and dicts containing only untrackable objects are not tracked by the garbage collector This can reduce the size of collections and therefore the garbage collection overhead on long running programs depending on their particular use of datatypes Contributed by Antoine Pitrou bpo 4688 Enabling a configure option named with computed gotos on compilers that support it notably gcc SunPro icc the bytecode evaluation loop is compiled with a new dispatch mechanism which gives speedups of up to 20 depending on the system the compiler and the benchmark Contributed by Antoine Pitrou along with a number of other participants bpo 4753 The decoding of UTF 8 UTF 16 and LATIN 1 is now two to four times faster Contributed by Antoine Pitrou and Amaury Forgeot d Arc bpo 4868 The json module now has a C extension to substantially improve its performance In addition the API was modified so that json works only with str not with bytes That change makes the module closely match the JSON specification which is defined in terms of Unicode Contributed by Bob Ippolito and converted to Py3 1 by Antoine Pitrou and Benjamin Peterson bpo 4136 Unpickling now interns the attribute names of pickled objects This saves memory and allows pickles to be smaller Contributed by Jake McGuire and Antoine Pitrou bpo 5084 IDLE IDLE s format menu now provides an option to strip trailing whitespace from a source file Contributed by Roger D Serwy bpo 5150 Build and C API Changes Changes to Python s build process and to the C API include Integers are now stored internally either in base 2 15 or in base 2 30 the base being determined at build time Previously they were always stored in base 2 15 Using base 2 30 gives significant performance improvements on 64 bit machines but benchmark results on 32 bit machines have been mixed Therefore the default is to use base 2 30 on 64 bit machines and base 2 15 on 32 bit machines on Unix there s a new configure option enable big digits that can be used to override this default Apart from the performance improvements this change should be invisible to end users with one exception for testing and 
debugging purposes, there's a new sys.int_info that provides information about the internal format, giving the number of bits per digit and the size in bytes of the C type used to store each digit:

>>> import sys
>>> sys.int_info
sys.int_info(bits_per_digit=30, sizeof_digit=4)

(Contributed by Mark Dickinson; bpo-4258.)

The PyLong_AsUnsignedLongLong() function now handles a negative pylong by raising OverflowError instead of TypeError. (Contributed by Mark Dickinson and Lisandro Dalcrin; bpo-5175.)

Deprecated PyNumber_Int(). Use PyNumber_Long() instead. (Contributed by
Mark Dickinson; bpo-4910.)

Added a new PyOS_string_to_double() function to replace the deprecated functions PyOS_ascii_strtod() and PyOS_ascii_atof(). (Contributed by Mark Dickinson; bpo-5914.)

Added PyCapsule as a replacement for the PyCObject API. The principal difference is that the new type has a well defined interface for passing typing safety information and a less complicated signature for calling a destructor. The old type had a problematic API and is now deprecated. (Contributed by Larry Hastings; bpo-5630.)

Porting to Python 3.1

This section lists previously described changes and other bugfixes that may require changes to your code:

The new floating-point string representations can break existing doctests. For example:

def e():
    '''Compute the base of natural logarithms.

    >>> e()
    2.7182818284590451
    '''
    return sum(1/math.factorial(x) for x in reversed(range(30)))

doctest.testmod()

Failed example:
    e()
Expected:
    2.7182818284590451
Got:
    2.718281828459045

The automatic name remapping in the pickle module for protocol 2 or lower can make Python 3.1 pickles unreadable in Python 3.0. One solution is to use protocol 3. Another solution is to set the fix_imports option to False. See the discussion above for more details.
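As a hedged illustration of the pickle note in this porting section (the sample data is invented), the protocol and fix_imports options control whether Python 2 style module names are written:

import pickle

data = {1, 2, 3}

# With protocol 2 or lower, Python 2 module names (e.g. __builtin__) are
# written by default; fix_imports=False keeps the Python 3 names instead.
compat_with_py2 = pickle.dumps(data, protocol=2)
py3_only = pickle.dumps(data, protocol=2, fix_imports=False)

# For data exchanged only between Python 3.x interpreters, the porting
# notes suggest the newer protocol 3 instead.
py3_native = pickle.dumps(data, protocol=3)
print(len(compat_with_py2), len(py3_only), len(py3_native))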
cmd Support for line oriented command interpreters Source code Lib cmd py The Cmd class provides a simple framework for writing line oriented command interpreters These are often useful for test harnesses administrative tools and prototypes that will later be wrapped in a more sophisticated interface class cmd Cmd completekey tab stdin None stdout None A Cmd instance or subclass instance is a line oriented interpreter framework There is no good reason to instantiate Cmd itself rather it s useful as a superclass of an interpreter class you define yourself in order to inherit Cmd s methods and encapsulate action methods The optional argument completekey is the readline name of a completion key it defaults to Tab If completekey is not None and readline is available command completion is done automatically The optional arguments stdin and stdout specify the input and output file objects that the Cmd instance or subclass instance will use for input and output If not specified they will default to sys stdin and sys stdout If you want a given stdin to be used make sure to set the instance s use_rawinput attribute to False otherwise stdin will be ignored Cmd Objects A Cmd instance has the following methods Cmd cmdloop intro None Repeatedly issue a prompt accept input parse an initial prefix off the received input and dispatch to action methods passing them the remainder of the line as argument The optional argument is a banner or intro string to be issued before the first prompt this overrides the intro class attribute If the readline module is loaded input will automatically inherit bash like history list editing e g Control P scrolls back to the last command Control N forward to the next one Control F moves the cursor to the right non destructively Control B moves the cursor to the left non destructively etc An end of file on input is passed back as the string EOF An interpreter instance will recognize a command name foo if and only if it has a method do_foo As a special case a line beginning with the character is dispatched to the method do_help As another special case a line beginning with the character is dispatched to the method do_shell if such a method is defined This method will return when the postcmd method returns a true value The stop argument to postcmd is the return value from the command s corresponding do_ method If completion is enabled completing commands will be done automatically and completing of commands args is done by calling complete_foo with arguments text line begidx and endidx text is the string prefix we are attempting to match all returned matches must begin with it line is the current input line with leading whitespace removed begidx and endidx are the beginning and ending indexes of the prefix text which could be used to provide different completion depending upon which position the argument is in Cmd do_help arg All subclasses of Cmd inherit a predefined do_help This method called with an argument bar invokes the corresponding method help_bar and if that is not present prints the docstring of do_bar if available With no argument do_help lists all available help topics that is all commands with corresponding help_ methods or commands that have docstrings and also lists any undocumented commands Cmd onecmd str Interpret the argument as though it had been typed in response to the prompt This may be overridden but should not normally need to be see the precmd and postcmd methods for useful execution hooks The return value is a flag indicating whether interpretation of 
commands by the interpreter should stop If there is a do_ method for the command str the return value of that method is returned otherwise the return value from the default method is returned Cmd emptyline Method called when an empty line is entered in response to the prompt If this method is not overridden it repeats the last nonempty command entered Cmd default line Method called on an input line when the command prefix is not recognized If this method is not overridden it prints an error message and returns Cmd completedefault text
line begidx endidx Method called to complete an input line when no command specific complete_ method is available By default it returns an empty list Cmd columnize list displaywidth 80 Method called to display a list of strings as a compact set of columns Each column is only as wide as necessary Columns are separated by two spaces for readability Cmd precmd line Hook method executed just before the command line line is interpreted but after the input prompt is generated and issued This method is a stub in Cmd it exists to be overridden by subclasses The return value is used as the command which will be executed by the onecmd method the precmd implementation may re write the command or simply return line unchanged Cmd postcmd stop line Hook method executed just after a command dispatch is finished This method is a stub in Cmd it exists to be overridden by subclasses line is the command line which was executed and stop is a flag which indicates whether execution will be terminated after the call to postcmd this will be the return value of the onecmd method The return value of this method will be used as the new value for the internal flag which corresponds to stop returning false will cause interpretation to continue Cmd preloop Hook method executed once when cmdloop is called This method is a stub in Cmd it exists to be overridden by subclasses Cmd postloop Hook method executed once when cmdloop is about to return This method is a stub in Cmd it exists to be overridden by subclasses Instances of Cmd subclasses have some public instance variables Cmd prompt The prompt issued to solicit input Cmd identchars The string of characters accepted for the command prefix Cmd lastcmd The last nonempty command prefix seen Cmd cmdqueue A list of queued input lines The cmdqueue list is checked in cmdloop when new input is needed if it is nonempty its elements will be processed in order as if entered at the prompt Cmd intro A string to issue as an intro or banner May be overridden by giving the cmdloop method an argument Cmd doc_header The header to issue if the help output has a section for documented commands Cmd misc_header The header to issue if the help output has a section for miscellaneous help topics that is there are help_ methods without corresponding do_ methods Cmd undoc_header The header to issue if the help output has a section for undocumented commands that is there are do_ methods without corresponding help_ methods Cmd ruler The character used to draw separator lines under the help message headers If empty no ruler line is drawn It defaults to Cmd use_rawinput A flag defaulting to true If true cmdloop uses input to display a prompt and read the next command if false sys stdout write and sys stdin readline are used This means that by importing readline on systems that support it the interpreter will automatically support Emacs like line editing and command history keystrokes Cmd Example The cmd module is mainly useful for building custom shells that let a user work with a program interactively This section presents a simple example of how to build a shell around a few of the commands in the turtle module Basic turtle commands such as forward are added to a Cmd subclass with method named do_forward The argument is converted to a number and dispatched to the turtle module The docstring is used in the help utility provided by the shell The example also includes a basic record and playback facility implemented with the precmd method which is responsible for converting the input to lowercase 
and writing the commands to a file The do_playback method reads the file and adds the recorded commands to the cmdqueue for immediate playback import cmd sys from turtle import class TurtleShell cmd Cmd intro Welcome to the turtle shell Type help or to list commands n prompt turtle file None basic turtle commands def do_forward self arg Move the turtle forward by the specified distance FORWARD 10 forward parse arg def do_right self arg Turn turtle right by given number of degrees RIGHT 20 right parse arg def do_left self arg Turn turtle l
eft by given number of degrees LEFT 90 left parse arg def do_goto self arg Move turtle to an absolute position with changing orientation GOTO 100 200 goto parse arg def do_home self arg Return turtle to the home position HOME home def do_circle self arg Draw circle with given radius an options extent and steps CIRCLE 50 circle parse arg def do_position self arg Print the current turtle position POSITION print Current position is d d n position def do_heading self arg Print the current turtle heading in degrees HEADING print Current heading is d n heading def do_color self arg Set the color COLOR BLUE color arg lower def do_undo self arg Undo repeatedly the last turtle action s UNDO def do_reset self arg Clear the screen and return turtle to center RESET reset def do_bye self arg Stop recording close the turtle window and exit BYE print Thank you for using Turtle self close bye return True record and playback def do_record self arg Save future commands to filename RECORD rose cmd self file open arg w def do_playback self arg Playback commands from a file PLAYBACK rose cmd self close with open arg as f self cmdqueue extend f read splitlines def precmd self line line line lower if self file and playback not in line print line file self file return line def close self if self file self file close self file None def parse arg Convert a series of zero or more numbers to an argument tuple return tuple map int arg split if __name__ __main__ TurtleShell cmdloop Here is a sample session with the turtle shell showing the help functions using blank lines to repeat commands and the simple record and playback facility Welcome to the turtle shell Type help or to list commands turtle Documented commands type help topic bye color goto home playback record right circle forward heading left position reset undo turtle help forward Move the turtle forward by the specified distance FORWARD 10 turtle record spiral cmd turtle position Current position is 0 0 turtle heading Current heading is 0 turtle reset turtle circle 20 turtle right 30 turtle circle 40 turtle right 30 turtle circle 60 turtle right 30 turtle circle 80 turtle right 30 turtle circle 100 turtle right 30 turtle circle 120 turtle right 30 turtle circle 120 turtle heading Current heading is 180 turtle forward 100 turtle turtle right 90 turtle forward 100 turtle turtle right 90 turtle forward 400 turtle right 90 turtle forward 500 turtle right 90 turtle forward 400 turtle right 90 turtle forward 300 turtle playback spiral cmd Current position is 0 0 Current heading is 0 Current heading is 180 turtle bye Thank you for using Turtle
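The turtle example above does not exercise the completion hooks described earlier (a complete_foo method receiving text line begidx and endidx). The following is a minimal illustrative sketch, not part of the cmd module itself: the class name GreetShell, its known_names list and the greet and bye commands are all invented for this example.

    import cmd

    class GreetShell(cmd.Cmd):
        # Illustrative only: a do_* command paired with a complete_* hook.
        prompt = '(greet) '
        known_names = ['alice', 'alan', 'bob', 'carol']  # hypothetical completion data

        def do_greet(self, arg):
            'Greet someone by name:  GREET alice'
            print('Hello,', arg or 'stranger')

        def complete_greet(self, text, line, begidx, endidx):
            # text is the prefix being completed; every returned match must start with it
            return [name for name in self.known_names if name.startswith(text)]

        def do_bye(self, arg):
            'Exit the shell:  BYE'
            return True

    if __name__ == '__main__':
        GreetShell().cmdloop()

With the readline module available, typing greet a and pressing Tab should offer alice and alan; completion of the command names themselves is handled by Cmd automatically when completekey is set.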
Argparse Tutorial author Tshepang Mbambo This tutorial is intended to be a gentle introduction to argparse the recommended command line parsing module in the Python standard library Note There are two other modules that fulfill the same task namely getopt an equivalent for getopt from the C language and the deprecated optparse Note also that argparse is based on optparse and therefore very similar in terms of usage Concepts Let s show the sort of functionality that we are going to explore in this introductory tutorial by making use of the ls command ls cpython devguide prog py pypy rm unused function patch ls pypy ctypes_configure demo dotviewer include lib_pypy lib python ls l total 20 drwxr xr x 19 wena wena 4096 Feb 18 18 51 cpython drwxr xr x 4 wena wena 4096 Feb 8 12 04 devguide rwxr xr x 1 wena wena 535 Feb 19 00 05 prog py drwxr xr x 14 wena wena 4096 Feb 7 00 59 pypy rw r r 1 wena wena 741 Feb 18 01 01 rm unused function patch ls help Usage ls OPTION FILE List information about the FILEs the current directory by default Sort entries alphabetically if none of cftuvSUX nor sort is specified A few concepts we can learn from the four commands The ls command is useful when run without any options at all It defaults to displaying the contents of the current directory If we want beyond what it provides by default we tell it a bit more In this case we want it to display a different directory pypy What we did is specify what is known as a positional argument It s named so because the program should know what to do with the value solely based on where it appears on the command line This concept is more relevant to a command like cp whose most basic usage is cp SRC DEST The first position is what you want copied and the second position is where you want it copied to Now say we want to change behaviour of the program In our example we display more info for each file instead of just showing the file names The l in that case is known as an optional argument That s a snippet of the help text It s very useful in that you can come across a program you have never used before and can figure out how it works simply by reading its help text The basics Let us start with a very simple example which does almost nothing import argparse parser argparse ArgumentParser parser parse_args Following is a result of running the code python prog py python prog py help usage prog py h options h help show this help message and exit python prog py verbose usage prog py h prog py error unrecognized arguments verbose python prog py foo usage prog py h prog py error unrecognized arguments foo Here is what is happening Running the script without any options results in nothing displayed to stdout Not so useful The second one starts to display the usefulness of the argparse module We have done almost nothing but already we get a nice help message The help option which can also be shortened to h is the only option we get for free i e no need to specify it Specifying anything else results in an error But even then we do get a useful usage message also for free Introducing Positional arguments An example import argparse parser argparse ArgumentParser parser add_argument echo args parser parse_args print args echo And running the code python prog py usage prog py h echo prog py error the following arguments are required echo python prog py help usage prog py h echo positional arguments echo options h help show this help message and exit python prog py foo foo Here is what s happening We ve added the add_argument method which is 
what we use to specify which command line options the program is willing to accept In this case I ve named it echo so that it s in line with its function Calling our program now requires us to specify an option The parse_args method actually returns some data from the options specified in this case echo The variable is some form of magic that argparse performs for free i e no need to specify which variable that value is stored in You will also notice that its name matches the string argument given to the method echo Note however that a
lthough the help display looks nice and all it currently is not as helpful as it can be For example we see that we got echo as a positional argument but we don t know what it does other than by guessing or by reading the source code So let s make it a bit more useful import argparse parser argparse ArgumentParser parser add_argument echo help echo the string you use here args parser parse_args print args echo And we get python prog py h usage prog py h echo positional arguments echo echo the string you use here options h help show this help message and exit Now how about doing something even more useful import argparse parser argparse ArgumentParser parser add_argument square help display a square of a given number args parser parse_args print args square 2 Following is a result of running the code python prog py 4 Traceback most recent call last File prog py line 5 in module print args square 2 TypeError unsupported operand type s for or pow str and int That didn t go so well That s because argparse treats the options we give it as strings unless we tell it otherwise So let s tell argparse to treat that input as an integer import argparse parser argparse ArgumentParser parser add_argument square help display a square of a given number type int args parser parse_args print args square 2 Following is a result of running the code python prog py 4 16 python prog py four usage prog py h square prog py error argument square invalid int value four That went well The program now even helpfully quits on bad illegal input before proceeding Introducing Optional arguments So far we have been playing with positional arguments Let us have a look on how to add optional ones import argparse parser argparse ArgumentParser parser add_argument verbosity help increase output verbosity args parser parse_args if args verbosity print verbosity turned on And the output python prog py verbosity 1 verbosity turned on python prog py python prog py help usage prog py h verbosity VERBOSITY options h help show this help message and exit verbosity VERBOSITY increase output verbosity python prog py verbosity usage prog py h verbosity VERBOSITY prog py error argument verbosity expected one argument Here is what is happening The program is written so as to display something when verbosity is specified and display nothing when not To show that the option is actually optional there is no error when running the program without it Note that by default if an optional argument isn t used the relevant variable in this case args verbosity is given None as a value which is the reason it fails the truth test of the if statement The help message is a bit different When using the verbosity option one must also specify some value any value The above example accepts arbitrary integer values for verbosity but for our simple program only two values are actually useful True or False Let s modify the code accordingly import argparse parser argparse ArgumentParser parser add_argument verbose help increase output verbosity action store_true args parser parse_args if args verbose print verbosity turned on And the output python prog py verbose verbosity turned on python prog py verbose 1 usage prog py h verbose prog py error unrecognized arguments 1 python prog py help usage prog py h verbose options h help show this help message and exit verbose increase output verbosity Here is what is happening The option is now more of a flag than something that requires a value We even changed the name of the option to match that idea Note that we now 
specify a new keyword action and give it the value store_true This means that if the option is specified assign the value True to args verbose Not specifying it implies False It complains when you specify a value in true spirit of what flags actually are Notice the different help text Short options If you are familiar with command line usage you will notice that I haven t yet touched on the topic of short versions of the options It s quite simple import argparse parser argparse ArgumentParser parser add_argument v verbose help increase out
put verbosity action store_true args parser parse_args if args verbose print verbosity turned on And here goes python prog py v verbosity turned on python prog py help usage prog py h v options h help show this help message and exit v verbose increase output verbosity Note that the new ability is also reflected in the help text Combining Positional and Optional arguments Our program keeps growing in complexity import argparse parser argparse ArgumentParser parser add_argument square type int help display a square of a given number parser add_argument v verbose action store_true help increase output verbosity args parser parse_args answer args square 2 if args verbose print f the square of args square equals answer else print answer And now the output python prog py usage prog py h v square prog py error the following arguments are required square python prog py 4 16 python prog py 4 verbose the square of 4 equals 16 python prog py verbose 4 the square of 4 equals 16 We ve brought back a positional argument hence the complaint Note that the order does not matter How about we give this program of ours back the ability to have multiple verbosity values and actually get to use them import argparse parser argparse ArgumentParser parser add_argument square type int help display a square of a given number parser add_argument v verbosity type int help increase output verbosity args parser parse_args answer args square 2 if args verbosity 2 print f the square of args square equals answer elif args verbosity 1 print f args square 2 answer else print answer And the output python prog py 4 16 python prog py 4 v usage prog py h v VERBOSITY square prog py error argument v verbosity expected one argument python prog py 4 v 1 4 2 16 python prog py 4 v 2 the square of 4 equals 16 python prog py 4 v 3 16 These all look good except the last one which exposes a bug in our program Let s fix it by restricting the values the verbosity option can accept import argparse parser argparse ArgumentParser parser add_argument square type int help display a square of a given number parser add_argument v verbosity type int choices 0 1 2 help increase output verbosity args parser parse_args answer args square 2 if args verbosity 2 print f the square of args square equals answer elif args verbosity 1 print f args square 2 answer else print answer And the output python prog py 4 v 3 usage prog py h v 0 1 2 square prog py error argument v verbosity invalid choice 3 choose from 0 1 2 python prog py 4 h usage prog py h v 0 1 2 square positional arguments square display a square of a given number options h help show this help message and exit v 0 1 2 verbosity 0 1 2 increase output verbosity Note that the change also reflects both in the error message as well as the help string Now let s use a different approach of playing with verbosity which is pretty common It also matches the way the CPython executable handles its own verbosity argument check the output of python help import argparse parser argparse ArgumentParser parser add_argument square type int help display the square of a given number parser add_argument v verbosity action count help increase output verbosity args parser parse_args answer args square 2 if args verbosity 2 print f the square of args square equals answer elif args verbosity 1 print f args square 2 answer else print answer We have introduced another action count to count the number of occurrences of specific options python prog py 4 16 python prog py 4 v 4 2 16 python prog py 4 vv the square of 4 equals 
16 python prog py 4 verbosity verbosity the square of 4 equals 16 python prog py 4 v 1 usage prog py h v square prog py error unrecognized arguments 1 python prog py 4 h usage prog py h v square positional arguments square display a square of a given number options h help show this help message and exit v verbosity increase output verbosity python prog py 4 vvv 16 Yes it s now more of a flag similar to action store_true in the previous version of our script That should explain the complaint It also behaves similar to store_true action No
w here s a demonstration of what the count action gives You ve probably seen this sort of usage before And if you don t specify the v flag that flag is considered to have None value As should be expected specifying the long form of the flag we should get the same output Sadly our help output isn t very informative on the new ability our script has acquired but that can always be fixed by improving the documentation for our script e g via the help keyword argument That last output exposes a bug in our program Let s fix import argparse parser argparse ArgumentParser parser add_argument square type int help display a square of a given number parser add_argument v verbosity action count help increase output verbosity args parser parse_args answer args square 2 bugfix replace with if args verbosity 2 print f the square of args square equals answer elif args verbosity 1 print f args square 2 answer else print answer And this is what it gives python prog py 4 vvv the square of 4 equals 16 python prog py 4 vvvv the square of 4 equals 16 python prog py 4 Traceback most recent call last File prog py line 11 in module if args verbosity 2 TypeError not supported between instances of NoneType and int First output went well and fixes the bug we had before That is we want any value 2 to be as verbose as possible Third output not so good Let s fix that bug import argparse parser argparse ArgumentParser parser add_argument square type int help display a square of a given number parser add_argument v verbosity action count default 0 help increase output verbosity args parser parse_args answer args square 2 if args verbosity 2 print f the square of args square equals answer elif args verbosity 1 print f args square 2 answer else print answer We ve just introduced yet another keyword default We ve set it to 0 in order to make it comparable to the other int values Remember that by default if an optional argument isn t specified it gets the None value and that cannot be compared to an int value hence the TypeError exception And python prog py 4 16 You can go quite far just with what we ve learned so far and we have only scratched the surface The argparse module is very powerful and we ll explore a bit more of it before we end this tutorial Getting a little more advanced What if we wanted to expand our tiny program to perform other powers not just squares import argparse parser argparse ArgumentParser parser add_argument x type int help the base parser add_argument y type int help the exponent parser add_argument v verbosity action count default 0 args parser parse_args answer args x args y if args verbosity 2 print f args x to the power args y equals answer elif args verbosity 1 print f args x args y answer else print answer Output python prog py usage prog py h v x y prog py error the following arguments are required x y python prog py h usage prog py h v x y positional arguments x the base y the exponent options h help show this help message and exit v verbosity python prog py 4 2 v 4 2 16 Notice that so far we ve been using verbosity level to change the text that gets displayed The following example instead uses verbosity level to display more text instead import argparse parser argparse ArgumentParser parser add_argument x type int help the base parser add_argument y type int help the exponent parser add_argument v verbosity action count default 0 args parser parse_args answer args x args y if args verbosity 2 print f Running __file__ if args verbosity 1 print f args x args y end print answer Output python 
prog py 4 2 16 python prog py 4 2 v 4 2 16 python prog py 4 2 vv Running prog py 4 2 16 Specifying ambiguous arguments When there is ambiguity in deciding whether an argument is positional or for an argument the token -- can be used to tell parse_args that everything after that is a positional argument parser argparse ArgumentParser prog PROG parser add_argument n nargs parser add_argument args nargs ambiguous so parse_args assumes it s an option parser parse_args f usage PROG h n N N args PROG error unrecognized arguments f parser parse_args -- f Name
space args f n None ambiguous so the n option greedily accepts arguments parser parse_args n 1 2 3 Namespace args n 1 2 3 parser parse_args n 1 2 3 Namespace args 2 3 n 1 Conflicting options So far we have been working with two methods of an argparse ArgumentParser instance Let s introduce a third one add_mutually_exclusive_group It allows for us to specify options that conflict with each other Let s also change the rest of the program so that the new functionality makes more sense we ll introduce the quiet option which will be the opposite of the verbose one import argparse parser argparse ArgumentParser group parser add_mutually_exclusive_group group add_argument v verbose action store_true group add_argument q quiet action store_true parser add_argument x type int help the base parser add_argument y type int help the exponent args parser parse_args answer args x args y if args quiet print answer elif args verbose print f args x to the power args y equals answer else print f args x args y answer Our program is now simpler and we ve lost some functionality for the sake of demonstration Anyways here s the output python prog py 4 2 4 2 16 python prog py 4 2 q 16 python prog py 4 2 v 4 to the power 2 equals 16 python prog py 4 2 vq usage prog py h v q x y prog py error argument q quiet not allowed with argument v verbose python prog py 4 2 v quiet usage prog py h v q x y prog py error argument q quiet not allowed with argument v verbose That should be easy to follow I ve added that last output so you can see the sort of flexibility you get i e mixing long form options with short form ones Before we conclude you probably want to tell your users the main purpose of your program just in case they don t know import argparse parser argparse ArgumentParser description calculate X to the power of Y group parser add_mutually_exclusive_group group add_argument v verbose action store_true group add_argument q quiet action store_true parser add_argument x type int help the base parser add_argument y type int help the exponent args parser parse_args answer args x args y if args quiet print answer elif args verbose print f args x to the power args y equals answer else print f args x args y answer Note that slight difference in the usage text Note the v q which tells us that we can either use v or q but not both at the same time python prog py help usage prog py h v q x y calculate X to the power of Y positional arguments x the base y the exponent options h help show this help message and exit v verbose q quiet How to translate the argparse output The output of the argparse module such as its help text and error messages are all made translatable using the gettext module This allows applications to easily localize messages produced by argparse See also Internationalizing your programs and modules For instance in this argparse output python prog py help usage prog py h v q x y calculate X to the power of Y positional arguments x the base y the exponent options h help show this help message and exit v verbose q quiet The strings usage positional arguments options and show this help message and exit are all translatable In order to translate these strings they must first be extracted into a po file For example using Babel run this command pybabel extract o messages po usr lib python3 12 argparse py This command will extract all translatable strings from the argparse module and output them into a file named messages po This command assumes that your Python installation is in usr lib You can find out the 
location of the argparse module on your system using this script import argparse print argparse __file__ Once the messages in the po file are translated and the translations are installed using gettext argparse will be able to display the translated messages To translate your own strings in the argparse output use gettext Conclusion The argparse module offers a lot more than shown here Its docs are quite detailed and thorough and full of examples Having gone through this tutorial you should easily digest them without feeling overwhelmed
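Following up on the translation section just before the conclusion: the snippet below is a rough sketch of one way the compiled catalogs might be wired up so that argparse picks them up at run time. It assumes a hypothetical layout locale/<lang>/LC_MESSAGES/messages.mo (compiled from the translated messages po file, e.g. with msgfmt); the directory name locale and the domain messages are placeholders, and the displayed language is chosen from the usual LANG / LC_MESSAGES environment variables.

    import gettext

    # Hypothetical layout: compiled catalogs at locale/<lang>/LC_MESSAGES/messages.mo
    # 'messages' is the gettext domain; 'locale' is a directory next to the script.
    gettext.bindtextdomain('messages', 'locale')
    gettext.textdomain('messages')

    import argparse

    parser = argparse.ArgumentParser(description='calculate X to the power of Y')
    parser.add_argument('x', type=int, help='the base')
    parser.add_argument('y', type=int, help='the exponent')
    args = parser.parse_args()
    print(args.x ** args.y)

Run with, for example, LANG=fr_FR.UTF-8 and a matching catalog installed, the usage line and the help headings should appear translated, while any untranslated strings fall back to the original English.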
Event Loop Source code Lib asyncio events py Lib asyncio base_events py Preface The event loop is the core of every asyncio application Event loops run asynchronous tasks and callbacks perform network IO operations and run subprocesses Application developers should typically use the high level asyncio functions such as asyncio run and should rarely need to reference the loop object or call its methods This section is intended mostly for authors of lower level code libraries and frameworks who need finer control over the event loop behavior Obtaining the Event Loop The following low level functions can be used to get set or create an event loop asyncio get_running_loop Return the running event loop in the current OS thread Raise a RuntimeError if there is no running event loop This function can only be called from a coroutine or a callback New in version 3 7 asyncio get_event_loop Get the current event loop When called from a coroutine or a callback e g scheduled with call_soon or similar API this function will always return the running event loop If there is no running event loop set the function will return the result of the get_event_loop_policy get_event_loop call Because this function has rather complex behavior especially when custom event loop policies are in use using the get_running_loop function is preferred to get_event_loop in coroutines and callbacks As noted above consider using the higher level asyncio run function instead of using these lower level functions to manually create and close an event loop Deprecated since version 3 12 Deprecation warning is emitted if there is no current event loop In some future Python release this will become an error asyncio set_event_loop loop Set loop as the current event loop for the current OS thread asyncio new_event_loop Create and return a new event loop object Note that the behaviour of get_event_loop set_event_loop and new_event_loop functions can be altered by setting a custom event loop policy Contents This documentation page contains the following sections The Event Loop Methods section is the reference documentation of the event loop APIs The Callback Handles section documents the Handle and TimerHandle instances which are returned from scheduling methods such as loop call_soon and loop call_later The Server Objects section documents types returned from event loop methods like loop create_server The Event Loop Implementations section documents the SelectorEventLoop and ProactorEventLoop classes The Examples section showcases how to work with some event loop APIs Event Loop Methods Event loops have low level APIs for the following Running and stopping the loop Scheduling callbacks Scheduling delayed callbacks Creating Futures and Tasks Opening network connections Creating network servers Transferring files TLS Upgrade Watching file descriptors Working with socket objects directly DNS Working with pipes Unix signals Executing code in thread or process pools Error Handling API Enabling debug mode Running Subprocesses Running and stopping the loop loop run_until_complete future Run until the future an instance of Future has completed If the argument is a coroutine object it is implicitly scheduled to run as a asyncio Task Return the Future s result or raise its exception loop run_forever Run the event loop until stop is called If stop is called before run_forever is called the loop will poll the I O selector once with a timeout of zero run all callbacks scheduled in response to I O events and those that were already scheduled and then 
exit If stop is called while run_forever is running the loop will run the current batch of callbacks and then exit Note that new callbacks scheduled by callbacks will not run in this case instead they will run the next time run_forever or run_until_complete is called loop stop Stop the event loop loop is_running Return True if the event loop is currently running loop is_closed Return True if the event loop was closed loop close Close the event loop The loop must not be running when this function is called Any pending callbacks will be
discarded This method clears all queues and shuts down the executor but does not wait for the executor to finish This method is idempotent and irreversible No other methods should be called after the event loop is closed coroutine loop shutdown_asyncgens Schedule all currently open asynchronous generator objects to close with an aclose call After calling this method the event loop will issue a warning if a new asynchronous generator is iterated This should be used to reliably finalize all scheduled asynchronous generators Note that there is no need to call this function when asyncio run is used Example try loop run_forever finally loop run_until_complete loop shutdown_asyncgens loop close New in version 3 6 coroutine loop shutdown_default_executor timeout None Schedule the closure of the default executor and wait for it to join all of the threads in the ThreadPoolExecutor Once this method has been called using the default executor with loop run_in_executor will raise a RuntimeError The timeout parameter specifies the amount of time in float seconds the executor will be given to finish joining With the default None the executor is allowed an unlimited amount of time If the timeout is reached a RuntimeWarning is emitted and the default executor is terminated without waiting for its threads to finish joining Note Do not call this method when using asyncio run as the latter handles default executor shutdown automatically New in version 3 9 Changed in version 3 12 Added the timeout parameter Scheduling callbacks loop call_soon callback args context None Schedule the callback callback to be called with args arguments at the next iteration of the event loop Return an instance of asyncio Handle which can be used later to cancel the callback Callbacks are called in the order in which they are registered Each callback will be called exactly once The optional keyword only context argument specifies a custom contextvars Context for the callback to run in Callbacks use the current context when no context is provided Unlike call_soon_threadsafe this method is not thread safe loop call_soon_threadsafe callback args context None A thread safe variant of call_soon When scheduling callbacks from another thread this function must be used since call_soon is not thread safe Raises RuntimeError if called on a loop that s been closed This can happen on a secondary thread when the main application is shutting down See the concurrency and multithreading section of the documentation Changed in version 3 7 The context keyword only parameter was added See PEP 567 for more details Note Most asyncio scheduling functions don t allow passing keyword arguments To do that use functools partial will schedule print Hello flush True loop call_soon functools partial print Hello flush True Using partial objects is usually more convenient than using lambdas as asyncio can render partial objects better in debug and error messages Scheduling delayed callbacks Event loop provides mechanisms to schedule callback functions to be called at some point in the future Event loop uses monotonic clocks to track time loop call_later delay callback args context None Schedule callback to be called after the given delay number of seconds can be either an int or a float An instance of asyncio TimerHandle is returned which can be used to cancel the callback callback will be called exactly once If two callbacks are scheduled for exactly the same time the order in which they are called is undefined The optional positional args will be passed to the 
callback when it is called If you want the callback to be called with keyword arguments use functools partial An optional keyword only context argument allows specifying a custom contextvars Context for the callback to run in The current context is used when no context is provided Changed in version 3 7 The context keyword only parameter was added See PEP 567 for more details Changed in version 3 8 In Python 3 7 and earlier with the default event loop implementation the delay could not exceed one day This has been fixed in Python 3 8 l
oop call_at when callback args context None Schedule callback to be called at the given absolute timestamp when an int or a float using the same time reference as loop time This method s behavior is the same as call_later An instance of asyncio TimerHandle is returned which can be used to cancel the callback Changed in version 3 7 The context keyword only parameter was added See PEP 567 for more details Changed in version 3 8 In Python 3 7 and earlier with the default event loop implementation the difference between when and the current time could not exceed one day This has been fixed in Python 3 8 loop time Return the current time as a float value according to the event loop s internal monotonic clock Note Changed in version 3 8 In Python 3 7 and earlier timeouts relative delay or absolute when should not exceed one day This has been fixed in Python 3 8 See also The asyncio sleep function Creating Futures and Tasks loop create_future Create an asyncio Future object attached to the event loop This is the preferred way to create Futures in asyncio This lets third party event loops provide alternative implementations of the Future object with better performance or instrumentation New in version 3 5 2 loop create_task coro name None context None Schedule the execution of coroutine coro Return a Task object Third party event loops can use their own subclass of Task for interoperability In this case the result type is a subclass of Task If the name argument is provided and not None it is set as the name of the task using Task set_name An optional keyword only context argument allows specifying a custom contextvars Context for the coro to run in The current context copy is created when no context is provided Changed in version 3 8 Added the name parameter Changed in version 3 11 Added the context parameter loop set_task_factory factory Set a task factory that will be used by loop create_task If factory is None the default task factory will be set Otherwise factory must be a callable with the signature matching loop coro context None where loop is a reference to the active event loop and coro is a coroutine object The callable must return a asyncio Future compatible object loop get_task_factory Return a task factory or None if the default one is in use Opening network connections coroutine loop create_connection protocol_factory host None port None ssl None family 0 proto 0 flags 0 sock None local_addr None server_hostname None ssl_handshake_timeout None ssl_shutdown_timeout None happy_eyeballs_delay None interleave None all_errors False Open a streaming transport connection to a given address specified by host and port The socket family can be either AF_INET or AF_INET6 depending on host or the family argument if provided The socket type will be SOCK_STREAM protocol_factory must be a callable returning an asyncio protocol implementation This method will try to establish the connection in the background When successful it returns a transport protocol pair The chronological synopsis of the underlying operation is as follows 1 The connection is established and a transport is created for it 2 protocol_factory is called without arguments and is expected to return a protocol instance 3 The protocol instance is coupled with the transport by calling its connection_made method 4 A transport protocol tuple is returned on success The created transport is an implementation dependent bidirectional stream Other arguments ssl if given and not false a SSL TLS transport is created by default a plain TCP 
transport is created If ssl is a ssl SSLContext object this context is used to create the transport if ssl is True a default context returned from ssl create_default_context is used See also SSL TLS security considerations server_hostname sets or overrides the hostname that the target server s certificate will be matched against Should only be passed if ssl is not None By default the value of the host argument is used If host is empty there is no default and you must pass a value for server_hostname If server_hostname is an empty string hostna
me matching is disabled which is a serious security risk allowing for potential man in the middle attacks family proto flags are the optional address family protocol and flags to be passed through to getaddrinfo for host resolution If given these should all be integers from the corresponding socket module constants happy_eyeballs_delay if given enables Happy Eyeballs for this connection It should be a floating point number representing the amount of time in seconds to wait for a connection attempt to complete before starting the next attempt in parallel This is the Connection Attempt Delay as defined in RFC 8305 A sensible default value recommended by the RFC is 0 25 250 milliseconds interleave controls address reordering when a host name resolves to multiple IP addresses If 0 or unspecified no reordering is done and addresses are tried in the order returned by getaddrinfo If a positive integer is specified the addresses are interleaved by address family and the given integer is interpreted as First Address Family Count as defined in RFC 8305 The default is 0 if happy_eyeballs_delay is not specified and 1 if it is sock if given should be an existing already connected socket socket object to be used by the transport If sock is given none of host port family proto flags happy_eyeballs_delay interleave and local_addr should be specified Note The sock argument transfers ownership of the socket to the transport created To close the socket call the transport s close method local_addr if given is a local_host local_port tuple used to bind the socket locally The local_host and local_port are looked up using getaddrinfo similarly to host and port ssl_handshake_timeout is for a TLS connection the time in seconds to wait for the TLS handshake to complete before aborting the connection 60 0 seconds if None default ssl_shutdown_timeout is the time in seconds to wait for the SSL shutdown to complete before aborting the connection 30 0 seconds if None default all_errors determines what exceptions are raised when a connection cannot be created By default only a single Exception is raised the first exception if there is only one or all errors have same message or a single OSError with the error messages combined When all_errors is True an ExceptionGroup will be raised containing all exceptions even if there is only one Changed in version 3 5 Added support for SSL TLS in ProactorEventLoop Changed in version 3 6 The socket option socket TCP_NODELAY is set by default for all TCP connections Changed in version 3 7 Added the ssl_handshake_timeout parameter Changed in version 3 8 Added the happy_eyeballs_delay and interleave parameters Happy Eyeballs Algorithm Success with Dual Stack Hosts When a server s IPv4 path and protocol are working but the server s IPv6 path and protocol are not working a dual stack client application experiences significant connection delay compared to an IPv4 only client This is undesirable because it causes the dual stack client to have a worse user experience This document specifies requirements for algorithms that reduce this user visible delay and provides an algorithm For more information https datatracker ietf org doc html rfc6555 Changed in version 3 11 Added the ssl_shutdown_timeout parameter Changed in version 3 12 all_errors was added See also The open_connection function is a high level alternative API It returns a pair of StreamReader StreamWriter that can be used directly in async await code coroutine loop create_datagram_endpoint protocol_factory local_addr None 
remote_addr None family 0 proto 0 flags 0 reuse_port None allow_broadcast None sock None Create a datagram connection The socket family can be either AF_INET AF_INET6 or AF_UNIX depending on host or the family argument if provided The socket type will be SOCK_DGRAM protocol_factory must be a callable returning a protocol implementation A tuple of transport protocol is returned on success Other arguments local_addr if given is a local_host local_port tuple used to bind the socket locally The local_host and local_port are looked up using getaddrin
fo remote_addr if given is a remote_host remote_port tuple used to connect the socket to a remote address The remote_host and remote_port are looked up using getaddrinfo family proto flags are the optional address family protocol and flags to be passed through to getaddrinfo for host resolution If given these should all be integers from the corresponding socket module constants reuse_port tells the kernel to allow this endpoint to be bound to the same port as other existing endpoints are bound to so long as they all set this flag when being created This option is not supported on Windows and some Unixes If the socket SO_REUSEPORT constant is not defined then this capability is unsupported allow_broadcast tells the kernel to allow this endpoint to send messages to the broadcast address sock can optionally be specified in order to use a preexisting already connected socket socket object to be used by the transport If specified local_addr and remote_addr should be omitted must be None Note The sock argument transfers ownership of the socket to the transport created To close the socket call the transport s close method See UDP echo client protocol and UDP echo server protocol examples Changed in version 3 4 4 The family proto flags reuse_address reuse_port allow_broadcast and sock parameters were added Changed in version 3 8 Added support for Windows Changed in version 3 8 1 The reuse_address parameter is no longer supported as using socket SO_REUSEADDR poses a significant security concern for UDP Explicitly passing reuse_address True will raise an exception When multiple processes with differing UIDs assign sockets to an identical UDP socket address with SO_REUSEADDR incoming packets can become randomly distributed among the sockets For supported platforms reuse_port can be used as a replacement for similar functionality With reuse_port socket SO_REUSEPORT is used instead which specifically prevents processes with differing UIDs from assigning sockets to the same socket address Changed in version 3 11 The reuse_address parameter disabled since Python 3 8 1 3 7 6 and 3 6 10 has been entirely removed coroutine loop create_unix_connection protocol_factory path None ssl None sock None server_hostname None ssl_handshake_timeout None ssl_shutdown_timeout None Create a Unix connection The socket family will be AF_UNIX socket type will be SOCK_STREAM A tuple of transport protocol is returned on success path is the name of a Unix domain socket and is required unless a sock parameter is specified Abstract Unix sockets str bytes and Path paths are supported See the documentation of the loop create_connection method for information about arguments to this method Availability Unix Changed in version 3 7 Added the ssl_handshake_timeout parameter The path parameter can now be a path like object Changed in version 3 11 Added the ssl_shutdown_timeout parameter Creating network servers coroutine loop create_server protocol_factory host None port None family socket AF_UNSPEC flags socket AI_PASSIVE sock None backlog 100 ssl None reuse_address None reuse_port None ssl_handshake_timeout None ssl_shutdown_timeout None start_serving True Create a TCP server socket type SOCK_STREAM listening on port of the host address Returns a Server object Arguments protocol_factory must be a callable returning a protocol implementation The host parameter can be set to several types which determine where the server would be listening If host is a string the TCP server is bound to a single network interface specified by host If 
host is a sequence of strings the TCP server is bound to all network interfaces specified by the sequence If host is an empty string or None all interfaces are assumed and a list of multiple sockets will be returned most likely one for IPv4 and another one for IPv6 The port parameter can be set to specify which port the server should listen on If 0 or None the default a random unused port will be selected note that if host resolves to multiple network interfaces a different random port will be selected for each interface family can be set
to either socket AF_INET or AF_INET6 to force the socket to use IPv4 or IPv6 If not set the family will be determined from host name defaults to AF_UNSPEC flags is a bitmask for getaddrinfo sock can optionally be specified in order to use a preexisting socket object If specified host and port must not be specified Note The sock argument transfers ownership of the socket to the server created To close the socket call the server s close method backlog is the maximum number of queued connections passed to listen defaults to 100 ssl can be set to an SSLContext instance to enable TLS over the accepted connections reuse_address tells the kernel to reuse a local socket in TIME_WAIT state without waiting for its natural timeout to expire If not specified will automatically be set to True on Unix reuse_port tells the kernel to allow this endpoint to be bound to the same port as other existing endpoints are bound to so long as they all set this flag when being created This option is not supported on Windows ssl_handshake_timeout is for a TLS server the time in seconds to wait for the TLS handshake to complete before aborting the connection 60 0 seconds if None default ssl_shutdown_timeout is the time in seconds to wait for the SSL shutdown to complete before aborting the connection 30 0 seconds if None default start_serving set to True the default causes the created server to start accepting connections immediately When set to False the user should await on Server start_serving or Server serve_forever to make the server to start accepting connections Changed in version 3 5 Added support for SSL TLS in ProactorEventLoop Changed in version 3 5 1 The host parameter can be a sequence of strings Changed in version 3 6 Added ssl_handshake_timeout and start_serving parameters The socket option socket TCP_NODELAY is set by default for all TCP connections Changed in version 3 11 Added the ssl_shutdown_timeout parameter See also The start_server function is a higher level alternative API that returns a pair of StreamReader and StreamWriter that can be used in an async await code coroutine loop create_unix_server protocol_factory path None sock None backlog 100 ssl None ssl_handshake_timeout None ssl_shutdown_timeout None start_serving True Similar to loop create_server but works with the AF_UNIX socket family path is the name of a Unix domain socket and is required unless a sock argument is provided Abstract Unix sockets str bytes and Path paths are supported See the documentation of the loop create_server method for information about arguments to this method Availability Unix Changed in version 3 7 Added the ssl_handshake_timeout and start_serving parameters The path parameter can now be a Path object Changed in version 3 11 Added the ssl_shutdown_timeout parameter coroutine loop connect_accepted_socket protocol_factory sock ssl None ssl_handshake_timeout None ssl_shutdown_timeout None Wrap an already accepted connection into a transport protocol pair This method can be used by servers that accept connections outside of asyncio but that use asyncio to handle them Parameters protocol_factory must be a callable returning a protocol implementation sock is a preexisting socket object returned from socket accept Note The sock argument transfers ownership of the socket to the transport created To close the socket call the transport s close method ssl can be set to an SSLContext to enable SSL over the accepted connections ssl_handshake_timeout is for an SSL connection the time in seconds to wait for the SSL 
handshake to complete before aborting the connection 60 0 seconds if None default ssl_shutdown_timeout is the time in seconds to wait for the SSL shutdown to complete before aborting the connection 30 0 seconds if None default Returns a transport protocol pair New in version 3 5 3 Changed in version 3 7 Added the ssl_handshake_timeout parameter Changed in version 3 11 Added the ssl_shutdown_timeout parameter Transferring files coroutine loop sendfile transport file offset 0 count None fallback True Send a file over a transport Return the tota
l number of bytes sent The method uses high performance os sendfile if available file must be a regular file object opened in binary mode offset tells from where to start reading the file If specified count is the total number of bytes to transmit as opposed to sending the file until EOF is reached File position is always updated even when this method raises an error and file tell can be used to obtain the actual number of bytes sent fallback set to True makes asyncio to manually read and send the file when the platform does not support the sendfile system call e g Windows or SSL socket on Unix Raise SendfileNotAvailableError if the system does not support the sendfile syscall and fallback is False New in version 3 7 TLS Upgrade coroutine loop start_tls transport protocol sslcontext server_side False server_hostname None ssl_handshake_timeout None ssl_shutdown_timeout None Upgrade an existing transport based connection to TLS Create a TLS coder decoder instance and insert it between the transport and the protocol The coder decoder implements both transport facing protocol and protocol facing transport Return the created two interface instance After await the protocol must stop using the original transport and communicate with the returned object only because the coder caches protocol side data and sporadically exchanges extra TLS session packets with transport In some situations e g when the passed transport is already closing this may return None Parameters transport and protocol instances that methods like create_server and create_connection return sslcontext a configured instance of SSLContext server_side pass True when a server side connection is being upgraded like the one created by create_server server_hostname sets or overrides the host name that the target server s certificate will be matched against ssl_handshake_timeout is for a TLS connection the time in seconds to wait for the TLS handshake to complete before aborting the connection 60 0 seconds if None default ssl_shutdown_timeout is the time in seconds to wait for the SSL shutdown to complete before aborting the connection 30 0 seconds if None default New in version 3 7 Changed in version 3 11 Added the ssl_shutdown_timeout parameter Watching file descriptors loop add_reader fd callback args Start monitoring the fd file descriptor for read availability and invoke callback with the specified arguments once fd is available for reading loop remove_reader fd Stop monitoring the fd file descriptor for read availability Returns True if fd was previously being monitored for reads loop add_writer fd callback args Start monitoring the fd file descriptor for write availability and invoke callback with the specified arguments once fd is available for writing Use functools partial to pass keyword arguments to callback loop remove_writer fd Stop monitoring the fd file descriptor for write availability Returns True if fd was previously being monitored for writes See also Platform Support section for some limitations of these methods Working with socket objects directly In general protocol implementations that use transport based APIs such as loop create_connection and loop create_server are faster than implementations that work with sockets directly However there are some use cases when performance is not critical and working with socket objects directly is more convenient coroutine loop sock_recv sock nbytes Receive up to nbytes from sock Asynchronous version of socket recv Return the received data as a bytes object sock must be a non 
blocking socket Changed in version 3 7 Even though this method was always documented as a coroutine method releases before Python 3 7 returned a Future Since Python 3 7 this is an async def method coroutine loop sock_recv_into sock buf Receive data from sock into the buf buffer Modeled after the blocking socket recv_into method Return the number of bytes written to the buffer sock must be a non blocking socket New in version 3 7 coroutine loop sock_recvfrom sock bufsize Receive a datagram of up to bufsize from sock Asynchronous version of
socket recvfrom Return a tuple of received data remote address sock must be a non blocking socket New in version 3 11 coroutine loop sock_recvfrom_into sock buf nbytes 0 Receive a datagram of up to nbytes from sock into buf Asynchronous version of socket recvfrom_into Return a tuple of number of bytes received remote address sock must be a non blocking socket New in version 3 11 coroutine loop sock_sendall sock data Send data to the sock socket Asynchronous version of socket sendall This method continues to send to the socket until either all data in data has been sent or an error occurs None is returned on success On error an exception is raised Additionally there is no way to determine how much data if any was successfully processed by the receiving end of the connection sock must be a non blocking socket Changed in version 3 7 Even though the method was always documented as a coroutine method before Python 3 7 it returned a Future Since Python 3 7 this is an async def method coroutine loop sock_sendto sock data address Send a datagram from sock to address Asynchronous version of socket sendto Return the number of bytes sent sock must be a non blocking socket New in version 3 11 coroutine loop sock_connect sock address Connect sock to a remote socket at address Asynchronous version of socket connect sock must be a non blocking socket Changed in version 3 5 2 address no longer needs to be resolved sock_connect will try to check if the address is already resolved by calling socket inet_pton If not loop getaddrinfo will be used to resolve the address See also loop create_connection and asyncio open_connection coroutine loop sock_accept sock Accept a connection Modeled after the blocking socket accept method The socket must be bound to an address and listening for connections The return value is a pair conn address where conn is a new socket object usable to send and receive data on the connection and address is the address bound to the socket on the other end of the connection sock must be a non blocking socket Changed in version 3 7 Even though the method was always documented as a coroutine method before Python 3 7 it returned a Future Since Python 3 7 this is an async def method See also loop create_server and start_server coroutine loop sock_sendfile sock file offset 0 count None fallback True Send a file using high performance os sendfile if possible Return the total number of bytes sent Asynchronous version of socket sendfile sock must be a non blocking socket SOCK_STREAM socket file must be a regular file object open in binary mode offset tells from where to start reading the file If specified count is the total number of bytes to transmit as opposed to sending the file until EOF is reached File position is always updated even when this method raises an error and file tell can be used to obtain the actual number of bytes sent fallback when set to True makes asyncio manually read and send the file when the platform does not support the sendfile syscall e g Windows or SSL socket on Unix Raise SendfileNotAvailableError if the system does not support sendfile syscall and fallback is False sock must be a non blocking socket New in version 3 7 DNS coroutine loop getaddrinfo host port family 0 type 0 proto 0 flags 0 Asynchronous version of socket getaddrinfo coroutine loop getnameinfo sockaddr flags 0 Asynchronous version of socket getnameinfo Changed in version 3 7 Both getaddrinfo and getnameinfo methods were always documented to return a coroutine but prior to Python 3 7 they were in 
fact returning asyncio Future objects Starting with Python 3 7 both methods are coroutines Working with pipes coroutine loop connect_read_pipe protocol_factory pipe Register the read end of pipe in the event loop protocol_factory must be a callable returning an asyncio protocol implementation pipe is a file like object Return pair transport protocol where transport supports the ReadTransport interface and protocol is an object instantiated by the protocol_factory With SelectorEventLoop event loop the pipe is set to non blocking mode corou
tine loop connect_write_pipe protocol_factory pipe Register the write end of pipe in the event loop protocol_factory must be a callable returning an asyncio protocol implementation pipe is file like object Return pair transport protocol where transport supports WriteTransport interface and protocol is an object instantiated by the protocol_factory With SelectorEventLoop event loop the pipe is set to non blocking mode Note SelectorEventLoop does not support the above methods on Windows Use ProactorEventLoop instead for Windows See also The loop subprocess_exec and loop subprocess_shell methods Unix signals loop add_signal_handler signum callback args Set callback as the handler for the signum signal The callback will be invoked by loop along with other queued callbacks and runnable coroutines of that event loop Unlike signal handlers registered using signal signal a callback registered with this function is allowed to interact with the event loop Raise ValueError if the signal number is invalid or uncatchable Raise RuntimeError if there is a problem setting up the handler Use functools partial to pass keyword arguments to callback Like signal signal this function must be invoked in the main thread loop remove_signal_handler sig Remove the handler for the sig signal Return True if the signal handler was removed or False if no handler was set for the given signal Availability Unix See also The signal module Executing code in thread or process pools awaitable loop run_in_executor executor func args Arrange for func to be called in the specified executor The executor argument should be an concurrent futures Executor instance The default executor is used if executor is None Example import asyncio import concurrent futures def blocking_io File operations such as logging can block the event loop run them in a thread pool with open dev urandom rb as f return f read 100 def cpu_bound CPU bound operations will block the event loop in general it is preferable to run them in a process pool return sum i i for i in range 10 7 async def main loop asyncio get_running_loop Options 1 Run in the default loop s executor result await loop run_in_executor None blocking_io print default thread pool result 2 Run in a custom thread pool with concurrent futures ThreadPoolExecutor as pool result await loop run_in_executor pool blocking_io print custom thread pool result 3 Run in a custom process pool with concurrent futures ProcessPoolExecutor as pool result await loop run_in_executor pool cpu_bound print custom process pool result if __name__ __main__ asyncio run main Note that the entry point guard if __name__ __main__ is required for option 3 due to the peculiarities of multiprocessing which is used by ProcessPoolExecutor See Safe importing of main module This method returns a asyncio Future object Use functools partial to pass keyword arguments to func Changed in version 3 5 3 loop run_in_executor no longer configures the max_workers of the thread pool executor it creates instead leaving it up to the thread pool executor ThreadPoolExecutor to set the default loop set_default_executor executor Set executor as the default executor used by run_in_executor executor must be an instance of ThreadPoolExecutor Changed in version 3 11 executor must be an instance of ThreadPoolExecutor Error Handling API Allows customizing how exceptions are handled in the event loop loop set_exception_handler handler Set handler as the new event loop exception handler If handler is None the default exception handler will be set Otherwise 
handler must be a callable with the signature matching loop context where loop is a reference to the active event loop and context is a dict object containing the details of the exception see call_exception_handler documentation for details about context If the handler is called on behalf of a Task or Handle it is run in the contextvars Context of that task or callback handle Changed in version 3 12 The handler may be called in the Context of the task or handle where the exception originated loop get_exception_handler Return the current
exception handler or None if no custom exception handler was set New in version 3 5 2 loop default_exception_handler context Default exception handler This is called when an exception occurs and no exception handler is set This can be called by a custom exception handler that wants to defer to the default handler behavior context parameter has the same meaning as in call_exception_handler loop call_exception_handler context Call the current event loop exception handler context is a dict object containing the following keys new keys may be introduced in future Python versions message Error message exception optional Exception object future optional asyncio Future instance task optional asyncio Task instance handle optional asyncio Handle instance protocol optional Protocol instance transport optional Transport instance socket optional socket socket instance asyncgen optional Asynchronous generator that caused the exception Note This method should not be overloaded in subclassed event loops For custom exception handling use the set_exception_handler method Enabling debug mode loop get_debug Get the debug mode bool of the event loop The default value is True if the environment variable PYTHONASYNCIODEBUG is set to a non empty string False otherwise loop set_debug enabled bool Set the debug mode of the event loop Changed in version 3 7 The new Python Development Mode can now also be used to enable the debug mode loop slow_callback_duration This attribute can be used to set the minimum execution duration in seconds that is considered slow When debug mode is enabled slow callbacks are logged Default value is 100 milliseconds See also The debug mode of asyncio Running Subprocesses Methods described in this subsections are low level In regular async await code consider using the high level asyncio create_subprocess_shell and asyncio create_subprocess_exec convenience functions instead Note On Windows the default event loop ProactorEventLoop supports subprocesses whereas SelectorEventLoop does not See Subprocess Support on Windows for details coroutine loop subprocess_exec protocol_factory args stdin subprocess PIPE stdout subprocess PIPE stderr subprocess PIPE kwargs Create a subprocess from one or more string arguments specified by args args must be a list of strings represented by str or bytes encoded to the filesystem encoding The first string specifies the program executable and the remaining strings specify the arguments Together string arguments form the argv of the program This is similar to the standard library subprocess Popen class called with shell False and the list of strings passed as the first argument however where Popen takes a single argument which is list of strings subprocess_exec takes multiple string arguments The protocol_factory must be a callable returning a subclass of the asyncio SubprocessProtocol class Other parameters stdin can be any of these a file like object an existing file descriptor a positive integer for example those created with os pipe the subprocess PIPE constant default which will create a new pipe and connect it the value None which will make the subprocess inherit the file descriptor from this process the subprocess DEVNULL constant which indicates that the special os devnull file will be used stdout can be any of these a file like object the subprocess PIPE constant default which will create a new pipe and connect it the value None which will make the subprocess inherit the file descriptor from this process the subprocess DEVNULL constant which 
indicates that the special os devnull file will be used stderr can be any of these a file like object the subprocess PIPE constant default which will create a new pipe and connect it the value None which will make the subprocess inherit the file descriptor from this process the subprocess DEVNULL constant which indicates that the special os devnull file will be used the subprocess STDOUT constant which will connect the standard error stream to the process standard output stream All other keyword arguments are passed to subprocess Popen withou
t interpretation except for bufsize universal_newlines shell text encoding and errors which should not be specified at all The asyncio subprocess API does not support decoding the streams as text bytes decode can be used to convert the bytes returned from the stream to text If a file like object passed as stdin stdout or stderr represents a pipe then the other side of this pipe should be registered with connect_write_pipe or connect_read_pipe for use with the event loop See the constructor of the subprocess Popen class for documentation on other arguments Returns a pair of transport protocol where transport conforms to the asyncio SubprocessTransport base class and protocol is an object instantiated by the protocol_factory coroutine loop subprocess_shell protocol_factory cmd stdin subprocess PIPE stdout subprocess PIPE stderr subprocess PIPE kwargs Create a subprocess from cmd which can be a str or a bytes string encoded to the filesystem encoding using the platform s shell syntax This is similar to the standard library subprocess Popen class called with shell True The protocol_factory must be a callable returning a subclass of the SubprocessProtocol class See subprocess_exec for more details about the remaining arguments Returns a pair of transport protocol where transport conforms to the SubprocessTransport base class and protocol is an object instantiated by the protocol_factory Note It is the application s responsibility to ensure that all whitespace and special characters are quoted appropriately to avoid shell injection vulnerabilities The shlex quote function can be used to properly escape whitespace and special characters in strings that are going to be used to construct shell commands Callback Handles class asyncio Handle A callback wrapper object returned by loop call_soon loop call_soon_threadsafe get_context Return the contextvars Context object associated with the handle New in version 3 12 cancel Cancel the callback If the callback has already been canceled or executed this method has no effect cancelled Return True if the callback was cancelled New in version 3 7 class asyncio TimerHandle A callback wrapper object returned by loop call_later and loop call_at This class is a subclass of Handle when Return a scheduled callback time as float seconds The time is an absolute timestamp using the same time reference as loop time New in version 3 7 Server Objects Server objects are created by loop create_server loop create_unix_server start_server and start_unix_server functions Do not instantiate the Server class directly class asyncio Server Server objects are asynchronous context managers When used in an async with statement it s guaranteed that the Server object is closed and not accepting new connections when the async with statement is completed srv await loop create_server async with srv some code At this point srv is closed and no longer accepts new connections Changed in version 3 7 Server object is an asynchronous context manager since Python 3 7 Changed in version 3 11 This class was exposed publicly as asyncio Server in Python 3 9 11 3 10 3 and 3 11 close Stop serving close listening sockets and set the sockets attribute to None The sockets that represent existing incoming client connections are left open The server is closed asynchronously use the wait_closed coroutine to wait until the server is closed and no more connections are active get_loop Return the event loop associated with the server object New in version 3 7 coroutine start_serving Start accepting 
connections This method is idempotent so it can be called when the server is already serving The start_serving keyword only parameter to loop create_server and asyncio start_server allows creating a Server object that is not accepting connections initially In this case Server start_serving or Server serve_forever can be used to make the Server start accepting connections New in version 3 7 coroutine serve_forever Start accepting connections until the coroutine is cancelled Cancellation of serve_forever task causes the server to be closed This met
hod can be called if the server is already accepting connections Only one serve_forever task can exist per one Server object Example async def client_connected reader writer Communicate with the client with reader writer streams For example await reader readline async def main host port srv await asyncio start_server client_connected host port await srv serve_forever asyncio run main 127 0 0 1 0 New in version 3 7 is_serving Return True if the server is accepting new connections New in version 3 7 coroutine wait_closed Wait until the close method completes and all active connections have finished sockets List of socket like objects asyncio trsock TransportSocket which the server is listening on Changed in version 3 7 Prior to Python 3 7 Server sockets used to return an internal list of server sockets directly In 3 7 a copy of that list is returned Event Loop Implementations asyncio ships with two different event loop implementations SelectorEventLoop and ProactorEventLoop By default asyncio is configured to use SelectorEventLoop on Unix and ProactorEventLoop on Windows class asyncio SelectorEventLoop An event loop based on the selectors module Uses the most efficient selector available for the given platform It is also possible to manually configure the exact selector implementation to be used import asyncio import selectors class MyPolicy asyncio DefaultEventLoopPolicy def new_event_loop self selector selectors SelectSelector return asyncio SelectorEventLoop selector asyncio set_event_loop_policy MyPolicy Availability Unix Windows class asyncio ProactorEventLoop An event loop for Windows that uses I O Completion Ports IOCP Availability Windows See also MSDN documentation on I O Completion Ports class asyncio AbstractEventLoop Abstract base class for asyncio compliant event loops The Event Loop Methods section lists all methods that an alternative implementation of AbstractEventLoop should have defined Examples Note that all examples in this section purposefully show how to use the low level event loop APIs such as loop run_forever and loop call_soon Modern asyncio applications rarely need to be written this way consider using the high level functions like asyncio run Hello World with call_soon An example using the loop call_soon method to schedule a callback The callback displays Hello World and then stops the event loop import asyncio def hello_world loop A callback to print Hello World and stop the event loop print Hello World loop stop loop asyncio new_event_loop Schedule a call to hello_world loop call_soon hello_world loop Blocking call interrupted by loop stop try loop run_forever finally loop close See also A similar Hello World example created with a coroutine and the run function Display the current date with call_later An example of a callback displaying the current date every second The callback uses the loop call_later method to reschedule itself after 5 seconds and then stops the event loop import asyncio import datetime def display_date end_time loop print datetime datetime now if loop time 1 0 end_time loop call_later 1 display_date end_time loop else loop stop loop asyncio new_event_loop Schedule the first call to display_date end_time loop time 5 0 loop call_soon display_date end_time loop Blocking call interrupted by loop stop try loop run_forever finally loop close See also A similar current date example created with a coroutine and the run function Watch a file descriptor for read events Wait until a file descriptor received some data using the loop add_reader method 
and then close the event loop:

import asyncio
from socket import socketpair

# Create a pair of connected file descriptors
rsock, wsock = socketpair()

loop = asyncio.new_event_loop()

def reader():
    data = rsock.recv(100)
    print("Received:", data.decode())

    # We are done: unregister the file descriptor
    loop.remove_reader(rsock)

    # Stop the event loop
    loop.stop()

# Register the file descriptor for read event
loop.add_reader(rsock, reader)

# Simulate the reception of data from the network
loop.call_soon(wsock.send, 'abc'.encode())

try:
    # Run the event loop
    loop.run_forever()
finally:
    # We are done. Close sockets and the event loop.
    rsock.close()
    wsock.close()
    loop.close()

See also a similar example using transports, protocols and the loop.create_connection() method, and another similar example using the high-level asyncio.open_connection() function and streams.

Set signal handlers for SIGINT and SIGTERM

This signals example only works on Unix. Register handlers for the signals SIGINT and SIGTERM using the loop.add_signal_handler() method:

import asyncio
import functools
import os
import signal

def ask_exit(signame, loop):
    print("got signal %s: exit" % signame)
    loop.stop()

async def main():
    loop = asyncio.get_running_loop()

    for signame in {'SIGINT', 'SIGTERM'}:
        loop.add_signal_handler(
            getattr(signal, signame),
            functools.partial(ask_exit, signame, loop))

    await asyncio.sleep(3600)

print("Event loop running for 1 hour, press Ctrl+C to interrupt.")
print(f"pid {os.getpid()}: send SIGINT or SIGTERM to exit")

asyncio.run(main())
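The low-level socket methods documented earlier on this page (loop sock_connect loop sock_sendall loop sock_recv) can be combined in the same low-level style. The sketch below is not part of the standard documentation; the host, port and payload are placeholders, and it assumes a TCP server is already listening at that address.

import asyncio
import socket

async def echo_client(host, port, payload):
    loop = asyncio.get_running_loop()
    # The sock_* methods require a non-blocking socket.
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.setblocking(False)
    try:
        await loop.sock_connect(sock, (host, port))
        await loop.sock_sendall(sock, payload)
        # Read a single reply of at most 1024 bytes.
        return await loop.sock_recv(sock, 1024)
    finally:
        sock.close()

# Placeholder address; assumes something is listening on 127.0.0.1:8888.
# print(asyncio.run(echo_client("127.0.0.1", 8888, b"hello")))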
msvcrt Useful routines from the MS VC runtime These functions provide access to some useful capabilities on Windows platforms Some higher level modules use these functions to build the Windows implementations of their services For example the getpass module uses this in the implementation of the getpass function Further documentation on these functions can be found in the Platform API documentation The module implements both the normal and wide char variants of the console I O api The normal API deals only with ASCII characters and is of limited use for internationalized applications The wide char API should be used where ever possible Changed in version 3 3 Operations in this module now raise OSError where IOError was raised File Operations msvcrt locking fd mode nbytes Lock part of a file based on file descriptor fd from the C runtime Raises OSError on failure The locked region of the file extends from the current file position for nbytes bytes and may continue beyond the end of the file mode must be one of the LK_ constants listed below Multiple regions in a file may be locked at the same time but may not overlap Adjacent regions are not merged they must be unlocked individually Raises an auditing event msvcrt locking with arguments fd mode nbytes msvcrt LK_LOCK msvcrt LK_RLCK Locks the specified bytes If the bytes cannot be locked the program immediately tries again after 1 second If after 10 attempts the bytes cannot be locked OSError is raised msvcrt LK_NBLCK msvcrt LK_NBRLCK Locks the specified bytes If the bytes cannot be locked OSError is raised msvcrt LK_UNLCK Unlocks the specified bytes which must have been previously locked msvcrt setmode fd flags Set the line end translation mode for the file descriptor fd To set it to text mode flags should be os O_TEXT for binary it should be os O_BINARY msvcrt open_osfhandle handle flags Create a C runtime file descriptor from the file handle handle The flags parameter should be a bitwise OR of os O_APPEND os O_RDONLY and os O_TEXT The returned file descriptor may be used as a parameter to os fdopen to create a file object Raises an auditing event msvcrt open_osfhandle with arguments handle flags msvcrt get_osfhandle fd Return the file handle for the file descriptor fd Raises OSError if fd is not recognized Raises an auditing event msvcrt get_osfhandle with argument fd Console I O msvcrt kbhit Return True if a keypress is waiting to be read msvcrt getch Read a keypress and return the resulting character as a byte string Nothing is echoed to the console This call will block if a keypress is not already available but will not wait for Enter to be pressed If the pressed key was a special function key this will return 000 or xe0 the next call will return the keycode The Control C keypress cannot be read with this function msvcrt getwch Wide char variant of getch returning a Unicode value msvcrt getche Similar to getch but the keypress will be echoed if it represents a printable character msvcrt getwche Wide char variant of getche returning a Unicode value msvcrt putch char Print the byte string char to the console without buffering msvcrt putwch unicode_char Wide char variant of putch accepting a Unicode value msvcrt ungetch char Cause the byte string char to be pushed back into the console buffer it will be the next character read by getch or getche msvcrt ungetwch unicode_char Wide char variant of ungetch accepting a Unicode value Other Functions msvcrt heapmin Force the malloc heap to clean itself up and return unused blocks to the operating 
system On failure this raises OSError msvcrt CRT_ASSEMBLY_VERSION The CRT Assembly version from the crtassem h header file msvcrt VC_ASSEMBLY_PUBLICKEYTOKEN The VC Assembly public key token from the crtassem h header file msvcrt LIBRARIES_ASSEMBLY_NAME_PREFIX The Libraries Assembly name prefix from the crtassem h header file
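A minimal sketch of the file-locking calls described above, for Windows only; the file name example.dat is a placeholder and the ten-byte region is arbitrary.

import msvcrt
import os

# Placeholder file name; any writable file on a Windows filesystem works.
fd = os.open("example.dat", os.O_RDWR | os.O_CREAT | os.O_BINARY)
try:
    os.write(fd, b"0123456789")
    os.lseek(fd, 0, os.SEEK_SET)

    # Lock the first 10 bytes without retrying; raises OSError if already locked.
    msvcrt.locking(fd, msvcrt.LK_NBLCK, 10)
    try:
        pass  # work with the locked region here
    finally:
        # locking() operates from the current file position, so seek back first.
        os.lseek(fd, 0, os.SEEK_SET)
        msvcrt.locking(fd, msvcrt.LK_UNLCK, 10)
finally:
    os.close(fd)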
Capsules Refer to Providing a C API for an Extension Module for more information on using these objects New in version 3 1 type PyCapsule This subtype of PyObject represents an opaque value useful for C extension modules who need to pass an opaque value as a void pointer through Python code to other C code It is often used to make a C function pointer defined in one module available to other modules so the regular import mechanism can be used to access C APIs defined in dynamically loaded modules type PyCapsule_Destructor Part of the Stable ABI The type of a destructor callback for a capsule Defined as typedef void PyCapsule_Destructor PyObject See PyCapsule_New for the semantics of PyCapsule_Destructor callbacks int PyCapsule_CheckExact PyObject p Return true if its argument is a PyCapsule This function always succeeds PyObject PyCapsule_New void pointer const char name PyCapsule_Destructor destructor Return value New reference Part of the Stable ABI Create a PyCapsule encapsulating the pointer The pointer argument may not be NULL On failure set an exception and return NULL The name string may either be NULL or a pointer to a valid C string If non NULL this string must outlive the capsule Though it is permitted to free it inside the destructor If the destructor argument is not NULL it will be called with the capsule as its argument when it is destroyed If this capsule will be stored as an attribute of a module the name should be specified as modulename attributename This will enable other modules to import the capsule using PyCapsule_Import void PyCapsule_GetPointer PyObject capsule const char name Part of the Stable ABI Retrieve the pointer stored in the capsule On failure set an exception and return NULL The name parameter must compare exactly to the name stored in the capsule If the name stored in the capsule is NULL the name passed in must also be NULL Python uses the C function strcmp to compare capsule names PyCapsule_Destructor PyCapsule_GetDestructor PyObject capsule Part of the Stable ABI Return the current destructor stored in the capsule On failure set an exception and return NULL It is legal for a capsule to have a NULL destructor This makes a NULL return code somewhat ambiguous use PyCapsule_IsValid or PyErr_Occurred to disambiguate void PyCapsule_GetContext PyObject capsule Part of the Stable ABI Return the current context stored in the capsule On failure set an exception and return NULL It is legal for a capsule to have a NULL context This makes a NULL return code somewhat ambiguous use PyCapsule_IsValid or PyErr_Occurred to disambiguate const char PyCapsule_GetName PyObject capsule Part of the Stable ABI Return the current name stored in the capsule On failure set an exception and return NULL It is legal for a capsule to have a NULL name This makes a NULL return code somewhat ambiguous use PyCapsule_IsValid or PyErr_Occurred to disambiguate void PyCapsule_Import const char name int no_block Part of the Stable ABI Import a pointer to a C object from a capsule attribute in a module The name parameter should specify the full name to the attribute as in module attribute The name stored in the capsule must match this string exactly Return the capsule s internal pointer on success On failure set an exception and return NULL Changed in version 3 3 no_block has no effect anymore int PyCapsule_IsValid PyObject capsule const char name Part of the Stable ABI Determines whether or not capsule is a valid capsule A valid capsule is non NULL passes PyCapsule_CheckExact has a non NULL 
pointer stored in it and its internal name matches the name parameter See PyCapsule_GetPointer for information on how capsule names are compared In other words if PyCapsule_IsValid returns a true value calls to any of the accessors any function starting with PyCapsule_Get are guaranteed to succeed Return a nonzero value if the object is valid and matches the name passed in Return 0 otherwise This function will not fail int PyCapsule_SetContext PyObject capsule void context Part of the Stable ABI Set the context pointer inside capsule to co
ntext Return 0 on success Return nonzero and set an exception on failure int PyCapsule_SetDestructor PyObject capsule PyCapsule_Destructor destructor Part of the Stable ABI Set the destructor inside capsule to destructor Return 0 on success Return nonzero and set an exception on failure int PyCapsule_SetName PyObject capsule const char name Part of the Stable ABI Set the name inside capsule to name If non NULL the name must outlive the capsule If the previous name stored in the capsule was not NULL no attempt is made to free it Return 0 on success Return nonzero and set an exception on failure int PyCapsule_SetPointer PyObject capsule void pointer Part of the Stable ABI Set the void pointer inside capsule to pointer The pointer may not be NULL Return 0 on success Return nonzero and set an exception on failure
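Capsules are normally created and consumed from C, but an existing one can be inspected from Python. The sketch below is purely illustrative, not part of the capsule API documentation: it relies on the capsule that the datetime module happens to publish as datetime.datetime_CAPI, and calls PyCapsule_GetName() through ctypes.pythonapi only for demonstration.

import ctypes
import datetime

# The datetime module exposes its C API as a capsule attribute.
capsule = datetime.datetime_CAPI
print(type(capsule))          # <class 'PyCapsule'>

# Call PyCapsule_GetName() through ctypes purely for illustration.
get_name = ctypes.pythonapi.PyCapsule_GetName
get_name.restype = ctypes.c_char_p
get_name.argtypes = [ctypes.py_object]
print(get_name(capsule))      # b'datetime.datetime_CAPI'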
ossaudiodev Access to OSS compatible audio devices Deprecated since version 3 11 will be removed in version 3 13 The ossaudiodev module is deprecated see PEP 594 for details This module allows you to access the OSS Open Sound System audio interface OSS is available for a wide range of open source and commercial Unices and is the standard audio interface for Linux and recent versions of FreeBSD Changed in version 3 3 Operations in this module now raise OSError where IOError was raised See also Open Sound System Programmer s Guide the official documentation for the OSS C API The module defines a large number of constants supplied by the OSS device driver see sys soundcard h on either Linux or FreeBSD for a listing ossaudiodev defines the following variables and functions exception ossaudiodev OSSAudioError This exception is raised on certain errors The argument is a string describing what went wrong If ossaudiodev receives an error from a system call such as open write or ioctl it raises OSError Errors detected directly by ossaudiodev result in OSSAudioError For backwards compatibility the exception class is also available as ossaudiodev error ossaudiodev open mode ossaudiodev open device mode Open an audio device and return an OSS audio device object This object supports many file like methods such as read write and fileno although there are subtle differences between conventional Unix read write semantics and those of OSS audio devices It also supports a number of audio specific methods see below for the complete list of methods device is the audio device filename to use If it is not specified this module first looks in the environment variable AUDIODEV for a device to use If not found it falls back to dev dsp mode is one of r for read only record access w for write only playback access and rw for both Since many sound cards only allow one process to have the recorder or player open at a time it is a good idea to open the device only for the activity needed Further some sound cards are half duplex they can be opened for reading or writing but not both at once Note the unusual calling syntax the first argument is optional and the second is required This is a historical artifact for compatibility with the older linuxaudiodev module which ossaudiodev supersedes ossaudiodev openmixer device Open a mixer device and return an OSS mixer device object device is the mixer device filename to use If it is not specified this module first looks in the environment variable MIXERDEV for a device to use If not found it falls back to dev mixer Audio Device Objects Before you can write to or read from an audio device you must call three methods in the correct order 1 setfmt to set the output format 2 channels to set the number of channels 3 speed to set the sample rate Alternately you can use the setparameters method to set all three audio parameters at once This is more convenient but may not be as flexible in all cases The audio device objects returned by open define the following methods and read only attributes oss_audio_device close Explicitly close the audio device When you are done writing to or reading from an audio device you should explicitly close it A closed device cannot be used again oss_audio_device fileno Return the file descriptor associated with the device oss_audio_device read size Read size bytes from the audio input and return them as a Python string Unlike most Unix device drivers OSS audio devices in blocking mode the default will block read until the entire requested amount of data 
is available oss_audio_device write data Write a bytes like object data to the audio device and return the number of bytes written If the audio device is in blocking mode the default the entire data is always written again this is different from usual Unix device semantics If the device is in non blocking mode some data may not be written see writeall Changed in version 3 5 Writable bytes like object is now accepted oss_audio_device writeall data Write a bytes like object data to the audio device waits until the audio device is able to a
ccept data writes as much data as it will accept and repeats until data has been completely written If the device is in blocking mode the default this has the same effect as write writeall is only useful in non blocking mode Has no return value since the amount of data written is always equal to the amount of data supplied Changed in version 3 5 Writable bytes like object is now accepted Changed in version 3 2 Audio device objects also support the context management protocol i e they can be used in a with statement The following methods each map to exactly one ioctl system call The correspondence is obvious for example setfmt corresponds to the SNDCTL_DSP_SETFMT ioctl and sync to SNDCTL_DSP_SYNC this can be useful when consulting the OSS documentation If the underlying ioctl fails they all raise OSError oss_audio_device nonblock Put the device into non blocking mode Once in non blocking mode there is no way to return it to blocking mode oss_audio_device getfmts Return a bitmask of the audio output formats supported by the soundcard Some of the formats supported by OSS are Format Description AFMT_MU_LAW a logarithmic encoding used by Sun au files and dev audio AFMT_A_LAW a logarithmic encoding AFMT_IMA_ADPCM a 4 1 compressed format defined by the Interactive Multimedia Association AFMT_U8 Unsigned 8 bit audio AFMT_S16_LE Signed 16 bit audio little endian byte order as used by Intel processors AFMT_S16_BE Signed 16 bit audio big endian byte order as used by 68k PowerPC Sparc AFMT_S8 Signed 8 bit audio AFMT_U16_LE Unsigned 16 bit little endian audio AFMT_U16_BE Unsigned 16 bit big endian audio Consult the OSS documentation for a full list of audio formats and note that most devices support only a subset of these formats Some older devices only support AFMT_U8 the most common format used today is AFMT_S16_LE oss_audio_device setfmt format Try to set the current audio format to format see getfmts for a list Returns the audio format that the device was set to which may not be the requested format May also be used to return the current audio format do this by passing an audio format of AFMT_QUERY oss_audio_device channels nchannels Set the number of output channels to nchannels A value of 1 indicates monophonic sound 2 stereophonic Some devices may have more than 2 channels and some high end devices may not support mono Returns the number of channels the device was set to oss_audio_device speed samplerate Try to set the audio sampling rate to samplerate samples per second Returns the rate actually set Most sound devices don t support arbitrary sampling rates Common rates are Rate Description 8000 default rate for dev audio 11025 speech recording 22050 44100 CD quality audio at 16 bits sample and 2 channels 96000 DVD quality audio at 24 bits sample oss_audio_device sync Wait until the sound device has played every byte in its buffer This happens implicitly when the device is closed The OSS documentation recommends closing and re opening the device rather than using sync oss_audio_device reset Immediately stop playing or recording and return the device to a state where it can accept commands The OSS documentation recommends closing and re opening the device after calling reset oss_audio_device post Tell the driver that there is likely to be a pause in the output making it possible for the device to handle the pause more intelligently You might use this after playing a spot sound effect before waiting for user input or before doing disk I O The following convenience methods combine several ioctls or 
one ioctl and some simple calculations oss_audio_device setparameters format nchannels samplerate strict False Set the key audio sampling parameters sample format number of channels and sampling rate in one method call format nchannels and samplerate should be as specified in the setfmt channels and speed methods If strict is true setparameters checks to see if each parameter was actually set to the requested value and raises OSSAudioError if not Returns a tuple format nchannels samplerate indicating the parameter values that were actu
ally set by the device driver i e the same as the return values of setfmt channels and speed For example fmt channels rate dsp setparameters fmt channels rate is equivalent to fmt dsp setfmt fmt channels dsp channels channels rate dsp rate rate oss_audio_device bufsize Returns the size of the hardware buffer in samples oss_audio_device obufcount Returns the number of samples that are in the hardware buffer yet to be played oss_audio_device obuffree Returns the number of samples that could be queued into the hardware buffer to be played without blocking Audio device objects also support several read only attributes oss_audio_device closed Boolean indicating whether the device has been closed oss_audio_device name String containing the name of the device file oss_audio_device mode The I O mode for the file either r rw or w Mixer Device Objects The mixer object provides two file like methods oss_mixer_device close This method closes the open mixer device file Any further attempts to use the mixer after this file is closed will raise an OSError oss_mixer_device fileno Returns the file handle number of the open mixer device file Changed in version 3 2 Mixer objects also support the context management protocol The remaining methods are specific to audio mixing oss_mixer_device controls This method returns a bitmask specifying the available mixer controls Control being a specific mixable channel such as SOUND_MIXER_PCM or SOUND_MIXER_SYNTH This bitmask indicates a subset of all available mixer controls the SOUND_MIXER_ constants defined at module level To determine if for example the current mixer object supports a PCM mixer use the following Python code mixer ossaudiodev openmixer if mixer controls 1 ossaudiodev SOUND_MIXER_PCM PCM is supported code For most purposes the SOUND_MIXER_VOLUME master volume and SOUND_MIXER_PCM controls should suffice but code that uses the mixer should be flexible when it comes to choosing mixer controls On the Gravis Ultrasound for example SOUND_MIXER_VOLUME does not exist oss_mixer_device stereocontrols Returns a bitmask indicating stereo mixer controls If a bit is set the corresponding control is stereo if it is unset the control is either monophonic or not supported by the mixer use in combination with controls to determine which See the code example for the controls function for an example of getting data from a bitmask oss_mixer_device reccontrols Returns a bitmask specifying the mixer controls that may be used to record See the code example for controls for an example of reading from a bitmask oss_mixer_device get control Returns the volume of a given mixer control The returned volume is a 2 tuple left_volume right_volume Volumes are specified as numbers from 0 silent to 100 full volume If the control is monophonic a 2 tuple is still returned but both volumes are the same Raises OSSAudioError if an invalid control is specified or OSError if an unsupported control is specified oss_mixer_device set control left right Sets the volume for a given mixer control to left right left and right must be ints and between 0 silent and 100 full volume On success the new volume is returned as a 2 tuple Note that this may not be exactly the same as the volume specified because of the limited resolution of some soundcard s mixers Raises OSSAudioError if an invalid mixer control was specified or if the specified volumes were out of range oss_mixer_device get_recsrc This method returns a bitmask indicating which control s are currently being used as a recording source 
oss_mixer_device set_recsrc bitmask Call this function to specify a recording source Returns a bitmask indicating the new recording source or sources if successful raises OSError if an invalid source was specified To set the current recording source to the microphone input mixer setrecsrc 1 ossaudiodev SOUND_MIXER_MIC
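A minimal playback sketch using the audio device interface described above, assuming a Linux or FreeBSD system with a working /dev/dsp; the module is deprecated, and the generated sine wave is only illustrative data.

import math
import struct
import ossaudiodev

# Open the default DSP device for playback (falls back to /dev/dsp).
dsp = ossaudiodev.open('w')
try:
    # Signed 16-bit little-endian, mono, 8000 samples per second.
    fmt, channels, rate = dsp.setparameters(ossaudiodev.AFMT_S16_LE, 1, 8000)
    # One second of a 440 Hz tone, generated here purely for illustration.
    samples = (int(32000 * math.sin(2 * math.pi * 440 * i / rate))
               for i in range(rate))
    data = b''.join(struct.pack('<h', s) for s in samples)
    dsp.writeall(data)
    dsp.sync()
finally:
    dsp.close()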
sys monitoring Execution event monitoring New in version 3 12 Note sys monitoring is a namespace within the sys module not an independent module so there is no need to import sys monitoring simply import sys and then use sys monitoring This namespace provides access to the functions and constants necessary to activate and control event monitoring As programs execute events occur that might be of interest to tools that monitor execution The sys monitoring namespace provides means to receive callbacks when events of interest occur The monitoring API consists of three components Tool identifiers Events Callbacks Tool identifiers A tool identifier is an integer and the associated name Tool identifiers are used to discourage tools from interfering with each other and to allow multiple tools to operate at the same time Currently tools are completely independent and cannot be used to monitor each other This restriction may be lifted in the future Before registering or activating events a tool should choose an identifier Identifiers are integers in the range 0 to 5 inclusive Registering and using tools sys monitoring use_tool_id tool_id int name str None Must be called before tool_id can be used tool_id must be in the range 0 to 5 inclusive Raises a ValueError if tool_id is in use sys monitoring free_tool_id tool_id int None Should be called once a tool no longer requires tool_id Note free_tool_id will not disable global or local events associated with tool_id nor will it unregister any callback functions This function is only intended to be used to notify the VM that the particular tool_id is no longer in use sys monitoring get_tool tool_id int str None Returns the name of the tool if tool_id is in use otherwise it returns None tool_id must be in the range 0 to 5 inclusive All IDs are treated the same by the VM with regard to events but the following IDs are pre defined to make co operation of tools easier sys monitoring DEBUGGER_ID 0 sys monitoring COVERAGE_ID 1 sys monitoring PROFILER_ID 2 sys monitoring OPTIMIZER_ID 5 There is no obligation to set an ID nor is there anything preventing a tool from using an ID even it is already in use However tools are encouraged to use a unique ID and respect other tools Events The following events are supported sys monitoring events BRANCH A conditional branch is taken or not sys monitoring events CALL A call in Python code event occurs before the call sys monitoring events C_RAISE An exception raised from any callable except for Python functions event occurs after the exit sys monitoring events C_RETURN Return from any callable except for Python functions event occurs after the return sys monitoring events EXCEPTION_HANDLED An exception is handled sys monitoring events INSTRUCTION A VM instruction is about to be executed sys monitoring events JUMP An unconditional jump in the control flow graph is made sys monitoring events LINE An instruction is about to be executed that has a different line number from the preceding instruction sys monitoring events PY_RESUME Resumption of a Python function for generator and coroutine functions except for throw calls sys monitoring events PY_RETURN Return from a Python function occurs immediately before the return the callee s frame will be on the stack sys monitoring events PY_START Start of a Python function occurs immediately after the call the callee s frame will be on the stack sys monitoring events PY_THROW A Python function is resumed by a throw call sys monitoring events PY_UNWIND Exit from a Python function 
during exception unwinding sys monitoring events PY_YIELD Yield from a Python function occurs immediately before the yield the callee s frame will be on the stack sys monitoring events RAISE An exception is raised except those that cause a STOP_ITERATION event sys monitoring events RERAISE An exception is re raised for example at the end of a finally block sys monitoring events STOP_ITERATION An artificial StopIteration is raised see the STOP_ITERATION event More events may be added in the future These events are attributes of the sys monito
ring events namespace Each event is represented as a power of 2 integer constant To define a set of events simply bitwise or the individual events together For example to specify both PY_RETURN and PY_START events use the expression PY_RETURN PY_START sys monitoring events NO_EVENTS An alias for 0 so users can do explict comparisions like if get_events DEBUGGER_ID NO_EVENTS Events are divided into three groups Local events Local events are associated with normal execution of the program and happen at clearly defined locations All local events can be disabled The local events are PY_START PY_RESUME PY_RETURN PY_YIELD CALL LINE INSTRUCTION JUMP BRANCH STOP_ITERATION Ancillary events Ancillary events can be monitored like other events but are controlled by another event C_RAISE C_RETURN The C_RETURN and C_RAISE events are controlled by the CALL event C_RETURN and C_RAISE events will only be seen if the corresponding CALL event is being monitored Other events Other events are not necessarily tied to a specific location in the program and cannot be individually disabled The other events that can be monitored are PY_THROW PY_UNWIND RAISE EXCEPTION_HANDLED The STOP_ITERATION event PEP 380 specifies that a StopIteration exception is raised when returning a value from a generator or coroutine However this is a very inefficient way to return a value so some Python implementations notably CPython 3 12 do not raise an exception unless it would be visible to other code To allow tools to monitor for real exceptions without slowing down generators and coroutines the STOP_ITERATION event is provided STOP_ITERATION can be locally disabled unlike RAISE Turning events on and off In order to monitor an event it must be turned on and a corresponding callback must be registered Events can be turned on or off by setting the events either globally or for a particular code object Setting events globally Events can be controlled globally by modifying the set of events being monitored sys monitoring get_events tool_id int int Returns the int representing all the active events sys monitoring set_events tool_id int event_set int None Activates all events which are set in event_set Raises a ValueError if tool_id is not in use No events are active by default Per code object events Events can also be controlled on a per code object basis sys monitoring get_local_events tool_id int code CodeType int Returns all the local events for code sys monitoring set_local_events tool_id int code CodeType event_set int None Activates all the local events for code which are set in event_set Raises a ValueError if tool_id is not in use Local events add to global events but do not mask them In other words all global events will trigger for a code object regardless of the local events Disabling events sys monitoring DISABLE A special value that can be returned from a callback function to disable events for the current code location Local events can be disabled for a specific code location by returning sys monitoring DISABLE from a callback function This does not change which events are set or any other code locations for the same event Disabling events for specific locations is very important for high performance monitoring For example a program can be run under a debugger with no overhead if the debugger disables all monitoring except for a few breakpoints sys monitoring restart_events None Enable all the events that were disabled by sys monitoring DISABLE for all tools Registering callback functions To register a callable for events 
call sys.monitoring.register_callback(tool_id: int, event: int, func: Callable | None) -> Callable | None. This registers the callable func for the event with the given tool_id. If another callback was registered for the given tool_id and event, it is unregistered and returned; otherwise register_callback returns None. Functions can be unregistered by calling sys.monitoring.register_callback(tool_id, event, None). Callback functions can be registered and unregistered at any time. Registering or unregistering a callback function will generate a sys.audit() event.

Callback function arguments

sys.monitoring.MISSING A special value that is passed to a callback function to indicate that there are no arguments to the call.

When an active event occurs, the registered callback function is called. Different events will provide the callback function with different arguments, as follows:

PY_START and PY_RESUME: func(code: CodeType, instruction_offset: int) -> DISABLE | Any

PY_RETURN and PY_YIELD: func(code: CodeType, instruction_offset: int, retval: object) -> DISABLE | Any

CALL, C_RAISE and C_RETURN: func(code: CodeType, instruction_offset: int, callable: object, arg0: object | MISSING) -> DISABLE | Any. If there are no arguments, arg0 is set to sys.monitoring.MISSING.

RAISE, RERAISE, EXCEPTION_HANDLED, PY_UNWIND, PY_THROW and STOP_ITERATION: func(code: CodeType, instruction_offset: int, exception: BaseException) -> DISABLE | Any

LINE: func(code: CodeType, line_number: int) -> DISABLE | Any

BRANCH and JUMP: func(code: CodeType, instruction_offset: int, destination_offset: int) -> DISABLE | Any. Note that the destination_offset is where the code will next execute; for an untaken branch this will be the offset of the instruction following the branch.

INSTRUCTION: func(code: CodeType, instruction_offset: int) -> DISABLE | Any
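A minimal sketch of the workflow described above, assuming Python 3.12 or later and that no other tool has already claimed the profiler ID; the tool name and the traced function are placeholders.

import sys
import types

mon = sys.monitoring
TOOL_ID = mon.PROFILER_ID            # assumes no other profiler is registered

mon.use_tool_id(TOOL_ID, "demo-tracer")

def on_py_start(code: types.CodeType, instruction_offset: int):
    print(f"entering {code.co_qualname} at offset {instruction_offset}")
    # Returning mon.DISABLE here instead would silence this code location.

mon.register_callback(TOOL_ID, mon.events.PY_START, on_py_start)
mon.set_events(TOOL_ID, mon.events.PY_START)

def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

fib(3)

# Turn monitoring off again and release the tool ID.
mon.set_events(TOOL_ID, mon.events.NO_EVENTS)
mon.register_callback(TOOL_ID, mon.events.PY_START, None)
mon.free_tool_id(TOOL_ID)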
logging config Logging configuration Source code Lib logging config py Important This page contains only reference information For tutorials please see Basic Tutorial Advanced Tutorial Logging Cookbook This section describes the API for configuring the logging module Configuration functions The following functions configure the logging module They are located in the logging config module Their use is optional you can configure the logging module using these functions or by making calls to the main API defined in logging itself and defining handlers which are declared either in logging or logging handlers logging config dictConfig config Takes the logging configuration from a dictionary The contents of this dictionary are described in Configuration dictionary schema below If an error is encountered during configuration this function will raise a ValueError TypeError AttributeError or ImportError with a suitably descriptive message The following is a possibly incomplete list of conditions which will raise an error A level which is not a string or which is a string not corresponding to an actual logging level A propagate value which is not a boolean An id which does not have a corresponding destination A non existent handler id found during an incremental call An invalid logger name Inability to resolve to an internal or external object Parsing is performed by the DictConfigurator class whose constructor is passed the dictionary used for configuration and has a configure method The logging config module has a callable attribute dictConfigClass which is initially set to DictConfigurator You can replace the value of dictConfigClass with a suitable implementation of your own dictConfig calls dictConfigClass passing the specified dictionary and then calls the configure method on the returned object to put the configuration into effect def dictConfig config dictConfigClass config configure For example a subclass of DictConfigurator could call DictConfigurator __init__ in its own __init__ then set up custom prefixes which would be usable in the subsequent configure call dictConfigClass would be bound to this new subclass and then dictConfig could be called exactly as in the default uncustomized state New in version 3 2 logging config fileConfig fname defaults None disable_existing_loggers True encoding None Reads the logging configuration from a configparser format file The format of the file should be as described in Configuration file format This function can be called several times from an application allowing an end user to select from various pre canned configurations if the developer provides a mechanism to present the choices and load the chosen configuration It will raise FileNotFoundError if the file doesn t exist and RuntimeError if the file is invalid or empty Parameters fname A filename or a file like object or an instance derived from RawConfigParser If a RawConfigParser derived instance is passed it is used as is Otherwise a ConfigParser is instantiated and the configuration read by it from the object passed in fname If that has a readline method it is assumed to be a file like object and read using read_file otherwise it is assumed to be a filename and passed to read defaults Defaults to be passed to the ConfigParser can be specified in this argument disable_existing_loggers If specified as False loggers which exist when this call is made are left enabled The default is True because this enables old behaviour in a backward compatible way This behaviour is to disable any existing non 
root loggers unless they or their ancestors are explicitly named in the logging configuration encoding The encoding used to open file when fname is filename Changed in version 3 4 An instance of a subclass of RawConfigParser is now accepted as a value for fname This facilitates Use of a configuration file where logging configuration is just part of the overall application configuration Use of a configuration read from a file and then modified by the using application e g based on command line parameters or other aspects of the runtime e
nvironment before being passed to fileConfig Changed in version 3 10 Added the encoding parameter Changed in version 3 12 An exception will be thrown if the provided file doesn t exist or is invalid or empty logging config listen port DEFAULT_LOGGING_CONFIG_PORT verify None Starts up a socket server on the specified port and listens for new configurations If no port is specified the module s default DEFAULT_LOGGING_CONFIG_PORT is used Logging configurations will be sent as a file suitable for processing by dictConfig or fileConfig Returns a Thread instance on which you can call start to start the server and which you can join when appropriate To stop the server call stopListening The verify argument if specified should be a callable which should verify whether bytes received across the socket are valid and should be processed This could be done by encrypting and or signing what is sent across the socket such that the verify callable can perform signature verification and or decryption The verify callable is called with a single argument the bytes received across the socket and should return the bytes to be processed or None to indicate that the bytes should be discarded The returned bytes could be the same as the passed in bytes e g when only verification is done or they could be completely different perhaps if decryption were performed To send a configuration to the socket read in the configuration file and send it to the socket as a sequence of bytes preceded by a four byte length string packed in binary using struct pack L n Note Because portions of the configuration are passed through eval use of this function may open its users to a security risk While the function only binds to a socket on localhost and so does not accept connections from remote machines there are scenarios where untrusted code could be run under the account of the process which calls listen Specifically if the process calling listen runs on a multi user machine where users cannot trust each other then a malicious user could arrange to run essentially arbitrary code in a victim user s process simply by connecting to the victim s listen socket and sending a configuration which runs whatever code the attacker wants to have executed in the victim s process This is especially easy to do if the default port is used but not hard even if a different port is used To avoid the risk of this happening use the verify argument to listen to prevent unrecognised configurations from being applied Changed in version 3 4 The verify argument was added Note If you want to send configurations to the listener which don t disable existing loggers you will need to use a JSON format for the configuration which will use dictConfig for configuration This method allows you to specify disable_existing_loggers as False in the configuration you send logging config stopListening Stops the listening server which was created with a call to listen This is typically called before calling join on the return value from listen Security considerations The logging configuration functionality tries to offer convenience and in part this is done by offering the ability to convert text in configuration files into Python objects used in logging configuration for example as described in User defined objects However these same mechanisms importing callables from user defined modules and calling them with parameters from the configuration could be used to invoke any code you like and for this reason you should treat configuration files from untrusted sources with 
extreme caution and satisfy yourself that nothing bad can happen if you load them before actually loading them Configuration dictionary schema Describing a logging configuration requires listing the various objects to create and the connections between them for example you may create a handler named console and then say that the logger named startup will send its messages to the console handler These objects aren t limited to those provided by the logging module because you might write your own formatter or handler class The parameters t
o these classes may also need to include external objects such as sys stderr The syntax for describing these objects and connections is defined in Object connections below Dictionary Schema Details The dictionary passed to dictConfig must contain the following keys version to be set to an integer value representing the schema version The only valid value at present is 1 but having this key allows the schema to evolve while still preserving backwards compatibility All other keys are optional but if present they will be interpreted as described below In all cases below where a configuring dict is mentioned it will be checked for the special key to see if a custom instantiation is required If so the mechanism described in User defined objects below is used to create an instance otherwise the context is used to determine what to instantiate formatters the corresponding value will be a dict in which each key is a formatter id and each value is a dict describing how to configure the corresponding Formatter instance The configuring dict is searched for the following optional keys which correspond to the arguments passed to create a Formatter object format datefmt style validate since version 3 8 defaults since version 3 12 An optional class key indicates the name of the formatter s class as a dotted module and class name The instantiation arguments are as for Formatter thus this key is most useful for instantiating a customised subclass of Formatter For example the alternative class might present exception tracebacks in an expanded or condensed format If your formatter requires different or extra configuration keys you should use User defined objects filters the corresponding value will be a dict in which each key is a filter id and each value is a dict describing how to configure the corresponding Filter instance The configuring dict is searched for the key name defaulting to the empty string and this is used to construct a logging Filter instance handlers the corresponding value will be a dict in which each key is a handler id and each value is a dict describing how to configure the corresponding Handler instance The configuring dict is searched for the following keys class mandatory This is the fully qualified name of the handler class level optional The level of the handler formatter optional The id of the formatter for this handler filters optional A list of ids of the filters for this handler Changed in version 3 11 filters can take filter instances in addition to ids All other keys are passed through as keyword arguments to the handler s constructor For example given the snippet handlers console class logging StreamHandler formatter brief level INFO filters allow_foo stream ext sys stdout file class logging handlers RotatingFileHandler formatter precise filename logconfig log maxBytes 1024 backupCount 3 the handler with id console is instantiated as a logging StreamHandler using sys stdout as the underlying stream The handler with id file is instantiated as a logging handlers RotatingFileHandler with the keyword arguments filename logconfig log maxBytes 1024 backupCount 3 loggers the corresponding value will be a dict in which each key is a logger name and each value is a dict describing how to configure the corresponding Logger instance The configuring dict is searched for the following keys level optional The level of the logger propagate optional The propagation setting of the logger filters optional A list of ids of the filters for this logger Changed in version 3 11 filters can take 
filter instances in addition to ids handlers optional A list of ids of the handlers for this logger The specified loggers will be configured according to the level propagation filters and handlers specified root this will be the configuration for the root logger Processing of the configuration will be as for any logger except that the propagate setting will not be applicable incremental whether the configuration is to be interpreted as incremental to the existing configuration This value defaults to False which means that the specified c
onfiguration replaces the existing configuration with the same semantics as used by the existing fileConfig API If the specified value is True the configuration is processed as described in the section on Incremental Configuration disable_existing_loggers whether any existing non root loggers are to be disabled This setting mirrors the parameter of the same name in fileConfig If absent this parameter defaults to True This value is ignored if incremental is True Incremental Configuration It is difficult to provide complete flexibility for incremental configuration For example because objects such as filters and formatters are anonymous once a configuration is set up it is not possible to refer to such anonymous objects when augmenting a configuration Furthermore there is not a compelling case for arbitrarily altering the object graph of loggers handlers filters formatters at run time once a configuration is set up the verbosity of loggers and handlers can be controlled just by setting levels and in the case of loggers propagation flags Changing the object graph arbitrarily in a safe way is problematic in a multi threaded environment while not impossible the benefits are not worth the complexity it adds to the implementation Thus when the incremental key of a configuration dict is present and is True the system will completely ignore any formatters and filters entries and process only the level settings in the handlers entries and the level and propagate settings in the loggers and root entries Using a value in the configuration dict lets configurations to be sent over the wire as pickled dicts to a socket listener Thus the logging verbosity of a long running application can be altered over time with no need to stop and restart the application Object connections The schema describes a set of logging objects loggers handlers formatters filters which are connected to each other in an object graph Thus the schema needs to represent connections between the objects For example say that once configured a particular logger has attached to it a particular handler For the purposes of this discussion we can say that the logger represents the source and the handler the destination of a connection between the two Of course in the configured objects this is represented by the logger holding a reference to the handler In the configuration dict this is done by giving each destination object an id which identifies it unambiguously and then using the id in the source object s configuration to indicate that a connection exists between the source and the destination object with that id So for example consider the following YAML snippet formatters brief configuration for formatter with id brief goes here precise configuration for formatter with id precise goes here handlers h1 This is an id configuration of handler with id h1 goes here formatter brief h2 This is another id configuration of handler with id h2 goes here formatter precise loggers foo bar baz other configuration for logger foo bar baz handlers h1 h2 Note YAML used here because it s a little more readable than the equivalent Python source form for the dictionary The ids for loggers are the logger names which would be used programmatically to obtain a reference to those loggers e g foo bar baz The ids for Formatters and Filters can be any string value such as brief precise above and they are transient in that they are only meaningful for processing the configuration dictionary and used to determine connections between objects and are not persisted 
anywhere when the configuration call is complete The above snippet indicates that logger named foo bar baz should have two handlers attached to it which are described by the handler ids h1 and h2 The formatter for h1 is that described by id brief and the formatter for h2 is that described by id precise User defined objects The schema supports user defined objects for handlers filters and formatters Loggers do not need to have different types for different instances so there is no support in this configuration schema for user defined logge
r classes Objects to be configured are described by dictionaries which detail their configuration In some places the logging system will be able to infer from the context how an object is to be instantiated but when a user defined object is to be instantiated the system will not know how to do this In order to provide complete flexibility for user defined object instantiation the user needs to provide a factory a callable which is called with a configuration dictionary and which returns the instantiated object This is signalled by an absolute import path to the factory being made available under the special key Here s a concrete example formatters brief format message s default format asctime s levelname 8s name 15s message s datefmt Y m d H M S custom my package customFormatterFactory bar baz spam 99 9 answer 42 The above YAML snippet defines three formatters The first with id brief is a standard logging Formatter instance with the specified format string The second with id default has a longer format and also defines the time format explicitly and will result in a logging Formatter initialized with those two format strings Shown in Python source form the brief and default formatters have configuration sub dictionaries format message s and format asctime s levelname 8s name 15s message s datefmt Y m d H M S respectively and as these dictionaries do not contain the special key the instantiation is inferred from the context as a result standard logging Formatter instances are created The configuration sub dictionary for the third formatter with id custom is my package customFormatterFactory bar baz spam 99 9 answer 42 and this contains the special key which means that user defined instantiation is wanted In this case the specified factory callable will be used If it is an actual callable it will be used directly otherwise if you specify a string as in the example the actual callable will be located using normal import mechanisms The callable will be called with the remaining items in the configuration sub dictionary as keyword arguments In the above example the formatter with id custom will be assumed to be returned by the call my package customFormatterFactory bar baz spam 99 9 answer 42 Warning The values for keys such as bar spam and answer in the above example should not be configuration dictionaries or references such as cfg foo or ext bar because they will not be processed by the configuration machinery but passed to the callable as is The key has been used as the special key because it is not a valid keyword parameter name and so will not clash with the names of the keyword arguments used in the call The also serves as a mnemonic that the corresponding value is a callable Changed in version 3 11 The filters member of handlers and loggers can take filter instances in addition to ids You can also specify a special key whose value is a dictionary is a mapping of attribute names to values If found the specified attributes will be set on the user defined object before it is returned Thus with the following configuration my package customFormatterFactory bar baz spam 99 9 answer 42 foo bar baz bozz the returned formatter will have attribute foo set to bar and attribute baz set to bozz Warning The values for attributes such as foo and baz in the above example should not be configuration dictionaries or references such as cfg foo or ext bar because they will not be processed by the configuration machinery but set as attribute values as is Handler configuration order Handlers are configured 
in alphabetical order of their keys and a configured handler replaces the configuration dictionary in a working copy of the handlers dictionary in the schema If you use a construct such as cfg handlers foo then initially handlers foo points to the configuration dictionary for the handler named foo and later once that handler has been configured it points to the configured handler instance Thus cfg handlers foo could resolve to either a dictionary or a handler instance In general it is wise to name handlers in a way such that dependent ha
ndlers are configured _after_ any handlers they depend on that allows something like cfg handlers foo to be used in configuring a handler that depends on handler foo If that dependent handler were named bar problems would result because the configuration of bar would be attempted before that of foo and foo would not yet have been configured However if the dependent handler were named foobar it would be configured after foo with the result that cfg handlers foo would resolve to configured handler foo and not its configuration dictionary Access to external objects There are times where a configuration needs to refer to objects external to the configuration for example sys stderr If the configuration dict is constructed using Python code this is straightforward but a problem arises when the configuration is provided via a text file e g JSON YAML In a text file there is no standard way to distinguish sys stderr from the literal string sys stderr To facilitate this distinction the configuration system looks for certain special prefixes in string values and treat them specially For example if the literal string ext sys stderr is provided as a value in the configuration then the ext will be stripped off and the remainder of the value processed using normal import mechanisms The handling of such prefixes is done in a way analogous to protocol handling there is a generic mechanism to look for prefixes which match the regular expression P prefix a z P suffix whereby if the prefix is recognised the suffix is processed in a prefix dependent manner and the result of the processing replaces the string value If the prefix is not recognised then the string value will be left as is Access to internal objects As well as external objects there is sometimes also a need to refer to objects in the configuration This will be done implicitly by the configuration system for things that it knows about For example the string value DEBUG for a level in a logger or handler will automatically be converted to the value logging DEBUG and the handlers filters and formatter entries will take an object id and resolve to the appropriate destination object However a more generic mechanism is needed for user defined objects which are not known to the logging module For example consider logging handlers MemoryHandler which takes a target argument which is another handler to delegate to Since the system already knows about this class then in the configuration the given target just needs to be the object id of the relevant target handler and the system will resolve to the handler from the id If however a user defines a my package MyHandler which has an alternate handler the configuration system would not know that the alternate referred to a handler To cater for this a generic resolution system allows the user to specify handlers file configuration of file handler goes here custom my package MyHandler alternate cfg handlers file The literal string cfg handlers file will be resolved in an analogous way to strings with the ext prefix but looking in the configuration itself rather than the import namespace The mechanism allows access by dot or by index in a similar way to that provided by str format Thus given the following snippet handlers email class logging handlers SMTPHandler mailhost localhost fromaddr my_app domain tld toaddrs support_team domain tld dev_team domain tld subject Houston we have a problem in the configuration the string cfg handlers would resolve to the dict with key handlers the string cfg handlers email would 
resolve to the dict with key email in the handlers dict and so on The string cfg handlers email toaddrs 1 would resolve to dev_team domain tld and the string cfg handlers email toaddrs 0 would resolve to the value support_team domain tld The subject value could be accessed using either cfg handlers email subject or equivalently cfg handlers email subject The latter form only needs to be used if the key contains spaces or non alphanumeric characters If an index value consists only of decimal digits access will be attempted using the cor
responding integer value falling back to the string value if needed Given a string cfg handlers myhandler mykey 123 this will resolve to config_dict handlers myhandler mykey 123 If the string is specified as cfg handlers myhandler mykey 123 the system will attempt to retrieve the value from config_dict handlers myhandler mykey 123 and fall back to config_dict handlers myhandler mykey 123 if that fails Import resolution and custom importers Import resolution by default uses the builtin __import__ function to do its importing You may want to replace this with your own importing mechanism if so you can replace the importer attribute of the DictConfigurator or its superclass the BaseConfigurator class However you need to be careful because of the way functions are accessed from classes via descriptors If you are using a Python callable to do your imports and you want to define it at class level rather than instance level you need to wrap it with staticmethod For example from importlib import import_module from logging config import BaseConfigurator BaseConfigurator importer staticmethod import_module You don t need to wrap with staticmethod if you re setting the import callable on a configurator instance Configuring QueueHandler and QueueListener If you want to configure a QueueHandler noting that this is normally used in conjunction with a QueueListener you can configure both together After the configuration the QueueListener instance will be available as the listener attribute of the created handler and that in turn will be available to you using getHandlerByName and passing the name you have used for the QueueHandler in your configuration The dictionary schema for configuring the pair is shown in the example YAML snippet below handlers qhand class logging handlers QueueHandler queue my module queue_factory listener my package CustomListener handlers hand_name_1 hand_name_2 The queue and listener keys are optional If the queue key is present the corresponding value can be one of the following An actual instance of queue Queue or a subclass thereof This is of course only possible if you are constructing or modifying the configuration dictionary in code A string that resolves to a callable which when called with no arguments returns the queue Queue instance to use That callable could be a queue Queue subclass or a function which returns a suitable queue instance such as my module queue_factory A dict with a key which is constructed in the usual way as discussed in User defined objects The result of this construction should be a queue Queue instance If the queue key is absent a standard unbounded queue Queue instance is created and used If the listener key is present the corresponding value can be one of the following A subclass of logging handlers QueueListener This is of course only possible if you are constructing or modifying the configuration dictionary in code A string which resolves to a class which is a subclass of QueueListener such as my package CustomListener A dict with a key which is constructed in the usual way as discussed in User defined objects The result of this construction should be a callable with the same signature as the QueueListener initializer If the listener key is absent logging handlers QueueListener is used The values under the handlers key are the names of other handlers in the configuration not shown in the above snippet which will be passed to the queue listener Any custom queue handler and listener classes will need to be defined with the same initialization 
signatures as QueueHandler and QueueListener New in version 3 12 Configuration file format The configuration file format understood by fileConfig is based on configparser functionality The file must contain sections called loggers handlers and formatters which identify by name the entities of each type which are defined in the file For each such entity there is a separate section which identifies how that entity is configured Thus for a logger named log01 in the loggers section the relevant configuration details are held in a section logg
er_log01 Similarly a handler called hand01 in the handlers section will have its configuration held in a section called handler_hand01 while a formatter called form01 in the formatters section will have its configuration specified in a section called formatter_form01 The root logger configuration must be specified in a section called logger_root Note The fileConfig API is older than the dictConfig API and does not provide functionality to cover certain aspects of logging For example you cannot configure Filter objects which provide for filtering of messages beyond simple integer levels using fileConfig If you need to have instances of Filter in your logging configuration you will need to use dictConfig Note that future enhancements to configuration functionality will be added to dictConfig so it s worth considering transitioning to this newer API when it s convenient to do so Examples of these sections in the file are given below loggers keys root log02 log03 log04 log05 log06 log07 handlers keys hand01 hand02 hand03 hand04 hand05 hand06 hand07 hand08 hand09 formatters keys form01 form02 form03 form04 form05 form06 form07 form08 form09 The root logger must specify a level and a list of handlers An example of a root logger section is given below logger_root level NOTSET handlers hand01 The level entry can be one of DEBUG INFO WARNING ERROR CRITICAL or NOTSET For the root logger only NOTSET means that all messages will be logged Level values are evaluated in the context of the logging package s namespace The handlers entry is a comma separated list of handler names which must appear in the handlers section These names must appear in the handlers section and have corresponding sections in the configuration file For loggers other than the root logger some additional information is required This is illustrated by the following example logger_parser level DEBUG handlers hand01 propagate 1 qualname compiler parser The level and handlers entries are interpreted as for the root logger except that if a non root logger s level is specified as NOTSET the system consults loggers higher up the hierarchy to determine the effective level of the logger The propagate entry is set to 1 to indicate that messages must propagate to handlers higher up the logger hierarchy from this logger or 0 to indicate that messages are not propagated to handlers up the hierarchy The qualname entry is the hierarchical channel name of the logger that is to say the name used by the application to get the logger Sections which specify handler configuration are exemplified by the following handler_hand01 class StreamHandler level NOTSET formatter form01 args sys stdout The class entry indicates the handler s class as determined by eval in the logging package s namespace The level is interpreted as for loggers and NOTSET is taken to mean log everything The formatter entry indicates the key name of the formatter for this handler If blank a default formatter logging _defaultFormatter is used If a name is specified it must appear in the formatters section and have a corresponding section in the configuration file The args entry when evaluated in the context of the logging package s namespace is the list of arguments to the constructor for the handler class Refer to the constructors for the relevant handlers or to the examples below to see how typical entries are constructed If not provided it defaults to The optional kwargs entry when evaluated in the context of the logging package s namespace is the keyword argument dict to the 
constructor for the handler class If not provided it defaults to
handler_hand02 class FileHandler level DEBUG formatter form02 args python log w
handler_hand03 class handlers SocketHandler level INFO formatter form03 args localhost handlers DEFAULT_TCP_LOGGING_PORT
handler_hand04 class handlers DatagramHandler level WARN formatter form04 args localhost handlers DEFAULT_UDP_LOGGING_PORT
handler_hand05 class handlers SysLogHandler level ERROR formatter form05 args localhost handlers SYSLOG_UDP_PORT handlers SysLogHandler LOG_USER
handler_hand06 class handlers NTEventLogHandler level CRITICAL formatter form06 args Python Application Application
handler_hand07 class handlers SMTPHandler level WARN formatter form07 args localhost from abc user1 abc user2 xyz Logger Subject kwargs timeout 10 0
handler_hand08 class handlers MemoryHandler level NOTSET formatter form08 target args 10 ERROR
handler_hand09 class handlers HTTPHandler level NOTSET formatter form09 args localhost 9022 log GET kwargs secure True
Sections which specify formatter configuration are typified by the following formatter_form01 format F1 asctime s levelname s message s customfield s datefmt style validate True defaults customfield defaultvalue class logging Formatter The arguments for the formatter configuration are the same as the keys in the dictionary schema formatters section The defaults entry when evaluated in the context of the logging package s namespace is a dictionary of default values for custom formatting fields If not provided it defaults to None
Note Due to the use of eval as described above there are potential security risks which result from using the listen to send and receive configurations via sockets The risks are limited to where multiple users with no mutual trust run code on the same machine see the listen documentation for more information
See also Module logging API reference for the logging module Module logging handlers Useful handlers included with the logging module
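Since the documentation above recommends considering a transition to dictConfig where convenient, the following is a minimal self-contained sketch of the dictionary schema in use; the logger name my_app, the format string and the logged message are illustrative assumptions, while the keys themselves (version, formatters, handlers, loggers, root) and the ext prefix are those described earlier in this section.

import logging
import logging.config

config = {
    "version": 1,
    "formatters": {
        "brief": {"format": "%(levelname)s:%(name)s:%(message)s"},
    },
    "handlers": {
        "console": {
            "class": "logging.StreamHandler",
            "formatter": "brief",
            "level": "INFO",
            "stream": "ext://sys.stdout",  # external object reference
        },
    },
    "loggers": {
        # illustrative logger name; handlers are referenced by their ids
        "my_app": {"level": "DEBUG", "handlers": ["console"], "propagate": False},
    },
    "root": {"level": "WARNING"},
}

logging.config.dictConfig(config)
logging.getLogger("my_app").info("configured via dictConfig")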
gc Garbage Collector interface This module provides an interface to the optional garbage collector It provides the ability to disable the collector tune the collection frequency and set debugging options It also provides access to unreachable objects that the collector found but cannot free Since the collector supplements the reference counting already used in Python you can disable the collector if you are sure your program does not create reference cycles Automatic collection can be disabled by calling gc disable To debug a leaking program call gc set_debug gc DEBUG_LEAK Notice that this includes gc DEBUG_SAVEALL causing garbage collected objects to be saved in gc garbage for inspection The gc module provides the following functions gc enable Enable automatic garbage collection gc disable Disable automatic garbage collection gc isenabled Return True if automatic collection is enabled gc collect generation 2 With no arguments run a full collection The optional argument generation may be an integer specifying which generation to collect from 0 to 2 A ValueError is raised if the generation number is invalid The number of unreachable objects found is returned The free lists maintained for a number of built in types are cleared whenever a full collection or collection of the highest generation 2 is run Not all items in some free lists may be freed due to the particular implementation in particular float The effect of calling gc collect while the interpreter is already performing a collection is undefined gc set_debug flags Set the garbage collection debugging flags Debugging information will be written to sys stderr See below for a list of debugging flags which can be combined using bit operations to control debugging gc get_debug Return the debugging flags currently set gc get_objects generation None Returns a list of all objects tracked by the collector excluding the list returned If generation is not None return only the objects tracked by the collector that are in that generation Changed in version 3 8 New generation parameter Raises an auditing event gc get_objects with argument generation gc get_stats Return a list of three per generation dictionaries containing collection statistics since interpreter start The number of keys may change in the future but currently each dictionary will contain the following items collections is the number of times this generation was collected collected is the total number of objects collected inside this generation uncollectable is the total number of objects which were found to be uncollectable and were therefore moved to the garbage list inside this generation New in version 3 4 gc set_threshold threshold0 threshold1 threshold2 Set the garbage collection thresholds the collection frequency Setting threshold0 to zero disables collection The GC classifies objects into three generations depending on how many collection sweeps they have survived New objects are placed in the youngest generation generation 0 If an object survives a collection it is moved into the next older generation Since generation 2 is the oldest generation objects in that generation remain there after a collection In order to decide when to run the collector keeps track of the number object allocations and deallocations since the last collection When the number of allocations minus the number of deallocations exceeds threshold0 collection starts Initially only generation 0 is examined If generation 0 has been examined more than threshold1 times since generation 1 has been examined 
then generation 1 is examined as well With the third generation things are a bit more complicated see Collecting the oldest generation for more information gc get_count Return the current collection counts as a tuple of count0 count1 count2 gc get_threshold Return the current collection thresholds as a tuple of threshold0 threshold1 threshold2 gc get_referrers objs Return the list of objects that directly refer to any of objs This function will only locate those containers which support garbage collection extension types which do refer to
other objects but do not support garbage collection will not be found Note that objects which have already been dereferenced but which live in cycles and have not yet been collected by the garbage collector can be listed among the resulting referrers To get only currently live objects call collect before calling get_referrers Warning Care must be taken when using objects returned by get_referrers because some of them could still be under construction and hence in a temporarily invalid state Avoid using get_referrers for any purpose other than debugging Raises an auditing event gc get_referrers with argument objs gc get_referents objs Return a list of objects directly referred to by any of the arguments The referents returned are those objects visited by the arguments C level tp_traverse methods if any and may not be all objects actually directly reachable tp_traverse methods are supported only by objects that support garbage collection and are only required to visit objects that may be involved in a cycle So for example if an integer is directly reachable from an argument that integer object may or may not appear in the result list Raises an auditing event gc get_referents with argument objs gc is_tracked obj Returns True if the object is currently tracked by the garbage collector False otherwise As a general rule instances of atomic types aren t tracked and instances of non atomic types containers user defined objects are However some type specific optimizations can be present in order to suppress the garbage collector footprint of simple instances e g dicts containing only atomic keys and values gc is_tracked 0 False gc is_tracked a False gc is_tracked True gc is_tracked False gc is_tracked a 1 False gc is_tracked a True New in version 3 1 gc is_finalized obj Returns True if the given object has been finalized by the garbage collector False otherwise x None class Lazarus def __del__ self global x x self lazarus Lazarus gc is_finalized lazarus False del lazarus gc is_finalized x True New in version 3 9 gc freeze Freeze all the objects tracked by the garbage collector move them to a permanent generation and ignore them in all the future collections If a process will fork without exec avoiding unnecessary copy on write in child processes will maximize memory sharing and reduce overall memory usage This requires both avoiding creation of freed holes in memory pages in the parent process and ensuring that GC collections in child processes won t touch the gc_refs counter of long lived objects originating in the parent process To accomplish both call gc disable early in the parent process gc freeze right before fork and gc enable early in child processes New in version 3 7 gc unfreeze Unfreeze the objects in the permanent generation put them back into the oldest generation New in version 3 7 gc get_freeze_count Return the number of objects in the permanent generation New in version 3 7 The following variables are provided for read only access you can mutate the values but should not rebind them gc garbage A list of objects which the collector found to be unreachable but could not be freed uncollectable objects Starting with Python 3 4 this list should be empty most of the time except when using instances of C extension types with a non NULL tp_del slot If DEBUG_SAVEALL is set then all unreachable objects will be added to this list rather than freed Changed in version 3 2 If this list is non empty at interpreter shutdown a ResourceWarning is emitted which is silent by default If 
DEBUG_UNCOLLECTABLE is set in addition all uncollectable objects are printed Changed in version 3 2 If this list is non empty at interpreter shutdown a ResourceWarning is emitted which is silent by default Changed in version 3 4 Following PEP 442 objects with a __del__ method don t end up in gc garbage anymore
gc callbacks A list of callbacks that will be invoked by the garbage collector before and after collection The callbacks will be called with two arguments phase and info phase can be one of two values start The garbage collection is about to start stop The garbage collection has finished info is a dict providing more information for the callback The following keys are currently defined generation The oldest generation being collected collected When phase is stop the number of objects successfully collected uncollectable When phase is stop the number of objects that could not be collected and were put in garbage Applications can add their own callbacks to this list The primary use cases are Gathering statistics about garbage collection such as how often various generations are collected and how long the collection takes Allowing applications to identify and clear their own uncollectable types when they appear in garbage New in version 3 3
The following constants are provided for use with set_debug
gc DEBUG_STATS Print statistics during collection This information can be useful when tuning the collection frequency
gc DEBUG_COLLECTABLE Print information on collectable objects found
gc DEBUG_UNCOLLECTABLE Print information of uncollectable objects found objects which are not reachable but cannot be freed by the collector These objects will be added to the garbage list Changed in version 3 2 Also print the contents of the garbage list at interpreter shutdown if it isn t empty
gc DEBUG_SAVEALL When set all unreachable objects found will be appended to garbage rather than being freed This can be useful for debugging a leaking program
gc DEBUG_LEAK The debugging flags necessary for the collector to print information about a leaking program equal to DEBUG_COLLECTABLE DEBUG_UNCOLLECTABLE DEBUG_SAVEALL
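As a sketch of the callback interface just described, the following registers a callback that times each collection; the helper name gc_timer and the timing dictionary are illustrative assumptions, only the documented phase and info arguments come from the gc API.

import gc
import time

_starts = {}

def gc_timer(phase, info):
    # phase is "start" or "stop"; info always carries "generation",
    # and "collected"/"uncollectable" when phase is "stop"
    gen = info["generation"]
    if phase == "start":
        _starts[gen] = time.perf_counter()
    else:
        elapsed = time.perf_counter() - _starts.pop(gen, time.perf_counter())
        print(f"gen {gen}: collected {info['collected']} objects, "
              f"{info['uncollectable']} uncollectable, in {elapsed:.6f}s")

gc.callbacks.append(gc_timer)
gc.collect()                   # force a full collection so the callback fires
gc.callbacks.remove(gc_timer)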
The Python Profilers Source code Lib profile py and Lib pstats py Introduction to the profilers cProfile and profile provide deterministic profiling of Python programs A profile is a set of statistics that describes how often and for how long various parts of the program executed These statistics can be formatted into reports via the pstats module The Python standard library provides two different implementations of the same profiling interface 1 cProfile is recommended for most users it s a C extension with reasonable overhead that makes it suitable for profiling long running programs Based on lsprof contributed by Brett Rosen and Ted Czotter 2 profile a pure Python module whose interface is imitated by cProfile but which adds significant overhead to profiled programs If you re trying to extend the profiler in some way the task might be easier with this module Originally designed and written by Jim Roskind Note The profiler modules are designed to provide an execution profile for a given program not for benchmarking purposes for that there is timeit for reasonably accurate results This particularly applies to benchmarking Python code against C code the profilers introduce overhead for Python code but not for C level functions and so the C code would seem faster than any Python one Instant User s Manual This section is provided for users that don t want to read the manual It provides a very brief overview and allows a user to rapidly perform profiling on an existing application To profile a function that takes a single argument you can do import cProfile import re cProfile run re compile foo bar Use profile instead of cProfile if the latter is not available on your system The above action would run re compile and print profile results like the following 214 function calls 207 primitive calls in 0 002 seconds Ordered by cumulative time ncalls tottime percall cumtime percall filename lineno function 1 0 000 0 000 0 002 0 002 built in method builtins exec 1 0 000 0 000 0 001 0 001 string 1 module 1 0 000 0 000 0 001 0 001 __init__ py 250 compile 1 0 000 0 000 0 001 0 001 __init__ py 289 _compile 1 0 000 0 000 0 000 0 000 _compiler py 759 compile 1 0 000 0 000 0 000 0 000 _parser py 937 parse 1 0 000 0 000 0 000 0 000 _compiler py 598 _code 1 0 000 0 000 0 000 0 000 _parser py 435 _parse_sub The first line indicates that 214 calls were monitored Of those calls 207 were primitive meaning that the call was not induced via recursion The next line Ordered by cumulative time indicates the output is sorted by the cumtime values The column headings include ncalls for the number of calls tottime for the total time spent in the given function and excluding time made in calls to sub functions percall is the quotient of tottime divided by ncalls cumtime is the cumulative time spent in this and all subfunctions from invocation till exit This figure is accurate even for recursive functions percall is the quotient of cumtime divided by primitive calls filename lineno function provides the respective data of each function When there are two numbers in the first column for example 3 1 it means that the function recursed The second value is the number of primitive calls and the former is the total number of calls Note that when the function does not recurse these two values are the same and only the single figure is printed Instead of printing the output at the end of the profile run you can save the results to a file by specifying a filename to the run function import cProfile import re cProfile run re 
compile foo bar restats The pstats Stats class reads profile results from a file and formats them in various ways The files cProfile and profile can also be invoked as a script to profile another script For example python m cProfile o output_file s sort_order m module myscript py o writes the profile results to a file instead of to stdout s specifies one of the sort_stats sort values to sort the output by This only applies when o is not supplied m specifies that a module is being profiled instead of a script New in version 3 7 Added the m opt
ion to cProfile New in version 3 8 Added the m option to profile The pstats module s Stats class has a variety of methods for manipulating and printing the data saved into a profile results file import pstats from pstats import SortKey p pstats Stats restats p strip_dirs sort_stats 1 print_stats The strip_dirs method removed the extraneous path from all the module names The sort_stats method sorted all the entries according to the standard module line name string that is printed The print_stats method printed out all the statistics You might try the following sort calls p sort_stats SortKey NAME p print_stats The first call will actually sort the list by function name and the second call will print out the statistics The following are some interesting calls to experiment with p sort_stats SortKey CUMULATIVE print_stats 10 This sorts the profile by cumulative time in a function and then only prints the ten most significant lines If you want to understand what algorithms are taking time the above line is what you would use If you were looking to see what functions were looping a lot and taking a lot of time you would do p sort_stats SortKey TIME print_stats 10 to sort according to time spent within each function and then print the statistics for the top ten functions You might also try p sort_stats SortKey FILENAME print_stats __init__ This will sort all the statistics by file name and then print out statistics for only the class init methods since they are spelled with __init__ in them As one final example you could try p sort_stats SortKey TIME SortKey CUMULATIVE print_stats 5 init This line sorts statistics with a primary key of time and a secondary key of cumulative time and then prints out some of the statistics To be specific the list is first culled down to 50 re 5 of its original size then only lines containing init are maintained and that sub sub list is printed If you wondered what functions called the above functions you could now p is still sorted according to the last criteria do p print_callers 5 init and you would get a list of callers for each of the listed functions If you want more functionality you re going to have to read the manual or guess what the following functions do p print_callees p add restats Invoked as a script the pstats module is a statistics browser for reading and examining profile dumps It has a simple line oriented interface implemented using cmd and interactive help profile and cProfile Module Reference Both the profile and cProfile modules provide the following functions profile run command filename None sort 1 This function takes a single argument that can be passed to the exec function and an optional file name In all cases this routine executes exec command __main__ __dict__ __main__ __dict__ and gathers profiling statistics from the execution If no file name is present then this function automatically creates a Stats instance and prints a simple profiling report If the sort value is specified it is passed to this Stats instance to control how the results are sorted profile runctx command globals locals filename None sort 1 This function is similar to run with added arguments to supply the globals and locals dictionaries for the command string This routine executes exec command globals locals and gathers profiling statistics as in the run function above class profile Profile timer None timeunit 0 0 subcalls True builtins True This class is normally only used if more precise control over profiling is needed than what the cProfile run function provides 
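Putting the run function and the pstats machinery described above together, here is a small end-to-end sketch; the profiled function busy_work and the file name busy.stats are illustrative assumptions, not part of either API.

import cProfile
import pstats
from pstats import SortKey

def busy_work():
    return sum(i * i for i in range(100_000))

# run() executes the statement in __main__, so this sketch is meant to be
# run as a script; the profile data is written to busy.stats.
cProfile.run("busy_work()", "busy.stats")

p = pstats.Stats("busy.stats")
p.strip_dirs().sort_stats(SortKey.CUMULATIVE).print_stats(10)
p.print_callers("busy")        # restrict the caller listing to names matching "busy"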
A custom timer can be supplied for measuring how long code takes to run via the timer argument This must be a function that returns a single number representing the current time If the number is an integer the timeunit specifies a multiplier that specifies the duration of each unit of time For example if the timer returns times measured in thousands of seconds the time unit would be 001 Directly using the Profile class allows formatting profile results without writing the profile data to a file import cProfile pstats io from pstats imp
ort SortKey pr cProfile Profile pr enable do something pr disable s io StringIO sortby SortKey CUMULATIVE ps pstats Stats pr stream s sort_stats sortby ps print_stats print s getvalue The Profile class can also be used as a context manager supported only in cProfile module see Context Manager Types import cProfile with cProfile Profile as pr do something pr print_stats Changed in version 3 8 Added context manager support enable Start collecting profiling data Only in cProfile disable Stop collecting profiling data Only in cProfile create_stats Stop collecting profiling data and record the results internally as the current profile print_stats sort 1 Create a Stats object based on the current profile and print the results to stdout dump_stats filename Write the results of the current profile to filename run cmd Profile the cmd via exec runctx cmd globals locals Profile the cmd via exec with the specified global and local environment runcall func args kwargs Profile func args kwargs Note that profiling will only work if the called command function actually returns If the interpreter is terminated e g via a sys exit call during the called command function execution no profiling results will be printed The Stats Class Analysis of the profiler data is done using the Stats class class pstats Stats filenames or profile stream sys stdout This class constructor creates an instance of a statistics object from a filename or list of filenames or from a Profile instance Output will be printed to the stream specified by stream The file selected by the above constructor must have been created by the corresponding version of profile or cProfile To be specific there is no file compatibility guaranteed with future versions of this profiler and there is no compatibility with files produced by other profilers or the same profiler run on a different operating system If several files are provided all the statistics for identical functions will be coalesced so that an overall view of several processes can be considered in a single report If additional files need to be combined with data in an existing Stats object the add method can be used Instead of reading the profile data from a file a cProfile Profile or profile Profile object can be used as the profile data source Stats objects have the following methods strip_dirs This method for the Stats class removes all leading path information from file names It is very useful in reducing the size of the printout to fit within close to 80 columns This method modifies the object and the stripped information is lost After performing a strip operation the object is considered to have its entries in a random order as it was just after object initialization and loading If strip_dirs causes two function names to be indistinguishable they are on the same line of the same filename and have the same function name then the statistics for these two entries are accumulated into a single entry add filenames This method of the Stats class accumulates additional profiling information into the current profiling object Its arguments should refer to filenames created by the corresponding version of profile run or cProfile run Statistics for identically named re file line name functions are automatically accumulated into single function statistics dump_stats filename Save the data loaded into the Stats object to a file named filename The file is created if it does not exist and is overwritten if it already exists This is equivalent to the method of the same name on the profile Profile 
and cProfile Profile classes sort_stats keys This method modifies the Stats object by sorting it according to the supplied criteria The argument can be either a string or a SortKey enum identifying the basis of a sort example time name SortKey TIME or SortKey NAME The SortKey enums argument have advantage over the string argument in that it is more robust and less error prone When more than one key is provided then additional keys are used as secondary criteria when there is equality in all keys selected before them For example sort_stat
s SortKey NAME SortKey FILE will sort all the entries according to their function name and resolve all ties identical function names by sorting by file name For the string argument abbreviations can be used for any key names as long as the abbreviation is unambiguous The following are the valid string and SortKey Valid String Arg Valid enum Arg Meaning calls SortKey CALLS call count cumulative SortKey CUMULATIVE cumulative time cumtime N A cumulative time file N A file name filename SortKey FILENAME file name module N A file name ncalls N A call count pcalls SortKey PCALLS primitive call count line SortKey LINE line number name SortKey NAME function name nfl SortKey NFL name file line stdname SortKey STDNAME standard name time SortKey TIME internal time tottime N A internal time Note that all sorts on statistics are in descending order placing most time consuming items first where as name file and line number searches are in ascending order alphabetical The subtle distinction between SortKey NFL and SortKey STDNAME is that the standard name is a sort of the name as printed which means that the embedded line numbers get compared in an odd way For example lines 3 20 and 40 would if the file names were the same appear in the string order 20 3 and 40 In contrast SortKey NFL does a numeric compare of the line numbers In fact sort_stats SortKey NFL is the same as sort_stats SortKey NAME SortKey FILENAME SortKey LINE For backward compatibility reasons the numeric arguments 1 0 1 and 2 are permitted They are interpreted as stdname calls time and cumulative respectively If this old style format numeric is used only one sort key the numeric key will be used and additional arguments will be silently ignored New in version 3 7 Added the SortKey enum reverse_order This method for the Stats class reverses the ordering of the basic list within the object Note that by default ascending vs descending order is properly selected based on the sort key of choice print_stats restrictions This method for the Stats class prints out a report as described in the profile run definition The order of the printing is based on the last sort_stats operation done on the object subject to caveats in add and strip_dirs The arguments provided if any can be used to limit the list down to the significant entries Initially the list is taken to be the complete set of profiled functions Each restriction is either an integer to select a count of lines or a decimal fraction between 0 0 and 1 0 inclusive to select a percentage of lines or a string that will interpreted as a regular expression to pattern match the standard name that is printed If several restrictions are provided then they are applied sequentially For example print_stats 1 foo would first limit the printing to first 10 of list and then only print functions that were part of filename foo In contrast the command print_stats foo 1 would limit the list to all functions having file names foo and then proceed to only print the first 10 of them print_callers restrictions This method for the Stats class prints a list of all functions that called each function in the profiled database The ordering is identical to that provided by print_stats and the definition of the restricting argument is also identical Each caller is reported on its own line The format differs slightly depending on the profiler that produced the stats With profile a number is shown in parentheses after each caller to show how many times this specific call was made For convenience a second non parenthesized 
number repeats the cumulative time spent in the function at the right With cProfile each caller is preceded by three numbers the number of times this specific call was made and the total and cumulative times spent in the current function while it was invoked by this specific caller print_callees restrictions This method for the Stats class prints a list of all function that were called by the indicated function Aside from this reversal of direction of calls re called vs was called by the arguments and ordering are identical to the prin
t_callers method get_stats_profile This method returns an instance of StatsProfile which contains a mapping of function names to instances of FunctionProfile Each FunctionProfile instance holds information related to the function s profile such as how long the function took to run how many times it was called etc New in version 3 9 Added the following dataclasses StatsProfile FunctionProfile Added the following function get_stats_profile What Is Deterministic Profiling Deterministic profiling is meant to reflect the fact that all function call function return and exception events are monitored and precise timings are made for the intervals between these events during which time the user s code is executing In contrast statistical profiling which is not done by this module randomly samples the effective instruction pointer and deduces where time is being spent The latter technique traditionally involves less overhead as the code does not need to be instrumented but provides only relative indications of where time is being spent In Python since there is an interpreter active during execution the presence of instrumented code is not required in order to do deterministic profiling Python automatically provides a hook optional callback for each event In addition the interpreted nature of Python tends to add so much overhead to execution that deterministic profiling tends to only add small processing overhead in typical applications The result is that deterministic profiling is not that expensive yet provides extensive run time statistics about the execution of a Python program Call count statistics can be used to identify bugs in code surprising counts and to identify possible inline expansion points high call counts Internal time statistics can be used to identify hot loops that should be carefully optimized Cumulative time statistics should be used to identify high level errors in the selection of algorithms Note that the unusual handling of cumulative times in this profiler allows statistics for recursive implementations of algorithms to be directly compared to iterative implementations Limitations One limitation has to do with accuracy of timing information There is a fundamental problem with deterministic profilers involving accuracy The most obvious restriction is that the underlying clock is only ticking at a rate typically of about 001 seconds Hence no measurements will be more accurate than the underlying clock If enough measurements are taken then the error will tend to average out Unfortunately removing this first error induces a second source of error The second problem is that it takes a while from when an event is dispatched until the profiler s call to get the time actually gets the state of the clock Similarly there is a certain lag when exiting the profiler event handler from the time that the clock s value was obtained and then squirreled away until the user s code is once again executing As a result functions that are called many times or call many functions will typically accumulate this error The error that accumulates in this fashion is typically less than the accuracy of the clock less than one clock tick but it can accumulate and become very significant The problem is more important with profile than with the lower overhead cProfile For this reason profile provides a means of calibrating itself for a given platform so that this error can be probabilistically on the average removed After the profiler is calibrated it will be more accurate in a least square sense but it 
will sometimes produce negative numbers when call counts are exceptionally low and the gods of probability work against you Do not be alarmed by negative numbers in the profile They should only appear if you have calibrated your profiler and the results are actually better than without calibration Calibration The profiler of the profile module subtracts a constant from each event handling time to compensate for the overhead of calling the time function and socking away the results By default the constant is 0 The following procedure can b
e used to obtain a better constant for a given platform see Limitations import profile pr profile Profile for i in range 5 print pr calibrate 10000 The method executes the number of Python calls given by the argument directly and again under the profiler measuring the time for both It then computes the hidden overhead per profiler event and returns that as a float For example on a 1 8Ghz Intel Core i5 running macOS and using Python s time process_time as the timer the magical number is about 4 04e 6 The object of this exercise is to get a fairly consistent result If your computer is very fast or your timer function has poor resolution you might have to pass 100000 or even 1000000 to get consistent results When you have a consistent answer there are three ways you can use it import profile 1 Apply computed bias to all Profile instances created hereafter profile Profile bias your_computed_bias 2 Apply computed bias to a specific Profile instance pr profile Profile pr bias your_computed_bias 3 Specify computed bias in instance constructor pr profile Profile bias your_computed_bias If you have a choice you are better off choosing a smaller constant and then your results will less often show up as negative in profile statistics Using a custom timer If you want to change how current time is determined for example to force use of wall clock time or elapsed process time pass the timing function you want to the Profile class constructor pr profile Profile your_time_func The resulting profiler will then call your_time_func Depending on whether you are using profile Profile or cProfile Profile your_time_func s return value will be interpreted differently profile Profile your_time_func should return a single number or a list of numbers whose sum is the current time like what os times returns If the function returns a single time number or the list of returned numbers has length 2 then you will get an especially fast version of the dispatch routine Be warned that you should calibrate the profiler class for the timer function that you choose see Calibration For most machines a timer that returns a lone integer value will provide the best results in terms of low overhead during profiling os times is pretty bad as it returns a tuple of floating point values If you want to substitute a better timer in the cleanest fashion derive a class and hardwire a replacement dispatch method that best handles your timer call along with the appropriate calibration constant cProfile Profile your_time_func should return a single number If it returns integers you can also invoke the class constructor with a second argument specifying the real duration of one unit of time For example if your_integer_time_func returns times measured in thousands of seconds you would construct the Profile instance as follows pr cProfile Profile your_integer_time_func 0 001 As the cProfile Profile class cannot be calibrated custom timer functions should be used with care and should be as fast as possible For the best results with a custom timer it might be necessary to hard code it in the C source of the internal _lsprof module Python 3 3 adds several new functions in time that can be used to make precise measurements of process or wall clock time For example see time perf_counter
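As a sketch of the custom timer interface described above, the following profiles the same function once with a float wall clock timer and once with an integer nanosecond timer plus an explicit time unit; the helper function work is illustrative, and neither choice is a recommendation over the default timer.

import cProfile
import time

def work():
    return sorted(range(50_000), reverse=True)

# Float timer: perf_counter returns seconds as a float.
pr = cProfile.Profile(time.perf_counter)
pr.runcall(work)
pr.print_stats()

# Integer timer: perf_counter_ns returns nanoseconds, so one unit is 1e-9 seconds.
pr_ns = cProfile.Profile(time.perf_counter_ns, 1e-9)
pr_ns.runcall(work)
pr_ns.print_stats()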
ssl TLS SSL wrapper for socket objects Source code Lib ssl py This module provides access to Transport Layer Security often known as Secure Sockets Layer encryption and peer authentication facilities for network sockets both client side and server side This module uses the OpenSSL library It is available on all modern Unix systems Windows macOS and probably additional platforms as long as OpenSSL is installed on that platform Note Some behavior may be platform dependent since calls are made to the operating system socket APIs The installed version of OpenSSL may also cause variations in behavior For example TLSv1 3 comes with OpenSSL version 1 1 1 Warning Don t use this module without reading the Security considerations Doing so may lead to a false sense of security as the default settings of the ssl module are not necessarily appropriate for your application Availability not Emscripten not WASI This module does not work or is not available on WebAssembly platforms wasm32 emscripten and wasm32 wasi See WebAssembly platforms for more information This section documents the objects and functions in the ssl module for more general information about TLS SSL and certificates the reader is referred to the documents in the See Also section at the bottom This module provides a class ssl SSLSocket which is derived from the socket socket type and provides a socket like wrapper that also encrypts and decrypts the data going over the socket with SSL It supports additional methods such as getpeercert which retrieves the certificate of the other side of the connection and cipher which retrieves the cipher being used for the secure connection For more sophisticated applications the ssl SSLContext class helps manage settings and certificates which can then be inherited by SSL sockets created through the SSLContext wrap_socket method Changed in version 3 5 3 Updated to support linking with OpenSSL 1 1 0 Changed in version 3 6 OpenSSL 0 9 8 1 0 0 and 1 0 1 are deprecated and no longer supported In the future the ssl module will require at least OpenSSL 1 0 2 or 1 1 0 Changed in version 3 10 PEP 644 has been implemented The ssl module requires OpenSSL 1 1 1 or newer Use of deprecated constants and functions result in deprecation warnings Functions Constants and Exceptions Socket creation Instances of SSLSocket must be created using the SSLContext wrap_socket method The helper function create_default_context returns a new context with secure default settings Client socket example with default context and IPv4 IPv6 dual stack import socket import ssl hostname www python org context ssl create_default_context with socket create_connection hostname 443 as sock with context wrap_socket sock server_hostname hostname as ssock print ssock version Client socket example with custom context and IPv4 hostname www python org PROTOCOL_TLS_CLIENT requires valid cert chain and hostname context ssl SSLContext ssl PROTOCOL_TLS_CLIENT context load_verify_locations path to cabundle pem with socket socket socket AF_INET socket SOCK_STREAM 0 as sock with context wrap_socket sock server_hostname hostname as ssock print ssock version Server socket example listening on localhost IPv4 context ssl SSLContext ssl PROTOCOL_TLS_SERVER context load_cert_chain path to certchain pem path to private key with socket socket socket AF_INET socket SOCK_STREAM 0 as sock sock bind 127 0 0 1 8443 sock listen 5 with context wrap_socket sock server_side True as ssock conn addr ssock accept Context creation A convenience function helps create 
SSLContext objects for common purposes ssl create_default_context purpose Purpose SERVER_AUTH cafile None capath None cadata None Return a new SSLContext object with default settings for the given purpose The settings are chosen by the ssl module and usually represent a higher security level than when calling the SSLContext constructor directly cafile capath cadata represent optional CA certificates to trust for certificate verification as in SSLContext load_verify_locations If all three are None this function can choose to trust the system s de
fault CA certificates instead The settings are PROTOCOL_TLS_CLIENT or PROTOCOL_TLS_SERVER OP_NO_SSLv2 and OP_NO_SSLv3 with high encryption cipher suites without RC4 and without unauthenticated cipher suites Passing SERVER_AUTH as purpose sets verify_mode to CERT_REQUIRED and either loads CA certificates when at least one of cafile capath or cadata is given or uses SSLContext load_default_certs to load default CA certificates When keylog_filename is supported and the environment variable SSLKEYLOGFILE is set create_default_context enables key logging Note The protocol options cipher and other settings may change to more restrictive values anytime without prior deprecation The values represent a fair balance between compatibility and security If your application needs specific settings you should create a SSLContext and apply the settings yourself Note If you find that when certain older clients or servers attempt to connect with a SSLContext created by this function that they get an error stating Protocol or cipher suite mismatch it may be that they only support SSL3 0 which this function excludes using the OP_NO_SSLv3 SSL3 0 is widely considered to be completely broken If you still wish to continue to use this function but still allow SSL 3 0 connections you can re enable them using ctx ssl create_default_context Purpose CLIENT_AUTH ctx options ssl OP_NO_SSLv3 New in version 3 4 Changed in version 3 4 4 RC4 was dropped from the default cipher string Changed in version 3 6 ChaCha20 Poly1305 was added to the default cipher string 3DES was dropped from the default cipher string Changed in version 3 8 Support for key logging to SSLKEYLOGFILE was added Changed in version 3 10 The context now uses PROTOCOL_TLS_CLIENT or PROTOCOL_TLS_SERVER protocol instead of generic PROTOCOL_TLS Exceptions exception ssl SSLError Raised to signal an error from the underlying SSL implementation currently provided by the OpenSSL library This signifies some problem in the higher level encryption and authentication layer that s superimposed on the underlying network connection This error is a subtype of OSError The error code and message of SSLError instances are provided by the OpenSSL library Changed in version 3 3 SSLError used to be a subtype of socket error library A string mnemonic designating the OpenSSL submodule in which the error occurred such as SSL PEM or X509 The range of possible values depends on the OpenSSL version New in version 3 3 reason A string mnemonic designating the reason this error occurred for example CERTIFICATE_VERIFY_FAILED The range of possible values depends on the OpenSSL version New in version 3 3 exception ssl SSLZeroReturnError A subclass of SSLError raised when trying to read or write and the SSL connection has been closed cleanly Note that this doesn t mean that the underlying transport read TCP has been closed New in version 3 3 exception ssl SSLWantReadError A subclass of SSLError raised by a non blocking SSL socket when trying to read or write data but more data needs to be received on the underlying TCP transport before the request can be fulfilled New in version 3 3 exception ssl SSLWantWriteError A subclass of SSLError raised by a non blocking SSL socket when trying to read or write data but more data needs to be sent on the underlying TCP transport before the request can be fulfilled New in version 3 3 exception ssl SSLSyscallError A subclass of SSLError raised when a system error was encountered while trying to fulfill an operation on a SSL socket Unfortunately there is 
no easy way to inspect the original errno number New in version 3 3 exception ssl SSLEOFError A subclass of SSLError raised when the SSL connection has been terminated abruptly Generally you shouldn t try to reuse the underlying transport when this error is encountered New in version 3 3 exception ssl SSLCertVerificationError A subclass of SSLError raised when certificate validation has failed New in version 3 7 verify_code A numeric error number that denotes the verification error verify_message A human readable string of the verificat
ion error exception ssl CertificateError An alias for SSLCertVerificationError Changed in version 3 7 The exception is now an alias for SSLCertVerificationError Random generation ssl RAND_bytes num Return num cryptographically strong pseudo random bytes Raises an SSLError if the PRNG has not been seeded with enough data or if the operation is not supported by the current RAND method RAND_status can be used to check the status of the PRNG and RAND_add can be used to seed the PRNG For almost all applications os urandom is preferable Read the Wikipedia article Cryptographically secure pseudorandom number generator CSPRNG to get the requirements of a cryptographically strong generator New in version 3 3 ssl RAND_status Return True if the SSL pseudo random number generator has been seeded with enough randomness and False otherwise You can use ssl RAND_egd and ssl RAND_add to increase the randomness of the pseudo random number generator ssl RAND_add bytes entropy Mix the given bytes into the SSL pseudo random number generator The parameter entropy a float is a lower bound on the entropy contained in string so you can always use 0 0 See RFC 1750 for more information on sources of entropy Changed in version 3 5 Writable bytes like object is now accepted Certificate handling ssl cert_time_to_seconds cert_time Return the time in seconds since the Epoch given the cert_time string representing the notBefore or notAfter date from a certificate in b d H M S Y Z strptime format C locale Here s an example import ssl timestamp ssl cert_time_to_seconds Jan 5 09 34 43 2018 GMT timestamp 1515144883 from datetime import datetime print datetime utcfromtimestamp timestamp 2018 01 05 09 34 43 notBefore or notAfter dates must use GMT RFC 5280 Changed in version 3 5 Interpret the input time as a time in UTC as specified by GMT timezone in the input string Local timezone was used previously Return an integer no fractions of a second in the input format ssl get_server_certificate addr ssl_version PROTOCOL_TLS_CLIENT ca_certs None timeout Given the address addr of an SSL protected server as a hostname port number pair fetches the server s certificate and returns it as a PEM encoded string If ssl_version is specified uses that version of the SSL protocol to attempt to connect to the server If ca_certs is specified it should be a file containing a list of root certificates the same format as used for the cafile parameter in SSLContext load_verify_locations The call will attempt to validate the server certificate against that set of root certificates and will fail if the validation attempt fails A timeout can be specified with the timeout parameter Changed in version 3 3 This function is now IPv6 compatible Changed in version 3 5 The default ssl_version is changed from PROTOCOL_SSLv3 to PROTOCOL_TLS for maximum compatibility with modern servers Changed in version 3 10 The timeout parameter was added ssl DER_cert_to_PEM_cert DER_cert_bytes Given a certificate as a DER encoded blob of bytes returns a PEM encoded string version of the same certificate ssl PEM_cert_to_DER_cert PEM_cert_string Given a certificate as an ASCII PEM string returns a DER encoded sequence of bytes for that same certificate ssl get_default_verify_paths Returns a named tuple with paths to OpenSSL s default cafile and capath The paths are the same as used by SSLContext set_default_verify_paths The return value is a named tuple DefaultVerifyPaths cafile resolved path to cafile or None if the file doesn t exist capath resolved path to capath or None if 
the directory doesn t exist openssl_cafile_env OpenSSL s environment key that points to a cafile openssl_cafile hard coded path to a cafile openssl_capath_env OpenSSL s environment key that points to a capath openssl_capath hard coded path to a capath directory New in version 3 4 ssl enum_certificates store_name Retrieve certificates from Windows system cert store store_name may be one of CA ROOT or MY Windows may provide additional cert stores too The function returns a list of cert_bytes encoding_type trust tuples The encoding_type sp
ecifies the encoding of cert_bytes It is either x509_asn for X 509 ASN 1 data or pkcs_7_asn for PKCS 7 ASN 1 data Trust specifies the purpose of the certificate as a set of OIDS or exactly True if the certificate is trustworthy for all purposes Example ssl enum_certificates CA b data x509_asn 1 3 6 1 5 5 7 3 1 1 3 6 1 5 5 7 3 2 b data x509_asn True Availability Windows New in version 3 4 ssl enum_crls store_name Retrieve CRLs from Windows system cert store store_name may be one of CA ROOT or MY Windows may provide additional cert stores too The function returns a list of cert_bytes encoding_type trust tuples The encoding_type specifies the encoding of cert_bytes It is either x509_asn for X 509 ASN 1 data or pkcs_7_asn for PKCS 7 ASN 1 data Availability Windows New in version 3 4 Constants All constants are now enum IntEnum or enum IntFlag collections New in version 3 6 ssl CERT_NONE Possible value for SSLContext verify_mode Except for PROTOCOL_TLS_CLIENT it is the default mode With client side sockets just about any cert is accepted Validation errors such as untrusted or expired cert are ignored and do not abort the TLS SSL handshake In server mode no certificate is requested from the client so the client does not send any for client cert authentication See the discussion of Security considerations below ssl CERT_OPTIONAL Possible value for SSLContext verify_mode In client mode CERT_OPTIONAL has the same meaning as CERT_REQUIRED It is recommended to use CERT_REQUIRED for client side sockets instead In server mode a client certificate request is sent to the client The client may either ignore the request or send a certificate in order perform TLS client cert authentication If the client chooses to send a certificate it is verified Any verification error immediately aborts the TLS handshake Use of this setting requires a valid set of CA certificates to be passed to SSLContext load_verify_locations ssl CERT_REQUIRED Possible value for SSLContext verify_mode In this mode certificates are required from the other side of the socket connection an SSLError will be raised if no certificate is provided or if its validation fails This mode is not sufficient to verify a certificate in client mode as it does not match hostnames check_hostname must be enabled as well to verify the authenticity of a cert PROTOCOL_TLS_CLIENT uses CERT_REQUIRED and enables check_hostname by default With server socket this mode provides mandatory TLS client cert authentication A client certificate request is sent to the client and the client must provide a valid and trusted certificate Use of this setting requires a valid set of CA certificates to be passed to SSLContext load_verify_locations class ssl VerifyMode enum IntEnum collection of CERT_ constants New in version 3 6 ssl VERIFY_DEFAULT Possible value for SSLContext verify_flags In this mode certificate revocation lists CRLs are not checked By default OpenSSL does neither require nor verify CRLs New in version 3 4 ssl VERIFY_CRL_CHECK_LEAF Possible value for SSLContext verify_flags In this mode only the peer cert is checked but none of the intermediate CA certificates The mode requires a valid CRL that is signed by the peer cert s issuer its direct ancestor CA If no proper CRL has been loaded with SSLContext load_verify_locations validation will fail New in version 3 4 ssl VERIFY_CRL_CHECK_CHAIN Possible value for SSLContext verify_flags In this mode CRLs of all certificates in the peer cert chain are checked New in version 3 4 ssl VERIFY_X509_STRICT Possible value 
for SSLContext verify_flags to disable workarounds for broken X 509 certificates New in version 3 4 ssl VERIFY_ALLOW_PROXY_CERTS Possible value for SSLContext verify_flags to enable proxy certificate verification New in version 3 10 ssl VERIFY_X509_TRUSTED_FIRST Possible value for SSLContext verify_flags It instructs OpenSSL to prefer trusted certificates when building the trust chain to validate a certificate This flag is enabled by default New in version 3 4 4 ssl VERIFY_X509_PARTIAL_CHAIN Possible value for SSLContext verify_flags It
instructs OpenSSL to accept intermediate CAs in the trust store to be treated as trust anchors in the same way as the self signed root CA certificates This makes it possible to trust certificates issued by an intermediate CA without having to trust its ancestor root CA New in version 3 10 class ssl VerifyFlags enum IntFlag collection of VERIFY_ constants New in version 3 6 ssl PROTOCOL_TLS Selects the highest protocol version that both the client and server support Despite the name this option can select both SSL and TLS protocols New in version 3 6 Deprecated since version 3 10 TLS clients and servers require different default settings for secure communication The generic TLS protocol constant is deprecated in favor of PROTOCOL_TLS_CLIENT and PROTOCOL_TLS_SERVER ssl PROTOCOL_TLS_CLIENT Auto negotiate the highest protocol version that both the client and server support and configure the context client side connections The protocol enables CERT_REQUIRED and check_hostname by default New in version 3 6 ssl PROTOCOL_TLS_SERVER Auto negotiate the highest protocol version that both the client and server support and configure the context server side connections New in version 3 6 ssl PROTOCOL_SSLv23 Alias for PROTOCOL_TLS Deprecated since version 3 6 Use PROTOCOL_TLS instead ssl PROTOCOL_SSLv3 Selects SSL version 3 as the channel encryption protocol This protocol is not available if OpenSSL is compiled with the no ssl3 option Warning SSL version 3 is insecure Its use is highly discouraged Deprecated since version 3 6 OpenSSL has deprecated all version specific protocols Use the default protocol PROTOCOL_TLS_SERVER or PROTOCOL_TLS_CLIENT with SSLContext minimum_version and SSLContext maximum_version instead ssl PROTOCOL_TLSv1 Selects TLS version 1 0 as the channel encryption protocol Deprecated since version 3 6 OpenSSL has deprecated all version specific protocols ssl PROTOCOL_TLSv1_1 Selects TLS version 1 1 as the channel encryption protocol Available only with openssl version 1 0 1 New in version 3 4 Deprecated since version 3 6 OpenSSL has deprecated all version specific protocols ssl PROTOCOL_TLSv1_2 Selects TLS version 1 2 as the channel encryption protocol Available only with openssl version 1 0 1 New in version 3 4 Deprecated since version 3 6 OpenSSL has deprecated all version specific protocols ssl OP_ALL Enables workarounds for various bugs present in other SSL implementations This option is set by default It does not necessarily set the same flags as OpenSSL s SSL_OP_ALL constant New in version 3 2 ssl OP_NO_SSLv2 Prevents an SSLv2 connection This option is only applicable in conjunction with PROTOCOL_TLS It prevents the peers from choosing SSLv2 as the protocol version New in version 3 2 Deprecated since version 3 6 SSLv2 is deprecated ssl OP_NO_SSLv3 Prevents an SSLv3 connection This option is only applicable in conjunction with PROTOCOL_TLS It prevents the peers from choosing SSLv3 as the protocol version New in version 3 2 Deprecated since version 3 6 SSLv3 is deprecated ssl OP_NO_TLSv1 Prevents a TLSv1 connection This option is only applicable in conjunction with PROTOCOL_TLS It prevents the peers from choosing TLSv1 as the protocol version New in version 3 2 Deprecated since version 3 7 The option is deprecated since OpenSSL 1 1 0 use the new SSLContext minimum_version and SSLContext maximum_version instead ssl OP_NO_TLSv1_1 Prevents a TLSv1 1 connection This option is only applicable in conjunction with PROTOCOL_TLS It prevents the peers from choosing TLSv1 1 as the protocol 
version Available only with openssl version 1 0 1 New in version 3 4 Deprecated since version 3 7 The option is deprecated since OpenSSL 1 1 0 ssl OP_NO_TLSv1_2 Prevents a TLSv1 2 connection This option is only applicable in conjunction with PROTOCOL_TLS It prevents the peers from choosing TLSv1 2 as the protocol version Available only with openssl version 1 0 1 New in version 3 4 Deprecated since version 3 7 The option is deprecated since OpenSSL 1 1 0 ssl OP_NO_TLSv1_3 Prevents a TLSv1 3 connection This option is only applicable in conj
unction with PROTOCOL_TLS It prevents the peers from choosing TLSv1 3 as the protocol version TLS 1 3 is available with OpenSSL 1 1 1 or later When Python has been compiled against an older version of OpenSSL the flag defaults to 0 New in version 3 6 3 Deprecated since version 3 7 The option is deprecated since OpenSSL 1 1 0 It was added to 2 7 15 and 3 6 3 for backwards compatibility with OpenSSL 1 0 2 ssl OP_NO_RENEGOTIATION Disable all renegotiation in TLSv1 2 and earlier Do not send HelloRequest messages and ignore renegotiation requests via ClientHello This option is only available with OpenSSL 1 1 0h and later New in version 3 7 ssl OP_CIPHER_SERVER_PREFERENCE Use the server s cipher ordering preference rather than the client s This option has no effect on client sockets and SSLv2 server sockets New in version 3 3 ssl OP_SINGLE_DH_USE Prevents re use of the same DH key for distinct SSL sessions This improves forward secrecy but requires more computational resources This option only applies to server sockets New in version 3 3 ssl OP_SINGLE_ECDH_USE Prevents re use of the same ECDH key for distinct SSL sessions This improves forward secrecy but requires more computational resources This option only applies to server sockets New in version 3 3 ssl OP_ENABLE_MIDDLEBOX_COMPAT Send dummy Change Cipher Spec CCS messages in TLS 1 3 handshake to make a TLS 1 3 connection look more like a TLS 1 2 connection This option is only available with OpenSSL 1 1 1 and later New in version 3 8 ssl OP_NO_COMPRESSION Disable compression on the SSL channel This is useful if the application protocol supports its own compression scheme New in version 3 3 class ssl Options enum IntFlag collection of OP_ constants ssl OP_NO_TICKET Prevent client side from requesting a session ticket New in version 3 6 ssl OP_IGNORE_UNEXPECTED_EOF Ignore unexpected shutdown of TLS connections This option is only available with OpenSSL 3 0 0 and later New in version 3 10 ssl OP_ENABLE_KTLS Enable the use of the kernel TLS To benefit from the feature OpenSSL must have been compiled with support for it and the negotiated cipher suites and extensions must be supported by it a list of supported ones may vary by platform and kernel version Note that with enabled kernel TLS some cryptographic operations are performed by the kernel directly and not via any available OpenSSL Providers This might be undesirable if for example the application requires all cryptographic operations to be performed by the FIPS provider This option is only available with OpenSSL 3 0 0 and later New in version 3 12 ssl OP_LEGACY_SERVER_CONNECT Allow legacy insecure renegotiation between OpenSSL and unpatched servers only New in version 3 12 ssl HAS_ALPN Whether the OpenSSL library has built in support for the Application Layer Protocol Negotiation TLS extension as described in RFC 7301 New in version 3 5 ssl HAS_NEVER_CHECK_COMMON_NAME Whether the OpenSSL library has built in support not checking subject common name and SSLContext hostname_checks_common_name is writeable New in version 3 7 ssl HAS_ECDH Whether the OpenSSL library has built in support for the Elliptic Curve based Diffie Hellman key exchange This should be true unless the feature was explicitly disabled by the distributor New in version 3 3 ssl HAS_SNI Whether the OpenSSL library has built in support for the Server Name Indication extension as defined in RFC 6066 New in version 3 2 ssl HAS_NPN Whether the OpenSSL library has built in support for the Next Protocol Negotiation as described in the 
Application Layer Protocol Negotiation When true you can use the SSLContext set_npn_protocols method to advertise which protocols you want to support New in version 3 3 ssl HAS_SSLv2 Whether the OpenSSL library has built in support for the SSL 2 0 protocol New in version 3 7 ssl HAS_SSLv3 Whether the OpenSSL library has built in support for the SSL 3 0 protocol New in version 3 7 ssl HAS_TLSv1 Whether the OpenSSL library has built in support for the TLS 1 0 protocol New in version 3 7 ssl HAS_TLSv1_1 Whether the OpenSSL library has bui
lt in support for the TLS 1 1 protocol New in version 3 7 ssl HAS_TLSv1_2 Whether the OpenSSL library has built in support for the TLS 1 2 protocol New in version 3 7 ssl HAS_TLSv1_3 Whether the OpenSSL library has built in support for the TLS 1 3 protocol New in version 3 7 ssl CHANNEL_BINDING_TYPES List of supported TLS channel binding types Strings in this list can be used as arguments to SSLSocket get_channel_binding New in version 3 3 ssl OPENSSL_VERSION The version string of the OpenSSL library loaded by the interpreter ssl OPENSSL_VERSION OpenSSL 1 0 2k 26 Jan 2017 New in version 3 2 ssl OPENSSL_VERSION_INFO A tuple of five integers representing version information about the OpenSSL library ssl OPENSSL_VERSION_INFO 1 0 2 11 15 New in version 3 2 ssl OPENSSL_VERSION_NUMBER The raw version number of the OpenSSL library as a single integer ssl OPENSSL_VERSION_NUMBER 268443839 hex ssl OPENSSL_VERSION_NUMBER 0x100020bf New in version 3 2 ssl ALERT_DESCRIPTION_HANDSHAKE_FAILURE ssl ALERT_DESCRIPTION_INTERNAL_ERROR ALERT_DESCRIPTION_ Alert Descriptions from RFC 5246 and others The IANA TLS Alert Registry contains this list and references to the RFCs where their meaning is defined Used as the return value of the callback function in SSLContext set_servername_callback New in version 3 4 class ssl AlertDescription enum IntEnum collection of ALERT_DESCRIPTION_ constants New in version 3 6 Purpose SERVER_AUTH Option for create_default_context and SSLContext load_default_certs This value indicates that the context may be used to authenticate web servers therefore it will be used to create client side sockets New in version 3 4 Purpose CLIENT_AUTH Option for create_default_context and SSLContext load_default_certs This value indicates that the context may be used to authenticate web clients therefore it will be used to create server side sockets New in version 3 4 class ssl SSLErrorNumber enum IntEnum collection of SSL_ERROR_ constants New in version 3 6 class ssl TLSVersion enum IntEnum collection of SSL and TLS versions for SSLContext maximum_version and SSLContext minimum_version New in version 3 7 TLSVersion MINIMUM_SUPPORTED TLSVersion MAXIMUM_SUPPORTED The minimum or maximum supported SSL or TLS version These are magic constants Their values don t reflect the lowest and highest available TLS SSL versions TLSVersion SSLv3 TLSVersion TLSv1 TLSVersion TLSv1_1 TLSVersion TLSv1_2 TLSVersion TLSv1_3 SSL 3 0 to TLS 1 3 Deprecated since version 3 10 All TLSVersion members except TLSVersion TLSv1_2 and TLSVersion TLSv1_3 are deprecated SSL Sockets class ssl SSLSocket socket socket SSL sockets provide the following methods of Socket Objects accept bind close connect detach fileno getpeername getsockname getsockopt setsockopt gettimeout settimeout setblocking listen makefile recv recv_into but passing a non zero flags argument is not allowed send sendall with the same limitation sendfile but os sendfile will be used for plain text sockets only else send will be used shutdown However since the SSL and TLS protocol has its own framing atop of TCP the SSL sockets abstraction can in certain respects diverge from the specification of normal OS level sockets See especially the notes on non blocking sockets Instances of SSLSocket must be created using the SSLContext wrap_socket method Changed in version 3 5 The sendfile method was added Changed in version 3 5 The shutdown does not reset the socket timeout each time bytes are received or sent The socket timeout is now the maximum total duration of the 
shutdown Deprecated since version 3 6 It is deprecated to create an SSLSocket instance directly use SSLContext wrap_socket to wrap a socket Changed in version 3 7 SSLSocket instances must be created with wrap_socket In earlier versions it was possible to create instances directly This was never documented or officially supported Changed in version 3 10 Python now uses SSL_read_ex and SSL_write_ex internally The functions support reading and writing of data larger than 2 GB Writing zero length data no longer fails with a protocol violation error
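The pieces above come together when a context wraps a socket and the result is inspected with the SSLSocket methods documented in the next section. The following is a minimal sketch rather than an excerpt from the reference text: the hostname www.python.org is a placeholder for any TLS-enabled server, network access is assumed, and the protocol range is pinned with the TLSVersion enum instead of the deprecated OP_NO_TLSv1* option flags.

import socket
import ssl

hostname = "www.python.org"  # placeholder; any server with a valid certificate works

# Secure-by-default client context; bound the protocol range explicitly
# rather than using the deprecated OP_NO_* option flags.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2
context.maximum_version = ssl.TLSVersion.TLSv1_3

with socket.create_connection((hostname, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=hostname) as ssock:
        # Negotiated parameters, queried through the SSLSocket methods below.
        print(ssock.version())    # e.g. 'TLSv1.3'
        print(ssock.cipher())     # (cipher name, protocol version, secret bits)
        print(ssock.getpeercert()["subject"])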
SSL sockets also have the following additional methods and attributes SSLSocket read len 1024 buffer None Read up to len bytes of data from the SSL socket and return the result as a bytes instance If buffer is specified then read into the buffer instead and return the number of bytes read Raise SSLWantReadError or SSLWantWriteError if the socket is non blocking and the read would block As at any time a re negotiation is possible a call to read can also cause write operations Changed in version 3 5 The socket timeout is no longer reset each time bytes are received or sent The socket timeout is now the maximum total duration to read up to len bytes Deprecated since version 3 6 Use recv instead of read SSLSocket write buf Write buf to the SSL socket and return the number of bytes written The buf argument must be an object supporting the buffer interface Raise SSLWantReadError or SSLWantWriteError if the socket is non blocking and the write would block As at any time a re negotiation is possible a call to write can also cause read operations Changed in version 3 5 The socket timeout is no longer reset each time bytes are received or sent The socket timeout is now the maximum total duration to write buf Deprecated since version 3 6 Use send instead of write Note The read and write methods are the low level methods that read and write unencrypted application level data and decrypt encrypt it to encrypted wire level data These methods require an active SSL connection i e the handshake was completed and SSLSocket unwrap was not called Normally you should use the socket API methods like recv and send instead of these methods SSLSocket do_handshake Perform the SSL setup handshake Changed in version 3 4 The handshake method also performs match_hostname when the check_hostname attribute of the socket s context is true Changed in version 3 5 The socket timeout is no longer reset each time bytes are received or sent The socket timeout is now the maximum total duration of the handshake Changed in version 3 7 Hostname or IP address is matched by OpenSSL during handshake The function match_hostname is no longer used In case OpenSSL refuses a hostname or IP address the handshake is aborted early and a TLS alert message is sent to the peer SSLSocket getpeercert binary_form False If there is no certificate for the peer on the other end of the connection return None If the SSL handshake hasn t been done yet raise ValueError If the binary_form parameter is False and a certificate was received from the peer this method returns a dict instance If the certificate was not validated the dict is empty If the certificate was validated it returns a dict with several keys amongst them subject the principal for which the certificate was issued and issuer the principal issuing the certificate If a certificate contains an instance of the Subject Alternative Name extension see RFC 3280 there will also be a subjectAltName key in the dictionary The subject and issuer fields are tuples containing the sequence of relative distinguished names RDNs given in the certificate s data structure for the respective fields and each RDN is a sequence of name value pairs Here is a real world example issuer countryName IL organizationName StartCom Ltd organizationalUnitName Secure Digital Certificate Signing commonName StartCom Class 2 Primary Intermediate Server CA notAfter Nov 22 08 15 19 2013 GMT notBefore Nov 21 03 09 52 2011 GMT serialNumber 95F0 subject description 571208 SLe257oHY9fVQ07Z countryName US stateOrProvinceName California 
localityName San Francisco organizationName Electronic Frontier Foundation Inc commonName eff org emailAddress hostmaster eff org subjectAltName DNS eff org DNS eff org version 3 If the binary_form parameter is True and a certificate was provided this method returns the DER encoded form of the entire certificate as a sequence of bytes or None if the peer did not provide a certificate Whether the peer provides a certificate depends on the SSL socket s role for a client SSL socket the server will always provide a certificate regardless o
f whether validation was required for a server SSL socket the client will only provide a certificate when requested by the server therefore getpeercert will return None if you used CERT_NONE rather than CERT_OPTIONAL or CERT_REQUIRED See also SSLContext check_hostname Changed in version 3 2 The returned dictionary includes additional items such as issuer and notBefore Changed in version 3 4 ValueError is raised when the handshake isn t done The returned dictionary includes additional X509v3 extension items such as crlDistributionPoints caIssuers and OCSP URIs Changed in version 3 9 IPv6 address strings no longer have a trailing new line SSLSocket cipher Returns a three value tuple containing the name of the cipher being used the version of the SSL protocol that defines its use and the number of secret bits being used If no connection has been established returns None SSLSocket shared_ciphers Return the list of ciphers available in both the client and server Each entry of the returned list is a three value tuple containing the name of the cipher the version of the SSL protocol that defines its use and the number of secret bits the cipher uses shared_ciphers returns None if no connection has been established or the socket is a client socket New in version 3 5 SSLSocket compression Return the compression algorithm being used as a string or None if the connection isn t compressed If the higher level protocol supports its own compression mechanism you can use OP_NO_COMPRESSION to disable SSL level compression New in version 3 3 SSLSocket get_channel_binding cb_type tls unique Get channel binding data for current connection as a bytes object Returns None if not connected or the handshake has not been completed The cb_type parameter allow selection of the desired channel binding type Valid channel binding types are listed in the CHANNEL_BINDING_TYPES list Currently only the tls unique channel binding defined by RFC 5929 is supported ValueError will be raised if an unsupported channel binding type is requested New in version 3 3 SSLSocket selected_alpn_protocol Return the protocol that was selected during the TLS handshake If SSLContext set_alpn_protocols was not called if the other party does not support ALPN if this socket does not support any of the client s proposed protocols or if the handshake has not happened yet None is returned New in version 3 5 SSLSocket selected_npn_protocol Return the higher level protocol that was selected during the TLS SSL handshake If SSLContext set_npn_protocols was not called or if the other party does not support NPN or if the handshake has not yet happened this will return None New in version 3 3 Deprecated since version 3 10 NPN has been superseded by ALPN SSLSocket unwrap Performs the SSL shutdown handshake which removes the TLS layer from the underlying socket and returns the underlying socket object This can be used to go from encrypted operation over a connection to unencrypted The returned socket should always be used for further communication with the other side of the connection rather than the original socket SSLSocket verify_client_post_handshake Requests post handshake authentication PHA from a TLS 1 3 client PHA can only be initiated for a TLS 1 3 connection from a server side socket after the initial TLS handshake and with PHA enabled on both sides see SSLContext post_handshake_auth The method does not perform a cert exchange immediately The server side sends a CertificateRequest during the next write event and expects the client to respond with a 
certificate on the next read event If any precondition isn t met e g not TLS 1 3 PHA not enabled an SSLError is raised Note Only available with OpenSSL 1 1 1 and TLS 1 3 enabled Without TLS 1 3 support the method raises NotImplementedError New in version 3 8 SSLSocket version Return the actual SSL protocol version negotiated by the connection as a string or None if no secure connection is established As of this writing possible return values include SSLv2 SSLv3 TLSv1 TLSv1 1 and TLSv1 2 Recent OpenSSL versions may define more return va
lues New in version 3 5 SSLSocket pending Returns the number of already decrypted bytes available for read pending on the connection SSLSocket context The SSLContext object this SSL socket is tied to New in version 3 2 SSLSocket server_side A boolean which is True for server side sockets and False for client side sockets New in version 3 2 SSLSocket server_hostname Hostname of the server str type or None for server side socket or if the hostname was not specified in the constructor New in version 3 2 Changed in version 3 7 The attribute is now always ASCII text When server_hostname is an internationalized domain name IDN this attribute now stores the A label form xn pythn mua org rather than the U label form pythön org SSLSocket session The SSLSession for this SSL connection The session is available for client and server side sockets after the TLS handshake has been performed For client sockets the session can be set before do_handshake has been called to reuse a session New in version 3 6 SSLSocket session_reused New in version 3 6 SSL Contexts New in version 3 2 An SSL context holds various data longer lived than single SSL connections such as SSL configuration options certificate s and private key s It also manages a cache of SSL sessions for server side sockets in order to speed up repeated connections from the same clients class ssl SSLContext protocol None Create a new SSL context You may pass protocol which must be one of the PROTOCOL_ constants defined in this module The parameter specifies which version of the SSL protocol to use Typically the server chooses a particular protocol version and the client must adapt to the server s choice Most of the versions are not interoperable with the other versions If not specified the default is PROTOCOL_TLS it provides the most compatibility with other versions Here s a table showing which versions in a client down the side can connect to which versions in a server along the top client server SSLv2 SSLv3 TLS 3 TLSv1 TLSv1 1 TLSv1 2 SSLv2 yes no no 1 no no no SSLv3 no yes no 2 no no no TLS SSLv23 3 no 1 no 2 yes yes yes yes TLSv1 no no yes yes no no TLSv1 1 no no yes no yes no TLSv1 2 no no yes no no yes Footnotes 1 SSLContext disables SSLv2 with OP_NO_SSLv2 by default 2 SSLContext disables SSLv3 with OP_NO_SSLv3 by default 3 TLS 1 3 protocol will be available with PROTOCOL_TLS in OpenSSL 1 1 1 There is no dedicated PROTOCOL constant for just TLS 1 3 See also create_default_context lets the ssl module choose security settings for a given purpose Changed in version 3 6 The context is created with secure default values The options OP_NO_COMPRESSION OP_CIPHER_SERVER_PREFERENCE OP_SINGLE_DH_USE OP_SINGLE_ECDH_USE OP_NO_SSLv2 and OP_NO_SSLv3 except for PROTOCOL_SSLv3 are set by default The initial cipher suite list contains only HIGH ciphers no NULL ciphers and no MD5 ciphers Deprecated since version 3 10 SSLContext without protocol argument is deprecated The context class will either require PROTOCOL_TLS_CLIENT or PROTOCOL_TLS_SERVER protocol in the future Changed in version 3 10 The default cipher suites now include only secure AES and ChaCha20 ciphers with forward secrecy and security level 2 RSA and DH keys with less than 2048 bits and ECC keys with less than 224 bits are prohibited PROTOCOL_TLS PROTOCOL_TLS_CLIENT and PROTOCOL_TLS_SERVER use TLS 1 2 as minimum TLS version SSLContext objects have the following methods and attributes SSLContext cert_store_stats Get statistics about quantities of loaded X 509 certificates count of X 509 
certificates flagged as CA certificates and certificate revocation lists as dictionary Example for a context with one CA cert and one other cert context cert_store_stats crl 0 x509_ca 1 x509 2 New in version 3 4 SSLContext load_cert_chain certfile keyfile None password None Load a private key and the corresponding certificate The certfile string must be the path to a single file in PEM format containing the certificate as well as any number of CA certificates needed to establish the certificate s authenticity The keyfile string if present must p
oint to a file containing the private key Otherwise the private key will be taken from certfile as well See the discussion of Certificates for more information on how the certificate is stored in the certfile The password argument may be a function to call to get the password for decrypting the private key It will only be called if the private key is encrypted and a password is necessary It will be called with no arguments and it should return a string bytes or bytearray If the return value is a string it will be encoded as UTF 8 before using it to decrypt the key Alternatively a string bytes or bytearray value may be supplied directly as the password argument It will be ignored if the private key is not encrypted and no password is needed If the password argument is not specified and a password is required OpenSSL s built in password prompting mechanism will be used to interactively prompt the user for a password An SSLError is raised if the private key doesn t match with the certificate Changed in version 3 3 New optional argument password SSLContext load_default_certs purpose Purpose SERVER_AUTH Load a set of default certification authority CA certificates from default locations On Windows it loads CA certs from the CA and ROOT system stores On all systems it calls SSLContext set_default_verify_paths In the future the method may load CA certificates from other locations too The purpose flag specifies what kind of CA certificates are loaded The default settings Purpose SERVER_AUTH loads certificates that are flagged and trusted for TLS web server authentication client side sockets Purpose CLIENT_AUTH loads CA certificates for client certificate verification on the server side New in version 3 4 SSLContext load_verify_locations cafile None capath None cadata None Load a set of certification authority CA certificates used to validate other peers certificates when verify_mode is other than CERT_NONE At least one of cafile or capath must be specified This method can also load certification revocation lists CRLs in PEM or DER format In order to make use of CRLs SSLContext verify_flags must be configured properly The cafile string if present is the path to a file of concatenated CA certificates in PEM format See the discussion of Certificates for more information about how to arrange the certificates in this file The capath string if present is the path to a directory containing several CA certificates in PEM format following an OpenSSL specific layout The cadata object if present is either an ASCII string of one or more PEM encoded certificates or a bytes like object of DER encoded certificates Like with capath extra lines around PEM encoded certificates are ignored but at least one certificate must be present Changed in version 3 4 New optional argument cadata SSLContext get_ca_certs binary_form False Get a list of loaded certification authority CA certificates If the binary_form parameter is False each list entry is a dict like the output of SSLSocket getpeercert Otherwise the method returns a list of DER encoded certificates The returned list does not contain certificates from capath unless a certificate was requested and loaded by a SSL connection Note Certificates in a capath directory aren t loaded unless they have been used at least once New in version 3 4 SSLContext get_ciphers Get a list of enabled ciphers The list is in order of cipher priority See SSLContext set_ciphers Example ctx ssl SSLContext ssl PROTOCOL_SSLv23 ctx set_ciphers ECDHE AESGCM ECDSA ctx get_ciphers aead True 
alg_bits 256 auth auth rsa description ECDHE RSA AES256 GCM SHA384 TLSv1 2 Kx ECDH Au RSA Enc AESGCM 256 Mac AEAD digest None id 50380848 kea kx ecdhe name ECDHE RSA AES256 GCM SHA384 protocol TLSv1 2 strength_bits 256 symmetric aes 256 gcm aead True alg_bits 128 auth auth rsa description ECDHE RSA AES128 GCM SHA256 TLSv1 2 Kx ECDH Au RSA Enc AESGCM 128 Mac AEAD digest None id 50380847 kea kx ecdhe name ECDHE RSA AES128 GCM SHA256 protocol TLSv1 2 strength_bits 128 symmetric aes 128 gcm New in version 3 6 SSLContext set_default_verify_paths
Load a set of default certification authority CA certificates from a filesystem path defined when building the OpenSSL library Unfortunately there s no easy way to know whether this method succeeds no error is returned if no certificates are to be found When the OpenSSL library is provided as part of the operating system though it is likely to be configured properly SSLContext set_ciphers ciphers Set the available ciphers for sockets created with this context It should be a string in the OpenSSL cipher list format If no cipher can be selected because compile time options or other configuration forbids use of all the specified ciphers an SSLError will be raised Note when connected the SSLSocket cipher method of SSL sockets will give the currently selected cipher TLS 1 3 cipher suites cannot be disabled with set_ciphers SSLContext set_alpn_protocols protocols Specify which protocols the socket should advertise during the SSL TLS handshake It should be a list of ASCII strings like http 1 1 spdy 2 ordered by preference The selection of a protocol will happen during the handshake and will play out according to RFC 7301 After a successful handshake the SSLSocket selected_alpn_protocol method will return the agreed upon protocol This method will raise NotImplementedError if HAS_ALPN is False New in version 3 5 SSLContext set_npn_protocols protocols Specify which protocols the socket should advertise during the SSL TLS handshake It should be a list of strings like http 1 1 spdy 2 ordered by preference The selection of a protocol will happen during the handshake and will play out according to the Application Layer Protocol Negotiation After a successful handshake the SSLSocket selected_npn_protocol method will return the agreed upon protocol This method will raise NotImplementedError if HAS_NPN is False New in version 3 3 Deprecated since version 3 10 NPN has been superseded by ALPN SSLContext sni_callback Register a callback function that will be called after the TLS Client Hello handshake message has been received by the SSL TLS server when the TLS client specifies a server name indication The server name indication mechanism is specified in RFC 6066 section 3 Server Name Indication Only one callback can be set per SSLContext If sni_callback is set to None then the callback is disabled Calling this function a subsequent time will disable the previously registered callback The callback function will be called with three arguments the first being the ssl SSLSocket the second is a string that represents the server name that the client is intending to communicate or None if the TLS Client Hello does not contain a server name and the third argument is the original SSLContext The server name argument is text For internationalized domain name the server name is an IDN A label xn pythn mua org A typical use of this callback is to change the ssl SSLSocket s SSLSocket context attribute to a new object of type SSLContext representing a certificate chain that matches the server name Due to the early negotiation phase of the TLS connection only limited methods and attributes are usable like SSLSocket selected_alpn_protocol and SSLSocket context The SSLSocket getpeercert SSLSocket cipher and SSLSocket compression methods require that the TLS connection has progressed beyond the TLS Client Hello and therefore will not return meaningful values nor can they be called safely The sni_callback function must return None to allow the TLS negotiation to continue If a TLS failure is required a constant 
ALERT_DESCRIPTION_ can be returned Other return values will result in a TLS fatal error with ALERT_DESCRIPTION_INTERNAL_ERROR If an exception is raised from the sni_callback function the TLS connection will terminate with a fatal TLS alert message ALERT_DESCRIPTION_HANDSHAKE_FAILURE This method will raise NotImplementedError if the OpenSSL library had OPENSSL_NO_TLSEXT defined when it was built New in version 3 7 SSLContext set_servername_callback server_name_callback This is a legacy API retained for backwards compatibility When possible you should use
sni_callback instead The given server_name_callback is similar to sni_callback except that when the server hostname is an IDN encoded internationalized domain name the server_name_callback receives a decoded U label pythön org If there is an decoding error on the server name the TLS connection will terminate with an ALERT_DESCRIPTION_INTERNAL_ERROR fatal TLS alert message to the client New in version 3 4 SSLContext load_dh_params dhfile Load the key generation parameters for Diffie Hellman DH key exchange Using DH key exchange improves forward secrecy at the expense of computational resources both on the server and on the client The dhfile parameter should be the path to a file containing DH parameters in PEM format This setting doesn t apply to client sockets You can also use the OP_SINGLE_DH_USE option to further improve security New in version 3 3 SSLContext set_ecdh_curve curve_name Set the curve name for Elliptic Curve based Diffie Hellman ECDH key exchange ECDH is significantly faster than regular DH while arguably as secure The curve_name parameter should be a string describing a well known elliptic curve for example prime256v1 for a widely supported curve This setting doesn t apply to client sockets You can also use the OP_SINGLE_ECDH_USE option to further improve security This method is not available if HAS_ECDH is False New in version 3 3 See also SSL TLS Perfect Forward Secrecy Vincent Bernat SSLContext wrap_socket sock server_side False do_handshake_on_connect True suppress_ragged_eofs True server_hostname None session None Wrap an existing Python socket sock and return an instance of SSLContext sslsocket_class default SSLSocket The returned SSL socket is tied to the context its settings and certificates sock must be a SOCK_STREAM socket other socket types are unsupported The parameter server_side is a boolean which identifies whether server side or client side behavior is desired from this socket For client side sockets the context construction is lazy if the underlying socket isn t connected yet the context construction will be performed after connect is called on the socket For server side sockets if the socket has no remote peer it is assumed to be a listening socket and the server side SSL wrapping is automatically performed on client connections accepted via the accept method The method may raise SSLError On client connections the optional parameter server_hostname specifies the hostname of the service which we are connecting to This allows a single server to host multiple SSL based services with distinct certificates quite similarly to HTTP virtual hosts Specifying server_hostname will raise a ValueError if server_side is true The parameter do_handshake_on_connect specifies whether to do the SSL handshake automatically after doing a socket connect or whether the application program will call it explicitly by invoking the SSLSocket do_handshake method Calling SSLSocket do_handshake explicitly gives the program control over the blocking behavior of the socket I O involved in the handshake The parameter suppress_ragged_eofs specifies how the SSLSocket recv method should signal unexpected EOF from the other end of the connection If specified as True the default it returns a normal EOF an empty bytes object in response to unexpected EOF errors raised from the underlying socket if False it will raise the exceptions back to the caller session see session To wrap an SSLSocket in another SSLSocket use SSLContext wrap_bio Changed in version 3 5 Always allow a server_hostname to be 
passed even if OpenSSL does not have SNI Changed in version 3 6 session argument was added Changed in version 3 7 The method returns an instance of SSLContext sslsocket_class instead of hard coded SSLSocket SSLContext sslsocket_class The return type of SSLContext wrap_socket defaults to SSLSocket The attribute can be overridden on instance of class in order to return a custom subclass of SSLSocket New in version 3 7 SSLContext wrap_bio incoming outgoing server_side False server_hostname None session None Wrap the BIO objects incoming
and outgoing and return an instance of SSLContext sslobject_class default SSLObject The SSL routines will read input data from the incoming BIO and write data to the outgoing BIO The server_side server_hostname and session parameters have the same meaning as in SSLContext wrap_socket Changed in version 3 6 session argument was added Changed in version 3 7 The method returns an instance of SSLContext sslobject_class instead of hard coded SSLObject SSLContext sslobject_class The return type of SSLContext wrap_bio defaults to SSLObject The attribute can be overridden on instance of class in order to return a custom subclass of SSLObject New in version 3 7 SSLContext session_stats Get statistics about the SSL sessions created or managed by this context A dictionary is returned which maps the names of each piece of information to their numeric values For example here is the total number of hits and misses in the session cache since the context was created stats context session_stats stats hits stats misses 0 0 SSLContext check_hostname Whether to match the peer cert s hostname in SSLSocket do_handshake The context s verify_mode must be set to CERT_OPTIONAL or CERT_REQUIRED and you must pass server_hostname to wrap_socket in order to match the hostname Enabling hostname checking automatically sets verify_mode from CERT_NONE to CERT_REQUIRED It cannot be set back to CERT_NONE as long as hostname checking is enabled The PROTOCOL_TLS_CLIENT protocol enables hostname checking by default With other protocols hostname checking must be enabled explicitly Example import socket ssl context ssl SSLContext ssl PROTOCOL_TLSv1_2 context verify_mode ssl CERT_REQUIRED context check_hostname True context load_default_certs s socket socket socket AF_INET socket SOCK_STREAM ssl_sock context wrap_socket s server_hostname www verisign com ssl_sock connect www verisign com 443 New in version 3 4 Changed in version 3 7 verify_mode is now automatically changed to CERT_REQUIRED when hostname checking is enabled and verify_mode is CERT_NONE Previously the same operation would have failed with a ValueError SSLContext keylog_filename Write TLS keys to a keylog file whenever key material is generated or received The keylog file is designed for debugging purposes only The file format is specified by NSS and used by many traffic analyzers such as Wireshark The log file is opened in append only mode Writes are synchronized between threads but not between processes New in version 3 8 SSLContext maximum_version A TLSVersion enum member representing the highest supported TLS version The value defaults to TLSVersion MAXIMUM_SUPPORTED The attribute is read only for protocols other than PROTOCOL_TLS PROTOCOL_TLS_CLIENT and PROTOCOL_TLS_SERVER The attributes maximum_version minimum_version and SSLContext options all affect the supported SSL and TLS versions of the context The implementation does not prevent invalid combination For example a context with OP_NO_TLSv1_2 in options and maximum_version set to TLSVersion TLSv1_2 will not be able to establish a TLS 1 2 connection New in version 3 7 SSLContext minimum_version Like SSLContext maximum_version except it is the lowest supported version or TLSVersion MINIMUM_SUPPORTED New in version 3 7 SSLContext num_tickets Control the number of TLS 1 3 session tickets of a PROTOCOL_TLS_SERVER context The setting has no impact on TLS 1 0 to 1 2 connections New in version 3 8 SSLContext options An integer representing the set of SSL options enabled on this context The default value is OP_ALL 
but you can specify other options such as OP_NO_SSLv2 by ORing them together Changed in version 3 6 SSLContext options returns Options flags ssl create_default_context options Options OP_ALL OP_NO_SSLv3 OP_NO_SSLv2 OP_NO_COMPRESSION 2197947391 Deprecated since version 3 7 All OP_NO_SSL and OP_NO_TLS options have been deprecated since Python 3 7 Use SSLContext minimum_version and SSLContext maximum_version instead SSLContext post_handshake_auth Enable TLS 1 3 post handshake client authentication Post handshake auth is disabled by default a
nd a server can only request a TLS client certificate during the initial handshake When enabled a server may request a TLS client certificate at any time after the handshake When enabled on client side sockets the client signals the server that it supports post handshake authentication When enabled on server side sockets SSLContext verify_mode must be set to CERT_OPTIONAL or CERT_REQUIRED too The actual client cert exchange is delayed until SSLSocket verify_client_post_handshake is called and some I O is performed New in version 3 8 SSLContext protocol The protocol version chosen when constructing the context This attribute is read only SSLContext hostname_checks_common_name Whether check_hostname falls back to verify the cert s subject common name in the absence of a subject alternative name extension default true New in version 3 7 Changed in version 3 10 The flag had no effect with OpenSSL before version 1 1 1l Python 3 8 9 3 9 3 and 3 10 include workarounds for previous versions SSLContext security_level An integer representing the security level for the context This attribute is read only New in version 3 10 SSLContext verify_flags The flags for certificate verification operations You can set flags like VERIFY_CRL_CHECK_LEAF by ORing them together By default OpenSSL does neither require nor verify certificate revocation lists CRLs New in version 3 4 Changed in version 3 6 SSLContext verify_flags returns VerifyFlags flags ssl create_default_context verify_flags VerifyFlags VERIFY_X509_TRUSTED_FIRST 32768 SSLContext verify_mode Whether to try to verify other peers certificates and how to behave if verification fails This attribute must be one of CERT_NONE CERT_OPTIONAL or CERT_REQUIRED Changed in version 3 6 SSLContext verify_mode returns VerifyMode enum ssl create_default_context verify_mode VerifyMode CERT_REQUIRED 2 Certificates Certificates in general are part of a public key private key system In this system each principal which may be a machine or a person or an organization is assigned a unique two part encryption key One part of the key is public and is called the public key the other part is kept secret and is called the private key The two parts are related in that if you encrypt a message with one of the parts you can decrypt it with the other part and only with the other part A certificate contains information about two principals It contains the name of a subject and the subject s public key It also contains a statement by a second principal the issuer that the subject is who they claim to be and that this is indeed the subject s public key The issuer s statement is signed with the issuer s private key which only the issuer knows However anyone can verify the issuer s statement by finding the issuer s public key decrypting the statement with it and comparing it to the other information in the certificate The certificate also contains information about the time period over which it is valid This is expressed as two fields called notBefore and notAfter In the Python use of certificates a client or server can use a certificate to prove who they are The other side of a network connection can also be required to produce a certificate and that certificate can be validated to the satisfaction of the client or server that requires such validation The connection attempt can be set to raise an exception if the validation fails Validation is done automatically by the underlying OpenSSL framework the application need not concern itself with its mechanics But the application does 
usually need to provide sets of certificates to allow this process to take place. Python uses files to contain certificates. They should be formatted as PEM (see RFC 1422), which is a base-64 encoded form wrapped with a header line and a footer line:

    -----BEGIN CERTIFICATE-----
    ... (certificate in base64 PEM encoding) ...
    -----END CERTIFICATE-----

Certificate chains

The Python files which contain certificates can contain a sequence of certificates, sometimes called a certificate chain. This chain should start with the specific certificate for the principal who is the client or se
rver and then the certificate for the issuer of that certificate and then the certificate for the issuer of that certificate and so on up the chain till you get to a certificate which is self signed that is a certificate which has the same subject and issuer sometimes called a root certificate The certificates should just be concatenated together in the certificate file For example suppose we had a three certificate chain from our server certificate to the certificate of the certification authority that signed our server certificate to the root certificate of the agency which issued the certification authority s certificate BEGIN CERTIFICATE certificate for your server END CERTIFICATE BEGIN CERTIFICATE the certificate for the CA END CERTIFICATE BEGIN CERTIFICATE the root certificate for the CA s issuer END CERTIFICATE CA certificates If you are going to require validation of the other side of the connection s certificate you need to provide a CA certs file filled with the certificate chains for each issuer you are willing to trust Again this file just contains these chains concatenated together For validation Python will use the first chain it finds in the file which matches The platform s certificates file can be used by calling SSLContext load_default_certs this is done automatically with create_default_context Combined key and certificate Often the private key is stored in the same file as the certificate in this case only the certfile parameter to SSLContext load_cert_chain needs to be passed If the private key is stored with the certificate it should come before the first certificate in the certificate chain BEGIN RSA PRIVATE KEY private key in base64 encoding END RSA PRIVATE KEY BEGIN CERTIFICATE certificate in base64 PEM encoding END CERTIFICATE Self signed certificates If you are going to create a server that provides SSL encrypted connection services you will need to acquire a certificate for that service There are many ways of acquiring appropriate certificates such as buying one from a certification authority Another common practice is to generate a self signed certificate The simplest way to do this is with the OpenSSL package using something like the following openssl req new x509 days 365 nodes out cert pem keyout cert pem Generating a 1024 bit RSA private key writing new private key to cert pem You are about to be asked to enter information that will be incorporated into your certificate request What you are about to enter is what is called a Distinguished Name or a DN There are quite a few fields but you can leave some blank For some fields there will be a default value If you enter the field will be left blank Country Name 2 letter code AU US State or Province Name full name Some State MyState Locality Name eg city Some City Organization Name eg company Internet Widgits Pty Ltd My Organization Inc Organizational Unit Name eg section My Group Common Name eg YOUR name myserver mygroup myorganization com Email Address ops myserver mygroup myorganization com The disadvantage of a self signed certificate is that it is its own root certificate and no one else will have it in their cache of known and trusted root certificates Examples Testing for SSL support To test for the presence of SSL support in a Python installation user code should use the following idiom try import ssl except ImportError pass else do something that requires SSL support Client side operation This example creates a SSL context with the recommended security settings for client sockets including automatic 
certificate verification:

    context = ssl.create_default_context()

If you prefer to tune security settings yourself, you might create a context from scratch (but beware that you might not get the settings right):

    context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    context.load_verify_locations('/etc/ssl/certs/ca-bundle.crt')

(This snippet assumes your operating system places a bundle of all CA certificates in /etc/ssl/certs/ca-bundle.crt; if not, you'll get an error and have to adjust the location.) The PROTOCOL_TLS_CLIENT protocol configures the context for cert v
alidation and hostname verification verify_mode is set to CERT_REQUIRED and check_hostname is set to True All other protocols create SSL contexts with insecure defaults When you use the context to connect to a server CERT_REQUIRED and check_hostname validate the server certificate it ensures that the server certificate was signed with one of the CA certificates checks the signature for correctness and verifies other properties like validity and identity of the hostname conn context wrap_socket socket socket socket AF_INET server_hostname www python org conn connect www python org 443 You may then fetch the certificate cert conn getpeercert Visual inspection shows that the certificate does identify the desired service that is the HTTPS host www python org pprint pprint cert OCSP http ocsp digicert com caIssuers http cacerts digicert com DigiCertSHA2ExtendedValidationServerCA crt crlDistributionPoints http crl3 digicert com sha2 ev server g1 crl http crl4 digicert com sha2 ev server g1 crl issuer countryName US organizationName DigiCert Inc organizationalUnitName www digicert com commonName DigiCert SHA2 Extended Validation Server CA notAfter Sep 9 12 00 00 2016 GMT notBefore Sep 5 00 00 00 2014 GMT serialNumber 01BB6F00122B177F36CAB49CEA8B6B26 subject businessCategory Private Organization 1 3 6 1 4 1 311 60 2 1 3 US 1 3 6 1 4 1 311 60 2 1 2 Delaware serialNumber 3359300 streetAddress 16 Allen Rd postalCode 03894 4801 countryName US stateOrProvinceName NH localityName Wolfeboro organizationName Python Software Foundation commonName www python org subjectAltName DNS www python org DNS python org DNS pypi org DNS docs python org DNS testpypi org DNS bugs python org DNS wiki python org DNS hg python org DNS mail python org DNS packaging python org DNS pythonhosted org DNS www pythonhosted org DNS test pythonhosted org DNS us pycon org DNS id python org version 3 Now the SSL channel is established and the certificate verified you can proceed to talk with the server conn sendall b HEAD HTTP 1 0 r nHost linuxfr org r n r n pprint pprint conn recv 1024 split b r n b HTTP 1 1 200 OK b Date Sat 18 Oct 2014 18 27 20 GMT b Server nginx b Content Type text html charset utf 8 b X Frame Options SAMEORIGIN b Content Length 45679 b Accept Ranges bytes b Via 1 1 varnish b Age 2188 b X Served By cache lcy1134 LCY b X Cache HIT b X Cache Hits 11 b Vary Cookie b Strict Transport Security max age 63072000 includeSubDomains b Connection close b b See the discussion of Security considerations below Server side operation For server operation typically you ll need to have a server certificate and private key each in a file You ll first create a context holding the key and the certificate so that clients can check your authenticity Then you ll open a socket bind it to a port call listen on it and start waiting for clients to connect import socket ssl context ssl create_default_context ssl Purpose CLIENT_AUTH context load_cert_chain certfile mycertfile keyfile mykeyfile bindsocket socket socket bindsocket bind myaddr example com 10023 bindsocket listen 5 When a client connects you ll call accept on the socket to get the new socket from the other end and use the context s SSLContext wrap_socket method to create a server side SSL socket for the connection while True newsocket fromaddr bindsocket accept connstream context wrap_socket newsocket server_side True try deal_with_client connstream finally connstream shutdown socket SHUT_RDWR connstream close Then you ll read data from the connstream and do something with it 
till you are finished with the client (or the client is finished with you):

    def deal_with_client(connstream):
        data = connstream.recv(1024)
        # empty data means the client is finished with us
        while data:
            if not do_something(connstream, data):
                # we'll assume do_something returns False
                # when we're finished with client
                break
            data = connstream.recv(1024)
        # finished with client

And go back to listening for new client connections (of course, a real server would probably handle each client connection in a separate thread, or put the sockets in non-blocking mode and use an
event loop Notes on non blocking sockets SSL sockets behave slightly different than regular sockets in non blocking mode When working with non blocking sockets there are thus several things you need to be aware of Most SSLSocket methods will raise either SSLWantWriteError or SSLWantReadError instead of BlockingIOError if an I O operation would block SSLWantReadError will be raised if a read operation on the underlying socket is necessary and SSLWantWriteError for a write operation on the underlying socket Note that attempts to write to an SSL socket may require reading from the underlying socket first and attempts to read from the SSL socket may require a prior write to the underlying socket Changed in version 3 5 In earlier Python versions the SSLSocket send method returned zero instead of raising SSLWantWriteError or SSLWantReadError Calling select tells you that the OS level socket can be read from or written to but it does not imply that there is sufficient data at the upper SSL layer For example only part of an SSL frame might have arrived Therefore you must be ready to handle SSLSocket recv and SSLSocket send failures and retry after another call to select Conversely since the SSL layer has its own framing a SSL socket may still have data available for reading without select being aware of it Therefore you should first call SSLSocket recv to drain any potentially available data and then only block on a select call if still necessary of course similar provisions apply when using other primitives such as poll or those in the selectors module The SSL handshake itself will be non blocking the SSLSocket do_handshake method has to be retried until it returns successfully Here is a synopsis using select to wait for the socket s readiness while True try sock do_handshake break except ssl SSLWantReadError select select sock except ssl SSLWantWriteError select select sock See also The asyncio module supports non blocking SSL sockets and provides a higher level API It polls for events using the selectors module and handles SSLWantWriteError SSLWantReadError and BlockingIOError exceptions It runs the SSL handshake asynchronously as well Memory BIO Support New in version 3 5 Ever since the SSL module was introduced in Python 2 6 the SSLSocket class has provided two related but distinct areas of functionality SSL protocol handling Network IO The network IO API is identical to that provided by socket socket from which SSLSocket also inherits This allows an SSL socket to be used as a drop in replacement for a regular socket making it very easy to add SSL support to an existing application Combining SSL protocol handling and network IO usually works well but there are some cases where it doesn t An example is async IO frameworks that want to use a different IO multiplexing model than the select poll on a file descriptor readiness based model that is assumed by socket socket and by the internal OpenSSL socket IO routines This is mostly relevant for platforms like Windows where this model is not efficient For this purpose a reduced scope variant of SSLSocket called SSLObject is provided class ssl SSLObject A reduced scope variant of SSLSocket representing an SSL protocol instance that does not contain any network IO methods This class is typically used by framework authors that want to implement asynchronous IO for SSL through memory buffers This class implements an interface on top of a low level SSL object as implemented by OpenSSL This object captures the state of an SSL connection but does not 
provide any network IO itself IO needs to be performed through separate BIO objects which are OpenSSL s IO abstraction layer This class has no public constructor An SSLObject instance must be created using the wrap_bio method This method will create the SSLObject instance and bind it to a pair of BIOs The incoming BIO is used to pass data from Python to the SSL protocol instance while the outgoing BIO is used to pass data the other way around The following methods are available context server_side server_hostname session session_reused read
write getpeercert selected_alpn_protocol selected_npn_protocol cipher shared_ciphers compression pending do_handshake verify_client_post_handshake unwrap get_channel_binding version When compared to SSLSocket this object lacks the following features Any form of network IO recv and send read and write only to the underlying MemoryBIO buffers There is no do_handshake_on_connect machinery You must always manually call do_handshake to start the handshake There is no handling of suppress_ragged_eofs All end of file conditions that are in violation of the protocol are reported via the SSLEOFError exception The method unwrap call does not return anything unlike for an SSL socket where it returns the underlying socket The server_name_callback callback passed to SSLContext set_servername_callback will get an SSLObject instance instead of a SSLSocket instance as its first parameter Some notes related to the use of SSLObject All IO on an SSLObject is non blocking This means that for example read will raise an SSLWantReadError if it needs more data than the incoming BIO has available Changed in version 3 7 SSLObject instances must be created with wrap_bio In earlier versions it was possible to create instances directly This was never documented or officially supported An SSLObject communicates with the outside world using memory buffers The class MemoryBIO provides a memory buffer that can be used for this purpose It wraps an OpenSSL memory BIO Basic IO object class ssl MemoryBIO A memory buffer that can be used to pass data between Python and an SSL protocol instance pending Return the number of bytes currently in the memory buffer eof A boolean indicating whether the memory BIO is current at the end of file position read n 1 Read up to n bytes from the memory buffer If n is not specified or negative all bytes are returned write buf Write the bytes from buf to the memory BIO The buf argument must be an object supporting the buffer protocol The return value is the number of bytes written which is always equal to the length of buf write_eof Write an EOF marker to the memory BIO After this method has been called it is illegal to call write The attribute eof will become true after all data currently in the buffer has been read SSL session New in version 3 6 class ssl SSLSession Session object used by session id time timeout ticket_lifetime_hint has_ticket Security considerations Best defaults For client use if you don t have any special requirements for your security policy it is highly recommended that you use the create_default_context function to create your SSL context It will load the system s trusted CA certificates enable certificate validation and hostname checking and try to choose reasonably secure protocol and cipher settings For example here is how you would use the smtplib SMTP class to create a trusted secure connection to a SMTP server import ssl smtplib smtp smtplib SMTP mail python org port 587 context ssl create_default_context smtp starttls context context 220 b 2 0 0 Ready to start TLS If a client certificate is needed for the connection it can be added with SSLContext load_cert_chain By contrast if you create the SSL context by calling the SSLContext constructor yourself it will not have certificate validation nor hostname checking enabled by default If you do so please read the paragraphs below to achieve a good security level Manual settings Verifying certificates When calling the SSLContext constructor directly CERT_NONE is the default Since it does not authenticate the other peer 
it can be insecure especially in client mode where most of time you would like to ensure the authenticity of the server you re talking to Therefore when in client mode it is highly recommended to use CERT_REQUIRED However it is in itself not sufficient you also have to check that the server certificate which can be obtained by calling SSLSocket getpeercert matches the desired service For many protocols and applications the service can be identified by the hostname This common check is automatically performed when SSLContext check_host
name is enabled Changed in version 3 7 Hostname matchings is now performed by OpenSSL Python no longer uses match_hostname In server mode if you want to authenticate your clients using the SSL layer rather than using a higher level authentication mechanism you ll also have to specify CERT_REQUIRED and similarly check the client certificate Protocol versions SSL versions 2 and 3 are considered insecure and are therefore dangerous to use If you want maximum compatibility between clients and servers it is recommended to use PROTOCOL_TLS_CLIENT or PROTOCOL_TLS_SERVER as the protocol version SSLv2 and SSLv3 are disabled by default client_context ssl SSLContext ssl PROTOCOL_TLS_CLIENT client_context minimum_version ssl TLSVersion TLSv1_3 client_context maximum_version ssl TLSVersion TLSv1_3 The SSL context created above will only allow TLSv1 3 and later if supported by your system connections to a server PROTOCOL_TLS_CLIENT implies certificate validation and hostname checks by default You have to load certificates into the context Cipher selection If you have advanced security requirements fine tuning of the ciphers enabled when negotiating a SSL session is possible through the SSLContext set_ciphers method Starting from Python 3 2 3 the ssl module disables certain weak ciphers by default but you may want to further restrict the cipher choice Be sure to read OpenSSL s documentation about the cipher list format If you want to check which ciphers are enabled by a given cipher list use SSLContext get_ciphers or the openssl ciphers command on your system Multi processing If using this module as part of a multi processed application using for example the multiprocessing or concurrent futures modules be aware that OpenSSL s internal random number generator does not properly handle forked processes Applications must change the PRNG state of the parent process if they use any SSL feature with os fork Any successful call of RAND_add or RAND_bytes is sufficient TLS 1 3 New in version 3 7 The TLS 1 3 protocol behaves slightly differently than previous version of TLS SSL Some new TLS 1 3 features are not yet available TLS 1 3 uses a disjunct set of cipher suites All AES GCM and ChaCha20 cipher suites are enabled by default The method SSLContext set_ciphers cannot enable or disable any TLS 1 3 ciphers yet but SSLContext get_ciphers returns them Session tickets are no longer sent as part of the initial handshake and are handled differently SSLSocket session and SSLSession are not compatible with TLS 1 3 Client side certificates are also no longer verified during the initial handshake A server can request a certificate at any time Clients process certificate requests while they send or receive application data from the server TLS 1 3 features like early data deferred TLS client cert request signature algorithm configuration and rekeying are not supported yet See also Class socket socket Documentation of underlying socket class SSL TLS Strong Encryption An Introduction Intro from the Apache HTTP Server documentation RFC 1422 Privacy Enhancement for Internet Electronic Mail Part II Certificate Based Key Management Steve Kent RFC 4086 Randomness Requirements for Security Donald E Jeffrey I Schiller RFC 5280 Internet X 509 Public Key Infrastructure Certificate and Certificate Revocation List CRL Profile D Cooper RFC 5246 The Transport Layer Security TLS Protocol Version 1 2 T Dierks et al RFC 6066 Transport Layer Security TLS Extensions D Eastlake IANA TLS Transport Layer Security TLS Parameters IANA RFC 7525 
Recommendations for Secure Use of Transport Layer Security (TLS) and Datagram Transport Layer Security (DTLS), IETF. Mozilla's Server Side TLS recommendations, Mozilla.
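Pulling the security recommendations above together, here is a minimal client-side sketch. It is only an illustration: the target host www.python.org is an arbitrary example, and pinning the context to TLS 1.3 assumes your OpenSSL build supports it.

    import socket
    import ssl

    hostname = "www.python.org"   # example host only; substitute your own service

    # Recommended default: system CA certificates, certificate validation,
    # and hostname checking are all enabled.
    context = ssl.create_default_context()

    # Optional hardening discussed under "Protocol versions"; assumes the
    # underlying OpenSSL supports TLS 1.3.
    context.minimum_version = ssl.TLSVersion.TLSv1_3

    # Inspect the enabled ciphers, as suggested under "Cipher selection".
    print([c["name"] for c in context.get_ciphers()][:5])

    with socket.create_connection((hostname, 443)) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            print(tls.version())
            print(tls.getpeercert()["subject"])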
Why is Python Installed on my Computer FAQ What is Python Python is a programming language It s used for many different applications It s used in some high schools and colleges as an introductory programming language because Python is easy to learn but it s also used by professional software developers at places such as Google NASA and Lucasfilm Ltd If you wish to learn more about Python start with the Beginner s Guide to Python Why is Python installed on my machine If you find Python installed on your system but don t remember installing it there are several possible ways it could have gotten there Perhaps another user on the computer wanted to learn programming and installed it you ll have to figure out who s been using the machine and might have installed it A third party application installed on the machine might have been written in Python and included a Python installation There are many such applications from GUI programs to network servers and administrative scripts Some Windows machines also have Python installed At this writing we re aware of computers from Hewlett Packard and Compaq that include Python Apparently some of HP Compaq s administrative tools are written in Python Many Unix compatible operating systems such as macOS and some Linux distributions have Python installed by default it s included in the base installation Can I delete Python That depends on where Python came from If someone installed it deliberately you can remove it without hurting anything On Windows use the Add Remove Programs icon in the Control Panel If Python was installed by a third party application you can also remove it but that application will no longer work You should use that application s uninstaller rather than removing Python directly If Python came with your operating system removing it is not recommended If you remove it whatever tools were written in Python will no longer run and some of them might be important to you Reinstalling the whole system would then be required to fix things again
Installing Python Modules Email distutils sig python org As a popular open source development project Python has an active supporting community of contributors and users that also make their software available for other Python developers to use under open source license terms This allows Python users to share and collaborate effectively benefiting from the solutions others have already created to common and sometimes even rare problems as well as potentially contributing their own solutions to the common pool This guide covers the installation part of the process For a guide to creating and sharing your own Python projects refer to the Python packaging user guide Note For corporate and other institutional users be aware that many organisations have their own policies around using and contributing to open source software Please take such policies into account when making use of the distribution and installation tools provided with Python Key terms pip is the preferred installer program Starting with Python 3 4 it is included by default with the Python binary installers A virtual environment is a semi isolated Python environment that allows packages to be installed for use by a particular application rather than being installed system wide venv is the standard tool for creating virtual environments and has been part of Python since Python 3 3 Starting with Python 3 4 it defaults to installing pip into all created virtual environments virtualenv is a third party alternative and predecessor to venv It allows virtual environments to be used on versions of Python prior to 3 4 which either don t provide venv at all or aren t able to automatically install pip into created environments The Python Package Index is a public repository of open source licensed packages made available for use by other Python users the Python Packaging Authority is the group of developers and documentation authors responsible for the maintenance and evolution of the standard packaging tools and the associated metadata and file format standards They maintain a variety of tools documentation and issue trackers on GitHub distutils is the original build and distribution system first added to the Python standard library in 1998 While direct use of distutils is being phased out it still laid the foundation for the current packaging and distribution infrastructure and it not only remains part of the standard library but its name lives on in other ways such as the name of the mailing list used to coordinate Python packaging standards development Changed in version 3 5 The use of venv is now recommended for creating virtual environments See also Python Packaging User Guide Creating and using virtual environments Basic usage The standard packaging tools are all designed to be used from the command line The following command will install the latest version of a module and its dependencies from the Python Package Index python m pip install SomePackage Note For POSIX users including macOS and Linux users the examples in this guide assume the use of a virtual environment For Windows users the examples in this guide assume that the option to adjust the system PATH environment variable was selected when installing Python It s also possible to specify an exact or minimum version directly on the command line When using comparator operators such as or some other special character which get interpreted by shell the package name and the version should be enclosed within double quotes python m pip install SomePackage 1 0 4 specific version 
python -m pip install "SomePackage>=1.0.4"   # minimum version

Normally, if a suitable module is already installed, attempting to install it again will have no effect. Upgrading existing modules must be requested explicitly:

    python -m pip install --upgrade SomePackage

More information and resources regarding pip and its capabilities can be found in the Python Packaging User Guide. Creation of virtual environments is done through the venv module. Installing packages into an active virtual environment uses the commands shown above.

See also: Python Packaging
User Guide Installing Python Distribution Packages How do I These are quick answers or links for some common tasks install pip in versions of Python prior to Python 3 4 Python only started bundling pip with Python 3 4 For earlier versions pip needs to be bootstrapped as described in the Python Packaging User Guide See also Python Packaging User Guide Requirements for Installing Packages install packages just for the current user Passing the user option to python m pip install will install a package just for the current user rather than for all users of the system install scientific Python packages A number of scientific Python packages have complex binary dependencies and aren t currently easy to install using pip directly At this point in time it will often be easier for users to install these packages by other means rather than attempting to install them with pip See also Python Packaging User Guide Installing Scientific Packages work with multiple versions of Python installed in parallel On Linux macOS and other POSIX systems use the versioned Python commands in combination with the m switch to run the appropriate copy of pip python2 m pip install SomePackage default Python 2 python2 7 m pip install SomePackage specifically Python 2 7 python3 m pip install SomePackage default Python 3 python3 4 m pip install SomePackage specifically Python 3 4 Appropriately versioned pip commands may also be available On Windows use the py Python launcher in combination with the m switch py 2 m pip install SomePackage default Python 2 py 2 7 m pip install SomePackage specifically Python 2 7 py 3 m pip install SomePackage default Python 3 py 3 4 m pip install SomePackage specifically Python 3 4 Common installation issues Installing into the system Python on Linux On Linux systems a Python installation will typically be included as part of the distribution Installing into this Python installation requires root access to the system and may interfere with the operation of the system package manager and other components of the system if a component is unexpectedly upgraded using pip On such systems it is often better to use a virtual environment or a per user installation when installing packages with pip Pip not installed It is possible that pip does not get installed by default One potential fix is python m ensurepip default pip There are also additional resources for installing pip Installing binary extensions Python has typically relied heavily on source based distribution with end users being expected to compile extension modules from source as part of the installation process With the introduction of support for the binary wheel format and the ability to publish wheels for at least Windows and macOS through the Python Package Index this problem is expected to diminish over time as users are more regularly able to install pre built extensions rather than needing to build them themselves Some of the solutions for installing scientific software that are not yet available as pre built wheel files may also help with obtaining other binary extensions without needing to build them locally See also Python Packaging User Guide Binary Extensions
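As a rough end-to-end illustration of the key terms above (venv plus pip), the following sketch creates a virtual environment programmatically and installs the documentation's placeholder package into it. The .venv directory name and SomePackage are illustrative only, and the interpreter path shown assumes the POSIX layout.

    import subprocess
    import venv

    # Create a virtual environment with pip available; this is what
    # "python -m venv .venv" does from the command line.
    venv.create(".venv", with_pip=True)

    # Run the environment's own interpreter so packages land in the
    # environment rather than system-wide (on Windows the interpreter
    # would be .venv\Scripts\python.exe instead).
    python = ".venv/bin/python"

    # Minimum-version install, then an explicit upgrade, mirroring the
    # pip commands shown above.
    subprocess.run([python, "-m", "pip", "install", "SomePackage>=1.0.4"], check=True)
    subprocess.run([python, "-m", "pip", "install", "--upgrade", "SomePackage"], check=True)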
Call Protocol CPython supports two different calling protocols tp_call and vectorcall The tp_call Protocol Instances of classes that set tp_call are callable The signature of the slot is PyObject tp_call PyObject callable PyObject args PyObject kwargs A call is made using a tuple for the positional arguments and a dict for the keyword arguments similarly to callable args kwargs in Python code args must be non NULL use an empty tuple if there are no arguments but kwargs may be NULL if there are no keyword arguments This convention is not only used by tp_call tp_new and tp_init also pass arguments this way To call an object use PyObject_Call or another call API The Vectorcall Protocol New in version 3 9 The vectorcall protocol was introduced in PEP 590 as an additional protocol for making calls more efficient As rule of thumb CPython will prefer the vectorcall for internal calls if the callable supports it However this is not a hard rule Additionally some third party extensions use tp_call directly rather than using PyObject_Call Therefore a class supporting vectorcall must also implement tp_call Moreover the callable must behave the same regardless of which protocol is used The recommended way to achieve this is by setting tp_call to PyVectorcall_Call This bears repeating Warning A class supporting vectorcall must also implement tp_call with the same semantics Changed in version 3 12 The Py_TPFLAGS_HAVE_VECTORCALL flag is now removed from a class when the class s __call__ method is reassigned This internally sets tp_call only and thus may make it behave differently than the vectorcall function In earlier Python versions vectorcall should only be used with immutable or static types A class should not implement vectorcall if that would be slower than tp_call For example if the callee needs to convert the arguments to an args tuple and kwargs dict anyway then there is no point in implementing vectorcall Classes can implement the vectorcall protocol by enabling the Py_TPFLAGS_HAVE_VECTORCALL flag and setting tp_vectorcall_offset to the offset inside the object structure where a vectorcallfunc appears This is a pointer to a function with the following signature typedef PyObject vectorcallfunc PyObject callable PyObject const args size_t nargsf PyObject kwnames Part of the Stable ABI since version 3 12 callable is the object being called args is a C array consisting of the positional arguments followed by the values of the keyword arguments This can be NULL if there are no arguments nargsf is the number of positional arguments plus possibly the PY_VECTORCALL_ARGUMENTS_OFFSET flag To get the actual number of positional arguments from nargsf use PyVectorcall_NARGS kwnames is a tuple containing the names of the keyword arguments in other words the keys of the kwargs dict These names must be strings instances of str or a subclass and they must be unique If there are no keyword arguments then kwnames can instead be NULL PY_VECTORCALL_ARGUMENTS_OFFSET Part of the Stable ABI since version 3 12 If this flag is set in a vectorcall nargsf argument the callee is allowed to temporarily change args 1 In other words args points to argument 1 not 0 in the allocated vector The callee must restore the value of args 1 before returning For PyObject_VectorcallMethod this flag means instead that args 0 may be changed Whenever they can do so cheaply without additional allocation callers are encouraged to use PY_VECTORCALL_ARGUMENTS_OFFSET Doing so will allow callables such as bound methods to make their onward calls 
which include a prepended self argument very efficiently New in version 3 8 To call an object that implements vectorcall use a call API function as with any other callable PyObject_Vectorcall will usually be most efficient Note In CPython 3 8 the vectorcall API and related functions were available provisionally under names with a leading underscore _PyObject_Vectorcall _Py_TPFLAGS_HAVE_VECTORCALL _PyObject_VectorcallMethod _PyVectorcall_Function _PyObject_CallOneArg _PyObject_CallMethodNoArgs _PyObject_CallMethodOneArg Additionally PyObj
ect_VectorcallDict was available as _PyObject_FastCallDict The old names are still defined as aliases of the new non underscored names Recursion Control When using tp_call callees do not need to worry about recursion CPython uses Py_EnterRecursiveCall and Py_LeaveRecursiveCall for calls made using tp_call For efficiency this is not the case for calls done using vectorcall the callee should use Py_EnterRecursiveCall and Py_LeaveRecursiveCall if needed Vectorcall Support API Py_ssize_t PyVectorcall_NARGS size_t nargsf Part of the Stable ABI since version 3 12 Given a vectorcall nargsf argument return the actual number of arguments Currently equivalent to Py_ssize_t nargsf PY_VECTORCALL_ARGUMENTS_OFFSET However the function PyVectorcall_NARGS should be used to allow for future extensions New in version 3 8 vectorcallfunc PyVectorcall_Function PyObject op If op does not support the vectorcall protocol either because the type does not or because the specific instance does not return NULL Otherwise return the vectorcall function pointer stored in op This function never raises an exception This is mostly useful to check whether or not op supports vectorcall which can be done by checking PyVectorcall_Function op NULL New in version 3 9 PyObject PyVectorcall_Call PyObject callable PyObject tuple PyObject dict Part of the Stable ABI since version 3 12 Call callable s vectorcallfunc with positional and keyword arguments given in a tuple and dict respectively This is a specialized function intended to be put in the tp_call slot or be used in an implementation of tp_call It does not check the Py_TPFLAGS_HAVE_VECTORCALL flag and it does not fall back to tp_call New in version 3 8 Object Calling API Various functions are available for calling a Python object Each converts its arguments to a convention supported by the called object either tp_call or vectorcall In order to do as little conversion as possible pick one that best fits the format of data you have available The following table summarizes the available functions please see individual documentation for details Function callable args kwargs PyObject_Call PyObject tuple dict NULL PyObject_CallNoArgs PyObject PyObject_CallOneArg PyObject 1 object PyObject_CallObject PyObject tuple NULL PyObject_CallFunction PyObject format PyObject_CallMethod obj char format PyObject_CallFunctionObjArgs PyObject variadic PyObject_CallMethodObjArgs obj name variadic PyObject_CallMethodNoArgs obj name PyObject_CallMethodOneArg obj name 1 object PyObject_Vectorcall PyObject vectorcall vectorcall PyObject_VectorcallDict PyObject vectorcall dict NULL PyObject_VectorcallMethod arg name vectorcall vectorcall PyObject PyObject_Call PyObject callable PyObject args PyObject kwargs Return value New reference Part of the Stable ABI Call a callable Python object callable with arguments given by the tuple args and named arguments given by the dictionary kwargs args must not be NULL use an empty tuple if no arguments are needed If no named arguments are needed kwargs can be NULL Return the result of the call on success or raise an exception and return NULL on failure This is the equivalent of the Python expression callable args kwargs PyObject PyObject_CallNoArgs PyObject callable Return value New reference Part of the Stable ABI since version 3 10 Call a callable Python object callable without any arguments It is the most efficient way to call a callable Python object without any argument Return the result of the call on success or raise an exception and return NULL on failure 
New in version 3 9 PyObject PyObject_CallOneArg PyObject callable PyObject arg Return value New reference Call a callable Python object callable with exactly 1 positional argument arg and no keyword arguments Return the result of the call on success or raise an exception and return NULL on failure New in version 3 9 PyObject PyObject_CallObject PyObject callable PyObject args Return value New reference Part of the Stable ABI Call a callable Python object callable with arguments given by the tuple args If no arguments are needed then args
can be NULL Return the result of the call on success or raise an exception and return NULL on failure This is the equivalent of the Python expression callable args PyObject PyObject_CallFunction PyObject callable const char format Return value New reference Part of the Stable ABI Call a callable Python object callable with a variable number of C arguments The C arguments are described using a Py_BuildValue style format string The format can be NULL indicating that no arguments are provided Return the result of the call on success or raise an exception and return NULL on failure This is the equivalent of the Python expression callable args Note that if you only pass PyObject args PyObject_CallFunctionObjArgs is a faster alternative Changed in version 3 4 The type of format was changed from char PyObject PyObject_CallMethod PyObject obj const char name const char format Return value New reference Part of the Stable ABI Call the method named name of object obj with a variable number of C arguments The C arguments are described by a Py_BuildValue format string that should produce a tuple The format can be NULL indicating that no arguments are provided Return the result of the call on success or raise an exception and return NULL on failure This is the equivalent of the Python expression obj name arg1 arg2 Note that if you only pass PyObject args PyObject_CallMethodObjArgs is a faster alternative Changed in version 3 4 The types of name and format were changed from char PyObject PyObject_CallFunctionObjArgs PyObject callable Return value New reference Part of the Stable ABI Call a callable Python object callable with a variable number of PyObject arguments The arguments are provided as a variable number of parameters followed by NULL Return the result of the call on success or raise an exception and return NULL on failure This is the equivalent of the Python expression callable arg1 arg2 PyObject PyObject_CallMethodObjArgs PyObject obj PyObject name Return value New reference Part of the Stable ABI Call a method of the Python object obj where the name of the method is given as a Python string object in name It is called with a variable number of PyObject arguments The arguments are provided as a variable number of parameters followed by NULL Return the result of the call on success or raise an exception and return NULL on failure PyObject PyObject_CallMethodNoArgs PyObject obj PyObject name Call a method of the Python object obj without arguments where the name of the method is given as a Python string object in name Return the result of the call on success or raise an exception and return NULL on failure New in version 3 9 PyObject PyObject_CallMethodOneArg PyObject obj PyObject name PyObject arg Call a method of the Python object obj with a single positional argument arg where the name of the method is given as a Python string object in name Return the result of the call on success or raise an exception and return NULL on failure New in version 3 9 PyObject PyObject_Vectorcall PyObject callable PyObject const args size_t nargsf PyObject kwnames Part of the Stable ABI since version 3 12 Call a callable Python object callable The arguments are the same as for vectorcallfunc If callable supports vectorcall this directly calls the vectorcall function stored in callable Return the result of the call on success or raise an exception and return NULL on failure New in version 3 9 PyObject PyObject_VectorcallDict PyObject callable PyObject const args size_t nargsf PyObject kwdict Call callable with 
positional arguments passed exactly as in the vectorcall protocol but with keyword arguments passed as a dictionary kwdict The args array contains only the positional arguments Regardless of which protocol is used internally a conversion of arguments needs to be done Therefore this function should only be used if the caller already has a dictionary ready to use for the keyword arguments but not a tuple for the positional arguments New in version 3 9 PyObject PyObject_VectorcallMethod PyObject name PyObject const args size_t nargsf PyObje
ct kwnames Part of the Stable ABI since version 3 12 Call a method using the vectorcall calling convention The name of the method is given as a Python string name The object whose method is called is args 0 and the args array starting at args 1 represents the arguments of the call There must be at least one positional argument nargsf is the number of positional arguments including args 0 plus PY_VECTORCALL_ARGUMENTS_OFFSET if the value of args 0 may temporarily be changed Keyword arguments can be passed just like in PyObject_Vectorcall If the object has the Py_TPFLAGS_METHOD_DESCRIPTOR feature this will call the unbound method object with the full args vector as arguments Return the result of the call on success or raise an exception and return NULL on failure New in version 3 9 Call Support API int PyCallable_Check PyObject o Part of the Stable ABI Determine if the object o is callable Return 1 if the object is callable and 0 otherwise This function always succeeds
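The C call APIs above all reduce to calling conventions that are also visible from Python itself. The following pure-Python sketch is only an analogy, not the C API: the (args tuple, kwargs dict) pair corresponds to what tp_call receives, and the callable() builtin plays the role of PyCallable_Check.

    class Adder:
        def __call__(self, a, b, *, scale=1):
            # tp_call sees these as a positional tuple plus a keyword dict
            return (a + b) * scale

    adder = Adder()
    args = (2, 3)              # the args tuple PyObject_Call would receive
    kwargs = {"scale": 10}     # the kwargs dict PyObject_Call would receive

    print(callable(adder))          # True, analogous to PyCallable_Check
    print(adder(*args, **kwargs))   # 50, analogous to PyObject_Call(adder, args, kwargs)
    print(adder(2, 3))              # 5, analogous to PyObject_CallFunctionObjArgs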
Annotations Best Practices author Larry Hastings Abstract This document is designed to encapsulate the best practices for working with annotations dicts If you write Python code that examines __annotations__ on Python objects we encourage you to follow the guidelines described below The document is organized into four sections best practices for accessing the annotations of an object in Python versions 3 10 and newer best practices for accessing the annotations of an object in Python versions 3 9 and older other best practices for __annotations__ that apply to any Python version and quirks of __annotations__ Note that this document is specifically about working with __annotations__ not uses for annotations If you re looking for information on how to use type hints in your code please see the typing module Accessing The Annotations Dict Of An Object In Python 3 10 And Newer Python 3 10 adds a new function to the standard library inspect get_annotations In Python versions 3 10 and newer calling this function is the best practice for accessing the annotations dict of any object that supports annotations This function can also un stringize stringized annotations for you If for some reason inspect get_annotations isn t viable for your use case you may access the __annotations__ data member manually Best practice for this changed in Python 3 10 as well as of Python 3 10 o __annotations__ is guaranteed to always work on Python functions classes and modules If you re certain the object you re examining is one of these three specific objects you may simply use o __annotations__ to get at the object s annotations dict However other types of callables for example callables created by functools partial may not have an __annotations__ attribute defined When accessing the __annotations__ of a possibly unknown object best practice in Python versions 3 10 and newer is to call getattr with three arguments for example getattr o __annotations__ None Before Python 3 10 accessing __annotations__ on a class that defines no annotations but that has a parent class with annotations would return the parent s __annotations__ In Python 3 10 and newer the child class s annotations will be an empty dict instead Accessing The Annotations Dict Of An Object In Python 3 9 And Older In Python 3 9 and older accessing the annotations dict of an object is much more complicated than in newer versions The problem is a design flaw in these older versions of Python specifically to do with class annotations Best practice for accessing the annotations dict of other objects functions other callables and modules is the same as best practice for 3 10 assuming you aren t calling inspect get_annotations you should use three argument getattr to access the object s __annotations__ attribute Unfortunately this isn t best practice for classes The problem is that since __annotations__ is optional on classes and because classes can inherit attributes from their base classes accessing the __annotations__ attribute of a class may inadvertently return the annotations dict of a base class As an example class Base a int 3 b str abc class Derived Base pass print Derived __annotations__ This will print the annotations dict from Base not Derived Your code will have to have a separate code path if the object you re examining is a class isinstance o type In that case best practice relies on an implementation detail of Python 3 9 and before if a class has annotations defined they are stored in the class s __dict__ dictionary Since the class may or may 
not have annotations defined best practice is to call the get method on the class dict To put it all together here is some sample code that safely accesses the __annotations__ attribute on an arbitrary object in Python 3 9 and before if isinstance o type ann o __dict__ get __annotations__ None else ann getattr o __annotations__ None After running this code ann should be either a dictionary or None You re encouraged to double check the type of ann using isinstance before further examination Note that some exotic or malformed type objects m
ay not have a __dict__ attribute so for extra safety you may also wish to use getattr to access __dict__ Manually Un Stringizing Stringized Annotations In situations where some annotations may be stringized and you wish to evaluate those strings to produce the Python values they represent it really is best to call inspect get_annotations to do this work for you If you re using Python 3 9 or older or if for some reason you can t use inspect get_annotations you ll need to duplicate its logic You re encouraged to examine the implementation of inspect get_annotations in the current Python version and follow a similar approach In a nutshell if you wish to evaluate a stringized annotation on an arbitrary object o If o is a module use o __dict__ as the globals when calling eval If o is a class use sys modules o __module__ __dict__ as the globals and dict vars o as the locals when calling eval If o is a wrapped callable using functools update_wrapper functools wraps or functools partial iteratively unwrap it by accessing either o __wrapped__ or o func as appropriate until you have found the root unwrapped function If o is a callable but not a class use o __globals__ as the globals when calling eval However not all string values used as annotations can be successfully turned into Python values by eval String values could theoretically contain any valid string and in practice there are valid use cases for type hints that require annotating with string values that specifically can t be evaluated For example PEP 604 union types using before support for this was added to Python 3 10 Definitions that aren t needed at runtime only imported when typing TYPE_CHECKING is true If eval attempts to evaluate such values it will fail and raise an exception So when designing a library API that works with annotations it s recommended to only attempt to evaluate string values when explicitly requested to by the caller Best Practices For __annotations__ In Any Python Version You should avoid assigning to the __annotations__ member of objects directly Let Python manage setting __annotations__ If you do assign directly to the __annotations__ member of an object you should always set it to a dict object If you directly access the __annotations__ member of an object you should ensure that it s a dictionary before attempting to examine its contents You should avoid modifying __annotations__ dicts You should avoid deleting the __annotations__ attribute of an object __annotations__ Quirks In all versions of Python 3 function objects lazy create an annotations dict if no annotations are defined on that object You can delete the __annotations__ attribute using del fn __annotations__ but if you then access fn __annotations__ the object will create a new empty dict that it will store and return as its annotations Deleting the annotations on a function before it has lazily created its annotations dict will throw an AttributeError using del fn __annotations__ twice in a row is guaranteed to always throw an AttributeError Everything in the above paragraph also applies to class and module objects in Python 3 10 and newer In all versions of Python 3 you can set __annotations__ on a function object to None However subsequently accessing the annotations on that object using fn __annotations__ will lazy create an empty dictionary as per the first paragraph of this section This is not true of modules and classes in any Python version those objects permit setting __annotations__ to any Python value and will retain whatever value is set 
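A short sketch of the lazy-creation and None-setting behaviour described in the paragraphs above (CPython 3; fn is an arbitrary example function):

    def fn(a: int) -> None:
        pass

    del fn.__annotations__        # fine: the dict exists because fn defines annotations
    print(fn.__annotations__)     # {} -- access lazily creates a fresh empty dict

    del fn.__annotations__        # deletes the lazily created dict
    try:
        del fn.__annotations__    # second del in a row: guaranteed AttributeError
    except AttributeError:
        print("no annotations dict to delete")

    fn.__annotations__ = None     # allowed on function objects...
    print(fn.__annotations__)     # {} -- ...but access lazily creates a dict again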
If Python stringizes your annotations for you using from __future__ import annotations and you specify a string as an annotation the string will itself be quoted In effect the annotation is quoted twice For example from __future__ import annotations def foo a str pass print foo __annotations__ This prints a str This shouldn t really be considered a quirk it s mentioned here simply because it might be surprising
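To see the double-quoting effect described above, and the recommended way to undo stringizing, consider this sketch (Python 3.10 or newer for inspect.get_annotations; the function names are arbitrary):

    from __future__ import annotations

    import inspect


    def foo(a: "str"):
        pass

    # The annotation was already a string, so stringizing quotes it a second time.
    print(foo.__annotations__)    # {'a': "'str'"}


    def bar(x: int):
        pass

    # For ordinary stringized annotations, let inspect do the evaluation.
    print(inspect.get_annotations(bar, eval_str=True))   # {'x': <class 'int'>}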
Coroutine Objects

New in version 3.5. Coroutine objects are what functions declared with an async keyword return.

type PyCoroObject
    The C structure used for coroutine objects.

PyTypeObject PyCoro_Type
    The type object corresponding to coroutine objects.

int PyCoro_CheckExact(PyObject *ob)
    Return true if ob's type is PyCoro_Type. ob must not be NULL. This function always succeeds.

PyObject *PyCoro_New(PyFrameObject *frame, PyObject *name, PyObject *qualname)
    Return value: New reference. Create and return a new coroutine object based on the frame object, with __name__ and __qualname__ set to name and qualname. A reference to frame is stolen by this function. The frame argument must not be NULL.
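For orientation, at the Python level the objects these C functions describe are simply what calling an async def function returns; a small pure-Python sketch (not using the C API itself):

    import asyncio
    import inspect
    import types


    async def work():
        return 42

    coro = work()                                  # a coroutine object
    print(type(coro))                              # <class 'coroutine'>
    print(isinstance(coro, types.CoroutineType))   # True; the type PyCoro_Type exposes
    print(inspect.iscoroutine(coro))               # True
    coro.close()                                   # avoid a "never awaited" warning

    print(asyncio.run(work()))                     # 42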
Buffer Protocol Certain objects available in Python wrap access to an underlying memory array or buffer Such objects include the built in bytes and bytearray and some extension types like array array Third party libraries may define their own types for special purposes such as image processing or numeric analysis While each of these types have their own semantics they share the common characteristic of being backed by a possibly large memory buffer It is then desirable in some situations to access that buffer directly and without intermediate copying Python provides such a facility at the C level in the form of the buffer protocol This protocol has two sides on the producer side a type can export a buffer interface which allows objects of that type to expose information about their underlying buffer This interface is described in the section Buffer Object Structures on the consumer side several means are available to obtain a pointer to the raw underlying data of an object for example a method parameter Simple objects such as bytes and bytearray expose their underlying buffer in byte oriented form Other forms are possible for example the elements exposed by an array array can be multi byte values An example consumer of the buffer interface is the write method of file objects any object that can export a series of bytes through the buffer interface can be written to a file While write only needs read only access to the internal contents of the object passed to it other methods such as readinto need write access to the contents of their argument The buffer interface allows objects to selectively allow or reject exporting of read write and read only buffers There are two ways for a consumer of the buffer interface to acquire a buffer over a target object call PyObject_GetBuffer with the right parameters call PyArg_ParseTuple or one of its siblings with one of the y w or s format codes In both cases PyBuffer_Release must be called when the buffer isn t needed anymore Failure to do so could lead to various issues such as resource leaks Buffer structure Buffer structures or simply buffers are useful as a way to expose the binary data from another object to the Python programmer They can also be used as a zero copy slicing mechanism Using their ability to reference a block of memory it is possible to expose any data to the Python programmer quite easily The memory could be a large constant array in a C extension it could be a raw block of memory for manipulation before passing to an operating system library or it could be used to pass around structured data in its native in memory format Contrary to most data types exposed by the Python interpreter buffers are not PyObject pointers but rather simple C structures This allows them to be created and copied very simply When a generic wrapper around a buffer is needed a memoryview object can be created For short instructions how to write an exporting object see Buffer Object Structures For obtaining a buffer see PyObject_GetBuffer type Py_buffer Part of the Stable ABI including all members since version 3 11 void buf A pointer to the start of the logical structure described by the buffer fields This can be any location within the underlying physical memory block of the exporter For example with negative strides the value may point to the end of the memory block For contiguous arrays the value points to the beginning of the memory block PyObject obj A new reference to the exporting object The reference is owned by the consumer and automatically 
released i e reference count decremented and set to NULL by PyBuffer_Release The field is the equivalent of the return value of any standard C API function As a special case for temporary buffers that are wrapped by PyMemoryView_FromBuffer or PyBuffer_FillInfo this field is NULL In general exporting objects MUST NOT use this scheme Py_ssize_t len product shape itemsize For contiguous arrays this is the length of the underlying memory block For non contiguous arrays it is the length that the logical structure would have if it were copied to a c
ontiguous representation Accessing char buf 0 up to char buf len 1 is only valid if the buffer has been obtained by a request that guarantees contiguity In most cases such a request will be PyBUF_SIMPLE or PyBUF_WRITABLE int readonly An indicator of whether the buffer is read only This field is controlled by the PyBUF_WRITABLE flag Py_ssize_t itemsize Item size in bytes of a single element Same as the value of struct calcsize called on non NULL format values Important exception If a consumer requests a buffer without the PyBUF_FORMAT flag format will be set to NULL but itemsize still has the value for the original format If shape is present the equality product shape itemsize len still holds and the consumer can use itemsize to navigate the buffer If shape is NULL as a result of a PyBUF_SIMPLE or a PyBUF_WRITABLE request the consumer must disregard itemsize and assume itemsize 1 const char format A NUL terminated string in struct module style syntax describing the contents of a single item If this is NULL B unsigned bytes is assumed This field is controlled by the PyBUF_FORMAT flag int ndim The number of dimensions the memory represents as an n dimensional array If it is 0 buf points to a single item representing a scalar In this case shape strides and suboffsets MUST be NULL The maximum number of dimensions is given by PyBUF_MAX_NDIM Py_ssize_t shape An array of Py_ssize_t of length ndim indicating the shape of the memory as an n dimensional array Note that shape 0 shape ndim 1 itemsize MUST be equal to len Shape values are restricted to shape n 0 The case shape n 0 requires special attention See complex arrays for further information The shape array is read only for the consumer Py_ssize_t strides An array of Py_ssize_t of length ndim giving the number of bytes to skip to get to a new element in each dimension Stride values can be any integer For regular arrays strides are usually positive but a consumer MUST be able to handle the case strides n 0 See complex arrays for further information The strides array is read only for the consumer Py_ssize_t suboffsets An array of Py_ssize_t of length ndim If suboffsets n 0 the values stored along the nth dimension are pointers and the suboffset value dictates how many bytes to add to each pointer after de referencing A suboffset value that is negative indicates that no de referencing should occur striding in a contiguous memory block If all suboffsets are negative i e no de referencing is needed then this field must be NULL the default value This type of array representation is used by the Python Imaging Library PIL See complex arrays for further information how to access elements of such an array The suboffsets array is read only for the consumer void internal This is for use internally by the exporting object For example this might be re cast as an integer by the exporter and used to store flags about whether or not the shape strides and suboffsets arrays must be freed when the buffer is released The consumer MUST NOT alter this value Constants PyBUF_MAX_NDIM The maximum number of dimensions the memory represents Exporters MUST respect this limit consumers of multi dimensional buffers SHOULD be able to handle up to PyBUF_MAX_NDIM dimensions Currently set to 64 Buffer request types Buffers are usually obtained by sending a buffer request to an exporting object via PyObject_GetBuffer Since the complexity of the logical structure of the memory can vary drastically the consumer uses the flags argument to specify the exact buffer type it can handle 
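Before turning to the request flags below, note that the Py_buffer fields just described map directly onto attributes of memoryview, which consumes the buffer protocol from Python. A small sketch, assuming a platform where the 'i' format is 4 bytes:

    import array

    buf = array.array("i", range(12))     # an exporter with multi-byte items
    view = memoryview(buf)

    print(view.format, view.itemsize)             # i 4
    print(view.ndim, view.shape, view.strides)    # 1 (12,) (4,)
    print(view.readonly, view.contiguous)         # False True

    # Casting shows how shape and strides describe an n-dimensional layout;
    # a cast between two non-byte formats is not allowed, so go through a
    # byte view first.
    grid = view.cast("B").cast("i", shape=[3, 4])
    print(grid.shape, grid.strides)               # (3, 4) (16, 4)
    print(grid[2, 3])                             # 11

    # The Python-level counterpart of PyBuffer_Release.
    grid.release()
    view.release()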
All Py_buffer fields are unambiguously defined by the request type.

request-independent fields
The following fields are not influenced by flags and must always be filled in with the correct values: obj, buf, len, itemsize, ndim.

readonly, format

PyBUF_WRITABLE
Controls the readonly field. If set, the exporter MUST provide a writable buffer or else report failure. Otherwise, the exporter MAY provide either a read-only or writable buffer, but the choice MUST be consistent for all consumers.

PyBUF_FORMAT
Controls the format field. If set, this field MUST b
en
null
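To make the fields above concrete, here is a minimal consumer-side sketch in C. It assumes some Python object obj that exports a buffer (for example a bytes or array.array instance), and the flag combination PyBUF_ND | PyBUF_FORMAT is just one arbitrary choice of request, not the only valid one:

    #define PY_SSIZE_T_CLEAN
    #include <Python.h>
    #include <stdio.h>

    /* Print basic metadata of obj's buffer. Returns 0 on success, -1 on error. */
    static int
    inspect_buffer(PyObject *obj)
    {
        Py_buffer view;

        /* Request an n-dimensional buffer (shape filled in) with format info. */
        if (PyObject_GetBuffer(obj, &view, PyBUF_ND | PyBUF_FORMAT) < 0) {
            return -1;   /* the exporter has raised BufferError */
        }

        printf("len=%zd itemsize=%zd ndim=%d readonly=%d format=%s\n",
               view.len, view.itemsize, view.ndim, view.readonly,
               view.format != NULL ? view.format : "B");

        if (view.ndim > 0 && view.shape != NULL) {
            /* shape[0] * ... * shape[ndim-1] * itemsize == len */
            printf("shape[0]=%zd\n", view.shape[0]);
        }

        PyBuffer_Release(&view);   /* every successful GetBuffer needs exactly one Release */
        return 0;
    }

The function name inspect_buffer is made up for the example; the API calls (PyObject_GetBuffer, PyBuffer_Release) and the Py_buffer fields are the ones documented above.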
2,698
e filled in correctly. Otherwise this field MUST be NULL.

PyBUF_WRITABLE can be |'d to any of the flags in the next section. Since PyBUF_SIMPLE is defined as 0, PyBUF_WRITABLE can be used as a stand-alone flag to request a simple writable buffer.

PyBUF_FORMAT can be |'d to any of the flags except PyBUF_SIMPLE. The latter already implies format "B" (unsigned bytes).

shape, strides, suboffsets
The flags that control the logical structure of the memory are listed in decreasing order of complexity. Note that each flag contains all bits of the flags below it.

Request                shape    strides    suboffsets
PyBUF_INDIRECT         yes      yes        if needed
PyBUF_STRIDES          yes      yes        NULL
PyBUF_ND               yes      NULL       NULL
PyBUF_SIMPLE           NULL     NULL       NULL

contiguity requests
C or Fortran contiguity can be explicitly requested, with and without stride information. Without stride information, the buffer must be C-contiguous.

Request                shape    strides    suboffsets    contig
PyBUF_C_CONTIGUOUS     yes      yes        NULL          C
PyBUF_F_CONTIGUOUS     yes      yes        NULL          F
PyBUF_ANY_CONTIGUOUS   yes      yes        NULL          C or F
PyBUF_ND               yes      NULL       NULL          C

compound requests
All possible requests are fully defined by some combination of the flags in the previous section. For convenience the buffer protocol provides frequently used combinations as single flags. In the following table U stands for undefined contiguity. The consumer would have to call PyBuffer_IsContiguous() to determine contiguity.

Request            shape    strides    suboffsets    contig    readonly    format
PyBUF_FULL         yes      yes        if needed     U         0           yes
PyBUF_FULL_RO      yes      yes        if needed     U         1 or 0      yes
PyBUF_RECORDS      yes      yes        NULL          U         0           yes
PyBUF_RECORDS_RO   yes      yes        NULL          U         1 or 0      yes
PyBUF_STRIDED      yes      yes        NULL          U         0           NULL
PyBUF_STRIDED_RO   yes      yes        NULL          U         1 or 0      NULL
PyBUF_CONTIG       yes      NULL       NULL          C         0           NULL
PyBUF_CONTIG_RO    yes      NULL       NULL          C         1 or 0      NULL

Complex arrays

NumPy-style: shape and strides
The logical structure of NumPy-style arrays is defined by itemsize, ndim, shape and strides. If ndim == 0, the memory location pointed to by buf is interpreted as a scalar of size itemsize. In that case, both shape and strides are NULL. If strides is NULL, the array is interpreted as a standard n-dimensional C-array. Otherwise, the consumer must access an n-dimensional array as follows:

    ptr = (char *)buf + indices[0] * strides[0] + ... + indices[n-1] * strides[n-1];
    item = *((typeof(item) *)ptr);

As noted above, buf can point to any location within the actual memory block. An exporter can check the validity of a buffer with this function:

    def verify_structure(memlen, itemsize, ndim, shape, strides, offset):
        """Verify that the parameters represent a valid array within
           the bounds of the allocated memory:
               char *mem: start of the physical memory block
               memlen: length of the physical memory block
               offset: (char *)buf - mem
        """
        if offset % itemsize:
            return False
        if offset < 0 or offset + itemsize > memlen:
            return False
        if any(v % itemsize for v in strides):
            return False

        if ndim <= 0:
            return ndim == 0 and not shape and not strides
        if 0 in shape:
            return True

        imin = sum(strides[j] * (shape[j] - 1) for j in range(ndim)
                   if strides[j] <= 0)
        imax = sum(strides[j] * (shape[j] - 1) for j in range(ndim)
                   if strides[j] > 0)

        return 0 <= offset + imin and offset + imax + itemsize <= memlen

PIL-style: shape, strides and suboffsets
In addition to the regular items, PIL-style arrays can contain pointers that must be followed in order to get to the next element in a dimension. For example, the regular three-dimensional C-array char v[2][2][3] can also be viewed as an array of 2 pointers to 2 two-dimensional arrays: char (*v[2])[2][3]. In suboffsets representation, those two pointers can be embedded at the start of buf, pointing to two char x[2][3] arrays that can be located anywhere in memory.

Here is a function that
returns a pointer to the element in an N-D array pointed to by an N-dimensional index when there are both non-NULL strides and suboffsets:

    void *get_item_pointer(int ndim, void *buf, Py_ssize_t *strides,
                           Py_ssize_t *suboffsets, Py_ssize_t *indices) {
        char *pointer = (char *)buf;
        int i;
        for (i = 0; i < ndim; i++) {
            pointer += strides[i] * indices[i];
            if (suboffsets[i] >= 0) {
                pointer = *((char **)pointer) + suboffsets[i];
            }
        }
        return (void *)pointer;
    }

Buffer-related functions

int PyObject_CheckBuffer(PyObject *obj)
Part of the Stable ABI since version 3.11.
Return 1 if obj supports the buffer interface otherwise
en
null
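As a small illustration of the NumPy-style access formula quoted above, here is the two-dimensional case written out in C. It assumes the exporter's items are C doubles (format "d"), which a real consumer would have to confirm by checking view->format, and the helper name get2d is made up for the example:

    #define PY_SSIZE_T_CLEAN
    #include <Python.h>

    /* Fetch element [i][j] from a 2-D strided buffer with no suboffsets. */
    static double
    get2d(const Py_buffer *view, Py_ssize_t i, Py_ssize_t j)
    {
        /* ptr = (char *)buf + indices[0]*strides[0] + indices[1]*strides[1] */
        const char *ptr = (const char *)view->buf
                          + i * view->strides[0]
                          + j * view->strides[1];
        return *(const double *)ptr;
    }

A matching request would be PyObject_GetBuffer(obj, &view, PyBUF_STRIDES | PyBUF_FORMAT); per the table above, a PyBUF_STRIDES request leaves suboffsets NULL, so the formula applies directly without de-referencing.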
2,699
0. When 1 is returned, it doesn't guarantee that PyObject_GetBuffer() will succeed. This function always succeeds.

int PyObject_GetBuffer(PyObject *exporter, Py_buffer *view, int flags)
Part of the Stable ABI since version 3.11.
Send a request to exporter to fill in view as specified by flags. If the exporter cannot provide a buffer of the exact type, it MUST raise BufferError, set view->obj to NULL and return -1. On success, fill in view, set view->obj to a new reference to exporter and return 0. In the case of chained buffer providers that redirect requests to a single object, view->obj MAY refer to this object instead of exporter (see Buffer Object Structures). Successful calls to PyObject_GetBuffer() must be paired with calls to PyBuffer_Release(), similar to malloc() and free(). Thus, after the consumer is done with the buffer, PyBuffer_Release() must be called exactly once.

void PyBuffer_Release(Py_buffer *view)
Part of the Stable ABI since version 3.11.
Release the buffer view and release the strong reference (i.e. decrement the reference count) to the view's supporting object, view->obj. This function MUST be called when the buffer is no longer being used, otherwise reference leaks may occur. It is an error to call this function on a buffer that was not obtained via PyObject_GetBuffer().

Py_ssize_t PyBuffer_SizeFromFormat(const char *format)
Part of the Stable ABI since version 3.11.
Return the implied itemsize from format. On error, raise an exception and return -1. New in version 3.9.

int PyBuffer_IsContiguous(const Py_buffer *view, char order)
Part of the Stable ABI since version 3.11.
Return 1 if the memory defined by the view is C-style (order is 'C') or Fortran-style (order is 'F') contiguous or either one (order is 'A'). Return 0 otherwise. This function always succeeds.

void *PyBuffer_GetPointer(const Py_buffer *view, const Py_ssize_t *indices)
Part of the Stable ABI since version 3.11.
Get the memory area pointed to by the indices inside the given view. indices must point to an array of view->ndim indices.

int PyBuffer_FromContiguous(const Py_buffer *view, const void *buf, Py_ssize_t len, char fort)
Part of the Stable ABI since version 3.11.
Copy contiguous len bytes from buf to view. fort can be 'C' or 'F' (for C-style or Fortran-style ordering). 0 is returned on success, -1 on error.

int PyBuffer_ToContiguous(void *buf, const Py_buffer *src, Py_ssize_t len, char order)
Part of the Stable ABI since version 3.11.
Copy len bytes from src to its contiguous representation in buf. order can be 'C' or 'F' or 'A' (for C-style or Fortran-style ordering or either one). 0 is returned on success, -1 on error. This function fails if len != src->len.

int PyObject_CopyData(PyObject *dest, PyObject *src)
Part of the Stable ABI since version 3.11.
Copy data from src to dest buffer. Can convert between C-style and/or Fortran-style buffers. 0 is returned on success, -1 on error.

void PyBuffer_FillContiguousStrides(int ndims, Py_ssize_t *shape, Py_ssize_t *strides, int itemsize, char order)
Part of the Stable ABI since version 3.11.
Fill the strides array with byte-strides of a contiguous (C-style if order is 'C' or Fortran-style if order is 'F') array of the given shape with the given number of bytes per element.

int PyBuffer_FillInfo(Py_buffer *view, PyObject *exporter, void *buf, Py_ssize_t len, int readonly, int flags)
Part of the Stable ABI since version 3.11.
Handle buffer requests for an exporter that wants to expose buf of size len with writability set according to readonly. buf is interpreted as a sequence of unsigned bytes. The flags argument indicates the request type. This function always fills in view as specified by flags, unless buf has been
designated as read-only and PyBUF_WRITABLE is set in flags. On success, set view->obj to a new reference to exporter and return 0. Otherwise, raise BufferError, set view->obj to NULL and return -1. If this function is used as part of a getbufferproc, exporter MUST be set to the exporting object and flags must be passed unmodified. Otherwise, exporter MUST be NULL.
en
null
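To show how the PyBuffer_FillInfo() rules above are typically used, here is a sketch of a getbufferproc for a hypothetical extension type MyBlob that owns a plain byte block. The type name and its data/size fields are invented for this example; only the slot signature and the PyBuffer_FillInfo() call follow the documented API:

    #define PY_SSIZE_T_CLEAN
    #include <Python.h>

    typedef struct {
        PyObject_HEAD
        char *data;          /* hypothetical internal byte block */
        Py_ssize_t size;
    } MyBlob;

    /* bf_getbuffer slot: delegate the request to PyBuffer_FillInfo(). */
    static int
    myblob_getbuffer(PyObject *exporter, Py_buffer *view, int flags)
    {
        MyBlob *self = (MyBlob *)exporter;
        /* exporter is passed through as the exporting object and flags
           is passed unmodified; readonly=1 means a PyBUF_WRITABLE
           request will fail with BufferError. */
        return PyBuffer_FillInfo(view, exporter, self->data, self->size, 1, flags);
    }

    static PyBufferProcs myblob_as_buffer = {
        .bf_getbuffer = myblob_getbuffer,
        .bf_releasebuffer = NULL,   /* nothing extra to free on release */
    };

A consumer would then obtain the buffer with PyObject_GetBuffer() and give it back with PyBuffer_Release(), as described earlier.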