QA:
How do I set a specific action to happen when the "enter" key on my keyboard is pressed in python
<p>You can use <code>bind()</code> to assign a function to an <code>Entry</code> widget, which will be executed when you press <code>Enter</code>.</p> <p>Example:</p> <pre><code>import tkinter as tk

def on_return(event):
    print('keycode:', event.keycode)
    print('text in entry:', event.widget.get())

root = tk.Tk()
e = tk.Entry(root)
e.pack()
e.bind('&lt;Return&gt;', on_return)    # standard Enter
e.bind('&lt;KP_Enter&gt;', on_return)  # KeyPad Enter
root.mainloop()
</code></pre> <p>In your code, for a quick test, it could be:</p> <pre><code>self.entry = tk.Entry(self, width=37, bg="white")
self.entry.grid(row=0, column=0, columnspan=5)
self.entry.bind('&lt;Return&gt;', lambda event: print("ENTER:", event.widget.get()))
self.entry.bind('&lt;KP_Enter&gt;', lambda event: print("ENTER:", event.widget.get()))
</code></pre> <p>If you have a class method <code>def on_return(self, event):</code> then</p> <pre><code>self.entry.bind('&lt;Return&gt;', self.on_return)
self.entry.bind('&lt;KP_Enter&gt;', self.on_return)
</code></pre> <hr> <ol> <li><p><a href="http://effbot.org/tkinterbook/tkinter-events-and-bindings.htm" rel="nofollow">Events and Bindings</a></p></li> <li><p><a href="http://effbot.org/tkinterbook/tkinter-events-and-bindings.htm" rel="nofollow">Key names</a></p></li> </ol>
QA:
Optimal way to iterate through 3 files and generate a third file in python
<p>Since your IDs are unique, as you write, you could use dictionaries instead of lists for file1 and file3. The check to see whether an ID is present then reduces to a single O(1) dictionary lookup, so you save two nested scans over your long file. Please try the following approach:</p> <pre><code>file1 = {}  # data keyed by ID1
file3 = {}  # data keyed by ID2
file4 = ["ID1, ID2, DATA1, DATA2"]  # output rows, starting with the header

#Import file1
with open('file1.txt') as inputfile:  #file 1: around 9.7k
    for line in inputfile:
        temp = line.strip().split(' ')
        file1[temp[0]] = temp[1]  # store ID1 and associated data in dict

#Import file3
with open('file3.txt') as inputfile:  #file 3: around 1.1k
    for line in inputfile:
        temp = line.strip().split(' ')
        file3[temp[0]] = temp[1]  # store ID2 and associated data in dict

print len(file1)

#Iterate through file2 once; membership tests go straight against the dicts
with open('file2.txt') as inputfile:  #File 2: 2.1 million
    for line in inputfile:
        temp = line.strip().split(' ')
        if temp[0] in file1 and temp[1] in file3:
            file4.append([temp, file1[temp[0]], file3[temp[1]]])

print len(file4)
print file4[:10]

thefile = open('final.txt', 'w')
for item in file4:
    thefile.write("%s\n" % item)
thefile.close()
</code></pre> <p>Regards,</p>
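The core idea, constant-time membership tests against dictionaries instead of scanning lists, can be sketched on its own (the IDs and data below are made up for illustration):

```python
# Toy stand-ins for the parsed files; keys are IDs, values are data columns
file1 = {'a': '1', 'b': '2'}     # data keyed by ID1
file3 = {'x': '10', 'y': '20'}   # data keyed by ID2
rows = [('a', 'x'), ('a', 'z'), ('c', 'y')]  # (ID1, ID2) pairs from "file2"

# A row survives only if both of its IDs are found, each via an O(1) lookup
matched = [(id1, id2, file1[id1], file3[id2])
           for id1, id2 in rows
           if id1 in file1 and id2 in file3]
print(matched)  # [('a', 'x', '1', '10')]
```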
QA:
Convert full month and day name to short ones
<p>it should be <code>d</code>:</p> <pre><code>&gt;&gt;&gt; data_conv = datetime.strptime(str_date, "%Y %m %d") &gt;&gt;&gt; data_conv datetime.datetime(2015, 8, 1, 0, 0) </code></pre>
QA:
How should I log while using multiprocessing in Python?
<p>Since we can represent multiprocess logging as many publishers and one subscriber (listener), using <a href="http://zguide.zeromq.org/page:all" rel="nofollow">ZeroMQ</a> to implement PUB-SUB messaging is indeed an option.</p> <p>Moreover, the <a href="http://pyzmq.readthedocs.io" rel="nofollow">PyZMQ</a> module, the Python bindings for ZMQ, implements <a href="http://pyzmq.readthedocs.io/en/latest/api/zmq.log.handlers.html#zmq.log.handlers.PUBHandler" rel="nofollow">PUBHandler</a>, an object for publishing logging messages over a <code>zmq.PUB</code> socket.</p> <p>There's a <a href="https://pyfunc.blogspot.co.il/2013/08/centralized-logging-for-distributed.html" rel="nofollow">solution on the web</a> for centralized logging from a distributed application using PyZMQ and PUBHandler, which can easily be adapted for working locally with multiple publishing processes.</p> <pre><code>formatters = {
    logging.DEBUG: logging.Formatter("[%(name)s] %(message)s"),
    logging.INFO: logging.Formatter("[%(name)s] %(message)s"),
    logging.WARN: logging.Formatter("[%(name)s] %(message)s"),
    logging.ERROR: logging.Formatter("[%(name)s] %(message)s"),
    logging.CRITICAL: logging.Formatter("[%(name)s] %(message)s")
}

# This one will be used by publishing processes
class PUBLogger:
    def __init__(self, host, port=config.PUBSUB_LOGGER_PORT):
        self._logger = logging.getLogger(__name__)
        self._logger.setLevel(logging.DEBUG)
        self.ctx = zmq.Context()
        self.pub = self.ctx.socket(zmq.PUB)
        self.pub.connect('tcp://{0}:{1}'.format(socket.gethostbyname(host), port))
        self._handler = PUBHandler(self.pub)
        self._handler.formatters = formatters
        self._logger.addHandler(self._handler)

    @property
    def logger(self):
        return self._logger

# This one will be used by listener process
class SUBLogger:
    def __init__(self, ip, output_dir="", port=config.PUBSUB_LOGGER_PORT):
        self.output_dir = output_dir
        self._logger = logging.getLogger()
        self._logger.setLevel(logging.DEBUG)
        self.ctx = zmq.Context()
        self._sub = self.ctx.socket(zmq.SUB)
        self._sub.bind('tcp://*:{0}'.format(port))
        self._sub.setsockopt(zmq.SUBSCRIBE, "")
        handler = handlers.RotatingFileHandler(os.path.join(output_dir, "client_debug.log"), "w", 100 * 1024 * 1024, 10)
        handler.setLevel(logging.DEBUG)
        formatter = logging.Formatter("%(asctime)s;%(levelname)s - %(message)s")
        handler.setFormatter(formatter)
        self._logger.addHandler(handler)

    @property
    def sub(self):
        return self._sub

    @property
    def logger(self):
        return self._logger

# And that's the way we actually run things:

# Listener process will forever listen on SUB socket for incoming messages
def run_sub_logger(ip, event):
    sub_logger = SUBLogger(ip)
    while not event.is_set():
        try:
            topic, message = sub_logger.sub.recv_multipart(flags=zmq.NOBLOCK)
            log_msg = getattr(logging, topic.lower())
            log_msg(message)
        except zmq.ZMQError as zmq_error:
            if zmq_error.errno == zmq.EAGAIN:
                pass

# Publisher processes loggers should be initialized as follows:
class Publisher:
    def __init__(self, stop_event, proc_id):
        self.stop_event = stop_event
        self.proc_id = proc_id
        self._logger = pub_logger.PUBLogger('127.0.0.1').logger

    def run(self):
        self._logger.info("{0} - Sending message".format(self.proc_id))

def run_worker(event, proc_id):
    worker = Publisher(event, proc_id)
    worker.run()

# Starting subscriber process so we won't lose publisher's messages
sub_logger_process = Process(target=run_sub_logger, args=('127.0.0.1', stop_event,))
sub_logger_process.start()

#Starting publisher processes
for i in range(MAX_WORKERS_PER_CLIENT):
    processes.append(Process(target=run_worker, args=(stop_event, i,)))
for p in processes:
    p.start()
</code></pre>
QA:
Pyspark ml can't fit the model and always "AttributeError: 'PipelinedRDD' object has no attribute '_jdf'
<p>I guess you are using the tutorial for the latest spark version <a href="https://spark.apache.org/docs/latest/ml-classification-regression.html" rel="nofollow">(2.0.1)</a> with <code>pyspark.ml.classification import LogisticRegression</code> whereas you need some other version, e.g. <a href="https://spark.apache.org/docs/1.6.2/mllib-linear-methods.html" rel="nofollow">1.6.2</a> with <code>pyspark.mllib.classification import LogisticRegressionWithLBFGS, LogisticRegressionModel</code>. Note the different libraries.</p>
QA:
Fastest way to build a Matrix with a custom architecture
<p>Using <a href="https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html" rel="nofollow"><code>NumPy broadcasting</code></a>!</p> <pre><code>In [289]: a = np.array([1,2,3,2,1])

In [290]: np.minimum(a[:,None],a)
Out[290]: 
array([[1, 1, 1, 1, 1],
       [1, 2, 2, 2, 1],
       [1, 2, 3, 2, 1],
       [1, 2, 2, 2, 1],
       [1, 1, 1, 1, 1]])
</code></pre> <p>To build the range array, we can do something like this -</p> <pre><code>In [303]: N = 3

In [304]: np.concatenate((np.arange(1,N+1),np.arange(N-1,0,-1)))
Out[304]: array([1, 2, 3, 2, 1])
</code></pre> <p><strong>Adding some bias</strong></p> <p>Let's say we want to move the highest number/peak up or down. We need to create another <em>biasing</em> array and use the same strategy of <code>broadcasting</code>, like so -</p> <pre><code>In [394]: a = np.array([1,2,3,2,1])

In [395]: b = np.array([2,3,2,1,0])  # Biasing array

In [396]: np.minimum(b[:,None],a)
Out[396]: 
array([[1, 2, 2, 2, 1],
       [1, 2, 3, 2, 1],
       [1, 2, 2, 2, 1],
       [1, 1, 1, 1, 1],
       [0, 0, 0, 0, 0]])
</code></pre> <p>Similarly, to have the bias shifted left or right, modify <code>a</code>, like so -</p> <pre><code>In [397]: a = np.array([2,3,2,1,0])  # Biasing array

In [398]: b = np.array([1,2,3,2,1])

In [399]: np.minimum(b[:,None],a)
Out[399]: 
array([[1, 1, 1, 1, 0],
       [2, 2, 2, 1, 0],
       [2, 3, 2, 1, 0],
       [2, 2, 2, 1, 0],
       [1, 1, 1, 1, 0]])
</code></pre>
QA:
Changing Basemap projection causes beach balls / data to disappear (obspy)
<p>The problem, it seems, lies with the projection and the transformation between axis and data coordinates. Changing the width from 10 to 1000000 resolves the issue:</p> <pre><code>b = beach(focmecs[i], facecolor=beachball_color, xy=(x[i], y[i]), width=1000000,
          linewidth=1, alpha=0.85)
</code></pre>
QA:
Jupyter magic to handle notebook exceptions
<p>I don't think there is an out-of-the-box way to do that without using a <code>try..except</code> statement in your cells. AFAIK <a href="https://github.com/ipython/ipython/issues/1977" rel="nofollow">a four-year-old issue</a> mentions this, but it is still open.</p> <p>However, the <a href="https://github.com/ipython-contrib/jupyter_contrib_nbextensions/tree/master/src/jupyter_contrib_nbextensions/nbextensions/runtools" rel="nofollow">runtools extension</a> may do the trick.</p>
QA:
Convert full month and day name to short ones
<p>Just remove the leading zeroes:</p> <pre><code>' '.join([x.lstrip('0') for x in str_date.split()]) </code></pre>
QA:
Efficiently write SPSS data into Excel using Python
<p>Make sure lxml is installed and use openpyxl's write-only mode, assuming you can work on a row-by-row basis. If this is not directly possible then you'll need some kind of intermediate structure that can give you rows.</p>
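A minimal write-only sketch, assuming openpyxl is available (the filename and rows here are made up; lxml only speeds this mode up, it isn't required):

```python
from openpyxl import Workbook

# write_only=True streams rows to disk instead of keeping them in memory
wb = Workbook(write_only=True)
ws = wb.create_sheet()
for row in [("id", "value"), (1, "a"), (2, "b")]:  # any row-by-row source
    ws.append(row)  # rows can only be appended, never revisited
wb.save("example.xlsx")
```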
QA:
Error importing storage module.whitenoise.django
<p>You're using an unsupported version of Django. Django 1.5 has been out of mainstream support for three years, and out of extended support for two. See here: <a href="https://www.djangoproject.com/download/#supported-versions" rel="nofollow">https://www.djangoproject.com/download/#supported-versions</a></p> <p>The latest version of WhiteNoise is tested with Django 1.8 and up.</p>
QA:
Checking if two lists share at least one element
<p>The <code>any</code> function takes an iterable <a href="https://docs.python.org/2/library/functions.html#any" rel="nofollow">(see documentation here)</a>, so the answer should be <code>any([x in a for x in b])</code></p>
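For example (a generator expression works too, and lets `any()` stop at the first match):

```python
a = [1, 2, 3, 4]
b = [9, 2, 7]
c = [10, 11]

print(any(x in a for x in b))  # True  (2 is shared)
print(any(x in a for x in c))  # False (no common element)
```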
QA:
Creating a chart in python 3.5 using XlsxWriter where series values are based on a loop
<p>In almost all parts of the XlsxWriter API, anywhere there is a cell reference like <code>A1</code> or a range like <code>=Sheet1!$A$1</code>, you can use a tuple or a list of values. For charts you can use a list of values like this:</p> <pre><code># String interface. This is good for static ranges.
chart.add_series({
    'name':       '=Sheet1!$A$1',
    'categories': '=Sheet1!$B$1:$B$5',
    'values':     '=Sheet1!$C$1:$C$5',
})

# Or using a list of values instead of name/category/value formulas:
#     [sheetname, first_row, first_col, last_row, last_col]
# This is better for generating programmatically.
chart.add_series({
    'name':       ['Sheet1', 0, 0],
    'categories': ['Sheet1', 0, 1, 4, 1],
    'values':     ['Sheet1', 0, 2, 4, 2],
})
</code></pre> <p>Refer to the <a href="https://xlsxwriter.readthedocs.io/chart.html#chart-class" rel="nofollow">docs</a>.</p> <p>As for the error you are getting (after the edit): <code>Subscribers</code> isn't an XlsxWriter parameter; you probably mean <code>values</code>:</p> <pre><code>chart.add_series({
    'values': ['Sheet1', row_1 + 2, col_1, 2 + len(sb_subCount_list_clean), col_1]
})
</code></pre>
QA:
Convert word2vec bin file to text
<p>Just a quick update, as there is now an easier way.</p> <p>If you are using <code>word2vec</code> from <a href="https://github.com/dav/word2vec" rel="nofollow">https://github.com/dav/word2vec</a>, there is an additional option called <code>-binary</code>, which accepts <code>1</code> to generate a binary file or <code>0</code> to generate a text file. This example comes from <code>demo-word.sh</code> in the repo:</p> <p><code> time ./word2vec -train text8 -output vectors.bin -cbow 1 -size 200 -window 8 -negative 25 -hs 0 -sample 1e-4 -threads 20 -binary 0 -iter 15 </code></p>
QA:
Verbose_name and helptext lost when using django autocomplete light
<p>It's not specific to dal. You're re-instantiating a new widget class, so you need to copy <code>help_text</code> and <code>verbose_name</code> yourself.</p>
QA:
How do you check in which module a function is defined? Python
<p>I'm trying to implement this but it's not working. Use case: I am going through a lecturer's code trying to understand how it works. I need to trace which module a function is coming from so I can inspect the underlying code.</p> <p>Method 1:</p> <pre><code>train.__module__
AttributeError: 'list' object has no attribute '__module__'
</code></pre> <p>Method 2:</p> <pre><code>getmodule(train)
NameError: name 'getmodule' is not defined
</code></pre>
QA:
How to generate effectively a random number that only contains unique digits in Python?
<p>Shuffle is the way to go, as suggested by @jonrsharpe:</p> <pre><code>import random

def get_number(size):
    l = [str(i) for i in range(10)]
    while l[0] == '0':
        random.shuffle(l)
    return int("".join(l[:size]))
</code></pre> <p>Limits:</p> <ul> <li>if you ask for a number with more than 10 digits, you will only get 10 digits</li> <li>it can take a few shuffles while the first digit is a <code>0</code></li> </ul>
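A self-contained variant of the same idea, with the two properties we care about spelled out as assertions (no leading zero, all digits distinct):

```python
import random

def get_number(size):
    digits = [str(d) for d in range(10)]
    random.shuffle(digits)
    while digits[0] == '0':   # reshuffle until there is no leading zero
        random.shuffle(digits)
    return int("".join(digits[:size]))

n = get_number(4)
assert str(n)[0] != '0'       # no leading zero
assert len(set(str(n))) == 4  # all digits unique
```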
QA:
Anaconda OpenCV Arch Linux libselinux.so error
<p>Fixed by installing the libselinux package from the AUR. I now have</p> <pre><code>ImportError: /usr/lib/libpangoft2-1.0.so.0: undefined symbol: FcWeightToOpenType
</code></pre> <p>I will post if I solve it.</p> <p>EDIT: Solved as in issue <a href="https://github.com/ContinuumIO/anaconda-issues/issues/368" rel="nofollow">368</a>:</p> <pre><code>conda install -c asmeurer pango
</code></pre>
QA:
Issue adding request headers for Django Tests
<p>For anybody that finds themselves looking at this page with a similar issue, I was able to get this working with the following:</p> <pre><code>resp = self.client.post(resource_url, data=res_params, content_type='application/json', HTTP_SSL_CLIENT_CERT=self.client_cert) </code></pre>
QA:
Split and shift RGB channels in Python
<p>Per color plane, replace the pixel at <code>(X, Y)</code> by the pixel at <code>(X-1, Y+3)</code>, for example. (Of course your shifts will be different.)</p> <p>You can do that in-place, taking care to loop by increasing or decreasing coordinate to avoid overwriting.</p> <p>There is no need to worry about transparency.</p>
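A sketch of one plane shift with NumPy instead of an explicit loop. Note the assumption: `np.roll` wraps pixels around the edges, while the in-place loop described above lets you clamp or discard them instead.

```python
import numpy as np

img = np.zeros((5, 5, 3), dtype=np.uint8)
img[2, 2] = [255, 128, 64]  # one colored pixel in the middle

shifted = img.copy()
# move the red plane down 1 row and left 3 columns (wrapping at the edges)
shifted[..., 0] = np.roll(img[..., 0], shift=(1, -3), axis=(0, 1))
print(np.argwhere(shifted[..., 0] == 255))  # red is now at row 3, column 4
```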
QA:
Python function to create an array of particular column of csv file
<p>Try the following:</p> <pre><code>import csv

def facet(file, column):
    with open(file, newline='') as f_input:
        return [row[column] for row in csv.reader(f_input)][1:-1]

print(facet('input.csv', 1))
</code></pre> <p>This reads the file as <code>csv</code> and uses a list comprehension to select only one cell from each row and build a list. Finally <code>[1:-1]</code> is used to skip over the header and footer in your file.</p>
QA:
Deleting blank rows and rows with text in them at the same time
<p>Your for loop is just iterating over <em>filecsv</em>. It's not doing anything. You want something like:</p> <pre><code>non_blank_rows = []
for row in filecsv:
    if row:  # empty strings are falsy
        non_blank_rows.append(row)
</code></pre> <p>At this point you could just set <code>filecsv</code> to <code>non_blank_rows</code>. Note that the more Pythonic way to do all of this would probably be to use a list comprehension. I'd write the syntax for that, but if that's something you haven't heard about you might want to look it up on your own.</p> <pre><code>filecsv = [row for row in filecsv if row]
</code></pre>
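With some made-up rows, the comprehension behaves like this:

```python
filecsv = ['a,b,c', '', 'd,e,f', '', 'g,h,i']
filecsv = [row for row in filecsv if row]  # keep only truthy (non-empty) rows
print(filecsv)  # ['a,b,c', 'd,e,f', 'g,h,i']
```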
QA:
python , launch an external program with ulimit
<p>Check out the Python <a href="https://docs.python.org/2.7/library/resource.html?highlight=resource#module-resource" rel="nofollow">resource</a> module. It will let you set the size of core files, etc., just like the ulimit command. Specifically, you want to do something like</p> <pre><code>resource.setrlimit(resource.RLIMIT_CORE, &lt;size&gt;)
</code></pre> <p>before launching your target program.</p> <p>My guess at usage (I haven't done this myself) is:</p> <pre><code>import resource
import subprocess

resource.setrlimit(resource.RLIMIT_CORE,
                   (resource.RLIM_INFINITY, resource.RLIM_INFINITY))

command = 'command line to be launched'
subprocess.call(command)
# os.system(command) would work, but os.system has been deprecated
# in favor of the subprocess module
</code></pre> <hr>
QA:
Why did conda and pip just stop working? 'CompiledFFI' object has no attribute 'def_extern'
<p>I am answering the question this late because none of the above answers worked for me.</p> <p><strong>Cause</strong>: The probable cause was the <code>cffi</code> package version, i.e. 1.2.1 (in my case 1.3.0).</p> <p><strong>Solution</strong>: Upgrade the <code>cffi</code> package. But it isn't that simple, as most probably it will have broken your <code>pip</code> as well.</p> <p>First uninstall pip (for CentOS 7):</p> <pre><code>yum remove -y python-pip
</code></pre> <p>Once removed, delete the cffi package manually.</p> <p>To get the exact path:</p> <pre><code>$ python
&gt;&gt;&gt; import cffi
&gt;&gt;&gt; cffi.__path__
['/usr/lib64/python2.7/site-packages/cffi']
</code></pre> <p>Now go to the directory (<code>cd /usr/lib64/python2.7/site-packages</code>) to check what cffi files and folders are there:</p> <pre><code>ls | grep cffi
cffi
cffi-1.3.0-py2.7.egg-info
_cffi_backend.so
</code></pre> <p>Remove the cffi-related files and folders:</p> <pre><code>rm -rf cffi cffi-1.3.0-py2.7.egg-info/ _cffi_backend.so
</code></pre> <p>Re-install pip:</p> <pre><code>yum install -y python-pip
</code></pre> <p>Install the latest cffi package:</p> <pre><code>pip install cffi==1.8.2
</code></pre>
QA:
Align key-value pair in PyCharm
<p>You need a function:</p> <h2><code>Menu</code> → <code>Code</code> → <code>Align to Columns</code></h2> <hr> <h3>Configure hot-keys</h3> <ul> <li><code>Settings</code> → <code>Keymap</code></li> <li>Search <em>"Align to Columns"</em> and select</li> <li><code>Add Keyboard Shortcut</code>(<kbd>Enter</kbd>)</li> <li>Set your shortcuts (my <kbd>Ctrl</kbd>+<kbd>K</kbd>)</li> <li>Press <code>Apply</code> and <code>Ok</code></li> </ul>
QA:
Dictionary keys cannot be encoded as utf-8
<p>Note that the error message says "'ascii' codec can't <strong>decode</strong> ...". That's because when you call <code>encode</code> on something that is already a bytestring in Python 2, it tries to decode it to Unicode first using the default codec.</p> <p>I'm not sure why you thought that encoding again would be a good idea. Don't do it; the strings are already bytestrings, leave them as they are.</p>
QA:
NetworkX: how to properly create a dictionary of edge lengths?
<p>There is a lot going wrong in the last line, first and foremost that <code>G.edges()</code> is an iterator and not a valid dictionary key, and secondly that <code>G.edges()</code> really just yields the edges, not the positions of the nodes.</p> <p>This is what you want instead:</p> <pre><code>lengths = dict()
for source, target in G.edges():
    x1, y1 = posk[source]
    x2, y2 = posk[target]
    lengths[(source, target)] = math.sqrt((x2-x1)**2 + (y2-y1)**2)
</code></pre>
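The same loop runs fine with plain dictionaries, which makes the arithmetic easy to check by hand; `math.hypot` is an equivalent way to write the square root (positions and edges below are made up):

```python
import math

posk = {'a': (0.0, 0.0), 'b': (3.0, 4.0), 'c': (3.0, 0.0)}  # node -> (x, y)
edges = [('a', 'b'), ('a', 'c')]

lengths = dict()
for source, target in edges:
    x1, y1 = posk[source]
    x2, y2 = posk[target]
    lengths[(source, target)] = math.hypot(x2 - x1, y2 - y1)

print(lengths)  # {('a', 'b'): 5.0, ('a', 'c'): 3.0}
```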
QA:
Why do Python modules sometimes not import their sub-modules?
<p>I recently faced the same odd situation. I bet you've removed some third-party lib import. That removed lib contained <code>from logging import handlers</code> or <code>from logging import *</code> and provided you with <code>handlers</code>. And in the other script you had something like <code>import logging</code> and just used <code>logging.handlers</code>, thinking that's the way things work, as I did.</p>
QA:
Dictionary keys cannot be encoded as utf-8
<p>You get a <strong>decode</strong> error while you are trying to <strong>encode</strong> a string. This seems weird, but it is due to the implicit decode/encode mechanism of Python.</p> <p>Python allows you to encode strings to obtain bytes and decode bytes to obtain strings. This means that Python can encode only strings and decode only bytes.</p> <p>So when you try to encode bytes, Python (which does not know how to encode bytes) tries to implicitly decode the bytes to obtain a string to encode, and it uses its default encoding to do that. This is why you get a decode error while trying to encode something: the implicit decoding.</p> <p>That means that you are probably trying to encode something which is already encoded.</p>
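Python 3 keeps the two directions strictly apart, which makes the rule easy to see; the implicit ASCII decode described above is a Python 2 behavior:

```python
s = "héllo"                    # str (text)
b = s.encode("utf-8")          # str -> bytes: encoding
assert b.decode("utf-8") == s  # bytes -> str: decoding

# In Python 3, bytes objects have no .encode() at all; in Python 2,
# calling it triggered the implicit ascii decode that produced the
# confusing UnicodeDecodeError.
assert not hasattr(b, "encode")
```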
QA:
webpage access while using scrapy
<p>First of all, this website looks like a JavaScript-heavy one. Scrapy itself only downloads HTML from servers but does not interpret JavaScript statements.</p> <p>Second, the URL fragment (i.e. everything including and after <code>#body</code>) is not sent to the server, and only <code>http://www.city-data.com/advanced/search.php</code> is fetched. (Scrapy does the same as your browser; you can confirm that with your browser's dev tools network tab.)</p> <p>So for Scrapy, the requests to</p> <pre><code>http://www.city-data.com/advanced/search.php#body?fips=0&amp;csize=a&amp;sc=2&amp;sd=0&amp;states=ALL&amp;near=&amp;nam_crit1=6914&amp;b6914=MIN&amp;e6914=MAX&amp;i6914=1&amp;nam_crit2=6819&amp;b6819=15500&amp;e6819=MAX&amp;i6819=1&amp;ps=20&amp;p=0
</code></pre> <p>and</p> <pre><code>http://www.city-data.com/advanced/search.php#body?fips=0&amp;csize=a&amp;sc=2&amp;sd=0&amp;states=ALL&amp;near=&amp;nam_crit1=6914&amp;b6914=MIN&amp;e6914=MAX&amp;i6914=1&amp;nam_crit2=6819&amp;b6819=15500&amp;e6819=MAX&amp;i6819=1&amp;ps=20&amp;p=1
</code></pre> <p>are the same resource, so it's only fetched once. They differ only in their URL fragments.</p> <p>What you need is a JavaScript renderer. You could use Selenium or something like <a href="http://splash.readthedocs.io/" rel="nofollow">Splash</a>. I recommend using the <a href="https://github.com/scrapy-plugins/scrapy-splash" rel="nofollow">scrapy-splash plugin</a>, which includes a duplicate filter that takes into account URL fragments.</p>
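You can verify that both URLs name the same resource by stripping the fragment with the standard library (shortened URLs here for readability):

```python
from urllib.parse import urldefrag

url_p0 = "http://www.city-data.com/advanced/search.php#body?fips=0&ps=20&p=0"
url_p1 = "http://www.city-data.com/advanced/search.php#body?fips=0&ps=20&p=1"

base0, frag0 = urldefrag(url_p0)  # (url without fragment, fragment)
base1, frag1 = urldefrag(url_p1)

assert base0 == base1 == "http://www.city-data.com/advanced/search.php"
assert frag0 != frag1  # only the part the server never sees differs
```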
QA:
Python: Plot a sparse matrix
<p><code>plt.matshow</code> also turned out to be a feasible solution. I could also plot a heatmap with colorbars and all that.</p>
QA:
How Do I Display a Pandas Dataframe as a table in a simple Kivy App?
<p>I found something which could help you:</p> <p>Kivy file</p> <pre><code>GraphDraw:

&lt;GraphDraw&gt;:
    BoxLayout:
        Button:
            text: "Hello World"
            on_press: root.graph()
</code></pre> <p>Logic</p> <pre><code>#!/usr/bin/env python
# -*- encoding: utf-8

import pandas as pd
from kivy.app import App
from kivy.uix.boxlayout import BoxLayout
import dfgui

class Visualisation(App):
    pass

class GraphDraw(BoxLayout):
    def graph(self):
        xls = pd.read_excel('filepath')
        dfgui.show(xls)

if __name__ == '__main__':
    Visualisation().run()
</code></pre> <p>So you use dfgui, which can render a pandas DataFrame as a table, instead of building the table in Kivy. See the dfgui project: <a href="https://github.com/bluenote10/PandasDataFrameGUI" rel="nofollow">https://github.com/bluenote10/PandasDataFrameGUI</a></p> <p>I hope it can help :)</p>
QA:
Tkinter how to update second combobox automatically according this combobox
<p>Actually you don't need the global variable <code>ListB</code>. And you need to add <code>comboboxB.config(values=...)</code> at the end of <code>CallHotel()</code> to set the options of <code>comboboxB</code>:</p> <pre><code>def CallHotel(*args):
    sel = hotel.get()
    if sel == ListA[0]:
        ListB = ListB1
    elif sel == ListA[1]:
        ListB = ListB2
    elif sel == ListA[2]:
        ListB = ListB3
    comboboxB.config(values=ListB)
</code></pre> <p>And change the initial values of <code>comboboxB</code> to <code>ListB1</code> directly:</p> <pre><code>comboboxB=ttk.Combobox(win0,textvariable=stp,values=ListB1,width=15)
</code></pre>
QA:
How to catch Cntrl + C in a shell script which runs a Python script
<p>The other answers involve modifying the Python code itself, which is less than ideal since I don't want it to contain code related only to the testing. Instead, I found the following bash script useful:</p> <pre><code>#!/bin/bash
rm archived_sensor_data.json
python rethinkdb_monitor_batch.py ; gedit archived_sensor_data.json
</code></pre> <p>This will run the <code>gedit archived_sensor_data.json</code> command after <code>python rethinkdb_monitor_batch.py</code> is finished, regardless of whether it exited successfully.</p>
QA:
Dump database table or work remotely for analysis?
<p>Directly on the database with SQL is perfectly fine for any analysis <em>when you already know what you're looking for</em>.</p> <p>When you don't know what you're looking for, and you want to do e.g. pattern recognition, the effort to dump and process in another tool is probably worth it.</p> <p>Also consider the possibility to connect Pandas directly to your Oracle database (which allows you to skip dumping data), <a href="http://dominicgiles.com/blog/files/bbffdb638932620b3182980fbd0e3d5b-146.html" rel="nofollow">see here for an example</a>. </p>
QA:
Python script to count pixel values fails on a less-than/greater-than comparison
<p>There are a couple of issues with your approach.</p> <p>When you do</p> <pre><code>(y &lt; 85.00).sum()
</code></pre> <p>you're actually summing over the truth condition, so you end up counting where the condition evaluates to <code>True</code>. You can easily see it with a quick example:</p> <pre><code>In [6]: x = np.arange(10)

In [7]: x
Out[7]: array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])

In [8]: x &lt; 4
Out[8]: array([ True,  True,  True,  True, False, False, False, False, False, False], dtype=bool)

In [9]: (x &lt; 4).sum()
Out[9]: 4
</code></pre> <p>Now if you want to get the indices where the condition is satisfied, you can use <code>np.where</code></p> <pre><code>In [10]: np.where(x &lt; 4)
Out[10]: (array([0, 1, 2, 3]),)
</code></pre> <p>And use them for your sum</p> <pre><code>In [11]: x[np.where(x &lt; 4)].sum()
Out[11]: 6
</code></pre> <p>The other issue comes from using the compact notation for the range, which is easily solved by splitting it in two with <code>&amp;</code> or <code>np.logical_and()</code></p> <pre><code>In [12]: x[np.where((2 &lt; x) &amp; (x &lt; 6))].sum()
Out[12]: 12
</code></pre>
QA:
Why do I get garbage in the file when using "w+" instead of "a+" when the filehandles are stored in a Dict?
<p>You are <em>reopening</em> the file objects each time in the loop, even if already present in the dictionary. The expression:</p> <pre><code>mailLists.setdefault(filenameA,open(filenameA,"w+"))
</code></pre> <p>opens the file <em>first</em>, as both arguments to <code>setdefault()</code> need to be available. Using <code>open(..., 'w+')</code> <em>truncates the file</em>.</p> <p>This is fine the first time, when the filename is not yet present, but all subsequent times, you just truncated a file for which there is still an open file handle. That already-existing open file handle in the dictionary has a file writing position, and continues to write from that position. Since the file has just been truncated, this leads to the behaviour you observed: corrupted file contents. You'll see multiple entries written as data could still be buffered; only data already flushed to disk is lost.</p> <p>See this short demo (executed on OSX; different operating systems and filesystems can behave differently):</p> <pre><code>&gt;&gt;&gt; with open('/tmp/testfile.txt', 'w') as f:
...     f.write('The quick brown fox')
...     f.flush()  # flush the buffer to disk
...     open('/tmp/testfile.txt', 'w')  # second open call, truncates
...     f.write(' jumps over the lazy fox')
...
&lt;open file '/tmp/testfile.txt', mode 'w' at 0x10079b150&gt;
&gt;&gt;&gt; with open('/tmp/testfile.txt', 'r') as f:
...     f.read()
...
'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00 jumps over the lazy fox'
</code></pre> <p>Opening the files in <code>a</code> append mode doesn't truncate, which is why that change made things work.</p> <p>Don't keep opening files, only do so when the file is <em>actually missing</em>. 
You'll have to use an <code>if</code> statement for that:</p> <pre><code>if filenameA not in mailLists:
    mailLists[filenameA] = open(filenameA, 'w+')
</code></pre> <p>I'm not sure why you are using <code>+</code> in the filemode, however, since you don't appear to be reading from any of the files.</p> <p>For <code>filenameAll</code>, that variable name never changes and you don't need to open that file in the loop at all. Move that <em>outside</em> of the loop and open it just once.</p>
QA:
Machine Learning Prediction - why and when beginning with PCA?
<p>I assume you mean PCA-based dimensionality reduction. Low-variance data often, but not always, has little predictive power, so removing low-variance dimensions of your dataset can be an effective way of improving predictor running time. In cases where it raises the signal to noise ratio, it can even improve prediction quality. But this is just a heuristic and is not universally applicable.</p>
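A sketch of the reduction itself using plain NumPy (PCA via SVD on centered, synthetic data), just to make the mechanics concrete:

```python
import numpy as np

rng = np.random.RandomState(0)
X = rng.randn(200, 5)
X[:, 0] *= 10.0  # inflate the variance along one direction

Xc = X - X.mean(axis=0)  # PCA operates on centered data
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)

k = 2  # keep the two highest-variance components
X_reduced = Xc @ Vt[:k].T
print(X_reduced.shape)  # (200, 2)
```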
QA:
Python backtest using percentage based commission
<p>Solved it by creating a function with parameters for quantity and price. Thus it was easy returning a percentage based on the transaction cost as follows:</p> <pre><code>def my_comm(q, p):
    return abs(q)*p*0.0025
</code></pre>
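For example, buying or selling 100 shares at $50 each costs the same 0.25% of the traded notional, thanks to `abs()`:

```python
def my_comm(q, p):
    # 0.25% of |quantity| * price, independent of trade direction
    return abs(q) * p * 0.0025

print(my_comm(100, 50.0))   # 12.5
print(my_comm(-100, 50.0))  # 12.5 (a sale is charged the same)
```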
QA:
Jython : SyntaxError: Lexical error at line 29, column 32. Encountered: "$" (36), after : ""
<p>Change this line</p> <pre><code>classpath = ["classpath" , ${ORACLE_JDBC_DRIVER_PATH}/ojdbc6.jar ]
</code></pre> <p>to this:</p> <pre><code>classpath = ["classpath" , "${ORACLE_JDBC_DRIVER_PATH}/ojdbc6.jar" ]
</code></pre> <p>or better yet, just delete that line. Anyway, <code>classpath</code> is declared again later with the same value.</p>
QA:
SCP Through python is not transferring file
<p>If you don't mind trying other approaches, it's worth using <code>SCPClient</code> from the <code>scp</code> module (<code>from scp import SCPClient</code>).</p>
QA:
merge two plot to one graph
<p>I think you can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.plot.bar.html" rel="nofollow"><code>DataFrame.plot.bar</code></a> after transposing the <code>DataFrame</code> with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.T.html" rel="nofollow"><code>T</code></a>:</p> <pre><code>import pandas as pd
import matplotlib.pyplot as plt

df = pd.DataFrame({
    'X2': {'female': 75, 'male': 65},
    'X46': {'female': 350, 'male': 150},
    'X1': {'female': 500, 'male': 100},
    'X3': {'female': 30, 'male': 75}})
print (df)
         X1  X2  X3  X46
female  500  75  30  350
male    100  65  75  150

df.T.plot.bar()
plt.show()
</code></pre> <p><a href="https://i.stack.imgur.com/0XCLw.png" rel="nofollow"><img src="https://i.stack.imgur.com/0XCLw.png" alt="graph"></a></p>
QA:
How to connect 2 ultrasonic sensors concurrently using python multiprocessing process?
<p>You do not say what you mean by "does not work", so I am taking a few guesses here.</p> <p>The obvious failure here would be:</p> <blockquote> <p>TypeError: A() takes exactly 1 argument (0 given)</p> </blockquote> <p>since the functions <code>A</code>, <code>B</code> and <code>C</code> all take an argument <code>name</code>, and you do not provide it in <code>Process(target=A)</code>. It works if you just remove the parameter from the functions, since you are not even using it.</p> <p>You can also provide the argument in the call like this:</p> <pre><code>p = Process(target=A, args=('ultra_a',))
</code></pre> <p>Another possibility is an indentation error; at least in your code paste you have one extra space on each line until <code>def B</code>.</p>
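A minimal runnable sketch of passing `name` via `args=`; the trailing comma matters because `args` must be a tuple. The `Queue` here is only an assumption for observing the child's output, the sensor code itself is omitted:

```python
from multiprocessing import Process, Queue

def A(name, q):
    q.put("started " + name)

if __name__ == "__main__":
    q = Queue()
    p = Process(target=A, args=("ultra_a", q))  # args is a tuple
    p.start()
    print(q.get(timeout=5))  # started ultra_a
    p.join()
```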
QA:
Jython : SyntaxError: Lexical error at line 29, column 32. Encountered: "$" (36), after : ""
<p>Your problem is in this line:</p> <pre><code>classpath = ["classpath" , ${ORACLE_JDBC_DRIVER_PATH}/ojdbc6.jar ] </code></pre> <p>Instead, do something like</p> <pre><code>opath = os.getenv("ORACLE_JDBC_DRIVER_PATH") classpath = ["classpath", "{}/ojdbc6.jar".format(opath)] </code></pre> <p>"${ORACLE_JDBC_DRIVER_PATH}" is shell syntax, not Python.</p>
QA:
PyInstaller 2.1 IOError
<p>I fixed it like this (hard-coding the texture path to the <code>visual_common</code> package):</p> <pre><code>#texturePath="visual\\" --&gt; texturePath="visual_common\\"
#texturePath = os.path.split( __file__ )[0] + "/" --&gt; texturePath="C:\Python27\Lib\site-packages\\visual_common\\"
</code></pre> <p>Regards</p>
QA:
Replace 0 with blank in dataframe Python pandas
<p>I think you need to add <code>^</code> to match the start of the string and <code>$</code> to match the end, and escape the dot so it matches a literal <code>.</code>:</p> <pre><code>data_usage_df['Data Volume (MB)'] = data_usage_df['Data Volume (MB)'].str.replace('^0\.0$', '')
</code></pre> <p>Sample:</p> <pre><code>data_usage_df = pd.DataFrame({'Data Volume (MB)':[3016.2, 0.235, 1.4001, 0, 4.00]})
print (data_usage_df)
   Data Volume (MB)
0         3016.2000
1            0.2350
2            1.4001
3            0.0000
4            4.0000

data_usage_df['Data Volume (MB)'] = data_usage_df['Data Volume (MB)'].astype(str)
data_usage_df['Data Volume (MB)'] = data_usage_df['Data Volume (MB)'].str.replace('^0\.0$', '')
print (data_usage_df)
  Data Volume (MB)
0           3016.2
1            0.235
2           1.4001
3                 
4              4.0
</code></pre> <p>Another solution is converting the column with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.to_numeric.html" rel="nofollow"><code>to_numeric</code></a> and assigning an empty string wherever the value is <code>0</code>:</p> <pre><code>data_usage_df['Data Volume (MB)'] = data_usage_df['Data Volume (MB)'].astype(str)
data_usage_df.loc[pd.to_numeric(data_usage_df['Data Volume (MB)'], errors='coerce') == 0, ['Data Volume (MB)']] = ''
print (data_usage_df)
  Data Volume (MB)
0           3016.2
1            0.235
2           1.4001
3                 
4              4.0
</code></pre>
QA:
How to get latest unique entries from sqlite db with the counter of entries via Django ORM
<p>You can use Django annotate() and value() together: <a href="https://docs.djangoproject.com/el/1.10/topics/db/aggregation/#values" rel="nofollow">link</a>. </p> <blockquote> <p>when a values() clause is used to constrain the columns that are returned in the result set, the method for evaluating annotations is slightly different. Instead of returning an annotated result for each result in the original QuerySet, the original results are grouped according to the unique combinations of the fields specified in the values() clause. An annotation is then provided for each unique group; the annotation is computed over all members of the group.</p> </blockquote> <p>Your ORM query should looks like this:</p> <pre><code>queryset = Model.objects.values("Lang").annotate( max_datetime=Max("DateTime"), count=Count("ID") ).values( "ID", "max_datetime", "Lang", "Details", "count" ) </code></pre>
QA:
Request returns partial page
<p>You can't. The question is based on a misunderstanding of what requests does; it loads the content of the page only. Endless scrolling is powered by Javascript, which requests won't do anything with.</p> <p>You'd need some browser automation tools like Selenium to do this; or find out what Ajax endpoint the scrolling JS is using and load that directly.</p>
QA:
How can I open a file, replace a string in that file and write it to a new file using Python
<pre><code>import sys

input_file = open('hostnames.txt', 'r')
template = open('hosttemplate.txt', 'r')
tdata = template.readlines()
count_lines = 0

for hostname in input_file:
    system = hostname.strip()
    computername = open(system + ".test", 'a')
    for line in tdata:
        computername.write(line.replace('hostname', system))
    computername.close()
    print hostname
    count_lines += 1

print 'number of lines:', count_lines
</code></pre> <p>Before the loop, we open hostnames.txt and read the contents of hosttemplate.txt into the list tdata.</p> <p>As we step through hostnames.txt ("for hostname in input_file"), we</p> <ul> <li>strip the newline off hostname before assigning it to system,</li> <li>open the file <code>&lt;system&gt;.test</code> with handle computername,</li> <li>iterate through tdata, writing each line to computername after replacing the string 'hostname' with the actual name of the host (system),</li> <li>close the file opened through handle computername,</li> <li>print the hostname and increment count_lines.</li> </ul>
QA:
Starting virtualenv script rc local
<p>So, part of the solution seems to be setting some variables instead of accessing them directly. At least this worked for me. Thanks Samer for giving us a big tip :)</p> <pre><code>HOME=/home/backend                   # the project path
docker start container
. $HOME/venv/bin/activate            # activates the virtualenv of the project
/usr/bin/env python $HOME/run.py &amp;   # runs run.py through the virtualenv's python, in the background
exit 0
</code></pre>
QA:
Deleting blank rows and rows with text in them at the same time
<p>You can do the following to write a new csv file after skipping particular rows:</p> <pre><code>import csv

# read csv file
csvfile = open('/home/17082016_ExpiryReport.csv', 'rb')
filecsv = csv.reader(csvfile, delimiter=",")

# the following advances the reader past the first 10 lines (blank lines included)
[next(filecsv) for i in range(10)]

# write the remaining rows to a new csv file
with open('/home/17082016_ExpiryReport_output.csv', 'wb') as csvfile:
    csvwriter = csv.writer(csvfile, delimiter=',')
    [csvwriter.writerow(i) for i in filecsv]
</code></pre>
QA:
Jupyter magic to handle notebook exceptions
<p>Such a magic command does not exist, but you can write one yourself.</p> <pre><code>from IPython.core.magic import register_cell_magic

@register_cell_magic
def handle(line, cell):
    try:
        exec(cell)
    except Exception as e:
        send_mail_to_myself(e)
</code></pre> <p>It is not possible to apply the magic automatically to the whole notebook; you have to add it to each cell where you need this feature.</p> <pre><code>%%handle
some_code()
raise ValueError('this exception will be caught by the magic command')
</code></pre>
QA:
Assign class' staticmethods in static variable in python
<p>Change the order so that the dictionary is specified after the methods have been defined. Also don't use <code>MyClass</code> when doing so.</p> <pre><code>class MyClass(object): @staticmethod def _make_basic_query(): #some code here pass @staticmethod def _make_query_three_aggregations(): #some code here pass @staticmethod def _make_query_three_transformations(aggs): #some code here pass QUERIES_AGGS = { 'query3': { "query": _make_basic_query, 'aggregations': _make_query_three_aggregations, 'aggregation_transformations': _make_query_three_transformations } } </code></pre> <p>This works because when in the body of the class declaration, you can reference methods without needing the class type. What you are referencing has to have already been declared though.</p>
QA:
Index data from not related models in Django-Haystack Solr
<p>It's easy.</p> <pre><code>colors = PointingModel1.objects.filter(color='blue') for color in colors: name = color.main_model.name # now you can put `name` to a list or something else </code></pre>
QA:
How to detect fast moving soccer ball with OpenCV, Python, and Raspberry Pi?
<p>May I suggest you read this post?</p> <p><a href="http://www.pyimagesearch.com/2015/09/14/ball-tracking-with-opencv/" rel="nofollow">http://www.pyimagesearch.com/2015/09/14/ball-tracking-with-opencv/</a> </p> <p>There are also a few comments below indicating how to detect multiple balls rather than one.</p>
QA:
Getting HTTP POST Error : {"reason":null,"error":"Request JSON object for insert cannot be null."}
<p>First of all, in Service-Now you should always use SSL, so no plain http! The second error I see in your script is how you pass your payload: you need to serialize your dictionary into a JSON string. And you don't need to authenticate twice; basic HTTP authentication is handled by <code>requests.post</code>, so there is no need for it in the header.</p> <p>With this script it should work:</p> <pre><code>import json
import requests

url = 'https://instancename.service-now.com/change_request.do?JSONv2'
user = 'admin'
pwd = 'admin'

# Set proper headers
headers = {"Content-Type":"application/json","Accept":"application/json"}
payload = {
    'sysparm_action': 'insert',
    'short_description': 'test_jsonv2',
    'priority': '1'
}

# Do the HTTP request
response = requests.post(url, auth=(user, pwd), headers=headers, data=json.dumps(payload))

# Check for HTTP codes other than 200
if response.status_code != 200:
    print('Status:', response.status_code, 'Headers:', response.headers, 'Error Response:', response.json())
    exit()

# Decode the JSON response into a dictionary and use the data
data = response.json()
print(data)
</code></pre>
QA:
GeoTIFF issue with opening in PIL
<p>It would've been really nice if you had put a link to the figure that you are using (if it's free). I downloaded a sample geotiff image from <a href="http://eoimages.gsfc.nasa.gov/images/imagerecords/57000/57752/land_shallow_topo_2048.tif" rel="nofollow">here</a>, and I used <a href="https://pypi.python.org/pypi/GDAL/" rel="nofollow">gdal</a> to open it.</p> <p>The shape of the <code>geotiff.ReadAsArray()</code> is <code>(3, 1024, 2048)</code> so I convert it to <code>(1024, 2048, 3)</code> (RGB) and open it with <code>imshow</code>:</p> <pre><code>import gdal
gdal.UseExceptions()
import matplotlib.pyplot as plt
import numpy as np

geotiff = gdal.Open('/home/vafanda/Downloads/test.tif')
geotiff_arr = geotiff.ReadAsArray()
print np.shape(geotiff_arr)
geotiff_shifted = np.rollaxis(geotiff_arr, 0, 3)
print "Dimension converted to: "
print np.shape(geotiff_shifted)
plt.imshow(geotiff_shifted)
plt.show()
</code></pre> <p>result:</p> <p><a href="https://i.stack.imgur.com/JApVV.png" rel="nofollow"><img src="https://i.stack.imgur.com/JApVV.png" alt="enter image description here"></a></p>
QA:
How to generate effectively a random number that only contains unique digits in Python?
<p>Just shuffle all ten digits and take the first few; swap if the leading digit is zero, so the number doesn't lose a digit:</p> <pre><code>import random
import string

x = list(string.digits)
random.shuffle(x)
if x[0] == '0':  # avoid a leading zero
    x[0], x[1] = x[1], x[0]
print int("".join(x[:4]))
</code></pre>
QA:
Request returns partial page
<blockquote> <p>The only thing you need to know is how the server side works.</p> </blockquote> <p>Usually, an <code>onScroll</code> or <code>onClick</code> (or any other) event triggers an <code>AJAX request</code> to the server, and the client-side JavaScript renders the returned data (JSON/XML...). So the only thing you have to do is repeat those AJAX requests to the same server to get the data.</p> <p>For example, the actions in the browser will look like this:</p> <pre><code>1. Enter url in browser
   &gt; [HTTP GET REQUEST] http://url/to/website
2. Scroll on the page
   &gt; [AJAX GET] http://url/to/website/1
   &gt; [javascript on front-end will process the data]
3. Then, keep scrolling on the page
   &gt; [AJAX GET] http://url/to/website/2
   &gt; [javascript on front-end will process the data]
4. ... (and so on)
</code></pre> <hr> <p><strong>Q. How to use Python to get that data?</strong></p> <p>A. One simple way is to use <code>browser &gt; inspect &gt; network_tab</code> to find which AJAX requests are sent when you scroll the page, and then repeat those AJAX requests with the corresponding headers from Python.</p>
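As a sketch, the paginated URLs can be rebuilt and replayed from Python. The endpoint pattern below is the placeholder from the steps above; the actual `requests` call is left commented, since the real endpoint and headers must be taken from the browser's network tab:

```python
# build the same sequence of AJAX URLs the scroll handler would request
endpoint = "http://url/to/website/{}"   # placeholder pattern from the steps above
pages_to_fetch = [endpoint.format(i) for i in range(1, 4)]
print(pages_to_fetch)

# replaying them would then be, e.g.:
# import requests
# data = [requests.get(u, headers={...}).json() for u in pages_to_fetch]
```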
QA:
sqlalchemy.exc.AmbiguousForeignKeysError after Inheritance
<p>Just use <code>backref</code> and use <code>Integer</code> on both <code>EmployeeID</code> and <code>OldemployeeID</code>, otherwise you will get another error. Note that the two backrefs must have distinct names:</p> <pre><code>class Sales(Employee):
    __tablename__ = 'Sales'
    EmployeeID = Column(Integer, ForeignKey('Employee.EmployeeId'), primary_key=True)
    OldemployeeID = Column(Integer, ForeignKey('Employee.EmployeeId'))

    employee = relationship('Employee', foreign_keys=[EmployeeID], backref='sales')
    old_employee = relationship("Employee", foreign_keys=[OldemployeeID], backref='old_sales')
</code></pre>
QA:
Adding paths to arguments in popen
<p>Here's something to try:</p> <pre><code>import subprocess
import shlex

p = subprocess.Popen(shlex.split("/usr/bin/myprogram --path /home/myuser"))
</code></pre> <p>Mind the forward slashes ("/"). From what I read, Python doesn't like backslashes ("\") even when running on Windows (I've never used it on Windows myself).</p>
QA:
smtplib - sending but with no subject when indented in a while loop
<p>You need to mention the <code>Subject:</code> header at the very beginning of your message, followed by a blank line before the body.</p> <p>e.g. in your case, your message should be like</p> <pre><code>message = """Subject: %s
From: %s
To: %s

%s
""" % (SUBJECT, FROM, TO, BODY)
</code></pre> <p>I also faced the same problem, and it worked for me.</p>
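Alternatively, the standard-library `email` package builds the headers for you, so the header block and the blank line before the body can't get out of order (the addresses and subject below are placeholders):

```python
from email.mime.text import MIMEText

msg = MIMEText("Hello from the script")   # the body
msg['Subject'] = 'Test subject'
msg['From'] = 'sender@example.com'
msg['To'] = 'recipient@example.com'

message = msg.as_string()   # headers, blank line, then body
print(message)
```

The resulting string can then be passed straight to `server.sendmail(FROM, TO, message)` as usual.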
QA:
opencv videocapture hangs/freeze when camera disconnected instead of returning "False"
<p>I'm experiencing the same issue with the webcam of my MacBook, when the lid is closed (i.e. cam not available). After a quick look at the doc, the <code>VideoCapture</code> constructor doesn't seem to have any <code>timeout</code> parameter. So the solution has to involve forcibly interrupting this call from Python.</p> <p>After yet more reading about Python's <code>asyncio</code> and then <code>threading</code> in general, I couldn't come up with any clue on how to interrupt a method which is busy outside the interpreter. So I resorted to creating a daemon per <code>VideoCapture</code> call, and letting them die on their own.</p> <pre><code>import queue
import threading

import cv2


class VideoCaptureDaemon(threading.Thread):

    def __init__(self, video, result_queue):
        super().__init__()
        self.daemon = True
        self.video = video
        self.result_queue = result_queue

    def run(self):
        self.result_queue.put(cv2.VideoCapture(self.video))


def get_video_capture(video, timeout=5):
    res_queue = queue.Queue()
    VideoCaptureDaemon(video, res_queue).start()
    try:
        return res_queue.get(block=True, timeout=timeout)
    except queue.Empty:
        print('cv2.VideoCapture: could not grab input ({}). Timeout occurred after {:.2f}s'.format(video, timeout))
</code></pre> <p>If anyone has something better, I'm all ears.</p>
QA:
Numpy conversion of column values in to row values
<p>Here's an approach using <a href="https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html" rel="nofollow"><code>broadcasting</code></a> to get those sliding windowed elements and then just some stacking to get <code>A</code> -</p> <pre><code>col2 = matrix[:,2] nrows = col2.size-nr+1 out = np.zeros((nr-1+nrows,nr)) col2_2D = np.take(col2,np.arange(nrows)[:,None] + np.arange(nr)) out[nr-1:] = col2_2D </code></pre> <p>Here's an efficient alternative using <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.ndarray.strides.html" rel="nofollow"><code>NumPy strides</code></a> to get <code>col2_2D</code> -</p> <pre><code>n = col2.strides[0] col2_2D = np.lib.stride_tricks.as_strided(col2, shape=(nrows,nr), strides=(n,n)) </code></pre> <p>It would be even better to initialize an output array of zeros of the size as <code>total</code> and then assign values into it with <code>col2_2D</code> and finally with input array <code>matrix</code>. </p> <p><strong>Runtime test</strong></p> <p>Approaches as functions -</p> <pre><code>def org_app1(matrix,nr): A = np.zeros((nr-1,nr)) for x in range( matrix.shape[0]-nr+1): newrow = (np.transpose( matrix[x:x+nr,2:3] )) A = np.vstack([A , newrow]) return A def vect_app1(matrix,nr): col2 = matrix[:,2] nrows = col2.size-nr+1 out = np.zeros((nr-1+nrows,nr)) col2_2D = np.take(col2,np.arange(nrows)[:,None] + np.arange(nr)) out[nr-1:] = col2_2D return out def vect_app2(matrix,nr): col2 = matrix[:,2] nrows = col2.size-nr+1 out = np.zeros((nr-1+nrows,nr)) n = col2.strides[0] col2_2D = np.lib.stride_tricks.as_strided(col2, \ shape=(nrows,nr), strides=(n,n)) out[nr-1:] = col2_2D return out </code></pre> <p>Timings and verification -</p> <pre><code>In [18]: # Setup input array and params ...: matrix = np.arange(1800).reshape((60, 30)) ...: nr=3 ...: In [19]: np.allclose(org_app1(matrix,nr),vect_app1(matrix,nr)) Out[19]: True In [20]: np.allclose(org_app1(matrix,nr),vect_app2(matrix,nr)) Out[20]: True In [21]: %timeit 
org_app1(matrix,nr) 1000 loops, best of 3: 646 µs per loop In [22]: %timeit vect_app1(matrix,nr) 10000 loops, best of 3: 20.6 µs per loop In [23]: %timeit vect_app2(matrix,nr) 10000 loops, best of 3: 21.5 µs per loop In [28]: # Setup input array and params ...: matrix = np.arange(7200).reshape((120, 60)) ...: nr=30 ...: In [29]: %timeit org_app1(matrix,nr) 1000 loops, best of 3: 1.19 ms per loop In [30]: %timeit vect_app1(matrix,nr) 10000 loops, best of 3: 45 µs per loop In [31]: %timeit vect_app2(matrix,nr) 10000 loops, best of 3: 27.2 µs per loop </code></pre>
QA:
Adding paths to arguments in popen
<p>The problem comes from your string literal, <code>'\usr\bin\myprogram'</code>. According to <a href="https://docs.python.org/2.0/ref/strings.html" rel="nofollow">escaping rules</a>, <code>\b</code> is replaced by <code>\x08</code>, so your executable is not found.</p> <p>Put an <code>r</code> in front of your string literals (i.e. <code>r'\usr\bin\myprogram'</code>), or use <code>\\</code> to represent a backslash (i.e. <code>'\\usr\\bin\\myprogram'</code>).</p>
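A quick demonstration of the escaping problem, using a made-up path (`\b` is the escape that bites here):

```python
s = 'C:\bin\myprogram'    # '\b' silently becomes the backspace character \x08
r = r'C:\bin\myprogram'   # raw string literal keeps both backslashes intact

print(repr(s))   # note the \x08 in place of \b
print(repr(r))
```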
QA:
How to add and subtract in python
<pre><code>print("Welcome to fizz buzz")
num1 = int(input("Choose a number from 1 to 100"))
if num is
</code></pre>
QA:
Why object primary keys increment between tests in Django?
<p>This could also work for you:</p> <pre><code>class MyClassTest(TestCase):

    @classmethod
    def setUpTestData(cls):
        cls.x = Someclass()
        cls.x.save()

    def test_first_test(self):
        # Here, Someclass.objects.all()[0].pk -&gt; returns 1
        pass

    def test_second_test(self):
        # Here, Someclass.objects.all()[0].pk -&gt; returns 1 !!! (good !)
        # Here, self.x.pk -&gt; returns 1 !!! (good !)
        pass
</code></pre>
QA:
Why does my plot not disappear on exit in ipython mode?
<p>Try running <code>%matplotlib</code> before plotting, so that IPython integrates with the GUI event loop showing the plots.</p> <p>This shouldn't be necessary once matplotlib 2.0 is released, because it has some code to detect when it's running inside IPython.</p>
QA:
How Can I Detect Gaps and Consecutive Periods In A Time Series In Pandas
<p>Here's something to get started with:</p> <pre><code>df = pd.DataFrame(np.ones(5), columns=['ones'])
df.index = pd.DatetimeIndex(['2016-09-19 10:23:03',
                             '2016-08-03 10:53:39',
                             '2016-09-05 11:11:30',
                             '2016-09-05 11:10:46',
                             '2016-09-06 10:53:39'])

daily_rng = pd.date_range('2016-08-03 00:00:00', periods=48, freq='D')
daily_rng = daily_rng.append(df.index)
daily_rng = sorted(daily_rng)
df = df.reindex(daily_rng).fillna(0)
df = df.astype(int)
df['ones'] = df['ones'].cumsum()
</code></pre> <p>The cumsum() creates a grouping variable on 'ones', partitioning your data at the points you provided. If you print df to, say, a spreadsheet it will make sense:</p> <pre><code>print df.head()

                     ones
2016-08-03 00:00:00     0
2016-08-03 10:53:39     1
2016-08-04 00:00:00     1
2016-08-05 00:00:00     1
2016-08-06 00:00:00     1

print df.tail()

                     ones
2016-09-16 00:00:00     4
2016-09-17 00:00:00     4
2016-09-18 00:00:00     4
2016-09-19 00:00:00     4
2016-09-19 10:23:03     5
</code></pre> <p>now to complete:</p> <pre><code>df = df.reset_index()
df = df.groupby(['ones']).aggregate({'ones':{'gaps':'count'},'index':{'first_time':'min'}})
df.columns = df.columns.droplevel()
</code></pre> <p>which gives:</p> <pre><code>              first_time  gaps
ones
0    2016-08-03 00:00:00     1
1    2016-08-03 10:53:39    34
2    2016-09-05 11:10:46     1
3    2016-09-05 11:11:30     2
4    2016-09-06 10:53:39    14
5    2016-09-19 10:23:03     1
</code></pre>
QA:
NetworkX: how to add weights to an existing G.edges()?
<p>It fails because <code>edges</code> is a method.</p> <p>The <a href="https://networkx.github.io/documentation/development/reference/generated/networkx.Graph.get_edge_data.html" rel="nofollow">documentation</a> says to do this like:</p> <pre><code>G[source][target]['weight'] = weight
</code></pre> <p>For example, the following works for me:</p> <pre><code>import networkx as nx

G = nx.Graph()
G.add_path([0, 1, 2, 3])
G[0][1]['weight'] = 3

&gt;&gt;&gt; G.get_edge_data(0, 1)
{'weight': 3}
</code></pre> <p>However, your type of code indeed fails:</p> <pre><code>G.edges[0][1]['weight'] = 3
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
&lt;ipython-input-14-97b10ad2279a&gt; in &lt;module&gt;()
----&gt; 1 G.edges[0][1]['weight'] = 3

TypeError: 'instancemethod' object has no attribute '__getitem__'
</code></pre> <hr> <p>In your case, I'd suggest</p> <pre><code>for e in G.edges():
    G[e[0]][e[1]]['weight'] = weights[e]
</code></pre>
QA:
How to create xml in Python dynamically?
<p>You are taking the number of elements from the user but you are not using it. Use a loop and get the element details from the user within the loop, as shown below:</p> <pre><code>import xml.etree.ElementTree as ET

try:
    no_of_rows = int(input("Enter the number of Elements for the XML file: - \n"))
    root = input("Enter the root Element: \n")
    root_element = ET.Element(root)
    for _ in range(no_of_rows):
        tag = input("Enter Element: - \n")
        value = input("Enter Data: - \n")
        ET.SubElement(root_element, tag).text = value
    tree = ET.ElementTree(root_element)
    tree.write("filename.xml")
    print("Xml file Created..!!")
except ValueError:
    print("Value Error")
except:
    print("Exception occurred")
</code></pre> <p>I hope this is what you want to achieve.</p>
QA:
Pandas Create Column with Groupby and Sum with additional condition
<p>You can add <a href="http://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing" rel="nofollow"><code>boolean indexing</code></a>:</p> <pre><code>mask = df['D'] == 1 df1 = df[mask].join(df[mask].groupby(['A'])['C'].sum(), on='A', rsuffix='_inward') print (df1) A B C D C_inward 0 4 9 9 1 13 2 7 7 7 1 14 4 2 3 3 1 3 5 3 3 3 1 3 8 1 1 1 1 3 9 1 1 1 1 3 13 4 4 4 1 13 14 5 5 5 1 5 15 1 1 1 1 3 18 7 7 7 1 14 </code></pre>
QA:
How to get latest unique entries from sqlite db with the counter of entries via Django ORM
<p>As you want entries that are distinct by "Lang" and latest by "DateTime", the query below will help you. Note that <code>distinct(*fields)</code> is supported on PostgreSQL only, and <code>order_by()</code> must start with the field passed to <code>distinct()</code>:</p> <pre><code>queryset = Model.objects.order_by("Lang", "-DateTime").distinct("Lang")
</code></pre>
QA:
Where to find the source code for pandas DataFrame __add__
<p>I think you need to check <a href="https://github.com/pandas-dev/pandas/blob/master/pandas/core/ops.py#L166" rel="nofollow">this</a>:</p> <pre><code>def add_special_arithmetic_methods(cls, arith_method=None,
                                   comp_method=None, bool_method=None,
                                   use_numexpr=True, force=False, select=None,
                                   exclude=None, have_divmod=False):
    ...
    ...
</code></pre>
QA:
How to create xml in Python dynamically?
<p>If you want to create XML you can just do this:</p> <pre><code>from lxml import etree

try:
    root_text = raw_input("Enter the root Element: \n")
    root = etree.Element(root_text)
    child_tag = raw_input("Enter the child tag Element: \n")
    child_text = raw_input("Enter the child text Element: \n")
    child = etree.Element(child_tag)
    child.text = child_text
    root.append(child)
    with open('file.xml', 'w') as f:
        f.write(etree.tostring(root))
except ValueError:
    print("Error occurred")
</code></pre> <p>Or, if you want a dynamic number of children, just use a for loop.</p>
QA:
Using alphabet as counter in a loop
<p>This is pretty easy:</p> <pre><code>import collections print collections.Counter("señor") </code></pre> <p>This prints:</p> <pre><code>Counter({'s': 1, 'r': 1, 'e': 1, '\xa4': 1, 'o': 1}) </code></pre>
QA:
How to stream from/to stdin/stdout to/from S3 using boto3
<p>To stream an object to <code>STDOUT</code>, you can open an S3 object as a stream:</p> <pre><code>s3 = boto3.client('s3')
s3.download_fileobj('your_bucket', 'your_key', sys.stdout.buffer)  # binary stream needed in Python 3
</code></pre> <p>To upload from <code>STDIN</code> it's almost the same, but depending on what you want to do you can make your life easier (nb: <code>python 3</code> here):</p> <pre><code>some_stuff = input('type something: ')

s3.put_object(**{
    'Bucket': 'your_bucket',
    'Key': 'your_key',
    'Body': some_stuff
})
</code></pre>
QA:
pairwise comparisons within a dataset
<p>You can use <code>itertools</code> to generate your pairwise comparisons. If you just want the items which are shared between two lists you can use a <code>set</code> intersection. Using your example:</p> <pre class="lang-python prettyprint-override"><code>import itertools

a = [2, 3, 35, 63, 64, 298, 523, 624, 625, 626, 823, 824]
b = [2, 752, 753, 808, 843]
c = [2, 752, 753, 843]
d = [2, 752, 753, 808, 843]
e = [3, 36, 37, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112]

data = [a, b, c, d, e]

def number_same(a, b):
    # Find the items which are the same
    return set(a).intersection(set(b))

for i in itertools.permutations(range(len(data)), r=2):
    print "Indexes: ", i, len(number_same(data[i[0]], data[i[1]]))

&gt;&gt;&gt;Indexes:  (0, 1) 1
Indexes:  (0, 2) 1
Indexes:  (0, 3) 1
Indexes:  (0, 4) 1
Indexes:  (1, 0) 1
Indexes:  (1, 2) 4
Indexes:  (1, 3) 5
... etc
</code></pre> <p>This will give the number of items which are shared between two lists; you could maybe use this information to decide which two lists are the best pair...</p>
QA:
append to returned Google App Engine ndb query list
<p>The problem is that the representation you want is not valid json.</p> <p>You could get something similar by doing:</p> <pre><code>results, cursor, more = query.fetch_page(20)  # 20 = page size
dict_representation = {
    "results": results,
    "cursor": cursor.urlsafe() if cursor else None,
    "more": more,
}
json_representation = json.dumps(dict_representation)
</code></pre> <p>And the result will be something like the following:</p> <pre><code>{
   "results":[
      {
         "modifiedOn":null,
         "id":"6051711999279104",
         "stars":0,
         "tags":[],
         "softDeleted":false,
         "date":"2016-03-20 00:00:00",
         "content":"hello",
         "createdOn":"2016-05-21 13:06:24"
      },
      {
         "modifiedOn":null,
         "id":"4925812092436480",
         "stars":0,
         "tags":[],
         "softDeleted":false,
         "date":"2016-03-20 00:00:00",
         "createdOn":"2016-05-21 13:06:16"
      }
   ],
   "cursor":"dhiugdgdwidfwiflfsduifewrr3rdufif",
   "more":false
}
</code></pre>
QA:
Numba jitclass and inheritance
<p>Currently (as of 0.28.1), Numba does not support subclassing/inheriting from a <code>jitclass</code>. It's not stated in the documentation but the error message is pretty explicit. I'm guessing this capability will be added sometime in the future, but right now it's a limitation.</p>
QA:
Using alphabet as counter in a loop
<p>It is not actually a dupe as you want to filter to only count characters from a certain set, you can use a <a href="https://docs.python.org/3/library/collections.html#collections.Counter" rel="nofollow">Counter</a> dict to do the counting and a set of allowed characters to filter by:</p> <pre><code>word = ["h", "e", "l", "l", "o"] from collections import Counter from string import ascii_lowercase # create a set of the characters you want to count. allowed = set(ascii_lowercase + 'ñ') # use a Counter dict to get the counts, only counting chars that are in the allowed set. counts = Counter(s for s in word if s in allowed) </code></pre> <p>If you actually just want the total sum:</p> <pre><code>total = sum(s in allowed for s in word) </code></pre> <p>Or using a functional approach:</p> <pre><code>total = sum(1 for _ in filter(allowed.__contains__, word)) </code></pre> <p>Using <em>filter</em> is going to be a bit faster for any approach:</p> <pre><code>In [31]: from collections import Counter ...: from string import ascii_lowercase, digits ...: from random import choice ...: In [32]: chars = [choice(digits+ascii_lowercase+'ñ') for _ in range(100000)] In [33]: timeit Counter(s for s in chars if s in allowed) 100 loops, best of 3: 36.8 ms per loop In [34]: timeit Counter(filter(allowed.__contains__, chars)) 10 loops, best of 3: 31.7 ms per loop In [35]: timeit sum(s in allowed for s in chars) 10 loops, best of 3: 35.4 ms per loop In [36]: timeit sum(1 for _ in filter(allowed.__contains__, chars)) 100 loops, best of 3: 32 ms per loop </code></pre> <p>If you want a case insensitive match, use <em>ascii_letters</em> and add <code>'ñÑ'</code>:</p> <pre><code>from string import ascii_letters allowed = set(ascii_letters+ 'ñÑ') </code></pre>
QA:
Cannot anchor to an item that isn't a parent or sibling QML QtQuick
<pre><code>SplitView { anchors.fill: parent orientation: Qt.Horizontal Rectangle { color: "#272822" id: cameraRectangle width: window.width / 2 Item { //more stuff } Item { // The parent of this Item is 'cameraRectangle' // This Item will be the parent of the Rectangle // therefore the Rectangle can't anchor to the 'cameraRectangle' // anymore. As you are not doing anything with this Item // (so far?) anway, you can just delete it, and everything // will be fine. Rectangle { // The parent of this Rectangle is the Item that wraps it // and not the 'cameraRectangle'. anchors.top: cameraRectangle.bottom } } } Rectangle { //Rectangle info. } } </code></pre> <p>As the error message stated: you can't anchor to 'ancestors' other than your parent. You can also anchor to siblings. But neither to their children, nor to yours, and not to any of your 'grand-parents', uncles or aunts ;-)</p>
QA:
Text to Zip to base64 and vice versa, in Python
<p>You convert the base 64 back using the <a href="https://docs.python.org/3.5/library/base64.html" rel="nofollow">base64 module</a>, and then the zip using the <a href="https://docs.python.org/3.5/library/zipfile.html" rel="nofollow">zipfile module</a>.</p> <p>Assuming <code>file.txt</code> was zipped into <code>file.zip</code>, and then the archive was converted to base 64 as <code>encoded.txt</code>:</p> <pre><code>import zipfile
import base64

base64.decode(open('encoded.txt', 'rb'), open('file.zip', 'wb'))
zipfile.ZipFile('file.zip').extractall()
plaintext = open('file.txt').read()
</code></pre>
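The same round trip can also be done entirely in memory with `io.BytesIO`, which makes both directions explicit (the file name and text below are made up):

```python
import base64
import io
import zipfile

# text -> zip (in memory) -> base64 string
text = "hello world"
buf = io.BytesIO()
with zipfile.ZipFile(buf, 'w', zipfile.ZIP_DEFLATED) as zf:
    zf.writestr('file.txt', text)
b64 = base64.b64encode(buf.getvalue()).decode('ascii')

# base64 string -> zip bytes -> original text
raw = base64.b64decode(b64)
with zipfile.ZipFile(io.BytesIO(raw)) as zf:
    restored = zf.read('file.txt').decode('utf-8')

print(restored)  # hello world
```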
QA:
Getting Data in Wrong Sequence Order from DynamoDB
<p>A plain dictionary does not keep track of insertion order, so when you iterate over it you do not get the elements in the order they were added.</p> <pre><code># So we have to create an ordered dict; you can use the collections package as follows:
from collections import OrderedDict

data_dict = OrderedDict()
</code></pre> <p>Now your dictionary will maintain the order of its items, in the sequence in which they were added, and you can iterate over it in that order too.</p>
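A quick check of the behaviour (keys and values below are made up):

```python
from collections import OrderedDict

d = OrderedDict()
for key in ('banana', 'apple', 'cherry'):
    d[key] = len(key)

print(list(d.keys()))   # ['banana', 'apple', 'cherry'] - insertion order kept
```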
QA:
Jupyter: can't create new notebook?
<p>I had the same problem. It was because I installed IPython with <code>sudo apt-get -y install ipython ipython-notebook</code> instead of <code>sudo pip install ipython</code>. Therefore, uninstall all the ipython packages:</p> <pre><code>sudo apt-get --purge remove ipython
sudo pip uninstall ipython
</code></pre> <p>and then install it with pip.</p>
QA:
Separate pdf to pages using pdfminer
<p>This should work.</p> <pre><code>from pdfminer.pdfinterp import PDFResourceManager, PDFPageInterpreter
from pdfminer.converter import TextConverter
from pdfminer.layout import LAParams
from pdfminer.pdfpage import PDFPage
from cStringIO import StringIO
import os

def set_interpreter():
    rsrcmgr = PDFResourceManager()
    retstr = StringIO()
    codec = 'utf-8'
    laparams = LAParams()
    device = TextConverter(rsrcmgr, retstr, codec=codec, laparams=laparams)
    interpreter = PDFPageInterpreter(rsrcmgr, device)
    return {
        'retstr': retstr,
        'device': device,
        'interpreter': interpreter
    }

def convert_pdf_to_txt(path):
    fp = file(path, 'rb')
    si = set_interpreter()
    retstr = si['retstr']
    device = si['device']
    interpreter = si['interpreter']
    password = ""
    maxpages = 0
    caching = True
    pagenos = set()
    page_counter = 0
    for pageNumber, page in enumerate(PDFPage.get_pages(fp, pagenos, maxpages=maxpages, password=password, caching=caching, check_extractable=True)):
        interpreter.process_page(page)
        fpp = file('pagetext_%d.txt' % page_counter, 'w+')
        fpp.write(retstr.getvalue())
        fpp.close()
        page_counter += 1
        # reset the interpreter so each output file holds a single page
        si = set_interpreter()
        retstr = si['retstr']
        device = si['device']
        interpreter = si['interpreter']
    fp.close()
    device.close()
    retstr.close()

convert_pdf_to_txt(os.path.dirname(os.path.realpath('filename.pdf')) + "/filename.pdf")
</code></pre>
QA:
Python, scipy, curve_fit, bounds: How can I constrain a param by two intervals?
<p>There is a crude way to do this: have your function return very large values whenever the parameter is outside the multiple bounds. For example:</p> <pre><code>def sigmoid_func(x, parameters):
    if parameter outside multiple bounds:
        return 1.0E10 * len(x)  # very large number
    else:
        return sigmoid value
</code></pre> <p>This has the effect of yielding very large errors if the parameter is outside of your multiple bounds. If you have a single bound range of [lower, upper] you should not use this method, since recent versions of scipy already support the more common single-bound-range type of problem.</p>
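A runnable sketch of this penalty idea, using a made-up logistic model and hypothetical allowed intervals [0.1, 1.0] and [2.0, 5.0] for the steepness parameter <code>k</code> (both the model and the intervals are assumptions for illustration):

```python
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(x, L, k, x0):
    # hypothetical multiple bounds for k: [0.1, 1.0] or [2.0, 5.0]
    if not (0.1 <= k <= 1.0 or 2.0 <= k <= 5.0):
        return np.full_like(x, 1.0e10)  # huge residuals outside the bounds
    return L / (1.0 + np.exp(-k * (x - x0)))

# synthetic, noise-free data generated with L=2, k=3, x0=0.5
x = np.linspace(-5.0, 5.0, 50)
y = sigmoid(x, 2.0, 3.0, 0.5)

# start inside a valid interval so the first residuals are finite
popt, _ = curve_fit(sigmoid, x, y, p0=[1.0, 2.5, 0.0])
print(popt)
```

The fitted <code>k</code> should stay inside one of the allowed intervals because any step outside produces enormous residuals.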
QA:
Dynamic class creation - Python
<p>The standard library function <a href="https://docs.python.org/3/library/collections.html#collections.namedtuple" rel="nofollow"><code>namedtuple</code></a> creates and returns a class. Internally it uses <code>exec</code>. It may be an inspiration for what you need.</p> <p>Source code: <a href="https://github.com/python/cpython/blob/master/Lib/collections/__init__.py#L356" rel="nofollow">https://github.com/python/cpython/blob/master/Lib/collections/__init__.py#L356</a></p>
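For a lighter-weight illustration of classes created at runtime, both <code>namedtuple</code> and the three-argument form of the built-in <code>type()</code> work (the class and attribute names below are made up):

```python
from collections import namedtuple

# namedtuple builds a class from a name and a list of field names
Point = namedtuple('Point', ['x', 'y'])
p = Point(3, 4)
print(p.x, p.y)

# type(name, bases, namespace) is the underlying mechanism for class creation
Greeter = type('Greeter', (object,), {'greet': lambda self: 'hello'})
print(Greeter().greet())
```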
QA:
How to get the message sender in yowsup cli echo?
<p>Actually, this is a problem that occurs when Python 3.5 is used, so I switched to Python 2.7 and that solved it.</p>
QA:
Iterating over every two elements in a list
<p>Here is an <code>alt_elem</code> generator function you can use in your for loop.</p> <pre><code>def alt_elem(lst, index=2):
    for i, elem in enumerate(lst, start=1):
        if not i % index:
            yield tuple(lst[i-index:i])

a = range(10)
for index in [2, 3, 4]:
    print("With index: {0}".format(index))
    for i in alt_elem(a, index):
        print(i)
</code></pre> <p>Output:</p> <pre><code>With index: 2
(0, 1)
(2, 3)
(4, 5)
(6, 7)
(8, 9)
With index: 3
(0, 1, 2)
(3, 4, 5)
(6, 7, 8)
With index: 4
(0, 1, 2, 3)
(4, 5, 6, 7)
</code></pre> <p>Note: the solution above might not be the most efficient, given the slicing done inside the function.</p>
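An alternative sketch for fixed-size grouping uses <code>zip()</code> over copies of a single iterator; like <code>alt_elem</code> above, it silently drops a trailing incomplete group:

```python
def chunks(seq, n=2):
    it = iter(seq)
    # zip pulls n items at a time from the same iterator
    return zip(*[it] * n)

print(list(chunks(range(10), 2)))
print(list(chunks(range(10), 3)))
```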
QA:
python pandas dataframe merge or join dataframe
<p>IIUC you need <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.merge.html" rel="nofollow"><code>merge</code></a> with parameter <code>how='left'</code> if you need a left join on the columns <code>Year</code> and <code>Location</code>:</p> <pre><code>print (df1)
   Year  Age Location  column1  column2
0  2013   20  america        7        5
1  2008   35      usa        8        1
2  2011   32     asia        9        3
3  2008   45    japan        7        1

print (df2)
   Year Location  column1  column2
0  2008      usa        8        9
1  2008      usa        7        2
2  2009     asia        8        2
3  2009     asia        0        1
4  2010    japna        9        3

df = pd.merge(df1,df2, on=['Year','Location'], how='left')
print (df)
   Year  Age Location  column1_x  column2_x  column1_y  column2_y
0  2013   20  america          7          5        NaN        NaN
1  2008   35      usa          8          1        8.0        9.0
2  2008   35      usa          8          1        7.0        2.0
3  2011   32     asia          9          3        NaN        NaN
4  2008   45    japan          7          1        NaN        NaN
</code></pre> <p>You can also check the <a href="http://pandas.pydata.org/pandas-docs/stable/merging.html#database-style-dataframe-joining-merging" rel="nofollow">documentation</a>.</p>
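If you also want to see which rows actually found a match, `merge` accepts an `indicator` parameter; a miniature sketch with made-up two-row frames:

```python
import pandas as pd

df1 = pd.DataFrame({'Year': [2008, 2013],
                    'Location': ['usa', 'america'],
                    'Age': [35, 20]})
df2 = pd.DataFrame({'Year': [2008],
                    'Location': ['usa'],
                    'column1': [8]})

# indicator=True adds a _merge column: 'both', 'left_only' or 'right_only'
df = pd.merge(df1, df2, on=['Year', 'Location'], how='left', indicator=True)
print(df['_merge'].tolist())
```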
QA:
How to convert JSON data into a tree image?
<p>For a tree like this there's no need to use a library: you can generate the Graphviz DOT language statements directly. The only tricky part is extracting the tree edges from the JSON data. To do that, we first convert the JSON string back into a Python <code>dict</code>, and then parse that <code>dict</code> recursively.</p> <p>If a name in the tree dict has no children it's a simple string; otherwise it's a dict and we need to scan the items in its <code>"children"</code> list. Each (parent, child) pair we find gets appended to a global list <code>edges</code>.</p> <p>This somewhat cryptic line:</p> <pre><code>name = next(iter(treedict.keys())) </code></pre> <p>gets a single key from <code>treedict</code>. This gives us the person's name, since that's the only key in <code>treedict</code>. In Python 2 we could do</p> <pre><code>name = treedict.keys()[0] </code></pre> <p>but the previous code works in both Python 2 and Python 3.</p> <pre><code>from __future__ import print_function
import json
import sys

# Tree in JSON format
s = '{"Harry": {"children": ["Bill", {"Jane": {"children": [{"Diane": {"children": ["Mary"]}}, "Mark"]}}]}}'

# Convert JSON tree to a Python dict
data = json.loads(s)

# Convert back to JSON &amp; print to stderr so we can verify that the tree is correct.
print(json.dumps(data, indent=4), file=sys.stderr)

# Extract tree edges from the dict
edges = []

def get_edges(treedict, parent=None):
    name = next(iter(treedict.keys()))
    if parent is not None:
        edges.append((parent, name))
    for item in treedict[name]["children"]:
        if isinstance(item, dict):
            get_edges(item, parent=name)
        else:
            edges.append((name, item))

get_edges(data)

# Dump edge list in Graphviz DOT format
print('strict digraph tree {')
for row in edges:
    print('    {0} -&gt; {1};'.format(*row))
print('}')
</code></pre> <p><strong>stderr output</strong></p> <pre class="lang-none prettyprint-override"><code>{
    "Harry": {
        "children": [
            "Bill",
            {
                "Jane": {
                    "children": [
                        {
                            "Diane": {
                                "children": [
                                    "Mary"
                                ]
                            }
                        },
                        "Mark"
                    ]
                }
            }
        ]
    }
}
</code></pre> <p><strong>stdout output</strong></p> <pre class="lang-none prettyprint-override"><code>strict digraph tree {
    Harry -&gt; Bill;
    Harry -&gt; Jane;
    Jane -&gt; Diane;
    Diane -&gt; Mary;
    Jane -&gt; Mark;
}
</code></pre> <p>The code above runs on Python 2 &amp; Python 3. It prints the JSON data to stderr so we can verify that it's correct. It then prints the Graphviz data to stdout so we can capture it to a file or pipe it directly to a Graphviz program. E.g., if the script is named "tree_to_graph.py", then you can do this on the command line to save the graph as a PNG file named "tree.png":</p> <pre class="lang-bash prettyprint-override"><code>python tree_to_graph.py | dot -Tpng -otree.png
</code></pre> <p>And here's the PNG output:</p> <p><img src="https://i.stack.imgur.com/zFvic.png" alt="Tree made by Graphviz" title="Tree made by Graphviz"></p>
QA:
Initializing Class instance within a class
<p>One of the problems in your code is that your initializer methods are named <code>init</code> instead of <code>__init__</code> (double underscores on both sides). Try using</p> <pre><code>def __init__(self):
    pass
</code></pre> <p>This should solve one of your problems.</p>
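A minimal sketch (with made-up class names) of one class creating an instance of another inside its correctly named initializer:

```python
class Player:
    def __init__(self):          # double underscores on both sides
        self.health = 100

class Game:
    def __init__(self):
        # create the Player instance when the Game is constructed
        self.player = Player()

g = Game()
print(g.player.health)
```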
QA:
Tensorflow Logistic Regression
<p>When you train the model you are passing it the list <code>FEATURE_COLUMNS</code>, which I believe you have as a list of strings. When TensorFlow loops over this list it tries to access the <code>key</code> property on a string, which fails. You probably want to pass it a list of your TensorFlow feature-column variables instead, i.e. define a new list <code>wide_columns</code>:</p> <pre><code>wide_columns = [square_feet, guests_included, ...]

m = tf.contrib.learn.LinearClassifier(feature_columns=wide_columns, model_dir=model_dir)
m.fit(...)
</code></pre>
QA:
How to retry before an exception with Eclipse/PyDev
<p>Unfortunately no, this is a Python restriction on setting the next line to be executed: it can't set the next statement after an exception is thrown (it can't even go to a different block -- i.e.: if you're inside a try..except, you can't set the next statement to be out of that block).</p> <p>You could in theory take a look at Python itself as it's open source and see how it handles that and make it more generic to handle your situation, but apart from that, what you want is not doable.</p>
QA:
Dynamic class creation - Python
<p>Here is a possible implementation. All the code is contained in a single '.py' file</p> <pre><code>class A: pass
class B: pass

# map class name to class
_classes = {
    A.__name__: A,
    B.__name__: B,
}

def get_obj(cname):
    return _classes[cname]()

# test the function
if __name__ == '__main__':
    print get_obj('A')
</code></pre> <p>It will produce the following output</p> <pre><code>&lt;__main__.A instance at 0x1026ea950&gt;
</code></pre>
QA:
Finding holes in a binary image
<p>Here is some idea presented as code (and it might not be what you need).</p> <p>The problem is that I don't understand your example. Depending on the neighborhood definition, different results are possible.</p> <ul> <li>If you have an 8-neighborhood, all zeros are connected somehow (what does that mean for the surrounding 1's?).</li> <li>If you have a 4-neighborhood, each zero surrounded by four 1's represents a new hole <ul> <li>Of course you could postprocess this, but the question is still unclear</li> </ul></li> </ul> <h3>Code</h3> <pre><code>import numpy as np
from skimage.measure import label

img = np.array([[0,0,1,0],
                [0,1,0,1],
                [0,1,0,1],
                [0,0,1,0],
                [0,1,0,0],
                [1,0,1,0],
                [0,1,0,0],
                [0,0,0,0]])

labels = label(img, connectivity=1, background=-1)  # conn=1 -&gt; 4 neighbors
label_vals = np.unique(labels)                      # conn=2 -&gt; 8 neighbors

counter = 0
for i in label_vals:
    indices = np.where(labels == i)
    if indices:
        if img[indices][0] == 0:
            print('hole: ', indices)
            counter += 1

print(img)
print(labels)
print(counter)
</code></pre> <h3>Output</h3> <pre><code>('hole: ', (array([0, 0, 1, 2, 3, 3, 4]), array([0, 1, 0, 0, 0, 1, 0])))
('hole: ', (array([0]), array([3])))
('hole: ', (array([1, 2]), array([2, 2])))
('hole: ', (array([3, 4, 4, 5, 6, 6, 6, 7, 7, 7, 7]), array([3, 2, 3, 3, 0, 2, 3, 0, 1, 2, 3])))
('hole: ', (array([5]), array([1])))
[[0 0 1 0]
 [0 1 0 1]
 [0 1 0 1]
 [0 0 1 0]
 [0 1 0 0]
 [1 0 1 0]
 [0 1 0 0]
 [0 0 0 0]]
[[ 1  1  2  3]
 [ 1  4  5  6]
 [ 1  4  5  6]
 [ 1  1  7  8]
 [ 1  9  8  8]
 [10 11 12  8]
 [ 8 13  8  8]
 [ 8  8  8  8]]
5
</code></pre>
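A similar 4-neighborhood count can be sketched with <code>scipy.ndimage</code> on a tiny made-up image; here I additionally exclude zero-regions that touch the image border (treating them as background rather than holes — that border rule is my assumption, not part of the question):

```python
import numpy as np
from scipy import ndimage

img = np.array([[1, 1, 1, 1],
                [1, 0, 0, 1],
                [1, 1, 1, 1],
                [0, 1, 1, 1]])

# label 4-connected components of the zero pixels (cross-shaped structuring element)
labels, n = ndimage.label(img == 0)

holes = 0
for lab in range(1, n + 1):
    rr, cc = np.where(labels == lab)
    # a zero-region touching the image border is background, not a hole
    touches_border = (rr.min() == 0 or cc.min() == 0 or
                      rr.max() == img.shape[0] - 1 or
                      cc.max() == img.shape[1] - 1)
    if not touches_border:
        holes += 1

print(holes)
```

Here the two connected zeros in the middle form one hole, while the zero in the bottom-left corner is discarded as border background.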
QA:
Python matplotlib.pyplot: How to make a histogram with bins counts including right bin edge?
<p>I do not think there is an option to do this explicitly in either matplotlib or numpy.</p> <p>However, you can call <code>np.histogram()</code> on the negated <code>data</code> (and negated bins), then negate the resulting bin edges back and draw the result with the <code>plt.bar()</code> function:</p> <pre><code>bins = np.arange(min(data), max(data) + binwidth, binwidth)
hist, binsHist = np.histogram(-data, bins=sorted(-bins))
plt.bar(-binsHist[1:], hist, width=np.diff(binsHist))
</code></pre>
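The bin-edge behaviour can be checked numerically with a tiny made-up data set, without any plotting:

```python
import numpy as np

data = np.array([1, 2, 2, 3])
bins = np.array([1, 2, 3])

# default behaviour: bins are [1, 2) and [2, 3], so the 2s land in the second bin
left_incl, _ = np.histogram(data, bins)
print(list(left_incl))          # [1, 3]

# negation trick: bins behave like [1, 2] and (2, 3], so the 2s land in the first bin
hist, binsHist = np.histogram(-data, bins=sorted(-bins))
right_incl = hist[::-1]
print(list(right_incl))         # [3, 1]
```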
QA:
Suppress python markdown wrapping text in <p></p>
<p>I've had the same problem with <code>&lt;p&gt; &lt;/p&gt;</code> tags messing up my tables. The easiest solution for me was to fix this in css by adding </p> <pre><code>td p {display:inline;} </code></pre>
QA:
Numpy: Checking if a value is NaT
<p>Another way would be to catch the exception:</p> <pre><code>def is_nat(npdatetime):
    try:
        npdatetime.strftime('%x')
        return False
    except:
        return True
</code></pre>
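If your NumPy is 1.13 or newer, there is also a direct predicate, <code>np.isnat()</code>, which avoids the try/except entirely:

```python
import numpy as np

nat = np.datetime64('NaT')
ts = np.datetime64('2017-01-01')

# np.isnat works on datetime64 and timedelta64 scalars and arrays
print(np.isnat(nat), np.isnat(ts))
```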