QA:
Unique Data Upload in Python Excel
<p>For that you need at least one field that identifies the record, so you can check for duplicates.</p>
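A minimal plain-Python sketch of that idea (the field names here are hypothetical): keep a set of already-seen key values and skip rows whose key was seen before.

```python
# Hypothetical rows to upload; "id" is the field used to detect duplicates.
rows = [
    {"id": 1, "name": "alice"},
    {"id": 2, "name": "bob"},
    {"id": 1, "name": "alice"},  # duplicate "id" -- should be skipped
]

seen = set()
unique_rows = []
for row in rows:
    if row["id"] not in seen:
        seen.add(row["id"])
        unique_rows.append(row)

print(unique_rows)  # only ids 1 and 2 remain
```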
QA:
python filter 2d array by a chunk of data
<p><strong>Generic approach:</strong> Here's an approach using <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.unique.html" rel="nofollow"><code>np.unique</code></a> and <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.bincount.html" rel="nofollow"><code>np.bincount</code></a> to solve the generic case -</p> <pre><code>unq,tags = np.unique(data[:,1],return_inverse=1)
goodIDs = np.flatnonzero(np.bincount(tags,data[:,3]==1)&gt;=1)
out = data[np.in1d(tags,goodIDs)]
</code></pre> <p>Sample run -</p> <pre><code>In [15]: data
Out[15]:
array([[20, 10,  5,  1],
       [20, 73,  5,  0],
       [20, 73,  5,  1],
       [20, 31,  5,  0],
       [20, 10,  5,  1],
       [20, 10,  5,  0],
       [20, 42,  5,  1],
       [20, 54,  5,  0],
       [20, 73,  5,  0],
       [20, 54,  5,  0],
       [20, 54,  5,  0],
       [20, 31,  5,  0]])

In [16]: out
Out[16]:
array([[20, 10,  5,  1],
       [20, 73,  5,  0],
       [20, 73,  5,  1],
       [20, 10,  5,  1],
       [20, 10,  5,  0],
       [20, 42,  5,  1],
       [20, 73,  5,  0]])
</code></pre> <p><strong>Specific case approach:</strong> If the second column data is always sorted and has sequential numbers starting from <code>0</code>, we can use a simplified version, like so -</p> <pre><code>goodIDs = np.flatnonzero(np.bincount(data[:,1],data[:,3]==1)&gt;=1)
out = data[np.in1d(data[:,1],goodIDs)]
</code></pre> <p>Sample run -</p> <pre><code>In [44]: data
Out[44]:
array([[20,  0,  5,  1],
       [20,  0,  5,  1],
       [20,  0,  5,  0],
       [20,  1,  5,  0],
       [20,  1,  5,  0],
       [20,  2,  5,  1],
       [20,  3,  5,  0],
       [20,  3,  5,  0],
       [20,  3,  5,  1],
       [20,  4,  5,  0],
       [20,  4,  5,  0],
       [20,  4,  5,  0]])

In [45]: out
Out[45]:
array([[20,  0,  5,  1],
       [20,  0,  5,  1],
       [20,  0,  5,  0],
       [20,  2,  5,  1],
       [20,  3,  5,  0],
       [20,  3,  5,  0],
       [20,  3,  5,  1]])
</code></pre> <p>Also, if <code>data[:,3]</code> always has ones and zeros, we can just use <code>data[:,3]</code> in place of <code>data[:,3]==1</code> in the code listed above.</p> <hr> <p><strong>Benchmarking</strong></p> <p>Let's benchmark the vectorized approaches on the specific case for a larger array -</p> <pre><code>In [69]: def logical_or_based(data): #@ Eric's soln
    ...:     b_vals = data[:,1]
    ...:     d_vals = data[:,3]
    ...:     is_ok = np.zeros(np.max(b_vals) + 1, dtype=np.bool_)
    ...:     np.logical_or.at(is_ok, b_vals, d_vals)
    ...:     return is_ok[b_vals]
    ...:
    ...: def in1d_based(data):
    ...:     goodIDs = np.flatnonzero(np.bincount(data[:,1],data[:,3])!=0)
    ...:     out = np.in1d(data[:,1],goodIDs)
    ...:     return out
    ...:

In [70]: # Setup input
    ...: data = np.random.randint(0,100,(10000,4))
    ...: data[:,1] = np.sort(np.random.randint(0,100,(10000)))
    ...: data[:,3] = np.random.randint(0,2,(10000))
    ...:

In [71]: %timeit logical_or_based(data) #@ Eric's soln
1000 loops, best of 3: 1.44 ms per loop

In [72]: %timeit in1d_based(data)
1000 loops, best of 3: 528 µs per loop
</code></pre>
QA:
Using Google Appengine Python (Webapp2) I need to authenticate to Microsoft's new V2 endpoint using OpenID Connect
<p>Having wasted a lot of time on this, I can confirm that you CAN overload the decorator to direct to the Azure V2 endpoint using the code below:</p> <pre><code>decorator = OAuth2Decorator(
    client_id='d4ea6ab9-adf4-4aec-9b99-675cf46XXX',
    auth_uri='https://login.microsoftonline.com/common/oauth2/v2.0/authorize',
    response_type='id_token',
    response_mode='form_post',
    client_secret='sW8rJYvWtCBVpgXXXXX',
    extraQueryParameter='nux=1',
    state='12345',
    nonce='678910',
    scope=['openid','email','profile'])
</code></pre> <p>The problem is that the decorators are coded purely to handle Google APIs and cannot decode the response from Microsoft. Whilst it may be possible to implement this myself by modifying the code in appengine.py, it's too much work.</p> <p>So if you are looking to authenticate to the Microsoft Azure V2 endpoint via Appengine, it is not possible using the built-in OAuth2Decorator; it only works with Google's own services.</p>
QA:
Applying a matrix decomposition for classification using a saved W matrix
<p>For completeness, here's the rewritten <code>applyModel</code> function that takes into account the answer from ForceBru (uses an import of <code>scipy.sparse.linalg</code>)</p> <pre><code>def applyModel(tfidfm,W):
    H = tfidfm * linalg.inv(W)
    return H
</code></pre> <p>This returns (assuming an aligned vocabulary) a mapping of documents to topics <strong>H</strong> based on a pregenerated topic-model <strong>W</strong> and document feature matrix <strong>V</strong> generated by tfidf.</p>
QA:
Conditionally calculated column for a Pandas DataFrame
<p>You can do:</p> <pre><code>data['column_c'] = data['column_a'].where(data['column_a'] == 0, data['column_b'])
</code></pre> <p>This is vectorised. Your attempts failed because the comparison with <code>if</code> doesn't understand how to treat an array of boolean values, hence the error.</p> <p>Example:</p> <pre><code>In [81]: df = pd.DataFrame(np.random.randn(5,3), columns=list('abc'))
df
Out[81]:
          a         b         c
0 -1.065074 -1.294718  0.165750
1 -0.041167  0.962203  0.741852
2  0.714889  0.056171  1.197534
3  0.741988  0.836636 -0.660314
4  0.074554 -1.246847  0.183654

In [82]: df['d'] = df['b'].where(df['b'] &lt; 0, df['c'])
df
Out[82]:
          a         b         c         d
0 -1.065074 -1.294718  0.165750 -1.294718
1 -0.041167  0.962203  0.741852  0.741852
2  0.714889  0.056171  1.197534  1.197534
3  0.741988  0.836636 -0.660314 -0.660314
4  0.074554 -1.246847  0.183654 -1.246847
</code></pre>
QA:
find repeated element in list of list python
<p>And this is how I would do it, since I was not aware of <code>collections.defaultdict()</code>.</p> <pre><code>list_of_list = [(1, 2, 4.99),
                (3, 6, 5.99),
                (1, 4, 3.00),
                (5, 1, 1.12),
                (7, 8, 1.99)
                ]

results = []
for i_sub, subset in enumerate(list_of_list):
    rest = list_of_list[:i_sub] + list_of_list[i_sub + 1:]
    # test if ai == aj
    if any(subset[0] == subrest[0] for subrest in rest):
        results.append(subset)
    # test if ai == bj
    elif any(subset[0] == subrest[1] for subrest in rest):
        results.append(subset)
    # test if bi == aj
    elif any(subset[1] == subrest[0] for subrest in rest):
        results.append(subset)

print(results)  # -&gt; [(1, 2, 4.99), (1, 4, 3.0), (5, 1, 1.12)]
</code></pre>
QA:
Conditionally calculated column for a Pandas DataFrame
<p>use where() and notnull() </p> <pre><code> data['column_c'] = data['column_b'].where(data['column_a'].notnull(), 0) </code></pre>
QA:
python filter 2d array by a chunk of data
<p>Let's assume the following:</p> <ul> <li><code>b &gt;= 0</code></li> <li><code>b</code> is an integer</li> <li><code>b</code> is fairly dense, ie <code>max(b) ~= len(unique(b))</code></li> </ul> <p>Here's a solution using <a href="https://docs.scipy.org/doc/numpy-1.10.0/reference/generated/numpy.ufunc.at.html" rel="nofollow"><code>np.ufunc.at</code></a>:</p> <pre><code># unpack for clarity - this costs nothing in numpy b_vals = data[:,1] d_vals = data[:,3] # build an array indexed by b values is_ok = np.zeros(np.max(b_vals) + 1, dtype=np.bool_) np.logical_or.at(is_ok, b_vals, d_vals) # is_ok == array([ True, False, True, True, False], dtype=bool) # take the rows which have a b value that was deemed OK result = data[is_ok[b_vals]] </code></pre> <hr> <p><code>np.logical_or.at(is_ok, b_vals, d_vals)</code> is a more efficient version of:</p> <pre><code>for idx, val in zip(b_vals, d_vals): is_ok[idx] = np.logical_or(is_ok[idx], val) </code></pre>
QA:
Adding data to a Python list
<p>Joel already answered, but if you want more compact code you can use <code>range</code>:</p> <pre><code>numbers = []
for number in range(12,24,2):
    # do whatever you want with number
    numbers.append(number)

print numbers
</code></pre> <p>or if you only want to print the numbers you can do</p> <pre><code>print [number for number in range(12,24,2)]
</code></pre>
QA:
How to solve "No module named 'cStringIO'" when importing the logging module in Python 3
<p>As pointed out in several comments, I accidentally left a directory <code>logging</code> in the same directory which is what the error message refers to. After removing that directory, I get a different error message,</p> <pre><code>Printing unpacked contents: Traceback (most recent call last): File "msgpack_checker.py", line 27, in &lt;module&gt; for unpacked in unpacker: File "msgpack/_unpacker.pyx", line 459, in msgpack._unpacker.Unpacker.__next__ (msgpack/_unpacker.cpp:459) File "msgpack/_unpacker.pyx", line 380, in msgpack._unpacker.Unpacker._unpack (msgpack/_unpacker.cpp:380) File "msgpack/_unpacker.pyx", line 370, in msgpack._unpacker.Unpacker.read_from_file (msgpack/_unpacker.cpp:370) TypeError: expected bytes, str found </code></pre> <p>but that is a separate issue; at least the importing of <code>logging</code> was successful.</p>
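For anyone hitting the same shadowing problem, a quick way to check where an import actually resolved (a small diagnostic sketch, not from the original answer) is to inspect the module's `__file__`:

```python
import logging

# If a local "logging" directory were shadowing the standard library,
# this path would point into the current project rather than into
# Python's standard-library directory.
print(logging.__file__)
```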
QA:
JSON combined with python loop to only print integers
<pre><code>In [15]: from django.template import Template,Context In [16]: tracks =[ ...: { ...: 'album_name':'Nevermind', ...: 1:'Smells like teen spirit', ...: 2:'In Bloom', ...: 3:'Come as you are', ...: 4:'Breed', ...: 5:'Lithium', ...: 6:'Polly', ...: 7:'Territorial Pissings', ...: 8:'Drain You', ...: 9:'Lounge act', ...: 10:'Stay away', ...: 11:'On a plain', ...: 12:'Something in the way' ...: }, ...: { ...: 'album_name':'Relapse', ...: 1:'Hello', ...: 2:'3AM', ...: ...: }, ...: ...: ] In [17]: t = Template("""&lt;div class="single_album"&gt; &lt;h2&gt;Track list&lt;/h2&gt; {% for track in tracks %} {%if track.album_name == album_name %}&lt;ol&gt; {% for key, value in track ...: .items %} {%if key != 'album_name' %}&lt;li&gt;{{value}}&lt;/li&gt;{%endif%} {% endfor%} &lt;/ol&gt;{%endif%} {% endfor %} &lt;/div&gt;""") In [18]: c = Context({"tracks": tracks,'album_name':'Nevermind'}) In [19]: t.render(c) Out[19]: u'&lt;div class="single_album"&gt; &lt;h2&gt;Track list&lt;/h2&gt; &lt;ol&gt; &lt;li&gt;Smells like teen spirit&lt;/li&gt; &lt;li&gt;In Bloom&lt;/li&gt; &lt;li&gt;Come as you are&lt;/li&gt; &lt;li&gt;Breed&lt;/li&gt; &lt;li&gt;Lithium&lt;/li&gt; &lt;li&gt;Polly&lt;/li&gt; &lt;li&gt;Territorial Pissings&lt;/li&gt; &lt;li&gt;Drain You&lt;/li&gt; &lt;li&gt;Lounge act&lt;/li&gt; &lt;li&gt;Stay away&lt;/li&gt; &lt;li&gt;On a plain&lt;/li&gt; &lt;li&gt;Something in the way&lt;/li&gt; &lt;/ol&gt; &lt;/div&gt;' In [20]: </code></pre>
QA:
How to install Python MySQLdb module using pip?
<p>The above answer is great, but there may be some problems when using pip to install MySQL-python on <strong>Windows</strong>.</p> <p>For example, it needs some files that are associated with <strong>Visual Studio</strong>. One solution is installing VS2008 or 2010… Obviously, it costs too much.</p> <p>Another way is the answer of @bob90937. I am here to add something.</p> <p>With <a href="http://www.lfd.uci.edu/~gohlke/pythonlibs" rel="nofollow">http://www.lfd.uci.edu/~gohlke/pythonlibs</a>, you can download Windows binaries of many scientific open-source extension packages for the official CPython distribution of the Python programming language.</p> <p>Back to topic: we can choose <strong>MySQL-python (py2)</strong> or <strong>mysqlclient (py3)</strong> and use <em>pip install</em> to install it. It gives us great convenience!</p>
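For reference, the corresponding pip commands would presumably be (package names as mentioned above; whether a prebuilt wheel is available depends on your platform):

```shell
# Python 3: the maintained fork
pip install mysqlclient

# Python 2 (legacy)
pip install MySQL-python
```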
QA:
Understanding Parsing Error when reading model file into PySD
<p>If you aren't using subscripts, you may have found a bug in the parser. If so, best course is to create a report in the github <a href="https://github.com/JamesPHoughton/pysd/issues" rel="nofollow">issue tracker</a> for the project. The stack trace you posted says that the error is happening in the first line of the file, and that the error has to do with how the right hand side of the equation is being parsed. You might include the first few lines in your bug report to help me recreate the issue. I'll add a case to our growing <a href="https://github.com/SDXorg/test-models" rel="nofollow">test suite</a> and then we can make sure it isn't a problem going forwards.</p>
QA:
How to copy content of a numpy matrix to another?
<pre><code>import numpy as np self.x_last = np.copy(x) </code></pre>
QA:
Adding data to a Python list
<p>you can achieve the expected list as output by using the <a href="https://docs.python.org/2/library/functions.html#range" rel="nofollow">range()</a> method. It takes three parameters, start, stop and step. </p> <pre><code>data_points = range(12, 23, 2) # range returns list in python 2 print data_points </code></pre> <p>Note that, in <code>python 3</code> the <a href="https://docs.python.org/3/library/functions.html#func-range" rel="nofollow">range()</a> is a <a href="https://docs.python.org/3/library/stdtypes.html#range" rel="nofollow">sequence-type</a>. So, you will have to cast it to <code>list</code> in python 3</p> <pre><code>data_points = list(range(12, 23, 2)) # python 3 print(data_points) </code></pre>
QA:
Get SQLAlchemy to encode correctly strings with cx_Oracle
<p>I found a related topic, that actually answers my question:</p> <p><a href="http://stackoverflow.com/questions/39780090/python-2-7-connection-to-oracle-loosing-polish-characters">Python 2.7 connection to oracle loosing polish characters</a></p> <p>You simply add the following line, before creating the database connection:</p> <pre><code>os.environ["NLS_LANG"] = "GERMAN_GERMANY.UTF8" </code></pre> <p>Additional documentation about which strings you need for different languages are found at the Oracle website:</p> <p><a href="https://docs.oracle.com/cd/E23943_01/bi.1111/b32121/pbr_nls005.htm#RSPUB23733" rel="nofollow">Oracle documentation on Unicode Support</a></p>
QA:
Python Global Variable Not Defined - Declared inside Class
<p>You're declaring a class attribute 'failures' within <code>ErrorValidations</code>, not a global.</p> <p>Instead of using <code>global failures</code>, try:</p> <pre><code>class ErrorValidations:
    failures = 0

    def CheckforError1(driver):
        try:
            if error1.is_displayed():
                ErrorValidations.failures += 1
</code></pre> <p>A true global would be declared outside of the class.</p>
QA:
Most efficient way to set value in column based on prefix of the index
<p>Because you have repeats of the prefix, you want to first separate out the prefix to make sure you don't generate a new random number for the same prefix. Therefore the removal of duplicates is necessary from your prefix list. I did this in a more condensed way by making a new column for the prefix and then using df.prefix.unique(). </p> <pre><code>df['prefix'] = [i.split('_')[0] for i in df.index] df['values'] = df.prefix.map(dict(zip(df.prefix.unique(),[return_something(i) for i in df.prefix.unique()]))) </code></pre>
QA:
Issues with try/except, attempting to convert strings to integers in pandas data frame where possible
<p>Insert the <code>column.append</code> into the <code>try:</code></p> <pre><code>for col in list_of_columns:
    column = []
    for row in list(df[col]):
        try:
            column.append(remove_html(row))
        except ValueError:
            pass
    del df[col]
    df[col] = column
return df
</code></pre>
QA:
Python/Flask: UnicodeDecodeError/ UnicodeEncodeError: 'ascii' codec can't decode/encode
<p>Since in Python 2 the distinction between bytes and text is not enforced, one can get confused by them. Encoding and decoding work, as far as I know, from string to bytes and the reverse. So if your result set is a string, there should be no need to encode it again. If you get wrong representations for special characters like "§", I would try something like this:</p> <p><code>repr(queryResult[row][6])</code></p> <p>Does that work?</p>
QA:
Is there a way to compile python application into static binary?
<p>If you are on a Mac you can use py2app to create a .app bundle, which starts your Django app when you double-click on it.</p> <p>I described how to bundle Django and CherryPy into such a bundle at <a href="https://moosystems.com/articles/14-distribute-django-app-as-native-desktop-app-01.html" rel="nofollow">https://moosystems.com/articles/14-distribute-django-app-as-native-desktop-app-01.html</a></p> <p>In the article I use pywebview to display your Django site in a local application window.</p>
QA:
getting an average of values from dictionaries with keys with a list of values
<p><code>zip</code> the values to get the columns, and divide each column's <code>sum</code> by its <code>len</code>.</p>
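A minimal sketch of that idea, using the same sample data as the other answers to this question:

```python
data_dict = {
    "abc": [1, 2, 3, 4],
    "def": [4, 5, 6, 7],
    "ghi": [8, 9, 10, 11],
}

# zip(*...) transposes the rows into columns; each column's sum is
# then divided by its length to get the per-column average.
columns = list(zip(*data_dict.values()))
averages = [sum(col) / len(col) for col in columns]
print(averages)
```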
QA:
getting an average of values from dictionaries with keys with a list of values
<p>First collect the values, transpose them and then its easy:</p> <pre><code># values of the dict values = data_dict.values() # transposed average averages = [sum(x)/float(len(x)) for x in zip(*values)] print (averages) </code></pre> <p>returns:</p> <pre><code>[4.333333333333333, 5.333333333333333, 6.333333333333333, 7.333333333333333] </code></pre> <p>A shorter <em>'less-explanatory'</em> one-liner would be:</p> <pre><code>averages = [sum(x)/float(len(x)) for x in zip(*data_dict.values())] </code></pre>
QA:
getting an average of values from dictionaries with keys with a list of values
<p>One approach could be:</p> <pre><code>data_dict = {
    "abc" : [1, 2, 3, 4],
    "def" : [4, 5, 6, 7],
    "ghi" : [8, 9, 10, 11]
}

print data_dict

for i in data_dict:
    sum_items = 0
    num_items = 0
    for j in data_dict[i]:
        num_items += 1
        sum_items += j
    print data_dict[i]
    print sum_items
    print sum_items / float(num_items)  # float() avoids integer division in Python 2
</code></pre>
QA:
String formatting of floats
<pre><code>x = float(input)
'{:.4g}'.format(x) if .0001 &lt;= x &lt;= 1000 else '{:.4e}'.format(x)
</code></pre>
QA:
Getting column values from multi index data frame pandas
<p><code>df.iterrows()</code> returns a <code>Series</code>; if you want the original <code>index</code> you need to call the <code>name</code> attribute of that <code>Series</code>, such as:</p> <pre><code>for index,row in df.iterrows():
    print row.name
</code></pre>
QA:
String formatting of floats
<p>You're almost there, you just need <code>e</code>, not <code>g</code>:</p> <pre><code>"{:.5g}".format(0.000123456789) # '1.23457e-04' </code></pre> <p>Though the number in the format string indicates the amount of decimal points, so you'll want 4 (plus the one digit to the left of the decimal point):</p> <pre><code>"{:.4e}".format(0.000123456789) '1.2346e-04' </code></pre>
QA:
Listing locations in Azure using Python Azure SDK error
<p>Use <code>sms.list_locations()</code> to list out the regions.</p> <p>Thanks, Gopal.</p>
QA:
Bokeh Python: Laying out multiple plots
<p>I solved it. Instead of this</p> <pre><code>p = hplot(plot.values())
</code></pre> <p>I am using this</p> <pre><code>p = hplot(*plot.values())
</code></pre>
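The difference is the `*` operator, which unpacks the list into separate positional arguments, which is what `hplot` expects. A plain-Python sketch of the same idea (`hlayout` is a hypothetical stand-in for bokeh's `hplot`):

```python
def hlayout(*plots):
    # Stand-in for bokeh's hplot: accepts each plot as its own argument.
    return list(plots)

values = ["p1", "p2", "p3"]

# Passing the list directly yields ONE argument (the list itself):
print(len(hlayout(values)))   # 1

# Unpacking with * passes THREE separate arguments:
print(len(hlayout(*values)))  # 3
```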
QA:
Django – remove trailing zeroes for a Decimal in a template
<p>The solution is to use <code>normalize()</code> method of a <code>Decimal</code> field:</p> <pre><code>{{ balance.bitcoins.normalize }} </code></pre>
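Outside of templates, the same method can be checked directly on a `Decimal` (a small sketch; note that `normalize()` can switch large round numbers to scientific notation):

```python
from decimal import Decimal

print(Decimal("24.20000").normalize())  # 24.2
print(Decimal("1.000").normalize())     # 1
print(Decimal("2400").normalize())      # 2.4E+3 (a known quirk of normalize)
```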
QA:
Making a quiver plot from .dat files
<p>In the <a href="http://www.scipy-lectures.org/intro/matplotlib/auto_examples/plot_quiver_ex.html" rel="nofollow">example for the quiver plot</a> you provided all <code>X</code>, <code>Y</code>, <code>U</code> and <code>V</code> are 2D arrays, with shape <code>(n,n)</code>.</p> <p>In your example you are importing an array of values for <code>x</code>, <code>y</code>, <code>fx</code> and <code>fy</code>, and then selecting only the first line with <code>[0]</code>.</p> <p>When using the code:</p> <pre><code>import numpy as np import matplotlib.pyplot as plt n=3 # number of points, changed it data0 = np.genfromtxt('xaxis.dat') data1 = np.genfromtxt('yaxis.dat') data2 = np.genfromtxt('fx.dat') data3 = np.genfromtxt('fy.dat') x = data0[0] y = data1[0] fx = data2[0] fy = data3[0] plt.axes([0.025, 0.025, 0.95, 0.95]) # position of bottom left point of graph inside window and its size plt.quiver(x,y,fx,fy, alpha=.5) # draw inside of arrows, half transparent plt.quiver(x,y,fx,fy,edgecolor='k',facecolor='none', linewidth=.5) # draw contours of arrows plt.xlim(-1,n) # left and right most values in the x axis plt.xticks(()) # remove the numbers from the x axis plt.ylim(-1,n) # ... plt.yticks(()) # ... plt.show() </code></pre> <p>I get: <a href="https://i.stack.imgur.com/W9HoF.png" rel="nofollow"><img src="https://i.stack.imgur.com/W9HoF.png" alt="only one point"></a> With <code>0 1 2 0 1 2 0 1 2</code> in xaxis.dat and fx.dat, <code>0 0 0 1 1 1 2 2 2</code> in yaxis.dat and <code>1 1 1 2 2 2 3 3 3</code> in fy.dat. If I just remove the <code>[0]</code> from the arrays assignment, I get: <a href="https://i.stack.imgur.com/Vcbvv.png" rel="nofollow"><img src="https://i.stack.imgur.com/Vcbvv.png" alt="all points"></a> with all points shown.</p> <p>One change I would make is to use <code>plt.xlim(min(x)-1,max(x)+1)</code> and <code>plt.ylim(min(y)-1,max(y)+1)</code>, to ensure you get to view the right area of the graph. 
For instance, if I make all four arrays equal to <code>np.random.rand(10)</code> (a 1D array with 10 random elements between 0 and 1), I get: <a href="https://i.stack.imgur.com/NQkAv.png" rel="nofollow"><img src="https://i.stack.imgur.com/NQkAv.png" alt="random points"></a></p> <h2>Notes on array format</h2> <p>The <code>plt.quiver</code> will also accept the arrays in the format:</p> <pre><code>x = [0, 1, 2] # 1D array (list, actually...) y = [0, 1, 2] fx = [[0, 1, 2], [0, 1, 2], [0, 1, 2]] # 2D array fy = [[0, 0, 0], [1, 1, 1], [2, 2, 2]] </code></pre> <p><a href="https://i.stack.imgur.com/q95SE.png" rel="nofollow"><img src="https://i.stack.imgur.com/q95SE.png" alt="enter image description here"></a> But not if all arrays are 1D:</p> <pre><code>fx = np.array(fx).flatten() fy = np.array(fy).flatten() </code></pre> <p><a href="https://i.stack.imgur.com/yiqCj.png" rel="nofollow"><img src="https://i.stack.imgur.com/yiqCj.png" alt="enter image description here"></a></p> <h2>Previous answer (wrong)</h2> <p>[first two paragraphs]...</p> <p>This means you probably noticed <code>genfromtxt</code> returns a 2D array (as it is able to import several columns from a single file, so the returned array will mimic the 2D structure of your file if nothing else is told), making <code>data0[0]</code> the first line on your document xaxis.dat.</p> <p><strong>EDIT:</strong> the sentence below is erroneous, <a href="http://matplotlib.org/api/pyplot_api.html#matplotlib.pyplot.quiver" rel="nofollow">plt.quiver</a> can receive 1D arrays, just in the right shape.</p> <p>However the <code>quiver</code> expects 2D arrays, from where it will retrieve the values for each point: for point <code>i,j</code> the position will be <code>(X[i,j], Y[i,j])</code> and the arrow will be <code>(U[i,j], V[i,j])</code>.</p> <p>If you have the repeated values for x and y in the file like this:</p> <ul> <li><p>xaxis.dat:</p> <p>0, 1, 2, 0, 1, 2, 0, 1, 2</p></li> <li><p>yaxix.dat:</p> <p>0, 0, 0, 1, 1, 1, 2, 
2, 2</p></li> </ul> <p>You can just <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.reshape.html" rel="nofollow">reshape</a> all four of your arrays to (# points in x, # points in y) and it should work out.</p> <p>If you don't you will have to use something similar to <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.mgrid.html" rel="nofollow"><code>np.mgrid</code></a> (or <a href="http://louistiao.me/posts/numpy-mgrid-vs-meshgrid/" rel="nofollow"><code>np.meshgrid</code></a>) to make a valid combination of <code>X</code> and <code>Y</code> arrays, and format <code>fx</code> and <code>fy</code> accordingly.</p>
QA:
Issues with try/except, attempting to convert strings to integers in pandas data frame where possible
<p>Consider the <code>pd.DataFrame</code> <code>df</code></p> <pre><code>df = pd.DataFrame(dict(A=[1, '2', '_', '4']))
</code></pre> <p><a href="https://i.stack.imgur.com/m05NY.png" rel="nofollow"><img src="https://i.stack.imgur.com/m05NY.png" alt="enter image description here"></a></p> <p>You want to use the function <code>pd.to_numeric</code>...<br> <strong><em>Note</em></strong><br> <code>pd.to_numeric</code> operates on scalars and <code>pd.Series</code>. It doesn't operate on a <code>pd.DataFrame</code>.<br> <strong><em>Also</em></strong><br> Use the parameter <code>errors='coerce'</code> to get numbers where you can and <code>NaN</code> elsewhere.</p> <pre><code>pd.to_numeric(df['A'], 'coerce')

0    1.0
1    2.0
2    NaN
3    4.0
Name: A, dtype: float64
</code></pre> <p>Or, to get numbers where you can, and what you already had elsewhere</p> <pre><code>pd.to_numeric(df['A'], 'coerce').combine_first(df['A'])

0    1
1    2
2    _
3    4
Name: A, dtype: object
</code></pre> <p>You can then assign it back to your <code>df</code></p> <pre><code>df['A'] = pd.to_numeric(df['A'], 'coerce').combine_first(df['A'])
</code></pre>
QA:
Deleting cookies and changing user agent in Python 3+ without Mechanize
<p>set the <code>Expires</code> attribute to a date in the past (like Epoch):</p> <pre><code>Set-Cookie: name=val; expires=Thu, 01 Jan 1970 00:00:00 GMT </code></pre> <p>Read more here: <a href="http://stackoverflow.com/questions/5285940/correct-way-to-delete-cookies-server-side">Correct way to delete cookies server-side</a></p>
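With Python 3's standard library (the question mentions Python 3+), such an expiring header can be built with `http.cookies`; the cookie name and value here are hypothetical:

```python
from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie["session_id"] = "deleted"
# An expiry date in the past tells the browser to discard the cookie.
cookie["session_id"]["expires"] = "Thu, 01 Jan 1970 00:00:00 GMT"

header = cookie.output(header="Set-Cookie:")
print(header)
```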
QA:
Python/Flask: UnicodeDecodeError/ UnicodeEncodeError: 'ascii' codec can't decode/encode
<p>See: <a href="https://wiki.python.org/moin/UnicodeEncodeError" rel="nofollow">https://wiki.python.org/moin/UnicodeEncodeError</a></p> <blockquote> <p>The encoding of the postgres database is utf-8. The type of queryResult[row][6] returns type 'str'. </p> </blockquote> <p>You've got it right so far. Remember, in Python 2.7, a <code>str</code> is a string of bytes. So you've got a string of bytes from the database, that probably looks like <code>'gl\xc3\xbce'</code> (<code>'glüe'</code>).</p> <p>What happens next is that some part of the program is calling <code>.decode</code> on your string, but using the default 'ascii' codec. It's probably some part of the Item() API that needs the string as a unicode object, or maybe Flask itself. Either way, you need to call <code>.decode</code> yourself on your string, since you know that it's actually in utf-8:</p> <pre><code>col_6 = queryResult[row][6].decode('utf-8')
Item(..., ..., col_6, ...)
</code></pre> <p>Then you will provide all the downstream APIs with a <code>unicode</code>, which is apparently what they want.</p> <p>The way I remember it is this: Unicode is an abstraction, where everything is represented as "code points". If we want to create real bytes that we can print on a screen or send as an HTML file, we need to ENcode to bytes. If you have some bytes, they could mean any letters, who knows? You need to DEcode the mysterious bytes in order to get Unicode.</p> <p>Hope this helps.</p>
QA:
Django: create database tables programmatically/dynamically
<p>The reason South is incompatible with recent Django versions is that it has been <a href="http://south.readthedocs.io/en/latest/releasenotes/1.0.html" rel="nofollow">rolled into Django</a> as of Django 1.7, under the name "migrations". If you are looking for similar functionality the starting point would be the <a href="https://docs.djangoproject.com/en/dev/topics/migrations/" rel="nofollow">documentation on migrations</a>. In particular you may be interested in the section on <a href="https://docs.djangoproject.com/en/dev/ref/migration-operations/#runsql" rel="nofollow">RunSQL</a>.</p> <p>If you wish to avoid the migrations module you can also <a href="https://docs.djangoproject.com/en/1.10/topics/db/sql/" rel="nofollow">perform raw SQL queries</a>.</p>
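The raw-SQL route can be sketched with the standard library's `sqlite3` (Django's `connection.cursor()` is used analogously; the table name here is hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Issue raw DDL to create a table dynamically, much as one would via
# Django's connection.cursor() or a RunSQL migration operation.
table_name = "dynamic_table"  # hypothetical, built at runtime
cur.execute(f"CREATE TABLE {table_name} (id INTEGER PRIMARY KEY, name TEXT)")

# Confirm the table now exists in the schema catalog.
cur.execute("SELECT name FROM sqlite_master WHERE type='table'")
print(cur.fetchall())  # [('dynamic_table',)]
```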
QA:
Find a certain difference between members of a list (or set) of numbers
<p>You could avoid running the lowest part of the numbers in the second loop (no need for <code>low</code>, just check numbers ahead)</p> <p>With that you can drop the <code>set</code> and use a <code>list</code> instead: less hashing, less processing. Also, don't change the <code>numbers</code> input by sorting it, the caller may not expect it. Use a locally sorted list instead (the other advantage is that <code>numbers</code> can now be a <code>set</code>, a <code>deque</code> ...:</p> <pre><code>def difference(numbers,diff,tol): '''diff is the searched difference,numbers is a list of numbers and tol the tolerance''' snum = sorted(numbers) match=list() for i,n in enumerate(snum): high= n+diff+tol for j in range(i+1,len(snum)): k = snum[j] if k &gt; high: break match.append(n) match.append(k) return match </code></pre> <p>(maybe that would be a better question for code review, the boundary is thin)</p>
QA:
Reading a text file using Pandas where some rows have empty elements?
<p>It seems like your data is fixed width columns, you can try <code>pandas.read_fwf()</code>:</p> <pre><code>from io import StringIO import pandas as pd df = pd.read_fwf(StringIO("""0 0CF00400 X 8 66 7D 91 6E 22 03 0F 7D 0.021650 R 0 18EA0080 X 3 E9 FE 00 0.022550 R 0 00000003 X 8 D5 64 22 E1 FF FF FF F0 0.023120 R"""), header = None, widths = [1,12,2,8,4,4,4,4,4,4,4,4,16,2]) </code></pre> <p><a href="https://i.stack.imgur.com/G3bLR.png"><img src="https://i.stack.imgur.com/G3bLR.png" alt="enter image description here"></a></p>
QA:
Issues with try/except, attempting to convert strings to integers in pandas data frame where possible
<p>Works like this:</p> <pre><code>def clean_df(df): df = df.astype(str) list_of_columns = list(df.columns) for col in list_of_columns: column = [] for row in list(df[col]): try: column.append(int(remove_html(row))) except ValueError: column.append(remove_html(row)) del df[col] df[col] = column return df </code></pre>
QA:
Rename python click argument
<p>By default, click will intelligently map intra-option commandline hyphens to underscores so your code should work as-is. This is used in the click documentation, e.g., in the <a href="http://click.pocoo.org/5/options/#choice-options" rel="nofollow">Choice example</a>. If --delete-thing is intended to be a boolean option, you may also want to make it a <a href="http://click.pocoo.org/5/options/#boolean-flags" rel="nofollow">boolean flag</a>.</p>
QA:
Convert .fbx to .obj with Python FBX SDK
<p>Here's an example for fbx to obj:</p> <pre><code>import fbx

# Create an SDK manager
manager = fbx.FbxManager.Create()

# Create a scene
scene = fbx.FbxScene.Create(manager, "")

# Create an importer object
importer = fbx.FbxImporter.Create(manager, "")

# Path to the .fbx file
milfalcon = "samples/millenium-falcon/millenium-falcon.fbx"

# Specify the path and name of the file to be imported
importstat = importer.Initialize(milfalcon, -1)

importstat = importer.Import(scene)

# Create an exporter object
exporter = fbx.FbxExporter.Create(manager, "")

save_path = "samples/millenium-falcon/millenium-falcon.obj"

# Specify the path and name of the file to be exported
exportstat = exporter.Initialize(save_path, -1)

exportstat = exporter.Export(scene)
</code></pre>
QA:
can't find ipython notebook in the app list of anaconda launcher
<p>I reinstalled the program; the problem was that I accidentally had 2 versions of the same program installed.</p>
QA:
Getting column values from multi index data frame pandas
<p>consider the <code>pd.DataFrame</code> <code>df</code> in the setup reference below</p> <p><strong><em>method 1</em></strong> </p> <ul> <li><code>xs</code> for cross section</li> <li><code>any(1)</code> to check if any in row</li> </ul> <hr> <pre><code>df.loc[df.xs('Panning', axis=1, level=1).eq('Panning').any(1)] </code></pre> <p><a href="https://i.stack.imgur.com/61CAw.png" rel="nofollow"><img src="https://i.stack.imgur.com/61CAw.png" alt="enter image description here"></a></p> <p><strong><em>method 2</em></strong> </p> <ul> <li><code>stack</code></li> <li><code>query</code></li> <li><code>unstack</code></li> </ul> <hr> <pre><code>df.stack(0).query('Panning == "Panning"').stack().unstack([-2, -1]) </code></pre> <p><a href="https://i.stack.imgur.com/w9lYo.png" rel="nofollow"><img src="https://i.stack.imgur.com/w9lYo.png" alt="enter image description here"></a></p> <hr> <p>To return just the <code>sec</code> columns</p> <pre><code>df.xs('sec', axis=1, level=1)[df.xs('Panning', axis=1, level=1).eq('Panning').any(1)] </code></pre> <p><a href="https://i.stack.imgur.com/xUmiy.png" rel="nofollow"><img src="https://i.stack.imgur.com/xUmiy.png" alt="enter image description here"></a></p> <p><strong><em>setup</em></strong><br> Reference</p> <pre><code>from StringIO import StringIO import pandas as pd txt = """None 5.0 None 0.0 None 6.0 None 1.0 Panning 7.0 None 2.0 None 8.0 Panning 3.0 None 9.0 None 4.0 Panning 10.0 None 5.0""" df = pd.read_csv(StringIO(txt), delim_whitespace=True, header=None) df.columns = pd.MultiIndex.from_product([[1, 2], ['Panning', 'sec']]) df </code></pre> <p><a href="https://i.stack.imgur.com/6dowL.png" rel="nofollow"><img src="https://i.stack.imgur.com/6dowL.png" alt="enter image description here"></a></p>
QA:
is there any way to split Spark Dataset in given logic
<p>Preserving order is very difficult in Spark applications due to the assumptions of the RDD abstraction. The best approach you can take is to translate the pandas logic using the Spark api, like I've done here. Unfortunately, I do not think you can apply the same filter criteria to every column, so I had to manually translate the mask into operations on multiple columns. This <a href="https://databricks.com/blog/2015/08/12/from-pandas-to-apache-sparks-dataframe.html" rel="nofollow">Databricks blog post</a> is helpful for anyone transitioning from Pandas to Spark. </p> <pre><code>import pandas as pd import numpy as np np.random.seed(1000) df1 = pd.DataFrame(np.random.randn(10, 4), columns=['a', 'b', 'c', 'd']) mask = df1.applymap(lambda x: x &lt;-0.7) df2 = df1[-mask.any(axis=1)] </code></pre> <p>The result we want is: </p> <pre><code> a b c d 1 -0.300797 0.389475 -0.107437 -0.479983 5 -0.334835 -0.099482 0.407192 0.919388 6 0.312118 1.533161 -0.550174 -0.383147 8 -0.326925 -0.045797 -0.304460 1.923010 </code></pre> <p>So in Spark, we create the dataframe using the Pandas data frame and use <code>filter</code> to get the correct result set: </p> <pre><code>df1_spark = sqlContext.createDataFrame(df1).repartition(10) df2_spark = df1_spark.filter(\ (df1_spark.a &gt; -0.7)\ &amp; (df1_spark.b &gt; -0.7)\ &amp; (df1_spark.c &gt; -0.7)\ &amp; (df1_spark.d &gt; -0.7)\ ) </code></pre> <p>Which gives us the proper result (notice the order is not preserved): </p> <pre><code>df2_spark.show() +-------------------+--------------------+--------------------+-------------------+ | a| b| c| d| +-------------------+--------------------+--------------------+-------------------+ |-0.3348354532115408| -0.0994816980097769| 0.40719210034152314| 0.919387539204449| | 0.3121180100663634| 1.5331610653579348| -0.5501738650283003|-0.3831474108842978| |-0.3007966727870205| 0.3894745542873072|-0.10743730169089667|-0.4799830753607686| | -0.326924675176391|-0.04579718800728687| 
-0.3044600616968845| 1.923010130400007| +-------------------+--------------------+--------------------+-------------------+ </code></pre> <p>If you <strong><em>absolutely needed</em></strong> to create the mask using Pandas, you would have to preserve the index of the original Pandas dataframe and remove individual records from the Spark by creating a broadcast variable and filtering based on the index column. Here's an example, YMMV. </p> <p>Add an index: </p> <pre><code>df1['index_col'] = df1.index df1 a b c d index_col 0 -0.804458 0.320932 -0.025483 0.644324 0 1 -0.300797 0.389475 -0.107437 -0.479983 1 2 0.595036 -0.464668 0.667281 -0.806116 2 3 -1.196070 -0.405960 -0.182377 0.103193 3 4 -0.138422 0.705692 1.271795 -0.986747 4 5 -0.334835 -0.099482 0.407192 0.919388 5 6 0.312118 1.533161 -0.550174 -0.383147 6 7 -0.822941 1.600083 -0.069281 0.083209 7 8 -0.326925 -0.045797 -0.304460 1.923010 8 9 -0.078659 -0.582066 -1.617982 0.867261 9 </code></pre> <p>Convert the mask into a Spark broadcast variable: </p> <pre><code>myIdx = sc.broadcast(df2.index.tolist()) </code></pre> <p>Create and modify the dataframes using the Spark api: </p> <pre><code>df1_spark.rdd.filter(lambda row: row and row['index_col'] not in myIdx.value).collect() df2_spark = df1_spark.rdd.filter(lambda row: row and row['index_col'] in myIdx.value).toDF() df2_spark.show() +-------------------+--------------------+--------------------+-------------------+---------+ | a| b| c| d|index_col| +-------------------+--------------------+--------------------+-------------------+---------+ |-0.3007966727870205| 0.3894745542873072|-0.10743730169089667|-0.4799830753607686| 1| |-0.3348354532115408| -0.0994816980097769| 0.40719210034152314| 0.919387539204449| 5| | 0.3121180100663634| 1.5331610653579348| -0.5501738650283003|-0.3831474108842978| 6| | -0.326924675176391|-0.04579718800728687| -0.3044600616968845| 1.923010130400007| 8| 
+-------------------+--------------------+--------------------+-------------------+---------+ </code></pre>
QA:
Jupyter magic to handle notebook exceptions
<p>@show0k gave the correct answer to my question (in regards to magic methods). Thanks a lot! :)</p> <p>That answer inspired me to dig a little deeper and I came across an IPython method that lets you define a <strong>custom exception handler for the whole notebook</strong>.</p> <p>I got it to work like this:</p> <pre><code>from IPython.core.ultratb import AutoFormattedTB # initialize the formatter for making the tracebacks into strings itb = AutoFormattedTB(mode = 'Plain', tb_offset = 1) # this function will be called on exceptions in any cell def custom_exc(shell, etype, evalue, tb, tb_offset=None): # still show the error within the notebook, don't just swallow it shell.showtraceback((etype, evalue, tb), tb_offset=tb_offset) # grab the traceback and make it into a list of strings stb = itb.structured_traceback(etype, evalue, tb) sstb = itb.stb2text(stb) print (sstb) # &lt;--- this is the variable with the traceback string print ("sending mail") send_mail_to_myself(sstb) # this registers a custom exception handler for the whole current notebook get_ipython().set_custom_exc((Exception,), custom_exc) </code></pre> <p>So this can be put into a single cell at the top of any notebook and as a result it will do the mailing in case something goes wrong.</p> <p>Note to self / TODO: make this snippet into a little python module that can be imported into a notebook and activated via line magic.</p> <p>Be careful though. The documentation contains a warning for this <code>set_custom_exc</code> method: "WARNING: by putting in your own exception handler into IPython’s main execution loop, you run a very good chance of nasty crashes. This facility should only be used if you really know what you are doing."</p>
QA:
pylint: getting it to understand decorators
<p>If you have something like this:</p> <pre><code>def decorator(f):
    def wrapper(*args, **kwargs):
        return f(1, *args, **kwargs)
    return wrapper

@decorator
def z(a, b):
    return a + b

print( z(5) )
</code></pre> <p>A simple solution that doesn't require much change to your code is to just drop the <code>@</code>, which is only syntactic sugar. It works for me.</p> <pre><code>def z(a, b):
    return a + b

z = decorator(z)

print( z(5) )
</code></pre>
QA:
Issues with try/except, attempting to convert strings to integers in pandas data frame where possible
<p>Use the try/except in a function and use that function with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.applymap.html" rel="nofollow"><code>DataFrame.applymap()</code></a></p> <pre><code>df = pd.DataFrame([['a','b','1'], ['2','c','d'], ['e','3','f']])

def foo(thing):
    try:
        return int(thing)
    except ValueError as e:
        return thing

&gt;&gt;&gt; df[0][2]
'e'
&gt;&gt;&gt; df[0][1]
'2'
&gt;&gt;&gt; df = df.applymap(foo)
&gt;&gt;&gt; df[0][2]
'e'
&gt;&gt;&gt; df[0][1]
2
&gt;&gt;&gt;
</code></pre>
QA:
python: multiple .dat's in multiple arrays
<p>I think building arrays like this is going to make things more complicated for you. It would be easier to build a dictionary using tuples as keys. In the example file you sent me, each <code>(x, y, z)</code> pair was repeated twice, making me think that each file contains data on <em>two</em> iterations of a total solution of 2000 iterations. Dictionaries must have unique keys, so for each file I have implemented another counter, <code>timestep</code>, that can increment when collating data from a single file.</p> <p>Now, if I wanted coords (1, 2, 3) on the 3rd timestep, I could do <code>simulation[(1, 2, 3, 3)]</code>.</p> <pre><code>import csv import numpy as np ''' Made the assumptions that: -Each file contains two iterations from a simulation of 2000 iterations -Each file is numbered sequentially. Each time the same (x, y, z) coords are discovered, it represents the next timestep in simulation Accessing data is via a tuple key (x, y, z, n) with n being timestep ''' simulation = {} file_count = 1 timestep = 1 num_files = 2 for x in range(1, num_files + 1): with open('sim_file_{}.dat'.format(file_count), 'r') as infile: second_read = False reader = csv.reader(infile, delimiter=' ') for row in reader: item = [float(x) for x in row] if row: if (not second_read and not any(simulation.get((item[0], item[1], item[2], timestep), []))): timestep += 1 second_read = True simulation[(item[0], item[1], item[2], timestep)] = np.array(item[3:]) file_count += 1 timestep += 1 second_read = False </code></pre>
QA:
cast numpy array into memmap
<p>I found the following example in the numpy documentation:</p> <pre><code>data = np.arange(12, dtype='float32')
data.resize((3,4))

fp = np.memmap(filename, dtype='float32', mode='w+', shape=(3,4))
fp[:] = data[:]
</code></pre> <p>So your last command is OK.</p>
QA:
String formatting of floats
<pre><code>if 1 &lt;= x &lt; 10000:
    print '{:.5g}'.format(x)
elif 1 &gt; x or x &gt;= 10000:
    print '{:.4e}'.format(x)
</code></pre> <p>Similar to A.Kot's answer but not a one-liner, and it outputs what you want given your sample.</p>
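On Python 3 the same branching logic can be wrapped in a small helper; a minimal sketch (the function name `fmt` is invented for illustration):

```python
def fmt(x):
    # 5 significant digits for values in [1, 10000), scientific notation otherwise
    if 1 <= x < 10000:
        return '{:.5g}'.format(x)
    return '{:.4e}'.format(x)

print(fmt(1234.567))  # 1234.6
print(fmt(123456.7))  # 1.2346e+05
print(fmt(0.5))       # 5.0000e-01
```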
QA:
problems dealing with pandas read csv
<p>Use <code>names</code> instead of <code>usecols</code> when specifying the parameter.</p>
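For context, a short sketch of the difference (the column labels here are made up): `names` assigns labels to the columns being read, while `usecols` selects a subset of columns.

```python
import io
import pandas as pd

data = "1,2,3\n4,5,6"

# names: assign labels to a header-less file
df = pd.read_csv(io.StringIO(data), names=['a', 'b', 'c'])
print(list(df.columns))   # ['a', 'b', 'c']

# usecols: keep only a subset of the columns
sub = pd.read_csv(io.StringIO(data), names=['a', 'b', 'c'], usecols=['a', 'c'])
print(list(sub.columns))  # ['a', 'c']
```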
QA:
How to tell Python to save files in this folder?
<p>To iterate over the files in a particular folder, we can simply use <code>os.listdir()</code>. Note that it returns bare file names, so build the full path before checking or opening:</p> <pre><code>import os

folder = r'Z:\Slack_Files'
for fn in os.listdir(folder):
    path = os.path.join(folder, fn)
    if os.path.isfile(path):
        open(path, 'r')  # mode 'r' means read mode
</code></pre>
QA:
Replacing a node in graph with custom op having variable dependency in tensorflow
<p>Could you specify which version of TensorFlow you use: r0.08, r0.09, r0.10, or r0.11?</p> <p>It is impossible to replace an op in the graph with another op. But if you can access W, you could still make a backup copy of it (using <code>deepcopy()</code> <a href="https://docs.python.org/2/library/copy.html" rel="nofollow">from the copy module</a>) before running the train op that updates it.</p> <p>Regards</p>
QA:
Ipython cv2.imwrite() not saving image
<p>As a general and absolute rule, you <em>have</em> to protect your windows path strings (containing backslashes) with <code>r</code> prefix or some characters are interpreted (ex: <code>\n,\b,\v,\x</code> aaaaand <code>\t</code> !):</p> <p>so when doing this:</p> <pre><code>cv2.imwrite('C:\Users\Niladri\Desktop\tropical_image_sig5.bmp', img2) </code></pre> <p>you're trying to save to <code>C:\Users\Niladri\Desktop&lt;TAB&gt;ropical_image_sig5.bmp</code></p> <p>(and I really don't know what it does :))</p> <p>Do this:</p> <pre><code>cv2.imwrite(r'C:\Users\Niladri\Desktop\tropical_image_sig5.bmp', img2) </code></pre> <p>Note: the read works fine because "escaped" uppercase letters have no particular meaning in python 2 (<code>\U</code> has a meaning in python 3)</p>
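A quick sketch of that escaping behaviour (the path is illustrative):

```python
interpreted = 'C:\tropical'   # '\t' collapses into a single tab character
raw = r'C:\tropical'          # raw string keeps the backslash and the 't'

print(len(interpreted))  # 10
print(len(raw))          # 11
```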
QA:
Python large array comparison
<p>As already mentioned by @Laurent and @sisanared, you can use the <code>in</code> operator for either <code>lists</code> or <code>sets</code> to check for membership. For example:</p> <pre><code>found = x in some_list if found: #do stuff else: #other stuff </code></pre> <p>However, you mentioned that speed is an issue. TL;DR -- <code>sets</code> are faster if the <code>set</code> already exists. From <a href="https://wiki.python.org/moin/TimeComplexity" rel="nofollow">https://wiki.python.org/moin/TimeComplexity</a>, checking membership using the <code>in</code> operator is O(n) for <code>list</code> and O(1) for <code>set</code> (like @enderland pointed out). </p> <p>For 100,000 items, or for one-time-only checks it probably doesn't make much of a difference which you use, but for a larger number of items or situations where you'll be doing many checks, you should probably use a <code>set</code>. I did a couple of tests from the interpreter and this is what I found (Python 2.7, i3 Windows 10 64bit): </p> <pre><code>import timeit #Case 1: Timing includes building the list/set def build_and_check_a_list(n): a_list = [ '/'.join( ('http:stackoverflow.com',str(i)) ) for i in xrange(1,n+1) ] check = '/'.join( ('http:stackoverflow.com',str(n)) ) found = check in a_list return (a_list, found) def build_and_check_a_set(n): a_set = set( [ '/'.join( ('http:stackoverflow.com',str(i)) ) for i in xrange(1,n+1) ] ) check = '/'.join( ('http:stackoverflow.com',str(n)) ) found = check in a_set return (a_set, found) timeit.timeit('a_list, found = build_and_check_a_list(100000)', 'from __main__ import build_and_check_a_list', number=50) 3.211972302022332 timeit.timeit('a_set, found = build_and_check_a_set(100000)', 'from __main__ import build_and_check_a_set', number=50) 4.5497120006930345 #Case 2: The list/set already exists (timing excludes list/set creation) check = '/'.join( ('http:stackoverflow.com',str(100000)) ) timeit.timeit('found = check in a_list', 'from __main__ import 
a_list, check', number=50) 0.12173540635194513 timeit.timeit('found = check in a_set', 'from __main__ import a_set, check', number=50) 1.01052391983103e-05 </code></pre> <p>For 1 million entries, to build and/or check membership on my computer:</p> <pre><code>#Case 1: list/set creation included timeit.timeit('a_list, found = build_and_check_a_list(1000000)', 'from __main__ import build_and_check_a_list', number=50) 35.71641090788398 timeit.timeit('a_set, found = build_and_check_a_set(1000000)', 'from __main__ import build_and_check_a_set', number=50) 51.41244436103625 #Case 2: list/set already exists check = '/'.join( ('http:stackoverflow.com',str(1000000)) ) timeit.timeit('found = check in a_list', 'from __main__ import a_list, check', number=50) 1.3113457772124093 timeit.timeit('found = check in a_set', 'from __main__ import a_set, check', number=50) 8.180430086213164e-06 </code></pre>
QA:
Django says there are no changes to be made when I migrate
<p>Your models should derive from models.Model:</p> <pre><code> class Person(models.Model): ... class Subject(models.Model): ... ... </code></pre>
QA:
find repeated element in list of list python
<p>Using your idea, you can try this:</p> <pre><code>MI_network = [] complete_net = [(1, 2, 4.99), (3, 6, 5.99), (1, 4, 3.00), (5, 1, 1.12), (7, 8, 1.99)] genesis = list(complete_net) while genesis != []: for x in genesis: for gen in genesis: if x[0] in gen and x[1] not in gen: if x[0] != gen[2] and x[1] != gen[2]: if x not in MI_network: MI_network.append(x) elif x[0] not in gen and x[1] in gen: if x[0] != gen[2] and x[1] != gen[2]: if x not in MI_network: MI_network.append(x) elif x[0] not in gen and x[1] not in gen: pass genesis.remove(genesis[0]) print(MI_network) [(1, 2, 4.99), (1, 4, 3.0), (5, 1, 1.12)] </code></pre>
QA:
Executing C++ code from python
<p>Fairly easy to execute an external program from Python - regardless of the language:</p> <pre><code>import os import subprocess for filename in os.listdir(os.getcwd()): print filename proc = subprocess.Popen(["./myprog", filename]) proc.wait() </code></pre> <p>The list used for arguments is platform specific, but it should work OK. You should alter <code>"./myprog"</code> to your own program (it doesn't have to be in the current directory, it will use the PATH environment variable to find it).</p>
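On Python 3.5+, `subprocess.run` is the more idiomatic entry point. A sketch, using the Python interpreter itself as a stand-in for the external program so the snippet is runnable anywhere:

```python
import subprocess
import sys

# equivalent of running: ./myprog example.dat
result = subprocess.run(
    [sys.executable, '-c', 'import sys; print(sys.argv[1])', 'example.dat'],
    capture_output=True, text=True, check=True,
)
print(result.stdout.strip())  # example.dat
```

`check=True` raises `CalledProcessError` on a non-zero exit code, which replaces the manual `proc.wait()` pattern above.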
QA:
setAttr of a list in maya python
<p>Going from examples in the documentation:</p> <blockquote> <p><a href="http://help.autodesk.com/cloudhelp/2017/ENU/Maya-Tech-Docs/CommandsPython/getAttr.html" rel="nofollow">http://help.autodesk.com/cloudhelp/2017/ENU/Maya-Tech-Docs/CommandsPython/getAttr.html</a></p> <p><a href="http://help.autodesk.com/cloudhelp/2017/ENU/Maya-Tech-Docs/CommandsPython/setAttr.html" rel="nofollow">http://help.autodesk.com/cloudhelp/2017/ENU/Maya-Tech-Docs/CommandsPython/setAttr.html</a></p> </blockquote> <p>You need to specify the object name and attribute, as a string, when you pass it into the getAttr() function.</p> <p>e.g. </p> <pre><code>translate = cmds.getAttr('pSphere1.translate') </code></pre> <p>will return the attribute value for the translate on pSphere1</p> <p>or </p> <pre><code>jointList = cmds.ls(type='joint') for joint in jointList: jointRadius = cmds.getAttr('{}.radius'.format(joint)) #Do something with the jointRadius below </code></pre> <p>And if you want to set it</p> <pre><code>newJointRadius = 20 jointList = cmds.ls(type='joint') for joint in jointList: cmds.setAttr('{}.radius'.format(joint), newJointRadius) </code></pre>
QA:
Find a certain difference between members of a list (or set) of numbers
<pre><code>count = len(numbers)
numbers1 = numbers[:count - 1]
numbers2 = numbers[1:]
for i in range(0, count - 1):
    dif = numbers2[i] - numbers1[i]
    if abs(dif) &lt;= tol:
        match.add(numbers1[i])
        match.add(numbers2[i])
</code></pre>
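An equivalent and arguably more idiomatic sketch pairs each element with its successor using `zip` (assuming the same `numbers`, `tol`, and `match` names as above):

```python
numbers = [1.0, 1.05, 3.0, 3.2, 7.0]
tol = 0.1
match = set()

# zip pairs each element with its successor
for a, b in zip(numbers, numbers[1:]):
    if abs(b - a) <= tol:
        match.update((a, b))

print(sorted(match))  # [1.0, 1.05]
```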
QA:
cannot quit jupyter notebook server running
<p>I ran into the same issue and followed the solution posted above. Just wanted to clarify the solution a little bit.</p> <pre><code>netstat -tulpn </code></pre> <p>will list all the active connections.</p> <pre><code>tcp 0 0 0.0.0.0:8888 0.0.0.0:* LISTEN 19524/python </code></pre> <p>You will need the PID, "19524" in this case. You can even use the following to get the PID of the port you are trying to shut down:</p> <pre><code>fuser 8888/tcp </code></pre> <p>This will give you 19524 as well.</p> <pre><code>kill 19524 </code></pre> <p>will terminate the process serving that port.</p>
QA:
how do i find my ipv4 using python?
<p>That's all you need for the local address (returns a string):</p> <pre><code>socket.gethostbyname(socket.gethostname()) </code></pre> <p>Note that on some systems this can return <code>127.0.0.1</code> if the hostname resolves to the loopback address.</p>
QA:
How to expose user passwords in the most "secure" way in django?
<p>No, there is no logical way of doing this that doesn't imply a huge security breach in the software. </p> <p>If the passwords are stored correctly (salted and hashed), then even site admins with unrestricted access on the database can not tell you what the passwords are in plain text. </p> <p>You should push back against this unreasonable request. If you have a working "password reset" functionality, then nobody but the user ever needs to know a user's password. <strong>If you don't have a reliable "password reset" feature, then try and steer the conversation and development effort in this direction</strong>. There is rarely any real business need for knowing/printing user passwords, and these kind of feature requests may be coming from non-technical people who have misunderstandings (or no understanding) about the implementation detail of authentication and authorization. </p>
QA:
Scrapy Shell - How to change USER_AGENT
<p>Inside the scrapy shell, you can set the <code>User-Agent</code> in the <code>request</code> <code>header</code>.</p> <pre><code>url = 'http://www.example.com' request = scrapy.Request(url, headers={'User-Agent': 'Mybot'}) fetch(request) </code></pre>
QA:
Invalid Syntax from except
<p>An <code>except</code> clause only makes sense after a <code>try</code> block, and there isn't one. It seems you're not looking for exception handling but simply an <code>else</code> clause.</p> <p>Either</p> <pre><code>try:
    code_that_might_fail()
except ValueError:
    print("ouch.")
</code></pre> <p>or</p> <pre><code>if condition:
    do_this()
else:
    do_that()
</code></pre>
QA:
Invalid Syntax from except
<p>You should use a <code>try</code> and <code>except</code> block together, but in your code you have used only the <code>except</code> block with no <code>try</code> statement.</p> <pre><code>try:
    if name == ():
        mouse = "+".join(name)
        link = "http://api.micetigri.fr/json/player/" + mouse
        async with aiohttp.get(link) as r:
            result = await r.json()
            name = result['name']
            msg = "**Mouse:** {}".format(name)
            await self.bot.say(msg)
except:
    await self.bot.say("Invalid username!")
</code></pre> <p>Use something like the above if the error is only due to the <code>except</code> syntax. Alternatively, you may have meant to use <code>else:</code> where you wrote <code>except</code>.</p>
QA:
Python: How to filter a DataFrame of dates in Pandas by a particular date within a window of some days?
<p>The function I created to accomplish this is <code>filterDaysWindow</code> and can be used as follows:</p> <pre><code>import pandas as pd import numpy as np import datetime dates = pd.date_range(start="08/01/2009",end="08/01/2012",freq="D") df = pd.DataFrame(np.random.rand(len(dates), 1)*1500, index=dates, columns=['Power']) def filterDaysWindow(df, date, daysWindow): """ Filter a Dataframe by a date within a window of days @type df: DataFrame @param df: DataFrame of dates @type date: datetime.date @param date: date to focus on @type daysWindow: int @param daysWindow: Number of days to perform the days window selection @rtype: DataFrame @return: Returns a DataFrame with dates within date+-daysWindow """ dateStart = date - datetime.timedelta(days=daysWindow) dateEnd = date + datetime.timedelta(days=daysWindow) return df [dateStart:dateEnd] df_filtered = filterDaysWindow(df, datetime.date(2010,8,3), 5) print df_filtered </code></pre>
QA:
Bokeh Python: Laying out multiple plots
<p>Please note that <code>hplot</code> is deprecated in recent releases. You should use <code>bokeh.layout.row</code>:</p> <pre><code>from bokeh.layouts import row # define some plots p1, p2, p3 layout = row(p1, p2, p3) show(layout) </code></pre> <p>Functions like <code>row</code> (and previously <code>hplot</code>) take all the things to put in the row as individual arguments. </p> <p>There is an entire section on layouts in the user's guide: </p> <p><a href="http://bokeh.pydata.org/en/latest/docs/user_guide/layout.html" rel="nofollow">http://bokeh.pydata.org/en/latest/docs/user_guide/layout.html</a></p>
QA:
Using Boto3 to Manage AWS from Google App Engine
<p>You will have to fake the <code>pwd</code> module.</p> <p>Create a file named <code>fake_pwd.py</code> with the necessary shims:</p> <pre><code>class struct_passwd_dummy(object): def __init__(self, uid): self.pw_name = "user" self.pw_passwd = "x" self.pw_uid = uid self.pw_gid = uid self.pw_gecos = "user" self.pw_dir = "/home/user" self.pw_shell = "/bin/sh" def getpwuid(uid): return struct_passwd_dummy(uid) </code></pre> <p>Then, in <code>appengine_config.py</code>, try this hack:</p> <pre><code>try: import boto3 except ImportError: import sys import fake_pwd sys.modules["pwd"] = fake_pwd import boto3 </code></pre>
QA:
Method like argument in function
<p>You can use <a href="https://docs.python.org/3.6/library/functions.html#getattr" rel="nofollow"><code>getattr</code></a> with a str of the name of the method. This gets the attribute with that name from the object (In this case, a method)</p> <pre><code>def rolling (df, prefix='r', window=3, method='sum'): for name in df.columns: df[prefix + name] = getattr(df[name].rolling(window), method)() return df </code></pre> <p>Or you could just pass in the method. When calling it, the first argument will be <code>self</code>.</p> <pre><code>def rolling (df, prefix='r', window=3, method=DataReader.sum): for name in df.columns: df[prefix + name] = method(df[name].rolling(window)) return df </code></pre>
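The `getattr` dispatch pattern is independent of pandas; a minimal stdlib sketch (the class and method names are invented for illustration):

```python
class Accumulator:
    def __init__(self, values):
        self.values = values

    def sum(self):
        return sum(self.values)

    def max(self):
        return max(self.values)

def apply(obj, method='sum'):
    # look up the bound method by name, then call it
    return getattr(obj, method)()

acc = Accumulator([3, 1, 2])
print(apply(acc, 'sum'))  # 6
print(apply(acc, 'max'))  # 3
```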
QA:
How to create a subset of document using lxml?
<p>I am not sure there is something built-in for it, but here is a terrible, "don't ever use it in real life" type of a workaround using the <a href="http://lxml.de/api/lxml.etree._Element-class.html#iterancestors" rel="nofollow"><code>iterancestors()</code> parent iterator</a>:</p> <pre><code>from lxml import etree as ET data = """&lt;root&gt; &lt;element1&gt; &lt;subelement1&gt;blabla&lt;/subelement1&gt; &lt;/element1&gt; &lt;element2&gt; &lt;subelement2&gt;blibli&lt;/subelement2&gt; &lt;/element2&gt; &lt;/root&gt;""" root = ET.fromstring(data) element = root.find(".//subelement1") result = ET.tostring(element) for node in element.iterancestors(): result = "&lt;{name}&gt;{text}&lt;/{name}&gt;".format(name=node.tag, text=result) print(ET.tostring(ET.fromstring(result), pretty_print=True)) </code></pre> <p>Prints:</p> <pre><code>&lt;root&gt; &lt;element1&gt; &lt;subelement1&gt;blabla&lt;/subelement1&gt; &lt;/element1&gt; &lt;/root&gt; </code></pre>
QA:
Method like argument in function
<p>I do this</p> <pre><code>def rolling (df, prefix='r', window=3, method='method_name'): for name in df.columns: df[prefix + name] = df[name].rolling(window).__getattribute__(method)() return df </code></pre>
QA:
Method like argument in function
<p>A method is an attribute like any other (it just happens to be callable when bound to an object), so you can use <code>getattr</code>. (A default value of <code>None</code> is nonsense, of course, but I didn't want to reorder your signature to make <code>method</code> occur earlier without a default value.)</p> <pre><code>def rolling (df, prefix='r', window=3, method=None): for name in df.columns: obj = df[name].rolling(window) m = getattr(obj, method) df[prefix + name] = m() return df </code></pre>
QA:
python filter 2d array by a chunk of data
<p>Untested since in a hurry, but this should work:</p> <pre><code>import numpy_indexed as npi g = npi.group_by(data[:, 1]) ids, valid = g.any(data[:, 3]) result = data[valid[g.inverse]] </code></pre>
QA:
Read a CSV to insert data into Postgres SQL with Pyhton
<p>The immediate problem with your code is that you are trying to include the literal <code>%s</code>. Since you probably did run it more than once you already have a literal <code>%s</code> in that unique column hence the exception.</p> <p>It is necessary to pass the values wrapped in an iterable as parameters to the <code>execute</code> method. The <code>%s</code> is just a value place holder.</p> <pre><code>passdata = """ INSERT INTO project (project_code, program_name ) VALUES (%s, %s) """ cursor.execute(passdata, (the_project_code, the_program_name)) </code></pre> <p>Do not quote the <code>%s</code>. Psycopg will do it if necessary.</p> <p>As your code does not include a loop it will only insert one row from the csv. There are some patterns to insert the whole file. If the requirements allow just use <a href="http://initd.org/psycopg/docs/usage.html#using-copy-to-and-copy-from" rel="nofollow"><code>copy_from</code></a> which is simpler and faster.</p>
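To insert every row of the csv, the same placeholder pattern extends to a loop or `executemany`. The sketch below uses the stdlib `sqlite3` driver purely for illustration — sqlite3 uses `?` placeholders where psycopg uses `%s`, but the principle (let the driver substitute values, never build the SQL string yourself) is identical:

```python
import csv
import io
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute("CREATE TABLE project (project_code TEXT, program_name TEXT)")

# stand-in for the csv file on disk (contents are made up)
csv_file = io.StringIO("P001,Alpha\nP002,Beta\n")

rows = list(csv.reader(csv_file))
conn.executemany(
    "INSERT INTO project (project_code, program_name) VALUES (?, ?)", rows)

print(conn.execute("SELECT COUNT(*) FROM project").fetchone()[0])  # 2
```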
QA:
php - What is quicker for search and replace in a csv file? In a string or in an array?
<p>If your csv file is that big (&gt; 1 million rows), it might not be the best idea to load it all at once unless memory usage is of no concern to you.</p> <p>Therefore, I'd recommend running the replace line by line. Here's a very basic example (note that each line is read into a separate variable so the file handle isn't overwritten):</p> <pre><code>$input = fopen($inputFile, 'r');
$output = fopen($outputFile, 'w');

while (!feof($input)) {
    $line = fgets($input);
    $parsed = str_replace($search, $replace, $line);
    fputs($output, $parsed);
}
</code></pre> <p>This should be fast enough, and it allows you to easily track progress as well. If you would ever like to replace only specific columns, you can use <code>fgetcsv</code> and <code>fputcsv</code> instead of <code>fgets</code> and <code>fputs</code>.</p> <p>I definitely wouldn't try to do this using mysql, as simply inserting this much data into a database will take a while.</p> <p>As for python, I'm not sure whether it can actually benefit the algorithm in any way.</p>
QA:
Continue on exception in Python
<p>It is generally good practice to keep <code>try: except:</code> blocks as small as possible. I would wrap your <code>textstat</code> functions in some sort of decorator that catches the exception you expect, and returns the function output and the exception caught.</p> <p>for example:</p> <pre><code>def catchExceptions(exception): #decorator with args (sorta boilerplate) def decorator(func): def wrapper(*args, **kwargs): try: retval = func(*args, **kwargs) except exception as e: return None, e else: return retval, None return wrapper return decorator @catchExceptions(ZeroDivisionError) def testfunc(x): return 11/x print testfunc(0) print '-----' print testfunc(3) </code></pre> <p>prints:</p> <pre><code>(None, ZeroDivisionError('integer division or modulo by zero',)) ----- (3, None) </code></pre>
QA:
Using Google API for Python- where do I get the client_secrets.json file from?
<p>If you go to your <a href="https://console.developers.google.com/apis/credentials" rel="nofollow">Google developers console</a> you should see a section titled <strong>OAuth 2.0 client IDs</strong>. Click on an entry in that list, and you will see a number of fields, including <strong>Client secret</strong>. </p> <p>If you have not yet created credentials, click the <strong>Create credentials</strong> button, and follow the instructions to create new credentials, and then follow the steps outlined above to find the <strong>Client secret</strong>.</p>
QA:
How to copy content of a numpy matrix to another?
<p>If the shapes are the same, then any of these meet both of your requirements:</p> <pre><code>self.x_last[...] = x # or self.x_last[()] = x # or self.x_last[:] = x </code></pre> <p>I'd argue that the first one is probably most clear</p> <hr> <p>Let's take a look at your requirements quickly:</p> <blockquote> <p>Copy only the content of x to self.x_last</p> </blockquote> <p>Seems reasonable. This means if that if <code>x</code> continues to change, then <code>x_last</code> won't change with it</p> <blockquote> <p>Don't change the address of <code>self.x_last</code></p> </blockquote> <p>This doesn't buy you anything. IMO, this is actively worse, because functions using <code>x_last</code> in another thread will see it change underneath them unexpectedly, and worse still, could work with the data when it is incompletely copied from <code>x</code></p>
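A quick sketch showing concretely that `x_last[...] = x` copies the data without rebinding the array object:

```python
import numpy as np

x_last = np.zeros(3)
x = np.array([1.0, 2.0, 3.0])

id_before = id(x_last)
x_last[...] = x   # copy the contents; the name still points at the same array
x[0] = 99.0       # later mutation of x does not affect x_last

same_object = id(x_last) == id_before
print(same_object)  # True
print(x_last)       # [1. 2. 3.]
```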
QA:
How to assert call order and parameters when mocking multiple calls to the same method?
<p>You probably want to use the <a href="https://docs.python.org/3/library/unittest.mock.html#unittest.mock.Mock.assert_has_calls" rel="nofollow"><code>Mock.assert_has_calls</code></a> method.</p> <pre><code>self.assertEqual(self.request_mock.call_count, 2) self.request_mock.assert_has_calls([ mock.call( 'POST', 'https://www.foobar.com', headers=None, allow_redirects=False, params=None, data=json.dumps(data)), mock.call( 'GET', 'https://www.foobar.com', headers=None, allow_redirects=False, params=None, data=None) ]) </code></pre> <p>By default, <code>assert_has_calls</code> will check that the calls happen in the proper order. If you don't care about the order, you can use the <code>any_order</code> keyword argument (set to <code>True</code>).</p>
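A self-contained sketch of the same idea with a plain `Mock` (the URLs and arguments are illustrative):

```python
from unittest import mock

request = mock.Mock()
request('POST', 'https://www.foobar.com', data='{"x": 1}')
request('GET', 'https://www.foobar.com', data=None)

assert request.call_count == 2

# passes only because the calls happened in this order
request.assert_has_calls([
    mock.call('POST', 'https://www.foobar.com', data='{"x": 1}'),
    mock.call('GET', 'https://www.foobar.com', data=None),
])

# any_order=True relaxes the ordering requirement
request.assert_has_calls([
    mock.call('GET', 'https://www.foobar.com', data=None),
    mock.call('POST', 'https://www.foobar.com', data='{"x": 1}'),
], any_order=True)
```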
QA:
importing PyQt4 modules
<p>Have you installed multiple versions of Python or PyQt? I'd recommend uninstalling both Python and PyQt and reinstalling them. Also check the version of Python configured as your IDE's interpreter.</p>
QA:
Unable to install pyOpenSSL
<p>I had a similar issue and installed the libssl header files so I could compile the python module:</p> <pre><code>sudo apt-get install libssl-dev </code></pre>
QA:
Accelerate or decelerate a movie clip
<p>I think the best way of accelerating and decelerating clip objects is using <strong>easing functions</strong>.</p> <p>Some reference sites:</p> <ul> <li><a href="http://easings.net" rel="nofollow">http://easings.net</a></li> <li><a href="http://www.gizma.com/easing/" rel="nofollow">http://www.gizma.com/easing/</a></li> <li><a href="http://gsgd.co.uk/sandbox/jquery/easing/" rel="nofollow">http://gsgd.co.uk/sandbox/jquery/easing/</a></li> </ul> <p>Here's part of a script I made when trying to understand these functions. Maybe you can use some of this concepts to solve your issue.</p> <pre><code>from __future__ import division from moviepy.editor import TextClip, CompositeVideoClip import math def efunc(x0, x1, dur, func='linear', **kwargs): # Return an easing function. # It will control a single dimention of the clip movement. # http://www.gizma.com/easing/ def linear(t): return c*t/d + b def out_quad(t): t = t/d return -c * t*(t-2) + b def in_out_sine(t): return -c/2 * (math.cos(math.pi*t/d) - 1) + b def in_quint(t): t = t/d return c*t*t*t*t*t + b def in_out_circ(t): t /= d/2; if t &lt; 1: return -c/2 * (math.sqrt(1 - t*t) - 1) + b t -= 2; return c/2 * (math.sqrt(1 - t*t) + 1) + b; def out_bounce(t): # http://gsgd.co.uk/sandbox/jquery/easing/jquery.easing.1.3.js t = t/d if t &lt; 1/2.75: return c*(7.5625*t*t) + b elif t &lt; 2/2.75: t -= 1.5/2.75 return c*(7.5625*t*t + .75) + b elif t &lt; 2.5/2.75: t -= 2.25/2.75 return c*(7.5625*t*t + .9375) + b else: t -= 2.625/2.75 return c*(7.5625*t*t + .984375) + b # Kept the (t, b, c, d) notation found everywhere. b = x0 c = x1 - x0 d = dur return locals()[func] def particle(x0, x1, y0, y1, d, func='linear', color='black', **kwargs): # Dummy clip for testing. 
def pos(t): return efunc(x0, x1, d, func=func)(t), efunc(y0, y1, d, func=func)(t) return ( TextClip('*', fontsize=80, color=color) .set_position(pos) .set_duration(d) ) # Make a gif to visualize the behaviour of the functions: easing_functions = [ ('linear', 'red'), ('in_out_sine', 'green'), ('in_out_circ', 'violet'), ('out_quad', 'blue'), ('out_bounce', 'brown'), ('in_quint', 'black'), ] d = 4 x0, x1 = 0, 370 clips = [] for i, (func, c) in enumerate(easing_functions): y = 40*i clips.append(particle(x0, x1, y, y, d=d, func=func, color=c)) clips.append(particle(x1, x0, y, y, d=d, func=func, color=c).set_start(d)) clip = CompositeVideoClip(clips, size=(400,250), bg_color=(255,255,255)) clip.write_gif('easing.gif', fps=12) </code></pre> <p><br></p> <p>The output of the script:</p> <p><a href="https://i.stack.imgur.com/73oVB.gif" rel="nofollow"><img src="https://i.stack.imgur.com/73oVB.gif" alt="Easing functions demo"></a></p>
QA:
argparse: Emulating GCC's "-fno-<option>" semantics
<p>Without any fancy footwork I can set up a pair of arguments that write to the same <code>dest</code>, and take advantage of the fact that the last write is the one that sticks:</p> <pre><code>In [765]: parser=argparse.ArgumentParser() In [766]: a1=parser.add_argument('-y',action='store_true') In [767]: a2=parser.add_argument('-n',action='store_false') </code></pre> <p>Without a <code>dest</code> parameter these use a name derived from the option strings. But I can give a <code>dest</code>, or change that value after creation:</p> <pre><code>In [768]: a1.dest Out[768]: 'y' In [769]: a2.dest Out[769]: 'n' In [770]: a1.dest='switch' In [771]: a2.dest='switch' </code></pre> <p>Now use of either will set the <code>switch</code> attribute.</p> <pre><code>In [772]: parser.parse_args([]) Out[772]: Namespace(switch=False) </code></pre> <p>The default comes from the first defined argument; that's a function of how defaults are set at the start of parsing. For all other inputs, it's the last argument that sets the value:</p> <pre><code>In [773]: parser.parse_args(['-y']) Out[773]: Namespace(switch=True) In [774]: parser.parse_args(['-n']) Out[774]: Namespace(switch=False) In [775]: parser.parse_args(['-n','-y','-n','-y']) Out[775]: Namespace(switch=True) In [776]: parser.parse_args(['-n','-y','-n']) Out[776]: Namespace(switch=False) </code></pre> <p>The default could also be set with a separate command:</p> <pre><code>parser.set_defaults(switch='foo') </code></pre> <p>If you wanted to use this sort of feature a lot, you could write a little utility function that creates the pair of arguments with any flags and dest you want. There's even a bug/issue request for such an enhancement, but I doubt it will be implemented.</p>
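The "little utility function" mentioned at the end might look something like this (just a sketch; `add_bool_pair` is a name I made up, not an argparse API):

```python
import argparse

def add_bool_pair(parser, on_flag, off_flag, dest, default=False):
    # Register a -y/-n style pair of flags that write to the same dest.
    # The last flag given on the command line wins, mirroring GCC's
    # -foption / -fno-option behaviour described above.
    parser.add_argument(on_flag, dest=dest, action='store_true')
    parser.add_argument(off_flag, dest=dest, action='store_false')
    parser.set_defaults(**{dest: default})
    return parser

parser = argparse.ArgumentParser()
add_bool_pair(parser, '-y', '-n', dest='switch')

print(parser.parse_args(['-n', '-y']))  # Namespace(switch=True)
```

Setting the default explicitly via `set_defaults` avoids depending on which argument of the pair was registered first.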
QA:
'Stack()' output with all Individual index's filled in Pandas DataFrame
<p>the relevant pandas option is <code>'display.multi_sparse'</code><br> you can set it yourself with</p> <pre><code>pd.set_option('display.multi_sparse', False) </code></pre> <p>or use <code>pd.option_context</code> to temporarily set it in a <code>with</code> block</p> <pre><code>with pd.option_context('display.multi_sparse', False): dates = pd.date_range('20130101',periods=6) print(pd.DataFrame(np.random.randn(6,4),index=dates,columns=list('ABCD')).stack()) 2013-01-01 A 0.074056 2013-01-01 B 0.565971 2013-01-01 C 0.312375 2013-01-01 D 0.000926 2013-01-02 A 0.669702 2013-01-02 B 0.458241 2013-01-02 C 0.854965 2013-01-02 D 1.608542 2013-01-03 A 0.358990 2013-01-03 B 0.194446 2013-01-03 C -0.988489 2013-01-03 D -0.967467 2013-01-04 A -0.768605 2013-01-04 B 0.791746 2013-01-04 C 0.073552 2013-01-04 D -0.604505 2013-01-05 A 0.254031 2013-01-05 B 0.143891 2013-01-05 C -0.351159 2013-01-05 D 0.642623 2013-01-06 A 0.499416 2013-01-06 B -0.588694 2013-01-06 C 1.418078 2013-01-06 D -0.071737 dtype: float64 </code></pre>
QA:
google cloud dataflow can run locally but can't run on the cloud
<p>Reading a tar file using StringIO as mentioned above cannot be recommended, since it requires loading all of the data into memory.</p> <p>It seems your original implementation didn't work because tarfile.open() uses the methods seek() and tell(), which are not supported by the fileio._CompressedFile object returned by filebasedsource.open_file().</p> <p>I filed <a href="https://issues.apache.org/jira/browse/BEAM-778" rel="nofollow">https://issues.apache.org/jira/browse/BEAM-778</a> for this.</p>
QA:
Append to dictionary in defaultdict
<p>If I understood you correctly, you want to replace the list with a dict that you can later add values to.</p> <p>If so, you can do this:</p> <pre><code>dates = [datetime.date(2016, 10, 17), datetime.date(2016, 10, 18), datetime.date(2016, 10, 19), datetime.date(2016, 10, 20), datetime.date(2016, 10, 21), datetime.date(2016, 10, 22), datetime.date(2016, 10, 23)] e = defaultdict(dict) for key, value in d.iteritems(): value = (sorted(value, key=itemgetter('date'), reverse=False)) for date in dates: for i in value: if i['date'] == str(date) and i['time'] == 'morning': value1 = float(i['value1']) temp = {'val_morning': value1 } e[str(date)].update(temp) #### HERE I replaced append with update! elif i['date'] == str(date) and i['time'] == 'evening': value2 = float(i['value2']) temp = {'val_evening': value2 } e[str(date)].update(temp) #### HERE I replaced append with update! </code></pre> <p>I simply replaced the append with <a href="https://docs.python.org/2/library/stdtypes.html#dict.update" rel="nofollow" title="update">update</a> (and of course made the defaultdict use dict instead of list).</p>
QA:
How to find the longest sub-array within a threshold?
<p>Here's a simple loop-based solution in Python:</p> <pre><code>def longest_subarray_within_threshold(sorted_array, threshold): result = (0, 0) longest = 0 i = j = 0 end = len(sorted_array) while i &lt; end: if j &lt; end and sorted_array[j] - sorted_array[i] &lt;= threshold: current_distance = j - i if current_distance &gt; longest: longest = current_distance result = (i, j) j += 1 else: i += 1 return result </code></pre>
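A quick sanity check of the function (repeated here so the snippet runs on its own); the returned pair is read as inclusive start/end indices into the sorted input:

```python
def longest_subarray_within_threshold(sorted_array, threshold):
    # Same two-pointer function as above, repeated so this snippet is runnable.
    result = (0, 0)
    longest = 0
    i = j = 0
    end = len(sorted_array)
    while i < end:
        if j < end and sorted_array[j] - sorted_array[i] <= threshold:
            current_distance = j - i
            if current_distance > longest:
                longest = current_distance
                result = (i, j)
            j += 1
        else:
            i += 1
    return result

# [7, 8, 9] is the longest run whose spread stays within the threshold of 2.
print(longest_subarray_within_threshold([1, 2, 4, 7, 8, 9], 2))  # (3, 5)
```

Both pointers only ever move forward, so the whole scan is O(n).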
QA:
Django: authenticate the user
<p>Save the user in a variable and then call <code>user.save()</code> on it, since the <code>User</code> class itself can't call the method <code>save()</code>. Try it:</p> <pre><code>def create_user(request): if request.method == 'POST': user_info = forms.UserInfoForm(request.POST) if user_info.is_valid(): cleaned_info = user_info.cleaned_data user = User.objects.create_user(username=cleaned_info['username'], password=cleaned_info['password']) user.save() render(.......) </code></pre> <p>Then you need to call auth.authenticate in your function get_entry:</p> <pre><code>def get_entry(request): if request.method == 'POST': user = auth.authenticate(username='testcase', password='test') if user: ......... </code></pre>
QA:
How to create a subset of document using lxml?
<p>The following code removes elements that don't have any <code>subelement1</code> descendants and are not named <code>subelement1</code>.</p> <pre><code>from lxml import etree tree = etree.parse("input.xml") # First XML document in question for elem in tree.iter(): if elem.xpath("not(.//subelement1)") and not(elem.tag == "subelement1"): if elem.getparent() is not None: elem.getparent().remove(elem) print etree.tostring(tree) </code></pre> <p>Output:</p> <pre><code>&lt;root&gt; &lt;element1&gt; &lt;subelement1&gt;blabla&lt;/subelement1&gt; &lt;/element1&gt; &lt;/root&gt; </code></pre>
QA:
Multiple Pandas DataFrame Bar charts on the same chart
<p>Is this what you are looking for? </p> <pre><code>import pandas import matplotlib.pyplot as plt import numpy as np Scores = [ {"TID":7,"ScoreRank":1,"Score":834,"Average":690}, {"TID":7,"ScoreRank":2,"Score":820,"Average":690}, {"TID":7,"ScoreRank":3,"Score":788,"Average":690}, {"TID":8,"ScoreRank":1,"Score":617,"Average":571}, {"TID":8,"ScoreRank":2,"Score":610,"Average":571}, {"TID":8,"ScoreRank":3,"Score":600,"Average":571}, {"TID":9,"ScoreRank":1,"Score":650,"Average":584}, {"TID":9,"ScoreRank":2,"Score":644,"Average":584}, {"TID":9,"ScoreRank":3,"Score":618,"Average":584}, {"TID":10,"ScoreRank":1,"Score":632,"Average":547}, {"TID":10,"ScoreRank":2,"Score":593,"Average":547}, {"TID":10,"ScoreRank":3,"Score":577,"Average":547}, {"TID":11,"ScoreRank":1,"Score":479,"Average":409}, {"TID":11,"ScoreRank":2,"Score":445,"Average":409}, {"TID":11,"ScoreRank":3,"Score":442,"Average":409}, {"TID":12,"ScoreRank":1,"Score":370,"Average":299}, {"TID":12,"ScoreRank":2,"Score":349,"Average":299}, {"TID":12,"ScoreRank":3,"Score":341,"Average":299}, {"TID":13,"ScoreRank":1,"Score":342,"Average":252}, {"TID":13,"ScoreRank":2,"Score":318,"Average":252}, {"TID":13,"ScoreRank":3,"Score":286,"Average":252}, {"TID":14,"ScoreRank":1,"Score":303,"Average":257}, {"TID":14,"ScoreRank":2,"Score":292,"Average":257}, {"TID":14,"ScoreRank":3,"Score":288,"Average":257}, {"TID":15,"ScoreRank":1,"Score":312,"Average":242}, {"TID":15,"ScoreRank":2,"Score":276,"Average":242}, {"TID":15,"ScoreRank":3,"Score":264,"Average":242}, {"TID":16,"ScoreRank":1,"Score":421,"Average":369}, {"TID":16,"ScoreRank":2,"Score":403,"Average":369}, {"TID":16,"ScoreRank":3,"Score":398,"Average":369}, {"TID":17,"ScoreRank":1,"Score":479,"Average":418}, {"TID":17,"ScoreRank":2,"Score":466,"Average":418}, {"TID":17,"ScoreRank":3,"Score":455,"Average":418}, {"TID":18,"ScoreRank":1,"Score":554,"Average":463}, {"TID":18,"ScoreRank":2,"Score":521,"Average":463}, {"TID":18,"ScoreRank":3,"Score":520,"Average":463}] df = 
pandas.DataFrame(Scores) f, ax1 = plt.subplots(1, figsize=(10,5)) bar_width = 0.75 bar_l = [i+1 for i in range(len(np.unique(df['TID'])))] tick_pos = [i+(bar_width/2) for i in bar_l] ax1.bar(bar_l, df['Score'][df['ScoreRank'] == 1], width=bar_width, label='Rank1', alpha=0.5, color='#eaff0a') ax1.bar(bar_l, df['Score'][df['ScoreRank'] == 2], width=bar_width, label='Rank2', alpha=0.5, color='#939393') ax1.bar(bar_l, df['Score'][df['ScoreRank'] == 3], width=bar_width, label='Rank3', alpha=0.5, color='#e29024') ax1.bar(bar_l, df['Average'][df['ScoreRank'] == 3], width=bar_width, label='Average', alpha=0.5, color='#FF0000') plt.xticks(tick_pos, np.unique(df['TID'])) ax1.set_ylabel("Score") ax1.set_xlabel("TID") plt.legend(loc='upper right') plt.xlim([min(tick_pos)-bar_width, max(tick_pos)+bar_width]) plt.show() </code></pre> <p>result: </p> <p><a href="https://i.stack.imgur.com/wh4bi.png" rel="nofollow"><img src="https://i.stack.imgur.com/wh4bi.png" alt="enter image description here"></a></p>
QA:
Python (win10): python35.dll and VCRUNTIME140.dll missing
<p>Thanks for your help</p> <p>I discovered that executing C:\Windows\System32\python.exe just doesn't work</p> <p>so I just erased python.exe symlink in the system32 folder and added C:\Users\usr\Documents\MyExes\WinPython-64bit-3.5.2.2Qt5\python-3.5.2.amd64\ to PATH</p> <p>and it works</p> <p>Thank you again !!!</p>
QA:
Validating input with inquirer
<p>The function used for <code>validate</code> must take <strong>two</strong> arguments; the first is a dictionary with previously given answers, and the second is the current answer.</p> <p>The <a href="https://github.com/magmax/python-inquirer/blob/master/inquirer/questions.py#L115-L121" rel="nofollow">code to handle validation</a> catches <em>all</em> exceptions and turns those into validation errors, so using a lambda with just one argument will always result in validation failing.</p> <p>Make your lambda accept the answers dictionary too; you can ignore the value given:</p> <pre><code>questions = [ inquirer.Text('b_file', message='.GBK File', validate=lambda answers, file: len(str(file))), inquirer.Text('e_file', message='.XLS File', validate=lambda answers, file: len(str(file)))] </code></pre> <p>With that change, the questions work:</p> <pre><code>&gt;&gt;&gt; import inquirer &gt;&gt;&gt; questions = [ ... inquirer.Text('b_file', message='.GBK File', ... validate=lambda answers, file: len(str(file))), ... inquirer.Text('e_file', message='.XLS File', ... validate=lambda answers, file: len(str(file)))] &gt;&gt;&gt; answers = inquirer.prompt(questions) [?] .GBK File: foo [?] .XLS File: bar &gt;&gt;&gt; pprint(answers) {'b_file': 'foo', 'e_file': 'bar'} </code></pre>
QA:
How to make multiple file from different folder same name in one file in python
<p>Firstly, you are trying to open the folder itself. Secondly, we have to close the file every time we read it, to avoid permission issues.</p> <p>I tried this code; it should work now:</p> <pre><code>import os import glob #So that * in directory listing can be interpreted as all filenames filenames = [glob.glob(os.path.join(os.path.expanduser('~'),'Desktop','Test_folder','Input','*.txt')), glob.glob(os.path.join(os.path.expanduser('~'),'Desktop','Test_folder','Output','*.txt'))] filenames[0].extend(filenames[1]) filenames=filenames[0] if( not os.path.isdir(os.path.join(os.path.expanduser('~'), 'Desktop', 'Test_output'))): os.mkdir(os.path.join(os.path.expanduser('~'), 'Desktop', 'Test_output')) for fname in filenames: with open(fname) as file: for line in file.readlines(): f = open(os.path.join(os.path.expanduser('~'), 'Desktop', 'Test_output','{:}.txt'.format(os.path.split(fname)[-1] )), 'a+') f.write(line) f.close() #This should take care of the permissions issue </code></pre>
QA:
How can I optimize the intersections between lists with two elements and generate a list of lists without duplicates in python?
<p>I solved my problem with this beautiful function:</p> <pre><code>net = [] for a, b in zip(nga, ngb): net.append([a, b]) def nets_super_gen(net): not_con = list(net) netn = list(not_con[0]) not_con.remove(not_con[0]) new_net = [] while len(netn) != len(new_net): new_net = list(netn) for z in net: if z[0] in netn and z[1] not in netn: netn.append(z[1]) not_con.remove(z) elif z[0] not in netn and z[1] in netn: netn.append(z[0]) not_con.remove(z) try: if z[0] in netn and z[1] in netn: not_con.remove(z) except ValueError: pass return(netn, not_con) list_of_lists, not_con = nets_super_gen(net) </code></pre>
QA:
Call a Python function with arguments based on user input
<p>Split the function name from the arguments. Look up the function by name using a predefined map. Parse the arguments with <code>literal_eval</code>. Call the function with the arguments.</p> <pre><code>available = {} def register_func(f): available[f.__name__] = f return f @register_func def var(value): print(value) from ast import literal_eval def do_user_func(user_input): name, args = user_input.split('(', 1) return available[name](*literal_eval('(' + args[:-1] + ',)')) do_user_func("var('test')") # prints "test" </code></pre> <p>This is still incredibly brittle; any invalid input will fail (such as forgetting parentheses, or an invalid function name). It's up to you to make this more robust.</p> <p><code>literal_eval</code> is still somewhat unsafe on untrusted input, as it's possible to construct small strings that evaluate to large amounts of memory. <code>'[' * 10 + ']' * 10</code>, for a safe but demonstrative example.</p> <p>Finally, <strong>do not use <code>eval</code> on untrusted user input</strong>. <a href="http://nedbatchelder.com/blog/201206/eval_really_is_dangerous.html" rel="nofollow">There is no practical way to secure it from malicious input.</a> While it will evaluate the nice input you expect, it will also evaluate code that, for example, will delete all your files.</p>
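As one sketch of making the lookup more robust (the specific checks and error messages here are my own choices, not part of the original recipe):

```python
from ast import literal_eval

available = {}

def register_func(f):
    available[f.__name__] = f
    return f

@register_func
def var(value):
    return value

def do_user_func(user_input):
    # Reject anything that doesn't look like name(...) up front.
    if '(' not in user_input or not user_input.endswith(')'):
        raise ValueError('expected a call like func(arg, ...)')
    name, args = user_input.split('(', 1)
    if name not in available:
        raise ValueError('unknown function: %r' % name)
    try:
        parsed = literal_eval('(' + args[:-1] + ',)')
    except (ValueError, SyntaxError):
        raise ValueError('arguments must be Python literals')
    return available[name](*parsed)

print(do_user_func("var('test')"))  # test
```

This turns malformed input into a single predictable exception type instead of whatever `split`, `literal_eval`, or the dict lookup happens to raise.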
QA:
Learn Python the Hard way ex25 - Want to check my understanding
<p>How Python functions more or less work is the following:</p> <pre><code>def function_name(parameter_name_used_locally_within_function_name): #do stuff with parameter_name_used_locally_within_function_name some_new_value = parameter_name_used_locally_within_function_name return some_new_value </code></pre> <p>Notice how the parameter is only within the scope of the function <code>function_name</code>. That variable will only be used in that function and not outside of it. When we return a variable from a function, we can assign it to another variable calling the function:</p> <pre><code>my_variable = function_name("hello") </code></pre> <p><code>my_variable</code> now has <code>"hello"</code> as its value, since we called the function passing in the value <code>"hello"</code>. Notice I didn't call the function with a specific variable name? We don't care what the parameter name is; all we know is that the function takes one input. That parameter name is only used in the function. Notice how we receive the value of <code>some_new_value</code> without knowing the name of that variable when we called the function?</p> <p>Let me give you a broader example of what's going on. Functions can be thought of as a task you give someone to do. Let's say the function or task is to ask someone to cook something for us. The chef or task needs ingredients to cook with (that's our input), and we wish to get food back (our return output). Let's say I want an omelette. I know I have to give the chef eggs to make me one; I don't care how he makes it or what he does to it, as long as I get my output/omelette back. 
He can call the eggs what he wants, he can break the eggs how he wants, he can fry them in the pan how he likes, but as long as I get my omelette, I'm happy.</p> <p>Back to our programming world, the function would be something like:</p> <pre><code>def cook_me_something(ingredients): #I don't know how the chef makes things for us nor do I care if ingredients == "eggs": food = "omelette" elif ingredients == "water": food = "boiled water" return food </code></pre> <p>We call it like this: </p> <pre><code>my_food_to_eat = cook_me_something("eggs") </code></pre> <p>Notice I gave him "eggs" and I got some "omelette" back. I didn't say the eggs are the ingredients, nor did I know what he called the food that he gave me. He just returned <code>food</code> that contained an <code>omelette</code>.</p> <p>Now let's talk about chaining functions together. </p> <p>So we've got the basics down about me giving something to the chef and him giving me food back based on what I gave him. What if we gave him something that he needs to process before cooking with it? Let's say he doesn't know how to grind coffee beans, but his co-worker does. He would pass the beans to that person to grind down, and then cook with the returned result.</p> <pre><code>def cook_me_something(ingredients): #I don't know how the chef makes things for us nor do I care if ingredients == "eggs": food = "omelette" elif ingredients == "water": food = "boiled water" elif ingredients == "coffee beans": co_worker_finished_product = help_me_co_worker(ingredients) #makes coffee with the co_worker_finished_product which would be coffee grindings food = "coffee" return food #we have to define that function of the co worker helping: def help_me_co_worker(chef_passed_ingredients): if chef_passed_ingredients == "coffee beans": ingredients = "coffee grindings" return ingredients </code></pre> <p>Notice how the co-worker has a local variable <code>ingredients</code>? 
It's different from what the chef has, since the chef has his own ingredients and the co-worker has his own. Notice how the chef didn't care what the co-worker called his ingredients or how he handled the items. The chef gave something to the co-worker and expected the finished product.</p> <p>That's more or less how it works. As long as functions get their input, they will do work and maybe give an output. We don't care what they call their variables inside their functions, because those are their own items.</p> <p>So let's go back to your example:</p> <pre><code>def break_words(stuff): words = stuff.split(' ') return words def sort_sentence(sentence): words = break_words(sentence) return sort_words(words) &gt;&gt;&gt; sentence = "All good things come to those who wait." &gt;&gt;&gt; sorted_words = ex25.sort_sentence(sentence) &gt;&gt;&gt; sorted_words ['All', 'come', 'good', 'things', 'those', 'to', 'wait.', 'who'] </code></pre> <p>Let's see if we can break it down for you to understand.</p> <p>You called <code>sorted_words = ex25.sort_sentence(sentence)</code> and set <code>sorted_words</code> to the output of the function <code>sort_sentence()</code>, which is <code>['All', 'come', 'good', 'things', 'those', 'to', 'wait.', 'who']</code>. You passed in the input <code>sentence</code>.</p> <p><code>sort_sentence(sentence)</code> gets executed. The string you passed in is now called <code>sentence</code> inside the function. Note that you could have called the function like this and it would still work:</p> <pre><code>sorted_words = ex25.sort_sentence("All good things come to those who wait.") </code></pre> <p>And the function <code>sort_sentence()</code> will still call that string <code>sentence</code>. The function basically says: whatever my input is, I'm calling it sentence. You can pass me your object named sentence, and I'm going to call it sentence while I'm working with it. 
</p> <p>Next on the stack is:</p> <pre><code>words = break_words(sentence) </code></pre> <p>which calls the function <code>break_words</code> with what the function <code>sort_sentence</code> called <code>sentence</code>. So if you follow the trace, it's basically doing:</p> <pre><code>words = break_words("All good things come to those who wait.") </code></pre> <p>Next on the stack is:</p> <pre><code>words = stuff.split(' ') return words </code></pre> <p>Note that this function calls its input <code>stuff</code>. So it took sort_sentence's input, which sort_sentence called <code>sentence</code>, and the function <code>break_words</code> is now calling it <code>stuff</code>.</p> <p>It splits the "sentence" up into words, stores them in a list, and returns the list "words".</p> <p>Notice how the function <code>sort_sentence</code> stores the output of <code>break_words</code> in the variable <code>words</code>. Notice how the function <code>break_words</code> returns a variable named <code>words</code>? They are the same in this case, but it doesn't matter if one called it differently. <code>sort_sentence</code> could store the output as <code>foo</code> and it would still work. We are talking about different scopes of variables. Outside of the function <code>break_words</code> the variable <code>words</code> can be anything, and <code>break_words</code> would not care. But inside <code>break_words</code> that variable is the output of the function.</p> <p>"Under my house, my rules; outside of my house you can do whatever you want" type of thing.</p> <p>Same deal with <code>sort_sentence</code>'s return variable and how we store what we got back from it. It doesn't matter how we store it or what we call it. 
</p> <p>If you wanted, you could rename it as:</p> <pre><code>def break_words(stuff): break_words_words = stuff.split(' ') return break_words_words def sort_sentence(sentence): words = break_words(sentence) return sort_words(words) #not sure where this function sort_words is coming from. #return words would work normally. &gt;&gt;&gt; sentence = "All good things come to those who wait." &gt;&gt;&gt; sorted_words = ex25.sort_sentence(sentence) &gt;&gt;&gt; sorted_words ['All', 'come', 'good', 'things', 'those', 'to', 'wait.', 'who'] </code></pre> <p>Just think of local variables and parameters as names for the things you work with. Like our example with the chef: the chef might have called the eggs ingredients, but I called them whatever I wanted and just passed in "eggs". It's all about the scope of things. Think of functions as a house: while you are in the house, you can name the objects in the house whatever you want, and outside of the house those same names could mean different things, but inside the house, they are what you want them to be. And when you throw something out, your name for that item has nothing to do with the outside world, since the outside world will name it something else. It might even name it the same thing, though...</p> <p>If I rambled too much, ask questions and I will try to clear it up for you.</p> <p>Edit:</p> <p>Coming back from lunch, I thought of variables as containers. They hold the values, but you don't care what other people's containers are named. You only care about yours, and when someone gives you something, you put it in a container and name it something that helps you know what's inside it. When you give away an item, you don't give the container, because you need it to store other things.</p>
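For reference, here is a runnable version of the ex25 snippet being discussed. `sort_words` is filled in with `sorted`, which is what the book's exercise uses; treat that as an assumption if your copy differs:

```python
def break_words(stuff):
    """Split a sentence on spaces into a list of words."""
    return stuff.split(' ')

def sort_words(words):
    """Sort the words (capitalized words sort before lowercase ones)."""
    return sorted(words)

def sort_sentence(sentence):
    """Break a sentence apart and return the sorted words."""
    return sort_words(break_words(sentence))

print(sort_sentence("All good things come to those who wait."))
# ['All', 'come', 'good', 'things', 'those', 'to', 'wait.', 'who']
```

Note that 'All' sorts first only because uppercase letters compare lower than lowercase ones in Python's default string ordering.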
QA:
Append to dictionary in defaultdict
<p>If I understand correctly:</p> <pre><code>import datetime dates = [datetime.date(2016, 10, 17), datetime.date(2016, 10, 18), datetime.date(2016, 10, 19), datetime.date(2016, 10, 20), datetime.date(2016, 10, 21), datetime.date(2016, 10, 22), datetime.date(2016, 10, 23)] dict_x = {} for i in map(str,set(dates)): dict_x[i] = {'val_morning': 0.0, 'val_evening': 0.0} dict_x output : {'2016-10-17': {'val_evening': 0.0, 'val_morning': 0.0}, '2016-10-18': {'val_evening': 0.0, 'val_morning': 0.0}, '2016-10-19': {'val_evening': 0.0, 'val_morning': 0.0}, '2016-10-20': {'val_evening': 0.0, 'val_morning': 0.0}, '2016-10-21': {'val_evening': 0.0, 'val_morning': 0.0}, '2016-10-22': {'val_evening': 0.0, 'val_morning': 0.0}, '2016-10-23': {'val_evening': 0.0, 'val_morning': 0.0}} </code></pre>
QA:
read data in specific column and row of a text file
<p>You are trying to use <code>float()</code> on something that contains letters. This happens when you call:</p> <pre><code>numfloat = map(float , line.split()) </code></pre> <p>You need to tell us the exact output that you are looking for, but here is one possible solution:</p> <pre><code>num_float = float(line.split()[1]) </code></pre> <p>This will only get you the middle column; I'm not certain whether you need the entire row or not.</p> <p>Additionally, as noted below, you need to change <code>=</code> to <code>==</code> in your if statement. <code>=</code> is for assignment, <code>==</code> is for comparison.</p>
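Putting the pieces together, a sketch of pulling the middle column out of a matching row. The file layout and match condition here are assumptions, since the question's full code isn't shown:

```python
# Hypothetical data file: each line holds "label value flag", e.g.
#   alpha 1.5 0
#   beta 2.5 1
def middle_column(path, label):
    """Return the middle column (as float) of the first line whose
    first field equals label, or None if no line matches."""
    with open(path) as fh:
        for line in fh:
            fields = line.split()
            if fields and fields[0] == label:  # note '==', not '='
                return float(fields[1])
    return None
```

Only `fields[1]` is passed to `float()`, so labels in the other columns never reach the conversion.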
QA:
Implement Bhattacharyya loss function using python layer Caffe
<p>Let's say bottom[0].data is p, bottom[1].data is q and Db(p,q) denotes the Bhattacharyya Distance between p and q.</p> <p>The only thing you need to do in your backward function is to compute the partial derivatives of Db with respect to its inputs (p and q) and store them in the respective bottom diff blobs:</p> <p><a href="https://i.stack.imgur.com/Q3ckj.gif" rel="nofollow">diff_p = dDb(p,q)/dp</a><br> <a href="https://i.stack.imgur.com/6s5Jl.gif" rel="nofollow">diff_q = dDb(p,q)/dq</a></p> <p>so your backward function would look something like:</p> <pre><code>def backward(self, top, propagate_down, bottom): if propagate_down[0]: bottom[0].diff[...] = # calculate dDb(p,q)/dp if propagate_down[1]: bottom[1].diff[...] = # calculate dDb(p,q)/dq </code></pre> <p>Note that you normally use the average (instead of the total) error of your batch. Then you would end up with something like this:</p> <pre><code>def forward(self,bottom,top): self.mult[...] = np.multiply(bottom[0].data,bottom[1].data) self.multAndsqrt[...] = np.sqrt(self.mult) top[0].data[...] = -math.log(np.sum(self.multAndsqrt)) / bottom[0].num def backward(self, top, propagate_down, bottom): if propagate_down[0]: bottom[0].diff[...] = # calculate dDb(p,q)/dp / bottom[0].num if propagate_down[1]: bottom[1].diff[...] = # calculate dDb(p,q)/dq / bottom[1].num </code></pre> <p>Once you have calculated the partial derivatives of Db, you can insert them in the templates above as you did for the function of the forward pass.</p>
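For this particular loss, with Db(p,q) = -log(sum_i sqrt(p_i*q_i)), the partials work out to dDb/dp_i = -0.5*sqrt(q_i/p_i) / sum_j sqrt(p_j*q_j), and symmetrically for q. A standalone NumPy sketch of the forward/backward pair, outside of the Caffe layer plumbing (so no blobs, no batch averaging):

```python
import numpy as np

def bhattacharyya_forward(p, q):
    """Db(p, q) = -log(sum(sqrt(p * q))). Assumes strictly positive inputs."""
    return -np.log(np.sum(np.sqrt(p * q)))

def bhattacharyya_backward(p, q):
    """Return (dDb/dp, dDb/dq) elementwise."""
    s = np.sum(np.sqrt(p * q))
    dp = -0.5 * np.sqrt(q / p) / s
    dq = -0.5 * np.sqrt(p / q) / s
    return dp, dq
```

These are the expressions you would drop into the `bottom[...].diff[...] =` slots of the backward template above (divided by `bottom[i].num` if the forward pass averages over the batch).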
QA:
Call a Python function with arguments based on user input
<p>I am going to post this solution as an alternative, under the assumption that you are dealing with <em>simple</em> inputs such as: </p> <pre><code>var(arg) </code></pre> <p>Or, a single function call that can take a list of positional arguments. </p> <p>Using <code>eval</code> would be a horrible, un-recommended idea, as already mentioned. I think that is the security risk you were reading about.</p> <p>The ideal way to perform this approach is to have a dictionary mapping the string to the method you want to execute.</p> <p>Furthermore, you can consider an alternative way to do this: take a space-separated input that tells you how to call your function with arguments. Consider an input like this: </p> <pre><code>"var arg1 arg2" </code></pre> <p>So when you input that: </p> <pre><code>call = input().split() </code></pre> <p>You will now have: </p> <pre><code>['var', 'arg1', 'arg2'] </code></pre> <p>You can now consider your first argument the function, and everything else the arguments you are passing to the function. So, as a functional example: </p> <pre><code>def var(some_arg, other_arg): print(some_arg) print(other_arg) d = {"var": var} call = input().split() d[call[0]](*call[1:]) </code></pre> <p>Demo: </p> <pre><code>var foo bar foo bar </code></pre>