[
"tex.stackexchange",
"0000071315.txt"
] | Q:
Vertical alignment in 'tabular'
I have a problem with vertically aligning content in a tabular environment:
\documentclass{article}
\begin{document}
\begin{tabular}{ p{2em} p{2em} }
test & \begin{tabular}{l} s s s \\ s s s \end{tabular} \\
\end{tabular}
\end{document}
As far as I understand the array documentation, this should give me two table cells which are top-aligned and have a width of 2em.
The thing is that if I don't have a table in the second cell but simply text then it works as intended:
\documentclass{article}
\begin{document}
\begin{tabular}{ p{2em} p{2em} }
\hline
test & s s s s s s s s s s s s s s s \\
\hline
\end{tabular}
\end{document}
What is the reason for the difference in appearance?
A:
It's easier to see the baselines of the two columns if the second column has some additional text:
\documentclass{article}
\begin{document}
\begin{tabular}{ p{2em} p{7em} }
test & s \begin{tabular}{l} first \\ second \end{tabular} s s s s s \\\hline
test & s \begin{tabular}[t]{l} first \\ second \end{tabular} s s s s s \\
\end{tabular}
\end{document}
In the second row the inner table aligns on its top row because [t] has been added, but in both cases the baseline of the first column of the outer table is aligned with the baseline of the second column.
(Note the image in your question is not generated from the posted code)
|
[
"stackoverflow",
"0048597174.txt"
] | Q:
C++: Friend Functions
class A {
friend void display();
};
friend void display() {
cout<<"friend";
}
int main() {
display();
}
Works fine...
class A {
friend void display() {
cout<<"friend";
}
};
int main() {
display();
}
It shows:
display is not declared in this scope.
Why is it so ?
A:
In the first example (which should actually fail to compile for another reason: you can't use the friend keyword when defining a function outside a class) you define the function display in the global scope.
In the second example the display function is not a member function (it lives in the scope surrounding the class), but its name is declared only within the scope of class A. You need to declare it again at global scope for an ordinary call like the one in main to find it.
|
[
"pt.stackoverflow",
"0000163310.txt"
] | Q:
Logical operators in form validation with PHP
Good morning,
I have these fields in an HTML form and I am trying to validate it with PHP.
In my PHP validation, I need only one of the fields (any one) to be required. That is, the user must fill in at least one of the fields for the form to be submitted.
<input type="email" name="email" id="oemail" placeholder="Digite seu E-mail">
<input type="tel" name="whats" id="whats" placeholder="Digite seu whatsapp" maxlength="15">
<input type="tel" name="telefone" id="telefone" placeholder="Digite seu telefone" maxlength="14">
I am doing the validation with the following code (working):
if (empty($whats) OR strstr($whats, ' ')==false) {
$erro = 1;
}
if ($erro != 0) {
echo '<script type="text/javascript">window.location = "erro.php";</script>';
exit;
}
I tried the following code, but as those who understand PHP can see, it doesn't work:
if (empty($whats) OR strstr($whats, ' ')==false) and (empty($telefone) OR strstr($telefone, ' ')==false) and (empty($email) OR strstr($email, ' ')==false) {
$erro = 1;
}
if ($erro != 0) {
echo '<script type="text/javascript">window.location = "erro.php";</script>';
exit;
}
My questions are:
For my problem, is this the most correct way to validate the
form? If not, what would be better?
How could I correctly build the logic of the second attempt in PHP?
A:
I need only one of the fields (any one) to be required.
Based on that rule alone, the code below will work:
if ( !empty($whats) || !empty($telefone) || !empty($email) ) {
// valid, at least one field is not empty
} else {
// invalid
echo '<script type="text/javascript">window.location = "erro.php";</script>';
exit;
}
For my problem, is this the most correct way to validate the
form? If not, what would be better?
How to validate the form better:
The validation above works, but there are a few things you can do to make it safer and more useful.
Check the phone format with preg_match():
If you have a mask validating phone numbers on the front end, you can also validate the format in PHP to guarantee you are only receiving valid data. Example:
if ( !preg_match( '|\(\d{2}\)\s\d{4}\d?\-\d{4}|', trim($telefone) ) ) {
// invalid phone: not in the format
// (99) 9999-9999 or (99) 99999-9999
}
Check the email format with filter_var():
if ( ! filter_var( trim($email), FILTER_VALIDATE_EMAIL ) ) {
// invalid email
}
Note that I used trim() on the variables to strip any whitespace from the beginning or end of the string. It's good practice, to avoid validation errors caused by stray spaces.
A:
Another alternative is the array_filter() function: it checks each item of the array, dropping any empty ones, so if at least one value exists the result is evaluated as true in the if.
$itens = array($whats, $telefone, $email);
if(array_filter($itens)){
echo 'some value was filled in';
}else{
echo 'nothing was filled in';
}
|
[
"stackoverflow",
"0041910583.txt"
] | Q:
Errno 13 Permission denied Python
In Python, I am currently experimenting with what I can do with the open command. I tried to open a file and got an error message. Here's my code:
open(r'C:\Users\****\Desktop\File1')
My error message was:
PermissionError: [Errno 13] Permission denied: 'C:\\Users\\****\\Desktop\\File1'
I looked on the site to try and find some answers and I saw a post where somebody mentioned chmod. 1. I'm not sure what this is, and 2. I don't know how to use it, and that's why I've come here.
A:
Your user doesn't have the right permissions to read the file; since you used open() without specifying a mode, it defaults to opening for reading.
Since you're using Windows, you should read a little more about File and Folder Permissions.
Also, if you want to play with your file permissions, you should right-click it, choose Properties and select the Security tab.
Or if you want to be a little more hardcore, you can run your script as admin.
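A quick way to narrow down the cause (a minimal sketch; the path below is hypothetical) is to check whether the target is actually a directory and whether it is readable before opening it - on Windows, passing a folder to open() raises this exact PermissionError:
import os

path = r'C:\Users\you\Desktop\File1'  # hypothetical path for illustration

if os.path.isdir(path):
    print('This is a directory, not a file')  # a very common cause of Errno 13
elif not os.access(path, os.R_OK):
    print('No read permission for this path')
else:
    with open(path) as f:  # open() defaults to read mode 'r'
        print(f.read())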
SO Related Questions:
Example1
A:
The problem here is that your user doesn't have the proper rights/permissions to open the file. This means you'd need to grant administrative privileges to your Python IDE before you run that command.
As you are a Windows user, just right-click your Python IDE => select 'Run as Administrator' and then run your command.
And if you are using the command line to run the code, do the same: open the command prompt with admin rights. Hope it helps.
|
[
"stackoverflow",
"0037931671.txt"
] | Q:
How to make an XSS-safe browser-based code editor
I would like to use a browser based code editor such as Monaco or Ace in my application to allow users to write some code in the browser that will be executed by other users. You can imagine jsfiddle or similar. However, I don't want to open up Cross-Site-Scripting vulnerabilities. I'm not finding much about how to properly implement these tools in an application and prevent XSS.
Is there a way to "sandbox" the javascript written in these tools when it runs? How do tools such as JSFiddle, CodePen, and online editors etc. protect against malicious scripts? In general, what techniques should I use to prevent XSS vulnerabilities when using a browser-based code editor in my app?
A:
Typically these tools run the script on another domain. So they are (intentionally) vulnerable to Cross-Site Scripting, but they sandbox it by leveraging the same-origin policy. That's the simplest and easiest way to do it. Even if the editor site has logins etc., scripts running on the sandbox domain are blocked by the same-origin policy from accessing any content on the main domain, and as such the XSS is rather useless.
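As a minimal sketch of that architecture (assuming Flask; the domain name and the load_user_script helper are hypothetical placeholders), the main site would embed an iframe pointing at a separate origin whose only job is to serve the user's code:
from flask import Flask, Response

app = Flask(__name__)

def load_user_script(script_id):
    # hypothetical lookup; a real app would fetch the stored user code
    return "console.log('user code runs here');"

@app.route('/run/<script_id>')
def run(script_id):
    # Served from a dedicated sandbox domain (e.g. usercontent-example.net)
    # that never carries main-site session cookies, so the same-origin
    # policy keeps the user's script away from main-domain content.
    html = '<script>%s</script>' % load_user_script(script_id)
    return Response(html, mimetype='text/html')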
|
[
"stackoverflow",
"0051460765.txt"
] | Q:
Determine if the name is a Short name, average length name, or a long name
I am trying to write an if/else statement, as I think that would be my best way. What I need it to do is determine if the name is a short name, an average-length name, or a long name. With 13 being the average length, I would need to code as below, but I cannot seem to make it work.
IT WORKS NOW
System.out.println("Please Enter your first and last name");
String str = input.nextLine();
// here the code wil display the string's length and its first character
System.out.println("The number of characters in your name is " +
str.length());
if(str.length() == 13)
System.out.println("Your name is average length.");
else if (str.length() > 13)
System.out.println("Your name is long length.");
else if (str.length() < 13)
System.out.println("Your name is short length.");
A:
You've got the right idea with the overall concept, but as Andreas points out, you've got some syntax errors with your code.
Java also requires that you import a Scanner to read the user's input. I also tweaked a line to subtract one from the total length of the input to compensate for the space between the first and last name.
Everyone starts somewhere... and this is my first response to a post! Keep at it!
package javaapplicationName;
import java.util.Scanner;
public class JavaApplication41 {
public static void main(String[] args) {
// You'd want to grab the user's
// input so we need to initialize a scanner object.
Scanner input = new Scanner(System.in);
System.out.println("Please Enter your first and last name");
String str = input.nextLine();
System.out.println("The number of characters in your name is " + (str.length() - 1));
if (str.length() - 1 == 13) {
System.out.println("Your name is average length.");
} else if (str.length() - 1 > 13) {
System.out.println("Your name is long length.");
} else if (str.length() - 1 < 13) {
System.out.println("Your name is short length.");
}
}
}
|
[
"stackoverflow",
"0012025220.txt"
] | Q:
Ajax call failing for PHP and working fine for ASP.NET
I am doing a simple Ajax call which works in ASP.NET but fails with some weird DOM exceptions when you put a breakpoint inside the onreadystatechange function. What extra header magic is ASP.NET doing that PHP is failing to do, causing the responseText to be blank for PHP only, with the DOM exceptions visible when you inspect the xmlhttp variable in Google Chrome, for example?
var xmlhttp;
if (window.XMLHttpRequest) {// code for IE7+, Firefox, Chrome, Opera, Safari
xmlhttp = new XMLHttpRequest();
}
else {// code for IE6, IE5
xmlhttp = new ActiveXObject("Microsoft.XMLHTTP");
}
xmlhttp.onreadystatechange = function () {
if (xmlhttp.readyState == 4 && xmlhttp.status == 200) {
//Here is where you unwrap the data
var arr = UnWrapVars(xmlhttp.responseText);
if (callBackFunc) {
callBackFunc(arr);
}
}
};
xmlhttp.open("POST", ajaxURL, true);
if (navigator.userAgent.toLowerCase().indexOf('msie') == -1) {
xmlhttp.overrideMimeType("application/octet-stream");
}
xmlhttp.send("[FunctionName]" + functionName + "[/FunctionName][CanvasID]" + canvasid + "[/CanvasID][WindowID]" + windowid.toString() + "[/WindowID][Vars]" + getEncodedVariables() + "[/Vars]");
The incoming data from the Ajax page is like:
[root][Vars][windows]
I am intentionally using [ and not <, so please don't point that out. Again, it works from an ASP.NET page but not from a PHP page, and the data returned is the same (I have checked server side). So what is the missing header magic, if any, that ASP.NET does and PHP isn't doing?
A:
This is because $_POST is actually shorthand for accessing data which was posted through a form, and it expects the appropriate header. Put this line before the xmlhttp.send():
xmlhttp.setRequestHeader("Content-type", "application/x-www-form-urlencoded");
This will force PHP to populate the $_POST array.
Alternatively you could create a Request instance that acts as a custom wrapper for the php://input stream. Then your XHR calls would not need to send the additional header.
|
[
"stackoverflow",
"0043528324.txt"
] | Q:
How can I create multiple directories using a loop in python?
I want to create 10 directories with a loop and I tried this code:
import os
pathname = 1
directory = "C:\Directory\Path\Name\\" + str(pathname)
while pathname < 11:
    if not os.path.exists(directory):
        os.makedirs(directory)
    pathname += 1
But it is only creating the first directory and stopping, as if it's not even going through the rest of the loop. I'm fairly new to Python, and this code made sense to me, so I don't know why it might not work. Any help is appreciated.
A:
import os
pathname = 1
directory = "C:\Directory\Path\Name\\" + str(pathname)
while pathname < 11:
    if not os.path.exists(directory):
        os.makedirs(directory)
    pathname += 1
    # rebuild the path each pass; the original never updated it inside
    # the loop, so every iteration checked the same existing directory
    directory = "C:\Directory\Path\Name\\" + str(pathname)
|
[
"blender.stackexchange",
"0000073581.txt"
] | Q:
real multilayer sss in cycles
It seems to me that Blender Cycles is quite limited for doing real multilayer SSS...
What I want to achieve is something like this.
Here is my bench, a shape with increasing thickness:
In RenderMan, you can define multilayer SSS like this:
and you obtain something like this
which is exactly what I want. To explain with a little graph, here is how I want my scatter to behave:
But in Blender Cycles, it seems that you cannot achieve anything better than this:
The reason is that you can tell the SSS shader to be present from 0 thickness to x (via scale and radius), but there is no way to set its presence from a to b where a > 0.
The graph looks something like this in Blender Cycles:
the deep layer is overlapping all the other layers...
A very simple dummy node setup to summarize what is performed by any skin shader I found or tried to make (also, this is a very bad setup, with no energy conservation):
Is there a way in Blender Cycles to know the depth of the scattering? Or am I thinking about my shader the wrong way?
Thanks a lot! (and sorry for my average English...)
edit:
digging...
I have made a pretty nice step forward using absorption and translucency. It seems that SSS does not absorb the light rays, making the use of ray length impossible, whereas translucency of course does. Here is the same bench:
with suzanne (right one has a quick white skull using solidify)
edit 2
Still digging around
I have created a node setup that produces a 3-color ramp. It is quite useful for my current tests:
(the cursor here is the V coordinate; it should be the depth in the final shader)
edit3
What about this attempt?
I am not sure about energy conservation here; I think the Add shader at the end should be balanced with something in the diffuse shader... maybe not. Any idea?
The 8.0 value is just there to make the effect more obvious. It should be 1.0 in the real world, I guess.
Also, isn't there an IOR coefficient in the real world? I have tried the glass shader with a roughness of 1.0, but it clearly doesn't do the trick...
anyway, here it is:
A:
This can be achieved by combining multiple Subsurface Scattering shader nodes to 'cancel out' part of the scatter at short distances. This allows the scatter profiles to be manipulated as desired.
As explained in your question, the standard Subsurface Scattering shader only provides control of the 'radius' of the scatter in each of the channels - ie, how far each of the colors is scattered within the surface, with falloff over distance - with no control over when the scattering actually starts. This means that close to the point of illumination all colors are scattered equally and each drops off at its own rate as the distance increases.
Using the default 'Cubic' sub-surface scatter mode, the distribution follows the Cubic distribution function :
y = e ^ -((x^3)/(a^3))
For Quadratic, the distribution is very similar :
y = e ^ -((x^2)/(a^2))
Online tools are available to graph these functions so that they can be easily manipulated - for example, https://www.desmos.com/calculator, which allows you to instantly see the result of changes to the function.
Cubic :
Quadratic :
To control the scattering we need to be able to manipulate the profile of those graphs and this can be achieved by combining multiple scatters, each with different properties. For example, by adding two shaders together we can produce a more interesting distribution :
In order to block the scattering at 'short' distances we would need to subtract one distribution from the other. eg, 'y = (e ^-((x^3)/(a^3)) - e^-((x^3)/(b^3)))*c' with carefully selected values for 'a', 'b', 'c' (a controls one Cubic, b controls the other Cubic, c scales the result).
(https://www.desmos.com/calculator/fnixk2lkqv)
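Written out in display form, these are the same profiles as above - the single cubic falloff and the subtracted two-term profile:
$$y = e^{-x^{3}/a^{3}}, \qquad y = c\left(e^{-x^{3}/a^{3}} - e^{-x^{3}/b^{3}}\right)$$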
Under normal circumstances it's not possible to subtract the effect of one shader from another - since we only have an 'Add' shader and a 'Mix' shader but no 'Subtract'. However, there is a trick we can use to subtract the effect of a shader by using 'negative' color values. The color of a surface is represented by a Vector of 3 values - Red, Green, Blue. These values are usually in the range of 0.0 to 1.0 but are equally valid to be outside of that range. For values greater than 1.0, the surface will have the effect of amplifying any incoming light rays. However, values less than 0.0 result in any incoming 'positive' light rays being converted into "negative" light. Combining such negative rays with the output of another shader using the Add shader will result in the negative light being subtracted from the positive light. Care should be taken to ensure that the overall effect does not result in negative light escaping the system - this can cause some unexpected effects - but this can be used to 'cancel out' the effect of one shader with that of another.
The Subsurface Scattering Shader consists of two elements - effectively a 'Diffuse' element to handle the surface interaction and the 'Subsurface' element for the below-surface scattering. Subtracting one Subsurface Scattering shader from another will remove the 'Diffuse' element, so we need to add it in again - so we need 3 shaders : one for the 'base' subsurface (which should have Radius set to 0,0,0 to disable scattering), one for the 'positive' subsurface scatter and another for the 'negative' subsurface scatter.
This can be implemented with the following nodes :
The material consists of three Subsurface Scattering nodes. The top one has its Radius set to 0,0,0 so that it does not perform any scattering - it acts effectively as a Diffuse shader (and, in fact, it could be replaced by one - although my tests seem to show that it's not actually more efficient that way) to provide the surface interaction. The next Subsurface Scattering node provides the 'positive' element of the scattering. Its Radius is fed from a Combine XYZ node which can be set to the desired scattering parameters. The final Subsurface Scattering node provides the 'negative' element of the scattering and it also has a Combine XYZ to provide the scattering parameters. A Vector Subtract node generates the 'negative' color from the input RGB node by subtracting from 0,0,0. The Color is also multiplied by an additional Combine XYZ node to provide scaling to adjust each channel. The outputs from the three Subsurface Scattering shaders are combined using Add shader nodes to provide the final output - ie, 'surface' + 'positive' - 'negative'.
I used the same online graphing tool mentioned above to adjust the parameters until I was happy with the parameters for each channel. The values of a,b,c can be simply plugged into the 3 Combine XYZ nodes in the above material - 'a' into the Radius input of the 'positive' shader, 'b' into the 'negative' shader, 'c' into the multiplier for the Color input.
An interactive version of this should be available here : https://www.desmos.com/calculator/jp3nbgellf.
To see how Quadratic would affect the distribution, see here : https://www.desmos.com/calculator/zycoycntdl
To demonstrate on a back-lit wedge :
Blend file attached
|
[
"stackoverflow",
"0015170330.txt"
] | Q:
Save and Retrieve Image file on Windows Phone 8 (using SQL Server Compact)
Can anyone give me an example of how I can store and retrieve image files on WP8?
It seems that SQL Server Compact only supports the Byte[] data type?
So I have to convert between BitmapImage and Byte[] to achieve this?
Thanks.
A:
Using a SQL database to store an image seems like overkill to me. I'd suggest you use IsolatedStorage instead and store the image as a file. The Internet is full of examples of how to achieve that; take a look here for example.
|
[
"stackoverflow",
"0041916495.txt"
] | Q:
Webpack.config for v2.2.0
Webpack has changed a lot and I can't find a valid webpack.config that works for v2.2.0.
I want to migrate my webpack.config from 2.1 to 2.2.
I got a lot of errors like this:
ERROR in ./src/styles/core.scss
Module build failed: ReferenceError: window is not defined
So what do I need to change to make it work with v2.2?
My file is:
import webpack from 'webpack';
import cssnano from 'cssnano';
import ExtractTextPlugin from 'extract-text-webpack-plugin';
import HtmlWebpackPlugin from 'html-webpack-plugin';
const cssModulesLoader = [
'css?sourceMap&-minimize',
'modules',
'importLoaders=1',
'localIdentName=[name]__[local]__[hash:base64:5]'
].join('&');
export default function(options) {
const webpackConfig = {
entry: [
'./src/index.js'
],
output: {
path: __dirname + '/public',
publicPath: '/',
filename: 'bundle.[hash].js'
},
plugins: [
new HtmlWebpackPlugin({
template: './src/index.html',
favicon: './src/static/favicon.png',
filename: 'index.html',
inject: 'body'
}),
new ExtractTextPlugin({
filename: 'styles.[hash].css',
allChunks: true
})
],
module: {
loaders: [{
test: /\.js$/,
exclude: /node_modules/,
loader: 'babel',
query: {
cacheDirectory: true,
plugins: ['transform-runtime'],
presets: [
['es2015', {'modules': false}],
'react',
'stage-0'
],
env: {
production: {
presets: ['react-optimize'],
compact: true
},
test: {
plugins: [
['__coverage__', {'only': 'src/'}],
'babel-plugin-rewire'
]
}
}
}
}, {
test: /\.json$/,
loader: 'json'
}, {
test: /\.html$/,
loader: 'html'
}, {
test: /\.scss$/,
loader: ExtractTextPlugin.extract({
disable: options.dev,
fallbackLoader: 'style-loader',
loader: [cssModulesLoader, 'postcss', 'sass?sourceMap']
})
}, {
test: /\.css$/,
loader: ExtractTextPlugin.extract({
fallbackLoader: 'style-loader',
loader: ['css-loader', 'postcss']
})
}]
},
resolve: {
modules: ['node_modules'],
extensions: ['', '.js', '.jsx', '.json'],
alias: {}
},
globals: {},
postcss: [
cssnano({
autoprefixer: {
add: true,
remove: true,
browsers: ['last 2 versions']
},
discardComments: {
removeAll: true
},
discardUnused: false,
mergeIdents: false,
reduceIdents: false,
safe: true,
sourcemap: true
})
]
};
if (options.dev) {
webpackConfig.devtool = 'source-map';
webpackConfig.plugins.push(
new webpack.DefinePlugin({
'__DEV_': true
})
);
}
if (options.test) {
process.env.NODE_ENV = 'test';
webpackConfig.devtool = 'cheap-module-source-map';
webpackConfig.resolve.alias.sinon = 'sinon/pkg/sinon.js';
webpackConfig.module.noParse = [
/\/sinon\.js/
];
webpackConfig.module.loaders.push([
{
test: /sinon(\\|\/)pkg(\\|\/)sinon\.js/,
loader: 'imports?define=>false,require=>false'
}
]);
// Enzyme fix, see:
// https://github.com/airbnb/enzyme/issues/47
webpackConfig.externals = {
'react/addons': true,
'react/lib/ExecutionEnvironment': true,
'react/lib/ReactContext': 'window'
};
webpackConfig.plugins.push(
new webpack.DefinePlugin({
'__COVERAGE__': options.coverage,
'__TEST_': true
})
);
}
if (options.prod) {
process.env.NODE_ENV = 'production';
webpackConfig.plugins.push(
new webpack.LoaderOptionsPlugin({
minimize: true,
debug: false
}),
new webpack.DefinePlugin({
'process.env': {
'NODE_ENV': JSON.stringify('production'),
'__PROD__': true
}
}),
new webpack.optimize.OccurrenceOrderPlugin(),
new webpack.optimize.DedupePlugin(),
new webpack.optimize.UglifyJsPlugin({
compress: {
unused: true,
dead_code: true,
warnings: false
}
})
);
}
if (options.deploy) {
webpackConfig.output.publicPath = '/MoonMail-UI/';
}
return webpackConfig;
}
A:
For anyone trying to migrate from Webpack 2.1 to 2.2, here is my new config file:
package.json
{
"devDependencies": {
"autoprefixer": "^6.4.0",
"babel-cli": "^6.11.4",
"babel-core": "^6.13.2",
"babel-eslint": "^6.1.0",
"babel-loader": "^6.2.4",
"babel-plugin-__coverage__": "^11.0.0",
"babel-plugin-rewire": "^1.0.0-rc-5",
"babel-plugin-transform-runtime": "^6.15.0",
"babel-polyfill": "^6.13.0",
"babel-preset-es2015": "^6.13.2",
"babel-preset-react": "^6.11.1",
"babel-preset-react-optimize": "^1.0.1",
"babel-preset-stage-0": "^6.5.0",
"css-loader": "^0.25.0",
"cssnano": "^3.7.3",
"enzyme": "^2.4.1",
"eslint": "^3.3.0",
"eslint-config-standard": "^6.0.1",
"eslint-config-standard-react": "^4.0.2",
"eslint-plugin-babel": "^3.3.0",
"eslint-plugin-promise": "^2.0.1",
"eslint-plugin-react": "^6.1.0",
"eslint-plugin-standard": "^2.0.0",
"express": "^4.14.0",
"extract-text-webpack-plugin": "^2.0.0-rc.2",
"gh-pages": "^0.11.0",
"html-loader": "^0.4.4",
"html-webpack-plugin": "^2.24.1",
"imports-loader": "^0.6.5",
"json-loader": "^0.5.4",
"mocha": "^3.0.2",
"node-sass": "^3.8.0",
"postcss-cssnext": "^2.9.0",
"postcss-import": "^9.1.0",
"postcss-loader": "^1.2.2",
"sass": "^0.5.0",
"sass-loader": "^4.1.1",
"sinon": "^1.17.5",
"sinon-chai": "^2.8.0",
"style-loader": "^0.13.1",
"url-loader": "^0.5.7",
"webpack": "2.2.0",
"webpack-dev-server": "2.2.0",
"yargs": "^5.0.0"
}
}
webpack.config.js
import webpack from 'webpack';
import cssnano from 'cssnano';
import ExtractTextPlugin from 'extract-text-webpack-plugin';
import HtmlWebpackPlugin from 'html-webpack-plugin';
export default function(options) {
const webpackConfig = {
entry: [
'./src/index.js'
],
output: {
path: __dirname + '/public',
publicPath: '/',
filename: 'bundle.[hash].js'
},
plugins: [
new HtmlWebpackPlugin({
template: './src/index.html',
favicon: './src/static/favicon.png',
filename: 'index.html',
inject: 'body'
}),
new ExtractTextPlugin({ filename: 'styles.[hash].css', disable: false, allChunks: true })
],
module: {
rules: [{
test: /\.(js|jsx)$/,
exclude: /node_modules/,
loader: 'babel-loader',
query: {
cacheDirectory: true,
plugins: ['transform-runtime'],
presets: [
['es2015', {'modules': false}],
'react',
'stage-0'
],
env: {
production: {
presets: ['react-optimize'],
compact: true
},
test: {
plugins: [
['__coverage__', {'only': 'src/'}],
'babel-plugin-rewire'
]
}
}
}
}, {
test: /\.json$/,
loader: 'json'
}, {
test: /\.html$/,
loader: 'html-loader'
}, {
test: /\.(css|scss)$/,
loader: ExtractTextPlugin.extract({
loader: [
{ loader: 'css-loader?sourceMap&modules&importLoaders=1&localIdentName=[path]___[local]___[hash:base64:5]'},
{ loader: 'sass-loader?sourceMap'},
{ loader: 'postcss-loader?sourceMap' },
]
})
}]
},
resolve: {
modules: ['node_modules'],
extensions: ['.js', '.jsx', '.json'],
alias: {}
}
};
if (options.dev) {
webpackConfig.devtool = 'source-map';
webpackConfig.plugins.push(
new webpack.DefinePlugin({
'__DEV_': true
})
);
}
if (options.test) {
process.env.NODE_ENV = 'test';
webpackConfig.devtool = 'cheap-module-source-map';
webpackConfig.resolve.alias.sinon = 'sinon/pkg/sinon.js';
webpackConfig.module.noParse = [
/\/sinon\.js/
];
webpackConfig.module.loaders.push([
{
test: /sinon(\\|\/)pkg(\\|\/)sinon\.js/,
loader: 'imports?define=>false,require=>false'
}
]);
// Enzyme fix, see:
// https://github.com/airbnb/enzyme/issues/47
webpackConfig.externals = {
'react/addons': true,
'react/lib/ExecutionEnvironment': true,
'react/lib/ReactContext': 'window'
};
webpackConfig.plugins.push(
new webpack.DefinePlugin({
'__COVERAGE__': options.coverage,
'__TEST_': true
})
);
}
if (options.prod) {
process.env.NODE_ENV = 'production';
webpackConfig.plugins.push(
new webpack.LoaderOptionsPlugin({
minimize: true,
debug: false
}),
new webpack.DefinePlugin({
'process.env': {
'NODE_ENV': JSON.stringify('production'),
'__PROD__': true
}
}),
new webpack.optimize.OccurrenceOrderPlugin(),
new webpack.optimize.DedupePlugin(),
new webpack.optimize.UglifyJsPlugin({
compress: {
unused: true,
dead_code: true,
warnings: false
}
})
);
}
if (options.deploy) {
webpackConfig.output.publicPath = '/MoonMail-UI/';
}
return webpackConfig;
}
postcss.config.js
module.exports = {
plugins: {
'postcss-import': {},
'postcss-cssnext': {
browsers: ['last 2 versions', '> 5%'],
},
},
}
If you don't use resolve-url-loader you can remove this:
{ loader: 'sass-loader'}, // remove this line
{ loader: 'resolve-url-loader'} // remove this line
More info about this can be found here: https://github.com/postcss/postcss-loader/issues/92
|
[
"stackoverflow",
"0002858210.txt"
] | Q:
CSS Horizontal sub-menu
I am working on a horizontal CSS dropdown menu. It is working nearly fine in IE 7, IE 8, Firefox and Chrome, but I want to put the top <ul> on the top layer (e.g. z-index: 100). I want this because the top-level <ul> has a graphical background while the dropdown is only styled with CSS, and as it stands the dropdown is destroying the layout.
HTML Code:
<div id="mainMenu">
<ul>
<li><a href="t1">TOP1<!--[if gt IE 6]><!--></a><!--<![endif]-->
<!--[if lte IE 6]><table><tr><td><![endif]-->
<ul>
<li><a href="l1">LINK1</a></li>
<li><a href="l2">LINK2</a></li>
<li><a href="l3">LINK3</a></li>
<li><a href="l4">LINK4</a></li>
</ul>
<!--[if lte IE 6]></td></tr></table></a><![endif]-->
</li>
<li class="center"><a href="t2">TOP2<!--[if gt IE 6]><!--></a><!--<![endif]-->
<!--[if lte IE 6]><table><tr><td></td></tr></table></a><![endif]--></li>
<li><a name="t3">TOP3<!--[if gt IE 6]><!--></a><!--<![endif]-->
<!--[if lte IE 6]><table><tr><td><![endif]-->
<ul class="last">
<li><a href="l5">LINK5</a></li>
<li><a href="l6">LINK6</a></li>
<li><a href="l7">LINK7</a></li>
</ul>
<!--[if lte IE 6]></td></tr></table></a><![endif]-->
</li>
</ul>
</div>
CSS Code
/* style the outer div to give it width */
#mainMenu {
position: absolute;
margin-left: 6px;
margin-top: 180px;
}
/* remove all the bullets, borders and padding from the default list styling */
#mainMenu ul {
position: absolute;
width: 494px;
padding: 0;
margin: 0;
list-style-type: none;
background: #FFF url(../images/mainMenu_bg.gif) no-repeat;
}
/* float the list to make it horizontal and a relative positon so that you can control the dropdown menu positon */
#mainMenu li {
position: relative;
float: left;
padding-left: 5px;
width: 160px;
vertical-align: middle;
text-align: left;
}
#mainMenu li.center {
padding-left: 0px;
text-align: center;
}
/* style the links for the top level */
#mainMenu a, #mainMenu a:visited {
display: block;
font: bold 12px/1em Helvetica, arial, sans-serif;
color: #FFF;
text-decoration: none;
height: 42px;
line-height: 35px;
}
/* hide the sub levels and give them a positon absolute so that they take up no room */
#mainMenu ul ul {
visibility: hidden;
position: absolute;
height: 0;
top: 35px;
left: -5px;
width: 165px;
}
/* style the table so that it takes no part in the layout - required for IE to work */
#mainMenu table {
position: absolute;
top: 0;
left: 0;
}
/* style the second level links */
#mainMenu ul ul a, #mainMenu ul ul a:visited {
width: 165px;
height: 20px;
line-height: 19px;
font: bold 10px Helvetica, arial, sans-serif;
background: #EF7D0E;
color: #FFF;
text-align: left;
padding: 6px 0 0 5px;
border-top: 1px solid #C1650B;
}
#mainMenu ul ul.last a, #mainMenu ul ul.last a:visited {
width: 162px;
}
/* style the top level hover */
#mainMenu a:hover, #mainMenu ul ul a:hover{
color: #FFF;
text-decoration: underline;
}
#mainMenu :hover > a, #mainMenu ul ul :hover > a {
color: #FFF;
text-decoration: underline;
}
/* make the second level visible when hover on first level list OR link */
#mainMenu ul li:hover ul,
#mainMenu ul a:hover ul{
visibility: visible;
}
I still have a problem with showing the table in IE 6, but my main problem here is showing LINK1...6 under the TOP links.
I have tried many z-index settings but nothing worked.
I hope you can help me ;)
A:
Try this:
div#mainMenu ul
{
position:relative;
z-index:100;
}
|
[
"stackoverflow",
"0044959384.txt"
] | Q:
Form new array from associative arrays
I have 3 arrays that look like this:
Array1
(
[0] => Array
(
[Month] => 'Jan 2015'
[Total] => 10
)
[1] => Array
(
[Month] => 'Feb 2015'
[Total] => 8
)
[2] => Array
(
[Month] => 'Mar 2015'
[Total] => 10
)
)
Array2
(
[0] => Array
(
[Month] => 'Jan 2016'
[Total] => 7
)
[1] => Array
(
[Month] => 'Feb 2016'
[Total] => 5
)
[2] => Array
(
[Month] => 'Mar 2016'
[Total] => 15
)
)
Array3
(
[0] => Array
(
[Month] => 'Jan 2017'
[Total] => 13
)
[1] => Array
(
[Month] => 'Feb 2017'
[Total] => 10
)
[2] => Array
(
[Month] => 'Mar 2017'
[Total] => 11
)
)
All 3 arrays are of the same size.
What I want to achieve is to output matching months sequentially, with the corresponding values in the same order. The new arrays should look like this:
Desired result:
$months = array('Jan 2015','Jan 2016','Jan 2017', 'Feb 2015', 'Feb 2016'...'Mar 2016','Mar 2017');
$totals = array(10,7,13,5,5,...,15,11);
A:
try this https://eval.in/829000
<?php
$array1 = Array
(
0 => Array ( 'Month' => 'Jan 2015', 'Total' => 10 ),
1 => Array ( 'Month' => 'Feb 2015', 'Total' => 8),
2 => Array ( 'Month' => 'Mar 2015', 'Total' => 10)
);
$array2 = Array
(
0 => Array( 'Month' => 'Jan 2016', 'Total' => 7),
1 => Array( 'Month' => 'Feb 2016', 'Total' => 5),
2 => Array( 'Month' => 'Mar 2016', 'Total' => 15)
);
$array3 = Array
(
0 => Array( 'Month' => 'Jan 2017', 'Total' => 13),
1 => Array( 'Month' => 'Feb 2017', 'Total' => 10),
2 => Array( 'Month' => 'Mar 2017', 'Total' => 11)
);
//ordered months names in lower case
$monthsNames = ['jan','feb','mar','apr','may','jun','jul','aug','sep','oct','nov','dec'];
$months = [];
$totals = [];
$orderedMonths = [];
$orderedTotals = [];
$orderedIndexes = [];
foreach($array1 as $array){
$months[] = $array['Month'];
$totals[] = $array['Total'];
}
foreach($array2 as $array){
$months[] = $array['Month'];
$totals[] = $array['Total'];
}
foreach($array3 as $array){
$months[] = $array['Month'];
$totals[] = $array['Total'];
}
//now $months and $totals have all the values, but not ordered; we need to order them
//get the indexes order;
foreach ($monthsNames as $monthName){
$index = 0; //reset the index for the next month
foreach ($months as $inputMonth){
//if the $inputMonth == $monthName then collected it's index
if (strpos(strtolower($inputMonth), $monthName) !== false ){
$orderedIndexes[] = $index;
}
$index ++;
}
}
//build the final ordered array
foreach ($orderedIndexes as $index){
$orderedMonths[] = $months[$index];
$orderedTotals[] = $totals[$index];
}
var_dump($orderedMonths);
var_dump($orderedTotals);
exit;
?>
this outputs
array(9) {
[0]=>
string(8) "Jan 2015"
[1]=>
string(8) "Jan 2016"
[2]=>
string(8) "Jan 2017"
[3]=>
string(8) "Feb 2015"
[4]=>
string(8) "Feb 2016"
[5]=>
string(8) "Feb 2017"
[6]=>
string(8) "Mar 2015"
[7]=>
string(8) "Mar 2016"
[8]=>
string(8) "Mar 2017"
}
array(9) {
[0]=>
int(10)
[1]=>
int(7)
[2]=>
int(13)
[3]=>
int(8)
[4]=>
int(5)
[5]=>
int(10)
[6]=>
int(10)
[7]=>
int(15)
[8]=>
int(11)
}
Note that the totals in your desired result are not in the correct order, since Feb 2015 is 8, not 5.
Also note that if you trust the month string format, you can make use of the PHP date and time functions instead.
|
[
"stackoverflow",
"0008755132.txt"
] | Q:
MySQL Join Best Practice on Large Data
table1_shard1 (1,000,000 rows per shard x 120 shards)
id_user hash
table2 (100,000 rows)
value hash
Desired Output:
id_user hash value
I am trying to find the fastest way to associate id_user with value from the tables above.
My current query ran for 30 hours without result.
SELECT
table1_shard1.id_user, table1_shard1.hash, table2.value
FROM table1_shard1
LEFT JOIN table2 ON table1_shard1.hash=table2.hash
GROUP BY id_user
UNION
SELECT
table1_shard2.id_user, table1_shard2.hash, table2.value
FROM table1_shard1
LEFT JOIN table2 ON table1_shard2.hash=table2.hash
GROUP BY id_user
UNION
( ... )
UNION
SELECT
table1_shard120.id_user, table1_shard120.hash, table2.value
FROM table1_shard1
LEFT JOIN table2 ON table1_shard120.hash=table2.hash
GROUP BY id_user
A:
Firstly, do you have indexes on the hash fields?
I think you should merge your tables into one before the query (at least temporarily):
CREATE TEMPORARY TABLE IF NOT EXISTS tmp_shards
SELECT * FROM table1_shard1;
# the second CREATE would be skipped because tmp_shards already exists, so INSERT instead
INSERT INTO tmp_shards
SELECT * FROM table1_shard2;
# ...
Then do the main query
SELECT
shd.id_user
, shd.hash
, tb2.value
FROM tmp_shards AS shd
LEFT JOIN table2 AS tb2 ON (shd.hash = tb2.hash)
GROUP BY id_user
;
Not sure about the performance gain, but it'll at least be more maintainable.
|
[
"ja.stackoverflow",
"0000069396.txt"
] | Q:
The error shown where my extension's icon is specified doesn't make sense to me
I created a simple extension, but an error is shown where the icon is specified (see the figure below).
The error message appears, yet the image is displayed correctly in the extension's description.
I could leave it as it is if I ignored it, but it keeps bothering me.
What should I do to resolve the error?
A:
The issue linked below seems related.
Warning in package.json when specifying icon without https repository · Issue #90900 · microsoft/vscode
I believe what is displayed is not an error but a warning.
It seems to be saying that when you distribute an extension, the specified icon must exist in an online repository and be reachable via HTTPS (rather than HTTP).
package.json with a png icon shows a seemingly bogus warning · Issue #30434 · microsoft/vscode
"icon": "LanguageCCPP_color_128x.png",
"repository": {
"type": "git",
"url": "https://github.com/Microsoft/vscode-cpptools.git"
},
|
[
"superuser",
"0000632867.txt"
] | Q:
virtual pbx using x-lite stuck at offline
I am using x-lite to connect to virtualpbx.net. One of my user's x-lite is stuck at offline and the availability dropdown box is grayed out. I don't see any error messages in x-lite or virtualpbx.net. He can still make outgoing calls.
I didn't set up this system and don't know much about VOIP phones, etc. Every time I google x-lite or virtualpbx I get sales pages. I've looked at https://support.counterpath.com/ but don't see anything there.
A:
In X-lite
Softphone -> Account Settings
Under "Allow this account for"
Check "IM / Presence"
I can't believe this took me so long to figure out but I also can't believe I couldn't find an answer to this anywhere.
|
[
"es.stackoverflow",
"0000030744.txt"
] | Q:
Cannot assign "'1'": "Cliente.tipo_cliente" must be a "TipoCliente" instance
I am trying to save a form I made in Django, which has a foreign key from the Cliente model to the TipoCliente model, but when saving I get this error:
Cannot assign "'1'": "Cliente.tipo_cliente" must be a "TipoCliente"
instance
modelos.py
class TipoCliente(models.Model):
codigo = models.IntegerField()
descripcion = models.CharField(max_length=40)
class Cliente(models.Model):
tipo_cliente = models.ForeignKey('TipoCliente')
nombre = models.CharField(max_length=80)
views.py
tipo_cliente = TipoCliente.objects.all()
cliente = Cliente()
cliente.tipo_cliente = request.POST['tipo_cliente']
cliente.nombre = request.POST['nombre']
cliente.save()
error
ValueError at /generales/clientes
Cannot assign "'1'": "Cliente.tipo_cliente" must be a "TipoCliente" instance.
Django Version: 1.10.2
Exception Type: ValueError
Exception Value:
Cannot assign "'1'": "Cliente.tipo_cliente" must be a "TipoCliente" instance.
Using a Django form it works perfectly, but I have to do it without using a Django form.
A:
The error is clear: tipo_cliente must receive an object of type TipoCliente, but instead you assign the string received via POST.
Filter TipoCliente by the value received via POST, then assign the result to the tipo_cliente attribute of the Cliente model:
cliente = Cliente()
cliente.tipo_cliente = TipoCliente.objects.get(codigo = request.POST['tipo_cliente'])
cliente.nombre = request.POST['nombre']
cliente.save()
|
[
"stackoverflow",
"0061648226.txt"
] | Q:
Why Does Azure DevOps Server interleave output
Sorry in advance, I can't post actual code because of security restrictions at my job, but I'll try to make a contrived example.
I am working with Python 3.6.1 and running a module in an Azure Pipeline (ADS 2019). In the module, output is produced using a dictionary with the following structure:
#dummy data, assume files could be in any order in any category
{
"compliant": ['file1.py', 'file2.py'], #list of files which pass
"non-compliant":['file3.py'], #list of files which fail
"incompatible":['file4.py'] #list of files which could not be tested due to exceptions
}
When a failure occurs, one of our customers wants the script to output the command for a script that can be run to correct the non-compliant files. The program is written similarly to what follows:
result = some_func() #returns the above dict
print('compliant:')
for file in result['compliant']:
    print(file)
print('non-compliant:')
for file in result['non-compliant']:
    print(file)
print('incompatible:')
for file in result['incompatible']:
    print(file)
# prints a string to sys.stderr similar to python -m script arg1 arg2 ...
# script that is output is based on the arguments used to call
print_command_to_fix(sys.argv)
When run normally I get the correct output, as follows:
#correct output: occurs on bash and cmd
compliant:
file1.py
file2.py
non-compliant:
file3.py
incompatible:
file4.py
python -m script arg1 arg2 arg_to_fix
When I run on the Azure Pipeline, though, the output gets interleaved as follows:
#incorrect output: occurs only on azure pipeline runs
compliant:
python -m script arg1 arg2 arg_to_fix
file1.py
file2.py
non-compliant:
file3.py
incompatible:
file4.py
Whether I use print or sys.stderr.write, the interleaving persists, and I'm assuming print_command_to_fix() is somehow being called asynchronously. But my guess probably isn't accurate, since I haven't been working with ADS or Python for very long.
TL;DR: What am I doing wrong to get the above interleaved output on Pipelines only?
Edit: clarified certain points and fixed typos
A:
Discovered the answer after a few hours of troubleshooting.
ADS tracks both of the program's output streams, but does so asynchronously. The error was caused by outputting to both stdout and stderr; sending all output to one stream resolved the issue. The approach I took ended up being something like the following:
result = some_func() #returns the above dict
output = []
output.append('compliant:')
output.extend(result['compliant'])
output.append('non-compliant:')
output.extend(result['non-compliant'])
output.append('incompatible:')
output.extend(result['incompatible'])
# returns a string similar to python -m script arg1 arg2 ...
# script that is output is based on the arguments used to call
output.append(format_command_to_fix(sys.argv))
print('\n'.join(output))
Alternatively, I imagine other techniques for handling asynchronously captured output should resolve it as well.
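If both streams genuinely must be used, a small helper (a sketch, assuming the interleaving comes from the pipeline capturing each stream asynchronously) that emits each logical block as a single flushed write should also keep blocks contiguous:
import sys

def report(lines, stream=sys.stdout):
    # one write call per block, flushed immediately, so the pipeline's
    # log capture receives the whole block as a unit
    stream.write('\n'.join(lines) + '\n')
    stream.flush()

# e.g. report(['compliant:'] + result['compliant'])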
|
[
"stackoverflow",
"0011155417.txt"
] | Q:
Child process returns nil for fork call
puts "Process #{Process.pid} started"
return_value = fork
puts "Process #{Process.pid} returned #{return_value}"
puts "Process #{Process.pid} exited"
The above Ruby code printed this output to STDOUT:
Process 6644 started
Process 6644 returned 6645
Process 6644 exited
Process 6645 returned
Process 6645 exited
When the parent process executes fork, it returns the pid of the child process. Why does fork return nil in the child? Shouldn't this be a recursive kind of call, i.e. the child also forks another child process, and that child forks another process, etc.? Am I missing something here?
A:
This is how you distinguish in the code whether the code is running in the parent process or in the child process: the parent process receives the PID of the forked child, and the child gets nil.
Personally, I prefer to use the syntax:
pid = fork do
# this is the code that will be executed by the child
end
# this is the code executed by parent
Since most of the time, child is supposed to do something other than the parent process, this code reflects the intent much better.
|
[
"math.stackexchange",
"0002594908.txt"
] | Q:
Usage of mean value theorem ; bounded derivative and open interval
Let $f : (0,1) \to \mathbb{R}$ be a function such that $ |f'(x)| \leq 5 $ on the open interval $(0,1)$. Prove that $\lim_{x \to 1^-} f(x)$ exists.
It involves the derivative and the actual function itself, so I think I have to somehow use the mean value theorem.. Also, $f$ is continuous on $(0,1)$ and differentiable on $(0,1)$ ( because the derivative exists there ).
But then, the function is defined on the open interval, so the requirements for the mean value theorem aren't satisfied. I'm guessing we have to consider intervals of the form $(a,b)$ with $a > 0$ and $b < 1$.
Lastly, I don't see the significance of the $5$... Is it only there to establish that the derivative is bounded, or does the number itself have some significance (would the same thing hold if we had $3$, for example?).
Please give me a hint, not the solution. Something like "consider the mean value theorem on intervals of the form ... " would be very helpful.
A:
Pick a sequence $(x_{n})\subseteq(0,1)$ such that $x_{n}\rightarrow 1$. Then
\begin{align*}
|f(x_{n})-f(x_{m})|=|f'(\eta_{n,m})||x_{n}-x_{m}|\leq 5|x_{n}-x_{m}|,
\end{align*}
where $\eta_{n,m}$ is chosen by Mean Value Theorem. So $(f(x_{n}))$ is convergent. For other sequence $(y_{n})$ such that $y_{n}\rightarrow 1$, consider the sequence $(z_{n})$ defined by $z_{2n}=x_{n}$, $z_{2n+1}=y_{n}$ to claim that the limits of $(f(x_{n}))$ and $(f(y_{n}))$ are the same. So $\lim_{x\rightarrow 1^{-}}f(x)$ exists.
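To make the completeness step explicit (this is also where you see that the bound $5$ has no special significance; any fixed bound $M$ would do): since $(x_{n})$ converges, it is Cauchy, so for every $\varepsilon>0$ there is an $N$ with
\begin{align*}
m,n\geq N \implies |f(x_{n})-f(x_{m})|\leq 5|x_{n}-x_{m}|<\varepsilon,
\end{align*}
hence $(f(x_{n}))$ is Cauchy and converges by completeness of $\mathbb{R}$.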
|
[
"stackoverflow",
"0030274298.txt"
] | Q:
x-csrf-token validation fails on HttpPost
I have to post xml payload to an ODATA service which requires Authentication and x-csrf-token.
I have two AsyncTasks. The first one has a URLConnection object and fetches the x-csrf-token with the code below:
URL obj = new URL(Util.ODATA_URL + "SO_BEATPSet");
URLConnection conn = obj.openConnection();
conn.setRequestProperty("Authorization", "Basic " + authStringEnc);
conn.addRequestProperty("x-csrf-token", "fetch");
......
......
String server = conn.getHeaderField("x-csrf-token");
Now, in the second AsyncTask, executed right after the first one finishes successfully, I encounter a 403 error. It basically says that my x-csrf-token validation has failed.
I ran a simple looping test, where I ran the first AsyncTask three times, and I got three different tokens.
That is where I think the problem is: when I use HttpPost in the second AsyncTask, the server expects a different token than the one already fetched.
Is there any way that I can fetch and pass the x-csrf-token in the same call?
My second AsyncTask is like below:
HttpPost postRequest = new HttpPost(url);
String credentials = UUSERNAME + ":" + PASSWORD;
String base64EncodedCredentials = Base64.encodeToString(credentials.getBytes(), Base64.NO_WRAP);
postRequest.addHeader("Authorization", "Basic " + base64EncodedCredentials);
postRequest.addHeader("x-csrf-token", X_CSRF_TOKEN); // JHc4mG8siXrDtMSx0eD9wQ==
StringEntity entity = new StringEntity(XMLBuilders.BeatXMLBuilder());
entity.setContentType(new BasicHeader("Content-Type",
"application/atom+xml"));
postRequest.setEntity(entity);
A:
I was eventually able to get a successful POST result. The solution seems a bit dirty to me, but it did sort out my problem for now.
I put the code below in the doInBackground() method of the AsyncTask.
HttpClient httpclient = new DefaultHttpClient();
HttpPost postRequest = new HttpPost(url);
String credentials = USERNAME + ":" +PASSWORD;
String base64EncodedCredentials = Base64.encodeToString(credentials.getBytes(), Base64.NO_WRAP);
/**---------------------------------------------------------------------------------- **/
/** THIS CODE BELOW CALLS THE SERVER FOR THE TOKEN AND PASSES THE VALUE TO THE SUBSEQUENT POSTREQUEST CALL.
BY DOING THIS, THE SERVER IS NOT CALLED AGAIN BEFORE POSTREQUEST, AND USER GETS THE LATEST TOKEN **/
{
HttpGet httpget = new HttpGet(url);
httpget.setHeader("Authorization", "Basic " + base64EncodedCredentials);
httpget.setHeader("x-csrf-token", "fetch");
System.out.println("request:-------------------");
System.out.println(httpget.getRequestLine());
Header headers[] = httpget.getAllHeaders();
for (Header h : headers) {
System.out.println(h.getName() + "---:---- " + h.getValue());
}
HttpResponse res = httpclient.execute(httpget);
System.out.println("response:-------------------");
System.out.println(res.getStatusLine());
headers = res.getAllHeaders();
for (Header h : headers) {
System.out.println(h.getName() + "---:---- " + h.getValue());
if (h.getName().equals("x-csrf-token")) {
X_CSRF_TOKEN = h.getValue();
}
}
}
/**--------------------------------------------------------------------- **/
// The main POST REQUEST
postRequest.addHeader("Authorization", "Basic " + base64EncodedCredentials);
postRequest.setHeader("x-csrf-token", X_CSRF_TOKEN); // PASSING THE TOKEN GOTTEN FROM THE CODE ABOVE
StringEntity entity = new StringEntity(myString);
entity.setContentType(new BasicHeader("Content-Type",
"application/atom+xml"));
postRequest.setEntity(entity);
HttpResponse response = httpclient.execute(postRequest);
Log.d("Http Post Response:", response.toString());
String result = EntityUtils.toString(response.getEntity());
Log.d("Http Response:", result);
int responseCode = response.getStatusLine().getStatusCode();
Log.d("Http Response: ", "Response code " + responseCode);
As the code comments explain, the code makes an extra GET call to the server to fetch the latest token, even though HttpPost will make its own call; the token obtained is then passed to the subsequent POST request.
|
[
"ux.stackexchange",
"0000013904.txt"
] | Q:
Hiding empty categories or displaying "nothing in this category"
In a shop-like website, we have a horizontal menu in the header where all categories of items in stock are listed (7-9). Depending on the availability of items, some categories may be empty. Usually the list of categories changes once every 3-7 days.
We have the following options:
Hide links to categories without items.
Display category page as usual, but displaying "Nothing's found. Come back later." instead of items.
At first sight, the former option is the obvious one, since we don't want to make users think and waste time navigating through empty categories (there could be more than one). But the latter option gives us consistency: some old lady who is used to navigating to her preferred items from a link that is hidden at the moment could be frustrated by its absence (nope, we're not designing the website for elderly people, just an example).
Which option is better based on your experience?
A:
Most e-shops tend to keep all the products listed at all times, but put a notice on the product page when out of stock. This helps SEO to avoid having pages appear and disappear all the time.
When viewing a category with OOS products it's helpful to note which ones are OOS - perhaps greying them out or moving them to the bottom of the list so they don't get in the way of saleable goods.
If none of the above is feasible for you, then you should still display the empty categories, because it gives users a good indication of the products you sell. They could otherwise leave your site not knowing you sell a certain thing, so never return to see if you have any in stock.
|
[
"gaming.stackexchange",
"0000232680.txt"
] | Q:
How do I break blocks with command blocks?
I am familiar with the /setblock command, but I'm having trouble whenever I try to replace a block with air. The chat reads in red text, "Cannot place blocks outside of world", though I'm putting in the right coordinates. I can't seem to figure it out, can you help me? Command:
/setblock ~-147 ~74 ~-150 minecraft:air destroy
A:
Look at your command:
/setblock ~-147 ~74 ~-150 minecraft:air destroy
Notice that you have a ~ before every coordinate, meaning that you will replace a block relative to your position, and being 74 blocks above you, it might be outside the world.
If you want to destroy a block at (-147,74,-150) you should remove the ~ like so:
/setblock -147 74 -150 minecraft:air 0 destroy
As user3878893 pointed out, you also need to include a data value for the block to be placed. For regular blocks just use a 0 (as above).
|
[
"stackoverflow",
"0045674814.txt"
] | Q:
Progress bar and mapply (input as list)
I would like to monitor the progress of my mapply function. The data consists of 2 lists and there is a function with 2 arguments.
If I do something similar with a function that takes 1 argument, I can use ldply instead of lapply. (I'd like to rbind.fill the output to a data.frame.)
If I want to do the same with mdply it doesn't work as the function in mdply wants values taken from columns of a data frame or array. Mapply takes lists as input.
These plyr apply functions are handy, not just because I can get the output as a data.frame but also because I can use the progress bar.
I know there is the pbapply package, but it has no mapply version, and there is the txtProgressBar function, but I could not figure out how to use it with mapply.
I tried to create a reproducible example (takes around 30 s to run)
I guess it's a bad example: my l1 is really a list of scraped websites (rvest::read_html), which I cannot send as a data frame to mdply. The lists really need to be lists.
mdply <- plyr::mdply
l1 <- as.list(rep("a", 2*10^6+1))
l2 <- as.list(rnorm(-10^6:10^6))
my_func <- function(x, y) {
ab <- paste(x, "b", sep = "_")
ab2 <- paste0(ab, exp(y), sep = "__")
return(ab2)
}
mapply(my_func, x = l1, y = l2)
mdply doesn't work:
mdply(l1, l2, my_func, .progress='text')
Error in do.call(flat, c(args, list(...))) : 'what' must be a function or character string
A:
Answering my own question.
There is now a package that can do that. It is called pbapply. The function I was looking for is pbmapply.
|
[
"cooking.stackexchange",
"0000097226.txt"
] | Q:
Is there such a thing as unsteamed rolled oats?
Is there such a thing as truly raw (unsteamed) rolled oats? Why are rolled oats usually lightly steamed?
A:
Rolled uncooked groats will shatter, which is why rolled oats are steamed first. You can get uncooked, unrolled oats though. Food co-ops and organic grocery stores have them; $1.49 a pound is a good price. You want hulled oat groats, as it takes considerable technology to get the hulls off. They are sold in bulk or in one-pound bags, and you can get 50 lb bags online; they'll last a year or more. It takes about 2 hours to cook them in a rice cooker: run the brown-rice cycle twice, adding just enough water to cover on the second cycle. Let them sit on warm for half an hour to reduce stickiness. Chicken broth and brown sesame seed oil are good additives. Dirty oats Mexican style, or Stroganoff style with sour cream, mushrooms and onions added at the end, are both pretty tasty. Of course, you can eat them plain too.
|
[
"math.stackexchange",
"0003596936.txt"
] | Q:
quick help with an eigenvalue
Hey, I'm running short on time and I clearly have some dumb hole in my understanding of finding eigenvalues. There is a question where I have to find the eigenvalues of the unforced equation: $ \frac{d^2 y}{dt^2}+2y = 0$. Nice and easy. Except I am converting this thing into the matrix of a system, so I get $\begin{bmatrix}0 & 1\\0 & -2\end{bmatrix}$, and I end up with the polynomial $\lambda^2 + 2\lambda = \lambda(\lambda +2)= 0 \implies \lambda_1 = 0, \lambda_2 = -2$. These are wrong according to the book, and I believe it: why would an undamped oscillator have straight-line solutions? I am wondering if there is a better way to get eigenvalues in a second-order system, and what I am doing wrong in my current method. It's actually pretty scary because this should be super easy at this point in the class. Thanks
A:
You found the correct eigenvalues for the matrix, but the matrix itself is wrong. To convert a second order equation into a first order system, make a new variable $x = y'$, depending on $t$. Then we get a system,
\begin{cases} x' = 0x - 2y \\ y' = x + 0y. \end{cases}
As a matrix (as this is a linear system),
$$\begin{pmatrix}x' \\ y' \end{pmatrix} = \begin{pmatrix}0 & -2 \\1 & 0\end{pmatrix}\begin{pmatrix}x \\ y\end{pmatrix}.$$
So, the correct matrix is
$$\begin{pmatrix}0 & -2 \\1 & 0\end{pmatrix},$$
which has complex eigenvalues $\pm \sqrt{2}i$. This gives the undamped oscillations you need.
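As a check, the characteristic polynomial of the corrected matrix gives exactly these values:
$$\det\begin{pmatrix}-\lambda & -2 \\ 1 & -\lambda\end{pmatrix} = \lambda^{2} + 2 = 0 \implies \lambda = \pm\sqrt{2}\,i.$$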
|
[
"stackoverflow",
"0015559821.txt"
] | Q:
Using sed to insert text at end of line matching string
I have a line in a text file containing a list of items assigned to a variable ...
ITEMS="$ITEM1 $ITEM2 $ITEM3"
And I would like to write a bash script that uses sed to find the line matching ITEMS and append another item to the end of the list within the double quotes, so it results in ...
ITEMS="$ITEM1 $ITEM2 $ITEM3 $ITEM4"
Furthermore, I have the number of the item to add stored in a variable, let's say $number. So I'm trying to get it to append $ITEM$number, and have the script replace $number with whatever I assigned to that variable, let's say the number 4 in this case (yielding $ITEM4). How could I best accomplish this? Thanks!
A:
Try this :
num=4
sed "/ITEMS=/s/\"$/ \$ITEM${num}\"/"
Explanations
the sed form used here is /re/s/before/after/ where re is a regex (like a grep) and s/// is substitution
\s is a space and * means 0 or more occurrences
& stands for the string matched by the left part of the substitution
^ as the first character of a regex means start of string/line
$ as the last character of a regex means end of string/line
A:
$ cat file
ITEMS="$ITEM1 $ITEM2 $ITEM3"
$ number=4
$ sed "/ITEMS/s/\"$/ \$ITEM$number&/" file
ITEMS="$ITEM1 $ITEM2 $ITEM3 $ITEM4"
|
[
"stackoverflow",
"0027708935.txt"
] | Q:
Searching Post by Origin & Destination locations using Geokit-Rails
OK, what I have is a trucking load board where truckers come to post their available trucks. I have the trucks posting, but I am having issues setting up the search functions and the way I need to associate the different tables.
Rails 3
postgres
gem 'geokit-rails'
The way i have it now is I have a locations table setup like:
class Location < ActiveRecord::Base
attr_accessible :cs, :lat, :lon, :city, :state
acts_as_mappable :default_units => :miles,
:default_formula => :sphere,
:distance_field_name => :distance,
:lat_column_name => :lat,
:lng_column_name => :lon
When someone post a truck it has and Origin and a Destination. so it has 2 locations and have set it up like this:
class Truck < ActiveRecord::Base
attr_accessible :available, :company_id, :dest_id, :origin_id, :equipment, :origin, :dest
belongs_to :origin, class_name: "Location", foreign_key: :origin_id
belongs_to :dest, class_name: "Location", foreign_key: :dest_id
belongs_to :company
With the way i have it set up i can get the location information from:
Truck.find(1).origin || Truck.find(1).dest
It will return the Location record associated with it
Now my issue is that I want to be able to write a search function to find any Trucks within a "given" number of miles from origin || dest || origin & dest.
I know I can do Location.within(25, :origin => "Springfield, Mo") and it will search all the locations and return the ones that are within 25 miles of Springfield, Mo.
But how would I use this on Trucks, where there are 2 locations (origin & dest) associated by location id?
I currently have some other search params already coded in and working; I'm just not sure how I could incorporate this into it:
def search(search)
where = []
where << PrepareSearch.states('dest', search.dest_states) unless search.dest_states.blank?
where << PrepareSearch.states('origin', search.origin_states) unless search.origin_states.blank?
where << PrepareSearch.location('origin', search.origin_id, search.radius) unless search.origin.blank?
where << PrepareSearch.location('dest', search.dest_id, search.radius) unless search.dest.blank?
where << PrepareSearch.equipment(search.equipment) unless search.equipment.blank?
where << PrepareSearch.date('available', search.available, '<') unless search.available.blank?
where = where.join(' AND ')
Truck.where(where)
end
module PrepareSearch
def PrepareSearch.location(type, location, radius)
loc = Location.find(location)
# type will be origin/destination; loc is the Location active record
# location will be a Location id
# radius will be a given mileage
# **This is where I need to figure out what to put here**
end
end
Would it be better just to incorporate the equation:
def sphere_distance_sql(origin, units)
lat = deg2rad(origin.lat)
lng = deg2rad(origin.lng)
multiplier = 3963.1899999999996 # for miles
sphere_distance_sql(lat, lng, multiplier)
end
def sphere_distance_sql(lat, lng, multiplier)
%|
(ACOS(least(1,COS(#{lat})*COS(#{lng})*COS(RADIANS(#{qualified_lat_column_name}))*COS(RADIANS(#{qualified_lng_column_name}))+
COS(#{lat})*SIN(#{lng})*COS(RADIANS(#{qualified_lat_column_name}))*SIN(RADIANS(#{qualified_lng_column_name}))+
SIN(#{lat})*SIN(RADIANS(#{qualified_lat_column_name}))))*#{multiplier})
|
end
A:
Ok, well, I have figured out a solution to my problem... if there is a better one I would love to know.
where << PrepareSearch.location_ids("origin", search.origin, search.radius) unless search.origin_id.blank?
where << PrepareSearch.location_ids("dest", search.dest, search.radius) unless search.dest_id.blank?
def PrepareSearch.location_ids(type, location, radius)
if location.nil?
return nil
else
loc = Location.find(location)
location(type, loc, radius)
end
end
def PrepareSearch.location(type, location, radius)
locs = Location.within(radius, :origin => location.cs).pluck(:id)
st = ""
locs.each {|s| st += "'#{s}'," }
st = st[0..-2]
"#{type}_id IN (#{st})"
end
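As an aside, the string building could be tightened a little (a sketch assuming the same behavior, quoting each id and joining with commas):
def PrepareSearch.location(type, location, radius)
  ids = Location.within(radius, :origin => location.cs).pluck(:id)
  "#{type}_id IN (#{ids.map { |i| "'#{i}'" }.join(',')})"
end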
|
[
"stackoverflow",
"0028957828.txt"
] | Q:
Command line filter with protocol ICMP
I want to use Wireshark command line (tshark.exe) to capture the icmp traffic.
I used this and worked well for src and dst host-
C:\Program Files\Wireshark>tshark.exe -f "src or dst host 192.192.1.1" -i 1 -a duration:10 -w C:\temp\mycap.cap
This works fine. But what if I just want to capture the traffic for protocol "icmp" and save the traffic to a file? This does not work-
C:\Program Files\Wireshark>tshark.exe -f "icmp" -i 1 -a duration:10 -w C:\temp\mycap3.cap
If I do this then it works-
C:\Program Files\Wireshark>tshark.exe -f "icmp"
For the above command, is there any way to know whether Wireshark has captured a particular count of icmp traffic for a given list of IPs? Let's say 10 icmp packets for 10 different IPs.
Or, what do I need to change in the command to save the icmp traffic to a file with a given duration?
A:
This does not work
What "does not work" about it? Does it not write any packets to the file? If so, are you certain that there were ICMP packets to write?
Try doing a "ping" command in another command window while you're running TShark; if that captures packets, perhaps the problem is just that no ICMP traffic was sent or received during the 10 seconds that TShark was capturing.
If I do this then it works
That command doesn't have a time limit, so if it runs for a longer period of time, perhaps that's long enough that some ICMP packets were sent or received.
For the above command, is there any way to know whether Wireshark has captured a particular count of icmp traffic for a given list of IPs?
Well, if this were a BSD-flavored UN*X, such as *BSD or OS X, you could type control-T and it'd report how many packets it'd captured. However, this is Windows, so that doesn't work.
However, if you don't run TShark with the -q flag, it should print out a running count of captured packets; you should have seen that count with C:\Program Files\Wireshark>tshark.exe -f "icmp".
Let's say 10 icmp packets for 10 different IPs.
TShark will report captured packet counts, but it won't report a count of addresses, just the total number of packets.
Or, what do I need to change in the command to save the icmp traffic to a file with a given duration?
The first command you typed, with -a duration:10, is the correct command for a duration of 10 seconds. Perhaps what you need to change is the duration, for example, -a duration:120 to capture for 2 minutes, in order to see ICMP packets. I ran tcpdump on my machine for longer than 10 seconds, with a filter of "icmp", and saw no ICMP traffic; ICMP packets either indicate problems (which are, hopefully, rare on your network) or the result of information queries and pings (which may also be rare), so you simply might not have a lot of ICMP traffic.
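For a capture that combines both tests in one filter, the protocol and host conditions can simply be and-ed together (a sketch; adjust the interface, address and duration as needed):
C:\Program Files\Wireshark>tshark.exe -f "icmp and host 192.192.1.1" -i 1 -a duration:120 -w C:\temp\mycap-icmp.cap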
|
[
"askubuntu",
"0001243359.txt"
] | Q:
(yet another) No sound in Ubuntu 18.04
Yesterday's behavior: Sound works fine from built-in speakers and from headphones. At one point, I launch a Windows VM and a screensharing Zoom call at the same time; it is too much for my laptop and it crashes.
Today's behavior: I turn on my laptop and cannot hear any sound, either from the built-in speakers or from headphones. If I run pavucontrol, I see the output level monitor go up and down if I play an audio file, but can't hear anything. This does not change if I mute/unmute, change volume levels, change the 'profile', etc. System sound settings show 'Speakers' and 'Headphones' as possible outputs.
System details:
$ lspci -v
[...]
00:1f.3 Audio device: Intel Corporation Sunrise Point-LP HD Audio (rev 21) (prog-if 80)
Subsystem: Dell Sunrise Point-LP HD Audio
Flags: bus master, fast devsel, latency 32, IRQ 140
Memory at d1128000 (64-bit, non-prefetchable) [size=16K]
Memory at d1100000 (64-bit, non-prefetchable) [size=64K]
Capabilities: <access denied>
Kernel driver in use: snd_hda_intel
Kernel modules: snd_hda_intel, snd_soc_skl
[...]
$ echo list-sinks | pacmd
1 sink(s) available.
* index: 0
name: <alsa_output.pci-0000_00_1f.3.analog-stereo>
driver: <module-alsa-card.c>
flags: HARDWARE HW_MUTE_CTRL HW_VOLUME_CTRL DECIBEL_VOLUME LATENCY DYNAMIC_LATENCY
state: SUSPENDED
suspend cause: IDLE
priority: 9039
volume: front-left: 58980 / 90% / -2.75 dB, front-right: 58980 / 90% / -2.75 dB
balance 0.00
base volume: 65536 / 100% / 0.00 dB
volume steps: 65537
muted: no
current latency: 0.00 ms
max request: 0 KiB
max rewind: 0 KiB
monitor source: 0
sample spec: s16le 2ch 48000Hz
channel map: front-left,front-right
Stereo
used by: 0
linked by: 0
configured latency: 0.00 ms; range is 0.50 .. 341.33 ms
card: 0 <alsa_card.pci-0000_00_1f.3>
module: 7
properties:
alsa.resolution_bits = "16"
device.api = "alsa"
device.class = "sound"
alsa.class = "generic"
alsa.subclass = "generic-mix"
alsa.name = "ALC3253 Analog"
alsa.id = "ALC3253 Analog"
alsa.subdevice = "0"
alsa.subdevice_name = "subdevice #0"
alsa.device = "0"
alsa.card = "0"
alsa.card_name = "HDA Intel PCH"
alsa.long_card_name = "HDA Intel PCH at 0xd1128000 irq 140"
alsa.driver_name = "snd_hda_intel"
device.bus_path = "pci-0000:00:1f.3"
sysfs.path = "/devices/pci0000:00/0000:00:1f.3/sound/card0"
device.bus = "pci"
device.vendor.id = "8086"
device.vendor.name = "Intel Corporation"
device.product.id = "9d71"
device.product.name = "Sunrise Point-LP HD Audio"
device.form_factor = "internal"
device.string = "front:0"
device.buffering.buffer_size = "65536"
device.buffering.fragment_size = "32768"
device.access_mode = "mmap+timer"
device.profile.name = "analog-stereo"
device.profile.description = "Analog Stereo"
device.description = "Built-in Audio Analog Stereo"
alsa.mixer_name = "Realtek ALC3253"
alsa.components = "HDA:10ec0225,10280740,00100002"
module-udev-detect.discovered = "1"
device.icon_name = "audio-card-pci"
ports:
analog-output-speaker: Speakers (priority 10000, latency offset 0 usec, available: unknown)
properties:
device.icon_name = "audio-speakers"
analog-output-headphones: Headphones (priority 9000, latency offset 0 usec, available: no)
properties:
device.icon_name = "audio-headphones"
active port: <analog-output-speaker>
$ lsmod | grep snd
snd_soc_skl 86016 0
snd_soc_skl_ipc 65536 1 snd_soc_skl
snd_hda_ext_core 24576 1 snd_soc_skl
snd_soc_sst_dsp 32768 1 snd_soc_skl_ipc
snd_soc_sst_ipc 16384 1 snd_soc_skl_ipc
snd_soc_acpi 16384 1 snd_soc_skl
snd_soc_core 241664 1 snd_soc_skl
snd_compress 20480 1 snd_soc_core
snd_hda_codec_realtek 106496 1
ac97_bus 16384 1 snd_soc_core
snd_pcm_dmaengine 16384 1 snd_soc_core
snd_hda_codec_generic 73728 1 snd_hda_codec_realtek
snd_hda_intel 45056 6
snd_hda_codec 126976 3 snd_hda_codec_generic,snd_hda_intel,snd_hda_codec_realtek
snd_hda_core 81920 6 snd_hda_codec_generic,snd_hda_intel,snd_hda_ext_core,snd_hda_codec,snd_hda_codec_realtek,snd_soc_skl
snd_hwdep 20480 1 snd_hda_codec
snd_pcm 98304 7 snd_hda_intel,snd_hda_ext_core,snd_hda_codec,snd_soc_core,snd_soc_skl,snd_hda_core,snd_pcm_dmaengine
snd_seq_midi 16384 0
snd_seq_midi_event 16384 1 snd_seq_midi
snd_rawmidi 32768 1 snd_seq_midi
snd_seq 65536 2 snd_seq_midi,snd_seq_midi_event
snd_seq_device 16384 3 snd_seq,snd_seq_midi,snd_rawmidi
snd_timer 32768 2 snd_seq,snd_pcm
snd 81920 24 snd_hda_codec_generic,snd_seq,snd_seq_device,snd_hwdep,snd_hda_intel,snd_hda_codec,snd_hda_codec_realtek,snd_timer,snd_compress,snd_soc_core,snd_pcm,snd_rawmidi
soundcore 16384 1 snd
$ sudo fuser -v /dev/snd/*
USER PID ACCESS COMMAND
/dev/snd/controlC0: gdm 1967 F.... pulseaudio
shardul 4099 F.... pulseaudio
$ uname -a
Linux shardul-laptop 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 10:07:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
Searching for output from Pulseaudio and ALSA in /var/log/syslog or dmesg does not show any errors, except only occasionally,
[alsa-sink-ALC3253 Analog] alsa-sink.c: Error opening PCM device front:0: Device or resource busy
[pulseaudio] sink-input.c: Failed to create sink input: sink is suspended.
I tried to manually unsuspend the sink with pacmd but that didn't change anything. I have also tried:
reinstalling pulseaudio, alsa-base, and linux-sound-system
uninstalling alsa-base altogether
adding myself to the audio group
bunch of pulseaudio kills and restarts, alsa reloads
rebooting a bunch of times throughout the debugging process
How can I get my audio to work again? Thanks for your attention.
A:
Turns out this was due to a recent kernel upgrade, from 4.15.0-96 to 4.15.0-101. I did not immediately see the problem because the new kernel would have been used only after a reboot, which I had to do due to the crash described in my question. This is the relevant kernel bug although my laptop is an Inspiron 5378 instead of 5368. I was able to solve the issue by:
sudo apt-get purge pulseaudio alsa-base
Reboot, choosing the older kernel version at boot time
sudo apt-get install pulseaudio alsa-base
Reboot, again choosing the older kernel
Not sure that this will be sufficient to fix it for others because I reinstalled, deleted config, played around with settings, etc. a whole bunch before discovering this.
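As an extra safeguard (a sketch, not part of my original fix; the package name depends on your install), once you are booted into the working kernel you can remove the broken image so GRUB stops defaulting to it:
uname -r                                           # confirm you are running 4.15.0-96
sudo apt-get remove linux-image-4.15.0-101-generic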
|
[
"stackoverflow",
"0034727945.txt"
] | Q:
Have 2 classes as parent selectors at different levels in Less?
I need this CSS:
.class-1.class-a .target {
background: red;
}
.class-1.class-b .target {
background: blue;
}
I've got this working fine with this Less
.target {
.class-1.class-a & {
background: red;
}
.class-1.class-b & {
background: blue;
}
}
However, can my Less be written more succinctly? It seems a shame to write class-1 twice. I've tried this:
.target {
.class-1 & {
&.class-a {
background: red;
}
&.class-b {
background: blue;
}
}
}
And also this:
.target {
.class-1 & {
.class-a & {
background: red;
}
.class-b & {
background: blue;
}
}
}
A:
Your last attempt is pretty close, if you modify it like:
.target {
.class-1 & {
.class-a& {
background: red;
}
.class-b& {
background: blue;
}
}
}
it will work.
Note that the order of classes of the same level does not matter, thus the resulting .class-a.class-1 .target is equal to desired .class-1.class-a .target
(I'm not mentioning though that usually the idea of avoiding repetitions by any cost is flawed. All those cryptic chains of ampersands and brackets make the code totally unreadable if compared to your initial code and even pure CSS itself).
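For reference, the accepted nesting compiles to the following CSS (with the class-order caveat noted above):
.class-a.class-1 .target {
  background: red;
}
.class-b.class-1 .target {
  background: blue;
}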
|
[
"stackoverflow",
"0003448383.txt"
] | Q:
Best way to start with smalltalk in a windows environment (win 7)
I am a C# developer and most of my friends are much smarter than me, and they laugh at me and start to swear at me in Smalltalk. I want to learn this so that I might better be insulted at their insults... and maybe learn a thing or two in the process.
So, what is the best place to start with regard to Smalltalk in a Windows environment?
A:
The best current free Smalltalk is probably Squeak. This currently out-performs its near relative Pharo, at least on my ancient box, but you should really take a look at both of them.
The big problem with Smalltalk is that there are no really high-quality text books. There's a list of free ones here, but I couldn't recommend any of them strongly. If you decide to use Squeak, take a look at Squeak By Example, which isn't too bad.
A:
Since you are on Windows, I will say you should try DCE (Dolphin Community Edition). It has everything (including better integration with the OS than Pharo or Squeak, especially the GUI).
The Professional Edition will take you one level up, since it contains other extra get-in-the-flow tools plus the Delivery wizard to directly produce executables (in short, EXEs).
It's more than enough for learning Smalltalk. It includes a Help file (containing a fast intro to Smalltalk, a comprehensive coverage of the environment, pattern usage, and a nice GUI example).
If you think you will stay for long on Windows (an environment issue, etc...) or, as igouy said, 'leverage your Windows experience', then by all means check DCE.
As an extra I suggest you download it, check the intro to Smalltalk in the Help file and later on enjoy these videos:
A Better Hello World
Fun with MS Speech library (ActiveX wizard sample tutorial)
Interfacing with iTunes
Programming Animation with Dolphin (this shows the interactive nature of Smalltalk in general; still, Dolphin is integrated with Windows, so you can play with Windows's windows and controls in an easy and thrilling way.)
and by the way it's addictive!
happy small talks with Dolphin Smalltalk ;)
A:
Whichever environment you pick to start playing with, don't forget to check out Stéphane Ducasse's collection of FREE (and LEGAL) Smalltalk books:
Free Smalltalk Books
|
[
"stackoverflow",
"0008510547.txt"
] | Q:
C++, reading letters from a file
I have the following code, but it only reads lowercase letters. Ideally, it would read both upper and lower case letters and then store this info in an array. Any help or suggestions would be welcome.
Cheers.
A:
There are several problems with the code above, but to answer your question directly, simply do some kind of check before incrementing the letterCount:
if ( letter >= 'a' )
letterCount[int(letter)-'a']++;
else
letterCount[int(letter)-'A']++;
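Note that the branch above treats every character below 'a' (digits, punctuation, etc.) as uppercase. A safer sketch guards with std::isalpha first:
#include <cctype>

// count only alphabetic characters, folding case explicitly
void countLetter(char letter, int letterCount[26]) {
    unsigned char c = static_cast<unsigned char>(letter);
    if (std::isalpha(c))
        letterCount[std::tolower(c) - 'a']++;
}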
A note on initialization:
int letterCount[26] = {0};
does zero the entire array, not just the first element: the first element is initialized explicitly and all remaining elements are value-initialized to zero, so no extra loop or memset() is needed.
|
[
"stackoverflow",
"0054922836.txt"
] | Q:
unable to display result through foreach loop
I have an array named $img_arr and it shows the following records. I want to display the images with their relevant ids, so I have the following arrays, but I am unable to implement it.
Array(
[0] 1
[1] 2
[2] 3
[3] 4
)
Array(
[0] adsense-approval.png
[1] feedback.jpg
[2] logo1.jpg
[3] logo2.png
)
My code follows. In id and data-id I want to get the array result 0, 1, 2, ...
<?php
if(isset($img_arr) && !empty($img_arr)){
foreach($img_arr as $img){ ?>
<div class="img-wrap" id="img_<?php echo $img;?>">
<span class="close">×</span>
<img src="./gallery/<?php echo $img ?>" class="img-circle" style="height:50px; width:50px;" data-id="<?php echo $img ?>">
</div>
<?php
}
}
?>
A:
You can try this way with your HTML code:
$idArray = array(1, 2, 3, 4);
$imageArray = array('adsense-approval.png', 'feedback.jpg', 'logo1.jpg', 'logo2.png');
foreach($idArray as $key => $value)
{
echo $value;
echo $imageArray[$key];
}
If we merge the code with your HTML, it looks as follows:
<?php
if(is_array($imageArray))
{
foreach($imageArray as $key => $img)
{ ?>
<div class="img-wrap" id="img_<?php echo $img;?>">
<span class="close">×</span>
<img src="./gallery/<?php echo $img ?>" class="img-circle" style="height:50px; width:50px;" data-id="<?php echo $idArray[$key] ?>">
</div>
<?php
}
}
?>
Basically, you need to manage your image array together with your id array in key-value format; for that you need to use the $key variable in your foreach.
|
[
"stackoverflow",
"0060454026.txt"
] | Q:
How to Make docker compose status running, currently it stops after docker-compose up
I am trying to get my docker-compose service up and running with the docker-compose YAML below, but when I execute docker-compose ps I see that the container has stopped. How do I keep it up and running indefinitely?
docker-compose.yml
version: "3.7"
services:
execute:
command: tail -f /dev/null
image: abc/SREBlackBoxTester
labels:
- mylabelOne= "SREBlackBoxTester"
volumes:
- type: volume
source: AWS_CREDENTIALS_FOLDER
target: /home/scar/.aws
source: SCAR_CONFIG_FOLDER
target: /home/scar/.scar
volume:
nocopy: true
command: bash -c "while true; do sleep 10; done"
volumes:
AWS_CREDENTIALS_FOLDER:
SCAR_CONFIG_FOLDER:
Here goes Docker file
FROM python:3.8-alpine
RUN apk add zip unzip
RUN addgroup --system scar && adduser -S -G scar scar
USER scar
WORKDIR /home/scar/
RUN mkdir /home/scar/.scar && \
mkdir /home/scar/.aws && \
echo '[default]' > /home/scar/.aws/config && \
echo 'region=us-west-2' >> /home/scar/.aws/config && \
echo '[default]' > /home/scar/.aws/credentials && \
echo 'aws_access_key_id=AX' >> /home/scar/.aws/credentials && \
echo 'aws_secret_access_key=wctKx/KdRCSQ' >> /home/scar/.aws/credentials
ENV PATH=/home/scar/.local/bin:$PATH
ENV SCAR_LOG_PATH=/home/scar/.scar/
RUN pip3 install scar --user
CMD scar init -n SREBlack -i image
ENTRYPOINT /bin/sh
The output of docker ps -a
$ docker ps -a
CONTAINER ID IMAGE COMMAND
CREATED STATUS PORTS
NAMES
f5af892b3b19 abc/image "/bin/sh -c /bin/sh …"
Less than a second ago Exited (0) Less than a second ago
clever_wescoff
A:
In general a Docker container should be set up to run a specific program. In most cases the default shouldn't be an interactive shell or an artificial tail -f /dev/null command. In your example nothing is actually running the program you install. You should change the end of the Dockerfile to actually run it
FROM python:3.8-alpine
RUN pip install scar
CMD scar
and provide details like credentials through volumes: mounts. (Consider whether embedding your AWS credentials into a Docker image has compromised them; anyone who has the image can do anything they're allowed to according to your IAM permissions.)
In the example you show the combination of ENTRYPOINT and command: leads to a non-sensical command line. The Dockerfile documentation on Understand how CMD and ENTRYPOINT interact has technical details. Since you specified a shell-format ENTRYPOINT it gets wrapped in sh -c, and then the command: from the docker-compose.yml file gets appended to that. You wind up with something like
/bin/sh -c '/bin/sh' tail -f /dev/null
which just launches a shell (the "tail ..." is ignored), and since there's no input, it immediately exits.
In general Docker Compose is more oriented to running long-running applications, like databases or Web servers. The SCAR documentation has an example of running the tool in Docker. For command-line tools like this, though, given the need to do things like manually push AWS credentials from the host into the container and to have root-equivalent permissions to run the tool at all, you might find it more convenient to run the tool directly on the host, maybe installed in a Python virtual environment.
|
[
"cs.stackexchange",
"0000023721.txt"
] | Q:
Why is TIME(n log (log n)) \ TIME(n) = ∅?
In my computation book by Sipser, he says that every language that can be decided in time $o(n \log n)$ is regular, and that this can be used to show $TIME(n \log (\log n))\setminus TIME(n)$ must be the empty set. Can anyone show me why this is?
both $TIME(n\log(\log n))$ and $TIME(n)$ are regular. I think that only means we can subtract the two sets and the result will still be regular. I just don't understand how it's possible to subtract the collection of $O(n\log(\log n))$ time TM decidable languages from the collection of $O(n)$ time TM decidable languages and get the empty set. These two collections are not equal, so I feel like there will be something left over.
A:
The quick explanation is that
$TIME[o(n\log n)]\subseteq REG\subseteq TIME[n]\subseteq TIME[o(n\log n)]$, and therefore $TIME[o(n\log n)]=REG$.
Similarly:
$TIME[n\log\log n]\subseteq TIME[o(n\log n)]\subseteq REG\subseteq TIME[n]\subseteq TIME[n\log \log n]$, so $TIME[n\log\log n]=REG$
But I think this is not the point you are missing.
You say that $TIME[n\log \log n]$ is regular. This is not exact. When we say that something
is regular, we mean that it is a language $L\subseteq \Sigma^*$, which is regular (i.e. can be recognized by a DFA).
The class REG is not a language, but a set of languages. That is, $REG\subseteq 2^{\Sigma^*}$. Similarly, $TIME[f(n)]\subseteq 2^{\Sigma^*}$ for every function $f$. These are all classes of languages.
Since we have that $TIME[n\log\log n]\subseteq TIME[o(n\log n)]\subseteq REG$, then
$REG\setminus TIME[n\log\log n]=\emptyset$. This follows from the simple property that if $A\subseteq B$, then $A\setminus B=\emptyset$. In particular, since $TIME[n\log\log n]$ and $TIME[n]$ both equal $REG$, the difference $TIME[n\log\log n]\setminus TIME[n]$ from your question is empty as well.
|
[
"stackoverflow",
"0001927042.txt"
] | Q:
StringIndexOutOfBoundsException: String index out of range: 0
I am getting a weird exception code.
The code that I am trying to use is as follows:
do
{
//blah blah actions.
System.out.print("\nEnter another rental (y/n): ");
another = Keyboard.nextLine();
}
while (Character.toUpperCase(another.charAt(0)) == 'Y');
The error code is:
Exception in thread "main" java.lang.StringIndexOutOfBoundsException: String index out of range: 0
at java.lang.String.charAt(String.java:686)
at Store.main(Store.java:57)
Line 57 is the one that starts "while...".
Please help, this is driving me batty!
A:
That will happen if another is the empty string.
We don't know what the Keyboard class is, but presumably its nextLine method can return an empty string... so you should check for that too.
A:
Fix:
do
{
//blah blah actions.
System.out.print("\nEnter another rental (y/n): ");
another = Keyboard.nextLine();
}
while (another.length() == 0 || Character.toUpperCase(another.charAt(0)) == 'Y');
Or even better:
do
{
//blah blah actions.
System.out.print("\nEnter another rental (y/n): ");
while(true) {
another = Keyboard.nextLine();
if(another.length() != 0)
break;
}
}
while (Character.toUpperCase(another.charAt(0)) == 'Y');
This second version will not print "Enter another rental" if you accidentally press Enter.
|
[
"math.stackexchange",
"0001950585.txt"
] | Q:
Prove that if the sum of $n$ positive real numbers $x_1 + x_2 + ... + x_n \le 0.5$, then $(1-x_1)(1-x_2)*...*(1-x_n) \ge 0.5$.
So, I know this could be written as $x_1 + x_2 + \ldots + x_n \le 0.5 \le (1-x_1)(1-x_2)\cdots(1-x_n)$.
And that the larger $x_1 + x_2 + \ldots + x_n$ is, the smaller $(1-x_1)(1-x_2)\cdots(1-x_n)$ is.
Therefore we could assume that in the worst case scenario $x_1 + x_2 + \ldots + x_n = 0.5$, where one of the x's is close to 0.5, and the rest are close to 0, and then $(1-x_1)(1-x_2)\cdots(1-x_n) = 0.5$.
But that's hardly proper proof.
A:
Note that for $0<a,b$, then $$(1-a)(1-b)=1-a-b+ab>1-(a+b) $$
Hence, as all factors are positive,
$$ (1-x_1)\cdots(1-x_{n-1})(1-x_n)>(1-x_1)\cdots(1-x_{n-2})(1-(x_{n-1}+x_n))$$
and by induction
$$ (1-x_1)\cdots(1-x_{n-1})(1-x_n)>1-(x_1+\ldots +x_n).$$
Since $x_1+\ldots+x_n\le 0.5$ by hypothesis, the right-hand side is at least $0.5$, which gives the claim.
|
[
"stackoverflow",
"0041990550.txt"
] | Q:
how to test function in spark
let's say I have a spark rdd and need to process it.
rdd.mapPartitionsWithIndex{(index, iter)=>
def someFunc(){}
def anotherFunc(){}
val x = someFunc(iter)
val y = anotherFunc(index, iter, x)
x zip y
}
I define someFunc and anotherFunc inside the mapPartitions because I don't want to define them in the driver and then serialize them to the worker. It works, but I cannot test them because they are nested functions. How do I test this? I need to write test cases for those functions. Currently I can serialize it, but what if the function is not serializable and cannot be sent from driver to worker?
A:
The whole lambda will be serialized, so the inner functions will be too ;)
You can:
create helper object to hold those functions and create test for this object
create static nested class
Remember to:
mark all non-serializable fields with @transient
mark your object/class with implements Serializable
You can create also an integration test, which will create Spark Context and run calculations in local mode
More information can be found, e.g., here
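A minimal sketch of the helper-object approach (names here are illustrative, not from the question):
object PartitionFuncs extends Serializable {
  def someFunc(iter: Iterator[Int]): Iterator[Int] = iter.map(_ + 1)
}

// production code delegates to the object...
rdd.mapPartitionsWithIndex { (index, iter) => PartitionFuncs.someFunc(iter) }

// ...and a plain unit test can exercise it without any Spark context
assert(PartitionFuncs.someFunc(Iterator(1, 2)).toList == List(2, 3))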
|
[
"stackoverflow",
"0038210229.txt"
] | Q:
How to continue to get response data from REST API even if http status code is not 200?
I'm using Retrofit 2.0.2 and OkHttp3 to build my app. My server sets the HTTP status code to 418 if the server code hits any logic error, like a password that doesn't match. The response data is {"statuscode":500}, where 500 means the password doesn't match. I don't know how to read the response data when OkHttp3 gets a non-200 HTTP status code; Retrofit throws an exception when it gets 418.
My question is how to read the response data even if the HTTP status code is not 200.
Any suggestion?
A:
I assume you are defining your call as:
Observable<YourModel> doStuff();
You get an onSuccess callback for HTTP codes 200-300 and onError for HTTP error codes, network errors, parsing errors...
You can also define your call as:
Observable<Response<YourModel>> doStuff();
and you will get a call to onSuccess when there is a HTTP error.
In onSuccess you need to check response.isSuccess(). It returns true for 200-300 status codes and you can access the response body with response.body()
If response.isSuccess() returns false you can convert the error body to your model class using:
if(throwable instanceof HttpException) {
//we have a HTTP exception (HTTP status code is not 200-300)
Converter<ResponseBody, Error> errorConverter =
retrofit.responseBodyConverter(Error.class, new Annotation[0]);
//maybe check if ((HttpException) throwable).code() == 400 ??
Error error = errorConverter.convert(((HttpException) throwable).response().errorBody());
}
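Putting it together, a rough sketch of the subscription (api here is a hypothetical Retrofit service instance exposing doStuff() as above; YourModel/Error are the placeholder types from the answer):
api.doStuff().subscribe(response -> {
    if (response.isSuccess()) {
        YourModel body = response.body();              // HTTP 200-300: parsed body
    } else {
        ResponseBody errorBody = response.errorBody(); // e.g. your 418 with its JSON payload
    }
}, throwable -> {
    // network and parsing failures land here
});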
|
[
"stackoverflow",
"0039965863.txt"
] | Q:
Validate form being submitted using javascript
so this might be something super simple to do, but I can't seem to get it. I have a form and a validation for it. The validation text works. If I use a normal submit button the form works correctly, but if I use javascript to submit the form, it does not work 100%.
<form id="transport" name="transport" method="post" action="submit_trans_request.php">
<input type="text" name="f_name" value="<?php echo htmlspecialchars($row['f_name'], ENT_QUOTES); ?>" style="font-size:16px;" />
<a href="javascript:onclick=validateForm(); document.transport.submit();" class="submit_btn">Submit</a>
So basically I am looking for a way to stop the submit if validateForm() returns an alert. Not sure how I go about that.
Any help is greatly appreciated. Thank you in advance!
A:
You could do it the way you've designed it by just returning false in the validationForm().
If I were you, I would just call whatever method you want when the user clicks submit, then inside of that method, call the validationForm() method. If that returns true, go forward with the process, otherwise alert the user.
Something like this:
<form>
<input type="text" />
<a href="javascript:onclick=submitData();"/>
</form>
Then in the submitData() function, do something like this:
function submitData() {
var b = validationForm();
if (b) {
// submit data
} else {
// alert user something entered wrong
}
}
Now, in the validationForm() function, you need to make sure that you return true or false.
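For completeness, a possible validationForm() to pair with this (a sketch; the field lookup is an assumption, adjust it to your actual form):
function validationForm() {
    var field = document.forms[0].elements[0];
    if (field.value === "") {
        alert("Please enter a value");
        return false;
    }
    return true;
}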
|
[
"stackoverflow",
"0055557494.txt"
] | Q:
Use pgcrypto to verify passwords generated by password_hash
I have password hashes stored in a Postgresql database generated with:
password_hash($password, PASSWORD_DEFAULT);
Now I would like to also be able to verify a user password with Postgresql and pgcrypto.
But pgcrypto's crypt() function is not able to verify the existing password hashes.
However - I can verify password hashes generated by Postgresql with PHP's password_verify.
For example:
password_hash('hello', PASSWORD_DEFAULT);
$2y$10$fD2cw7T6s4dPvk1SFHmiJeRRaegalE/Oa3zSD6.x5WncQJC9wtCAS
postgres=# SELECT crypt('hello', gen_salt('bf'));
crypt
--------------------------------------------------------------
$2a$06$7/AGAXFSTCMu9r.08oD.UulYR0/05q7lmuCTC68Adyu/aNJkzpoIW
Verification:
// php_verify with the Postgresql hash
php > var_dump(password_verify('hello', '$2a$06$7/AGAXFSTCMu9r.08oD.UulYR0/05q7lmuCTC68Adyu/aNJkzpoIW'));
bool(true)
postgres=# SELECT crypt('hello', '$2y$10$fD2cw7T6s4dPvk1SFHmiJeRRaegalE/Oa3zSD6.x5WncQJC9wtCAS');
crypt
---------------
$2JgKNLEdsV2E
(1 Zeile)
My questions are basically:
Am I doing it wrong?
If this is not possible: Is there a migration path to make this possible?
A:
From the answer to: Where 2x prefix are used in BCrypt? which has all the gory details about the $2$ variants born from implementation bugs:
There is no difference between 2a, 2x, 2y, and 2b. If you wrote your
implementation correctly, they all output the same result.
Based on that, one may take the hash generated by PHP's password_hash, replace the leading $2y$ by $2a$ and pass it as the second argument of pgcrypto's crypt().
Using the value from your example:
postgres=# \set hash '$2a$10$fD2cw7T6s4dPvk1SFHmiJeRRaegalE/Oa3zSD6.x5WncQJC9wtCAS'
postgres=# SELECT crypt('hello', :'hash') = :'hash'
?column?
----------
t
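If you prefer to do the prefix rewrite on the PHP side before handing the hash to Postgres, something like this should work (a sketch):
$hash = password_hash($password, PASSWORD_DEFAULT); // e.g. "$2y$10$..."
$pgHash = substr_replace($hash, '$2a$', 0, 4);      // "$2a$10$...", accepted by pgcrypto's crypt()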
|
[
"stackoverflow",
"0043011548.txt"
] | Q:
in autohotkey not all send tabs are working
I was trying to make an .ahk macro for GIF making, uploading, and URL copying, but it oftentimes ignores keystrokes like send, {tab down}{tab up}, which completely breaks the macro. Also, send, ^L isn't working when I send it in the middle of the string.
^q::
Run, firefox.exe "gifcreator.me"
sleep 9000
Loop 9
{
Send, {tab}
sleep 100
}
send, {enter down}
sleep 500
send, {enter up}
sleep 200
Loop 4
{
send, {ctrl down}
send, ^L
send, {ctrl up}
}
sleep 200
send {ctrl down}
send, a
send, {ctrl up}
sleep 200
send, {delete down}
sleep 200
send {delete up}
sleep 5000
send, C:\Users\John Reuter\OneDrive\art
sleep 300
send, {enter down}
sleep 500
send, {enter up}
sleep 5000
click 1200, 50
sleep 3000
click 1200, 50
sleep 200
send, {ctrl down}
sleep 200
send, v
sleep 200
send, {Ctrl Up}
sleep 13000
click 50, 150
sleep 3000
send, {Ctrl Down}
sleep 200
send, a
sleep 200
send, {Ctrl Up}
sleep 200
send, {Ctrl Down}
sleep 200
send, a
sleep 200
send, {Ctrl Up}
sleep 7000
send, {enter}
sleep 5000
send, 5
sleep 200
send, 1
sleep 3000
send,^{ctrl down}-{ctrl up}
sleep 3000
send,^{ctrl down}-{ctrl up}
sleep 3000
send,^{ctrl down}-{ctrl up}
sleep 3000
send,^{ctrl down}-{ctrl up}
sleep 3000
send,^{ctrl down}-{ctrl up}
sleep 3000
send, {down}
sleep 300
send, {down}
sleep 300
send, {down}
sleep 300
send, {down}
sleep 300
send, {down}
sleep 300
send, {down}
sleep 300
send, {down}
sleep 300
send, {down}
sleep 300
send, {down}
sleep 300
send, {down}
sleep 300
send, {down}
sleep 300
send, {down}
sleep 300
send, {down}
sleep 300
send, {down}
sleep 300
send, {down}
sleep 300
send, {down}
sleep 300
send, {down}
sleep 3000
click 567, 227; miss
sleep 3000
send, {Ctrl Down}
send, f
send, {Ctrl Up}
sleep 3000
send, download gif
sleep 5000
send, {enter}
sleep 5000
send, {Ctrl Down}
send, l
send, {Ctrl Up}
sleep 5000
send, giphy.com/upload
sleep 3000
send, 2
sleep 3000
click 642, 325
sleep 8000
send, {down}
sleep 3000
send, {right}
sleep 3000
send, {down}
sleep 3000
send, {right}
sleep 3000
send, {down}
sleep 3000
send, {right}
sleep 3000
send, {down}
sleep 3000
send, {right}
sleep 3000
send, {enter}
sleep 7000
send, {Ctrl Down}
send, f
send, {Ctrl Up}
sleep 3000
send, upload gifs
sleep 3000
click right 661, 198
sleep 3000
click 713, 218
sleep 3000
send, {Ctrl Down}
send, l
send, {Ctrl Up}
sleep 3000
send, {Ctrl Down}
send, x
send, {Ctrl Up}
sleep 3000
send, <img src="
sleep 3000
send, {Ctrl Down}
send, v
send, {Ctrl Up}
sleep 3000
send, "alt=""style="width:12px;height:18px;">
;======================
Esc::ExitApp
A:
Instead of sleeps, try using WinWait etc. Look in the AHK help file to see how the commands (Run, WinWait etc.) are properly used.
SetTitleMatchMode, 2
Run, firefox.exe "gifcreator.me"
WinWait, Online Animated GIF Maker
IfWinNotActive, Online Animated GIF Maker, ,WinActivate, Online Animated GIF Maker
WinWaitActive, Online Animated GIF Maker
sleep 100
Loop 8
{
Send, {tab}
sleep 100
}
Send, {Enter}
; WinWait, ...
; IfWinNotActive, ...
; ...
; SendInput, C:\Users\John\OneDrive\art
; ...
EDIT
If the next window doesn't appear after the first "Send, {Enter}", try using a loop:
SetTitleMatchMode, 2
Run, firefox.exe "gifcreator.me"
Loop
{
WinWait, Online Animated GIF Maker
IfWinNotActive, Online Animated GIF Maker, ,WinActivate, Online Animated GIF Maker
WinWaitActive, Online Animated GIF Maker
sleep 100
Loop 8
{
Send, {tab}
sleep 100
}
Send, {Enter}
sleep 1000
IfWinExist, title of next window
break
}
; WinWait, title of next window
; IfWinNotActive, ...
; ...
; SendInput, C:\Users\John\OneDrive\art
|
[
"french.stackexchange",
"0000032866.txt"
] | Q:
Est-ce que l'expression "sur papier libre" signifie qu'il faut que le document soit manuscrit ?
I have to attach a letter to a file intended for the Ministère de la Justice. The instructions say that this « requête personnelle sur papier libre adressée au ministre de la justice [...] doit être datée et signée » (personal request on plain paper addressed to the Minister of Justice [...] must be dated and signed). I'm having trouble understanding the meaning of « sur papier libre » in this context. Does it mean the request has to be handwritten?
A:
« Sur papier libre » means that there is no particular form to fill in. You just take a sheet of paper and write. It doesn't matter whether it's handwritten or typed, as long as it's legible.
The signature, at the very least, must be handwritten. Some procedures also require a handwritten date or a specific handwritten phrase, but that will be stated explicitly. Everything else may be printed.
The antonym of « sur papier libre » is « sur le formulaire n° XXX » (on form no. XXX).
Some dictionaries define « papier libre » as « papier non timbré » (unstamped paper), but that definition is obsolete. Stamped paper, a sheet of paper subject to a tax, disappeared in 1986. Nowadays, if a fee is due, the revenue stamp is bought separately and affixed the way a postage stamp is affixed to an envelope (it also exists in electronic form). Most procedures that require a revenue stamp demand a specific form, but there must be cases where one pays with a revenue stamp for a request made on plain paper.
“Sur papier libre” means that you can use a plain sheet of paper. Its antonym is “sur le formulaire n° XXX” (using form nr. XXX). It doesn't matter whether the text is printed or handwritten as long as it's legible. The signature must be handwritten.
|
[
"es.stackoverflow",
"0000145613.txt"
] | Q:
The character ñ is not displayed on screen
I'm using ASCII, but this is the result I get.
Run in Code::Blocks 17.12.
printf("\nAhora ingrese su contrase%a: ", 164);
A:
From your question it seems you expected the % to be replaced by a character whose code would be the 164 you pass as a parameter (which apparently would be ñ in the code page you are using).
That's not how it works: % is a prefix whose interpretation depends on the character that follows it. For it to work as you expected, you must use %c (that is the format specifier for printing a single character).
In your case, the % was followed by an a, which produces the strange result you can see. The %a format specifier is quite uncommon: it takes the 32-bit number you pass (164 in your case), interprets it as if it were encoded in the IEEE-754 floating-point format, and then shows you its mantissa and exponent, both in hexadecimal.
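A minimal sketch of the fix described above (assuming an OEM code page such as 437/850, where code 164 is ñ):
#include <stdio.h>

int main(void) {
    /* %c prints the single character whose code is passed as argument */
    printf("\nAhora ingrese su contrase%ca: ", 164);
    return 0;
}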
|
[
"gaming.stackexchange",
"0000014959.txt"
] | Q:
Definition of a baneling bust
I heard day9 mention a baneling bust. What is a baneling bust?
A:
A "Baneling bust" is when you use a lot of Banelings to breach the enemy's "front door" - the bunch of buildings used at the ramp of the main base to block or hinder the enemy from entering.
Since Banelings do so much damage against buildings, they are very useful for quickly bringing down low-HP buildings commonly used for fortifying this front door - mainly supply depots, bunkers and pylons - and thus opening the way to the opponent's base. This tactic is called the Baneling bust, even if you destroy just some of the buildings composing that "door", not all of them.
The Baneling bust is a lot less useful against front doors made from higher-HP buildings, like gateways or barracks.
|
[
"stackoverflow",
"0048291776.txt"
] | Q:
How to parse date-time with two or three milliseconds digits in java?
Here is my method to parse String into LocalDateTime.
public static String formatDate(final String date) {
DateTimeFormatter formatter = DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss.SS");
LocalDateTime formatDateTime = LocalDateTime.parse(date, formatter);
return formatDateTime.atZone(ZoneId.of("UTC")).toOffsetDateTime().toString();
}
but this only works for input String like
2017-11-21 18:11:14.05
but fails for 2017-11-21 18:11:14.057
with DateTimeParseException.
How can I define a formatter that works for both .SS and .SSS?
A:
You would need to build a formatter with a specified fraction
DateTimeFormatter formatter = new DateTimeFormatterBuilder()
.appendPattern("yyyy-MM-dd HH:mm:ss")
.appendFraction(ChronoField.MILLI_OF_SECOND, 2, 3, true) // min 2 max 3
.toFormatter();
LocalDateTime formatDateTime = LocalDateTime.parse(date, formatter);
A:
The answers by Basil Bourque and Sleiman Jneidi are excellent. I just wanted to point out that the answer by EMH333 has a point in it too: the following very simple modification of the code in the question solves your problem.
DateTimeFormatter formatter = DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss.[SSS][SS]");
The square bracket in the format pattern string enclose optional parts, so this accepts 3 or 2 decimals in the fraction of seconds.
Potential advantage over Basil Bourque’s answer: gives better input validation, will object if there is only 1 or there are four decimals on the seconds (whether this is an advantage depends entirely on your situation).
Advantage over Sleiman Jneidi’s answer: You don’t need the builder.
Possible downside: it accepts no decimals at all (as long as the decimal point is there).
As I said, the other solutions are very good too. Which one you prefer is mostly a matter of taste.
A:
tl;dr
No need to define a formatter at all.
LocalDateTime.parse(
"2017-11-21 18:11:14.05".replace( " " , "T" )
)
ISO 8601
The Answer by Sleiman Jneidi is especially clever and high-tech, but there is a simpler way.
Adjust your input string to comply with ISO 8601 format, the format used by default in the java.time classes. So no need to specify a formatting pattern at all. The default formatter can handle any number of decimal digits between zero (whole seconds) and nine (nanoseconds) for the fractional second.
Your input is nearly compliant. Just replace the SPACE in the middle with a T.
String input = "2017-11-21 18:11:14.05".replace( " " , "T" );
LocalDateTime ldt = LocalDateTime.parse( input );
ldt.toString(): 2017-11-21T18:11:14.050
|
[
"stackoverflow",
"0004356943.txt"
] | Q:
local properties file for buildr
The buildr docs suggest using profiles.yaml for managing settings. However, I would like a way to define settings which an individual dev would use to run locally and thus shouldn't be in SCM. Is there a preferred way of doing this?
A:
Your solution looks good. Using buildr's _ function you can cut it down slightly:
Buildr.settings.profiles.merge!(
  YAML.load(File.read(_("profiles.local.yml"))))
A:
FWIW, I ended up with:
path = File.dirname(@application.rakefile)
loc = YAML.load(File.read(File.join(path, "profiles.local.yml")))
Buildr.settings.profiles.merge!(loc)
|
[
"stackoverflow",
"0000727975.txt"
] | Q:
jQuery hover() not working with absolutely positioned elements and animation
I have some html that looks like this:
<a href="#" class="move"><span class="text">add</span><span class="icon-arrow"></span></a>
And I have a jquery event registered on the anchor tag:
$('a.move').hover(
function (event) {
$(this).children('span.text').toggle();
$(this).animate({right: '5px'}, 'fast');
},
function (event) {
$(this).children('span.text').toggle();
$(this).animate({right: '0px'}, 'fast');
}
);
When I mouse over the anchor tag, it displays the span.text and moves the anchor 5px to the right.
Now, due to complications that I don't feel like getting into, I have to set position: relative; on the container and absolutely position the icon and the text so that the icon appears on the left and the text on the right.
THE PROBLEM:
When I mouse over the anchor tag, the icon moves right, and the mouse ends up over top of the text (which appears). Unfortunately, the 'out' function gets called if I move my mouse from the icon to the text and the animation starts looping like crazy. I don't understand what's causing the "out" event to fire, as the mouse is never leaving the anchor tag.
Thanks!
A:
Instead of hover you can use the "mouseenter" and "mouseleave" events, which do not fire when child elements get in the way:
$('a.move').bind('mouseenter', function (e) {
$(this).children('span.text').toggle();
$(this).animate({right: '5px'}, 'fast');
})
.bind('mouseleave', function (e) {
$(this).children('span.text').toggle();
$(this).animate({right: '0px'}, 'fast');
});
|
[
"physics.stackexchange",
"0000150841.txt"
] | Q:
Introducing cut-off in a renormalisation procedure for quantum mechanics
I've been reading a paper on renormalisation theory as applied to a simple one-particle Coulombic system with a short-range potential.
In the process of renormalisation, the authors introduce an ultraviolet cutoff into the Coulomb potential through its Fourier transform:
$$
\frac{1}{r} \xrightarrow{\text{F.T.}} \frac{4\pi}{q^{2}} \xrightarrow{\text{cutoff}} \frac{4\pi}{q^{2}} e^{-q^{2}a^{2}/2} \xrightarrow{\text{F.T.}} \frac{\operatorname{erf}(r/\sqrt{2}a)}{r} $$
It would really help me out if you could explain in plain language what is going on here.
I am a fourth year undergraduate student with only a basic knowledge of quantum mechanics, so you might have to dumb down a bit.
A:
What you describe is usually called regularization, as distinct from renormalization, although the terms are related. It could help to cite the paper that you are reading, but in any case, it often happens that long wavelength physics do not depend exactly on the details of short distance physics. For example, if you are scattering a particle off of a coulomb potential, then with finite momentum (finite wavelength) there is not enough resolution to see what is happening deep inside the potential well. For example, in scattering one proton off of another (Rutherford Scattering), the amplitude for the two protons to get close enough for us to see sub-nuclear details is vanishingly small at low momentum. On the other hand, if we just try to plug the Coulomb potential into our formulas, the calculations blow up.
The idea then is to pick a length scale and agree that all momenta will be such that wavelengths are greater than this length scale. We then deform the Coulomb potential so that our calculations don't misbehave at $r=0$. Because our wavelengths are much longer than our cut-off, the details of how we deform the potential shouldn't affect the answer (this is not always guaranteed). If this is indeed the case, taking the cut-off length scale to $0$ is equivalent to our answers converging to some limit.
The reason we perform the cut-off in momentum space is because this is the natural space where short distance physics separates from long distance physics (the Fourier transform separates wave functions according to their wavelength). It is not always the way things are done. In quantum field theory, for example, one way is to take the dimension of spacetime as a free parameter. The exact choice of regularization procedure depends on what is convenient for the calculation. The divergence in your calculation is ultimately connected to the fact that the photon is massless. The regulator that is used is to give the photon a finite but small mass.
To summarize: it's a mathematical trick that depends on a decoupling between short distance and long distance physics. The latter is called renormalizability.
|
[
"stackoverflow",
"0036740220.txt"
] | Q:
JRXML - Eliminating repeated header on the bottom
We've got a problem when printing some PDF reports using JasperReports. These reports are basically a breakdown of all the sales made to a specific client over the course of the years.
We take information from a DB, transform it using Java, and print the reports in PDF. The problem is, in some rare cases, some headers appear repeated at the bottom of each page:
Ideally, we should be able to omit that loose header and keep only the one on the new page, but I can't seem to do it, at least via TIBCO Jaspersoft Studio.
I don't actually know much about these reports, but any information you need to help, feel free to ask.
Thanks in advance.
EDIT: Following Petter Friberg's comment, here are parts of the JRXML being used in this report. I omitted some parts that follow the same properties, but I think this should be clear enough.
<jasperReport xmlns="http://jasperreports.sourceforge.net/jasperreports" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://jasperreports.sourceforge.net/jasperreports http://jasperreports.sourceforge.net/xsd/jasperreport.xsd" name="Historique" language="groovy" pageWidth="595" pageHeight="842" columnWidth="483" leftMargin="56" rightMargin="56" topMargin="43" bottomMargin="43" isSummaryWithPageHeaderAndFooter="true" resourceBundle="reportLabels" whenResourceMissingType="Empty" uuid="80db05e1-8ca3-483d-86cc-5947dc62296b">
<property name="ireport.zoom" value="1.0"/>
<property name="ireport.x" value="0"/>
<property name="ireport.y" value="0"/>
<property name="com.jaspersoft.studio.unit." value="pixel"/>
<property name="com.jaspersoft.studio.layout" value="com.jaspersoft.studio.editor.layout.VerticalRowLayout"/>
<property name="com.jaspersoft.studio.unit.topMargin" value="mm"/>
<property name="com.jaspersoft.studio.unit.bottomMargin" value="mm"/>
<property name="com.jaspersoft.studio.unit.leftMargin" value="mm"/>
<property name="com.jaspersoft.studio.unit.rightMargin" value="mm"/>
<property name="com.jaspersoft.studio.unit.pageHeight" value="pixel"/>
<property name="com.jaspersoft.studio.unit.pageWidth" value="pixel"/>
<property name="com.jaspersoft.studio.unit.columnWidth" value="pixel"/>
<property name="com.jaspersoft.studio.unit.columnSpacing" value="pixel"/>
<template><![CDATA["/jasper/styles_letter.jrtx"]]></template>
<style name="Table Group Header" mode="Opaque" forecolor="#000000" backcolor="#99CCFF" vTextAlign="Middle" fontName="Arial" fontSize="10" isBold="true" pdfFontName="Helvetica-Bold"/>
<!-- ... multiple styles -->
<parameter name="title" class="java.lang.String"/>
<!-- ... multiple parameters-->
<parameter name="addressCity" class="java.lang.String"/>
<field name="reduction" class="java.lang.String">
<fieldDescription><![CDATA[reduction]]></fieldDescription>
</field>
<!-- ... multiple fields -->
<field name="year" class="java.lang.String"/>
<group name="DescriptionGroup">
<groupExpression><![CDATA[$F{year}]]></groupExpression>
<groupHeader>
<band height="30">
<property name="com.jaspersoft.studio.unit.height" value="pixel"/>
<textField evaluationTime="Group" evaluationGroup="DescriptionGroup" bookmarkLevel="2">
<reportElement style="Table Group Header" mode="Opaque" x="0" y="0" width="483" height="15" printWhenGroupChanges="DescriptionGroup" backcolor="#B0B0B0" uuid="f626bb36-a919-48b9-98b3-756d1ce9812b">
<property name="local_mesure_unitx" value="pixel"/>
<property name="com.jaspersoft.studio.unit.x" value="px"/>
<property name="com.jaspersoft.studio.unit.height" value="pixel"/>
<printWhenExpression><![CDATA[new Boolean($P{emptyList} != true)]]></printWhenExpression>
</reportElement>
<box leftPadding="10">
<topPen lineWidth="0.0" lineStyle="Solid" lineColor="#000000"/>
<leftPen lineWidth="0.0" lineStyle="Solid" lineColor="#000000"/>
<bottomPen lineWidth="0.5" lineStyle="Solid" lineColor="#000000"/>
<rightPen lineWidth="0.0" lineStyle="Solid" lineColor="#000000"/>
</box>
<textElement markup="none">
<font size="9" isBold="true"/>
</textElement>
<textFieldExpression><![CDATA[$F{year}]]></textFieldExpression>
</textField>
<!-- ... multiple text fields following the same principle -->
</band>
</groupHeader>
</group>
<background>
<band splitType="Stretch"/>
</background>
<title>
<!-- this only appears in the first page, has all the customer info -->
</title>
<columnHeader>
<band splitType="Stretch"/>
</columnHeader>
<detail>
<band height="25" splitType="Stretch">
<property name="com.jaspersoft.studio.layout" value="com.jaspersoft.studio.editor.layout.HorizontalRowLayout"/>
<property name="com.jaspersoft.studio.unit.height" value="pixel"/>
<textField pattern="" isBlankWhenNull="true">
<reportElement style="Zebra" mode="Opaque" x="0" y="0" width="130" height="25" uuid="234d3832-bd30-40a0-b8e5-eac964158000">
<property name="local_mesure_unitheight" value="pixel"/>
<property name="local_mesure_unitx" value="pixel"/>
<property name="com.jaspersoft.studio.unit.x" value="px"/>
<property name="com.jaspersoft.studio.unit.height" value="pixel"/>
<printWhenExpression><![CDATA[new Boolean($P{emptyList} != true)]]></printWhenExpression>
</reportElement>
<textElement>
<font size="8" isBold="false"/>
<paragraph leftIndent="3"/>
</textElement>
<textFieldExpression><![CDATA[$F{article}]]></textFieldExpression>
</textField>
<!-- ... multiple fields following the same principle. this is the body of each sub-table, showing the info for each year -->
</band>
</detail>
<pageFooter>
<band height="25" splitType="Stretch">
<!-- ... -->
</band>
</pageFooter>
<summary>
<band height="40" splitType="Stretch">
<!-- ... shows a summary of all the info shown on each table -->
</band>
</summary>
</jasperReport>
Thanks in advance.
A:
You are using a group the generate the header, so you can use this attribute minHeightToStartNewPage, to determine how much space needs to remain otherwise break to new page.
<group name="DescriptionGroup" minHeightToStartNewPage="60">
or if you like to force it to always start on new page
<group name="DescriptionGroup" isStartNewPage="true">
|
[
"stackoverflow",
"0020084147.txt"
] | Q:
c++ inheritance and shared / non shared functions
Thank you in advance for your help.
Here's my problem :
I've got subclasses (x, y, z) derived from a class (A). Some functions are shared (declared in A) and others are not (declared in the subclasses).
All the objects are stored in one map, map<string,A> Groups.
Then, I want to loop over all the functions using an iterator, but there comes the problem of the functions that belong only to one class: it says that class A doesn't contain those functions ...
I would like to say
for(it=Groups.begin(); it!=Groups.end(); ++it)
{
it->second.functionShared1()
if objects1 belongs to class x : it->second.functionsOfClassX and it understands it has to find the function in class x.
I suppose this will be impossible, but if you have an idea of how I can resolve this problem I will be really grateful.
I thought about creating virtual functions, but it would be a mess, or about creating a map for each class and a big map containing all the maps. But then I don't know how to iterate it and how to declare outermap["x"]=innermap[x]. So that's it, sorry, I just began C++, I hope I explained it well.
A:
Why not just make a virtual function:
class A
{
//...
virtual void executeMyJunk();
};
for( it=Groups.begin(); it!=Groups.end(); ++it ) it->second.executeMyJunk();
Provide an implementation for that in your subclasses. You can also provide an implementation in A to call functions that are common to all classes.
void A::executeMyJunk()
{
EveryoneHasThisFunction();
}
void x::executeMyJunk()
{
// Call common functions
A::executeMyJunk();
// Call functions specific to this class
DoExxyStuff();
}
One thing I should point out is that if you plan to have virtual methods, you will need to store A* (or a smart pointer, eg std::unique_ptr<A>) in your map, not just an instance of A.
I'm not sure if I interpreted your question correctly though. Perhaps this is not useful to you. If you actually meant that you want to execute a specific function only if a class supports that function, then you can use dynamic_cast. Here I assume that your map stores pointers:
for( it=Groups.begin(); it!=Groups.end(); ++it ) {
x *xinst = dynamic_cast<x*>(it->second);
if( xinst ) xinst->DoExxyStuff();
}
|
[
"stackoverflow",
"0027348605.txt"
] | Q:
How can I create a if-else loop with two optional conditions
I want my code to be basically like this:
If (something OR something else)
{
Do magical things;
}
else
{
cry me a river;
}
How can I do it so there are two optional conditions, meaning both do not need to be true (just one) in order for the loop to do 'magical' things.
A:
You have almost answered your own question. :)
Type simply:
if((first_logical_value) || (second_logical_value)){
// ... do magical things
} else {
// ... do other things
}
If, in the case of two false values, you don't want to do anything, skip the else part of this statement. You can find things like this by searching internet resources, e.g.:
https://docs.oracle.com/javase/tutorial/java/nutsandbolts/if.html
http://www.erpgreat.com/java/java-boolean-logical-operators.htm
|
[
"stackoverflow",
"0063191886.txt"
] | Q:
xUnit InlineData unexpected result
I've got the following class I'm trying to unit test (example class):
using System;
public class Checker
{
public bool Check<T>(T valueA, T valueB)
{
if (typeof(T) != typeof(string))
throw new NotSupportedException();
return true;
}
}
When I call new Checker().Check(null, "test") it correctly returns true but when I use xUnit with InlineData as follows:
[Theory]
[InlineData(null, "test")]
[InlineData("test", null)]
public void TestChecker<T>(T valueA, T valueB)
{
var checker = new Checker();
Assert.True(checker.Check(valueA, valueB));
}
Both tests should pass but they don't - instead a NotSupportedException is thrown on the first test. According to the Test Explorer... this was passed on the first test:
Namespace.TestChecker<Object>(valueA: null, valueB: "test") - why is T type of object instead of string as when I call it directly and how can I prevent this from happening?
A:
Ok actually there is a better way of achieving this while maintaining InlineData:
[Theory]
[InlineData(null, "test")]
[InlineData("test", null)]
public void TestChecker<T>(T valueA, T valueB)
{
var checker = new Checker();
Assert.True(checker.Check((dynamic) valueA, (dynamic) valueB));
}
Although this is not the prettiest because Test Explorer will still show TestChecker<Object>(valueA: null, valueB: "test") but it works...
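As a side note, here is a minimal sketch (my suggestion, not from the original answer) of another workaround: since Check only supports strings anyway, declaring the test parameters as string makes the compiler infer T = string even for null literals:
[Theory]
[InlineData(null, "test")]
[InlineData("test", null)]
public void TestCheckerWithStrings(string valueA, string valueB)
{
    // With string parameters, T is inferred as string, so the null literal
    // no longer degrades the type argument to object.
    var checker = new Checker();
    Assert.True(checker.Check(valueA, valueB));
}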
|
[
"stackoverflow",
"0052070215.txt"
] | Q:
UWP: Binding ListView ItemTemplate MenuFlyOut events to ViewModel
I have an existing UWP app to manage passwords for websites and accounts, but when I wrote it about 3 years ago I did not know MVVM very well, so all the event handlers are in the View's code-behind. Now I'm trying to resolve this and make it more MVVM-adherent by moving that code to the ViewModel.
One of the existing features I have is a flyout menu on each ListView item so the user can edit/delete the entry (and do a couple of other functions) but because I am defining a DataType for the ListView ItemTemplate data template it will now not recognise the binding to the Click event handler in my Viewmodel.
As you'd expect I have my ViewModel defined both in the namespaces and in the page data context as follows:
<Page ....
xmlns:vm="using:PassPort.ViewModels"
...>
<Page.DataContext>
<vm:MainViewModel/>
</Page.DataContext>
And here is my ListView and it's ItemTemplate and DataTemplate. In each MenuFlyoutItem in the FlyOut I'm trying to tell it to use the handler in my ViewModel but it cannot resolve 'vm' - it shows the squiggly line and says "the property 'vm' was not found in type 'Account'".
</ListView Name="lvwAccounts"
ItemsSource="{x:Bind vm.AccountsView}"
.....>
<ListView.ItemTemplate>
<DataTemplate x:DataType="model:Account">
<Grid MinHeight="36" HorizontalAlignment="Left" RightTapped="AccountsList_RightTapped">
<FlyoutBase.AttachedFlyout>
<MenuFlyout Placement="Bottom">
<MenuFlyoutItem x:Name="OpenWebsiteButton" Text="Open Website" Click="{x:Bind vm.FlyoutOpenWebsiteButton_Click}"/>
<MenuFlyoutItem x:Name="EditButton" Text="Edit Account" Click="{x:Bind vm.FlyoutEditButton_Click}"/>
<MenuFlyoutItem x:Name="AddButton" Text="Add Account" Click="{x:Bind vm. FlyoutAddButton_Click}"/>
<MenuFlyoutItem x:Name="DeleteButton" Text="Delete Account" Click="{x:Bind vm.FlyoutDeleteButton_Click}"/>
</MenuFlyout>
</FlyoutBase.AttachedFlyout>
<Grid.ColumnDefinitions>
<ColumnDefinition Width="200"/>
<ColumnDefinition Width="150"/>
<ColumnDefinition Width="200"/>
<ColumnDefinition Width="150"/>
<ColumnDefinition Width="*"/>
</Grid.ColumnDefinitions>
<TextBlock Grid.Row="0" Grid.Column="0" Text="{x:Bind AccountName}" Foreground="{StaticResource Light}"
VerticalAlignment="Center" HorizontalAlignment="Stretch"/>
<TextBlock Grid.Row="0" Grid.Column="1" Text="{x:Bind Category}" Foreground="{StaticResource Light}"
VerticalAlignment="Center" HorizontalAlignment="Stretch" TextWrapping="Wrap" />
<TextBlock Grid.Row="0" Grid.Column="2" Text="{x:Bind UserID}" Foreground="{StaticResource Light}"
VerticalAlignment="Center" HorizontalAlignment="Left"/>
<TextBlock Grid.Row="0" Grid.Column="3" Text="{x:Bind Password}" Foreground="{StaticResource Light}"
VerticalAlignment="Center" HorizontalAlignment="Left" />
<TextBlock Grid.Row="0" Grid.Column="4" Text="{x:Bind PasswordHint}" Foreground="{StaticResource Light}"/>
</Grid>
</DataTemplate>
</ListView.ItemTemplate>
</ListView>
Everywhere else in the code 'vm' is resolved without issue so it seems that because of the DataType it then can't/won't resolve my ViewModel reference.
I've also tried using 'Click="{Binding vm.[event handler]"} but it makes no difference - so does anyone know how I can resolve this?
A:
Regarding DataTemplate and x:Bind, the Microsoft documentation says:
Inside a DataTemplate (whether used as an item template, a content template, or a header template), the value of Path is not interpreted in the context of the page, but in the context of the data object being templated. So that its bindings can be validated (and efficient code generated for them) at compile-time, a DataTemplate needs to declare the type of its data object using x:DataType.
To bind to the ViewModel inside the DataTemplate, you need to use Binding instead of x:Bind, like this code:
<ListView Name="lvwAccounts"
ItemsSource="{x:Bind vm.AccountsView}" >
<ListView.ItemTemplate>
<DataTemplate x:DataType="model:Account">
<FlyoutBase.AttachedFlyout>
<MenuFlyout Placement="Bottom">
<MenuFlyoutItem x:Name="OpenWebsiteButton" Text="Open Website" Command="{Binding ElementName=lvwAccounts,Path=DataContext.FlyoutOpenWebsite}"/>
</MenuFlyout>
</FlyoutBase.AttachedFlyout>
</DataTemplate>
</ListView.ItemTemplate>
</ListView>
But Binding in UWP can't bind to a method; it can only bind to a Command, so you should change your methods into commands.
I can't find a usable Command property on MenuFlyoutItem here, so we may use a Behavior to bind the UI event to a command in the ViewModel; see: WPF Binding UI events to commands in ViewModel
See: https://stackoverflow.com/a/40774956/6116637
Why can't I use {x:Bind {RelativeSource Self}} in a data template?
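As an illustration of "change your methods into commands", here is a minimal ICommand wrapper sketch; the RelayCommand class name and the FlyoutOpenWebsite property are my own hypothetical names, not from the post:
using System;
using System.Windows.Input;

public class RelayCommand : ICommand
{
    private readonly Action<object> _execute;
    public RelayCommand(Action<object> execute) { _execute = execute; }

    public event EventHandler CanExecuteChanged; // unused in this sketch
    public bool CanExecute(object parameter) { return true; }
    public void Execute(object parameter) { _execute(parameter); }
}

// In MainViewModel, expose the command that the XAML above binds to:
// public ICommand FlyoutOpenWebsite { get; private set; }
// FlyoutOpenWebsite = new RelayCommand(p => OpenWebsite((Account)p));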
|
[
"rpg.stackexchange",
"0000125625.txt"
] | Q:
Can you still Target Allies with a spell if you have the Status Spell in effect and are blinded?
I was wondering if you could target an ally while being blinded if you had the Status spell on them.
Status
When you need to keep track of comrades who may get separated, status allows you to mentally monitor their relative positions and general condition. You are aware of direction and distance to the creatures and any conditions affecting them: unharmed, wounded, disabled, staggered, unconscious, dying, nauseated, panicked, stunned, poisoned, diseased, confused, or the like. Once the spell has been cast upon the subjects, the distance between them and the caster does not affect the spell as long as they are on the same plane of existence. If a subject leaves the plane, or if it dies, the spell ceases to function for it.
Can you for example cast Heal (with the Reach Spell metamagic feat) or Remove Fear on them?
A:
Strictly RAW, yes, sometimes
The description for the condition Blinded, as you have linked, states (emphasis mine):
Unable to see... All checks and activities that rely on vision (such
as reading and Spot checks) automatically fail. All opponents are
considered to have total concealment (50% miss chance) relative to the
blinded character.
Your allies are not opponents, so they do not have total concealment. So you still need to succeed on a ranged touch attack roll to hit your ally to cast (Reach) Heal (as Reach Spells are treated as rays), with Status definitely letting you know which way to aim, but by RAW you do not suffer the 50% miss chance.
Exploring targeted spells - touch spells can be delivered to allies even without Status. However, while blinded you can only cast Remove Fear on allies that you are touching. This is because the spell selects particular Target or Targets but is not a ray spell. Under the rules for Aiming a Spell on p175 of PHB:
Target or Targets: ...You must be able to see or touch the target, and you must specifically
choose that target.
The Status spell does not let you see your allies, so by RAW you cannot cast targeted spells on them.
|
[
"stackoverflow",
"0002689327.txt"
] | Q:
How could I pass the float number by value in Objective-C?
In the following code, I passed "1.3f" to printFloat, but the wrong value (in this case, -2.000000) was received as "f". And there is a warning that "AppDelegate may not respond to -printFloat:"
Where did I get it wrong?
- (void)applicationDidFinishLaunching:(UIApplication *)application{
// Override point for customization after application launch
[window makeKeyAndVisible];
[self printFloat:1.3f];
}
- (void)printFloat:(float)f {
NSLog(@"%f",f);
}
A:
You can easily resolve both your problems by adding the method prototype into the header file (<sameName>.h):
- (void)printFloat:(float)f;
|
[
"stackoverflow",
"0012344499.txt"
] | Q:
Filter data by month
I have source data which contains months as below:
Oct 2011
Nov 2011
Dec 2011
Jan 2012
Feb 2012
I need to select all the months that invoice falls on to. The invoice has properties below:
Invoice.StartDate=13-10-2011;
Invoice.EndDate=11-01-2012;
sourceData.Where(x => x.SourceDate.Month >= Invoice.StartDate.Month &&
x.SourceDate.Year==Invoice.StartDate.Year &&
x.SourceDate.Month <= Invoice.EndDate.Month &&
x.SourceDate.Year == Invoice.EndDate.Year).ToList();
The above query returns 0. I'm expecting the filtered data source result below:
Oct 2011
Nov 2011
Dec 2011
Jan 2012
Could someone help me to achieve above?
A:
This solution simply creates DateTimes with Day set to 1 (DateTime days start at 1, so 0 would throw an ArgumentOutOfRangeException), so the year-month pairs can be compared directly. It uses query syntax because it's more convenient for creating the new DateTimes.
var startYearMonth = new DateTime(Invoice.StartDate.Year, Invoice.StartDate.Month, 1);
var endYearMonth = new DateTime(Invoice.EndDate.Year, Invoice.EndDate.Month, 1);
(from data in sourceData
let dataYearMonth = new DateTime(data.SourceDate.Year, data.SourceDate.Month, 1)
where dataYearMonth >= startYearMonth && dataYearMonth <= endYearMonth
select data).ToList();
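A method-syntax variation (my own sketch) avoids constructing DateTimes altogether by encoding year and month into one comparable integer:
// 2011-10 becomes 2011*12 + 10, so year-months compare as plain ints.
int start = Invoice.StartDate.Year * 12 + Invoice.StartDate.Month;
int end = Invoice.EndDate.Year * 12 + Invoice.EndDate.Month;
var filtered = sourceData
    .Where(x => x.SourceDate.Year * 12 + x.SourceDate.Month >= start &&
                x.SourceDate.Year * 12 + x.SourceDate.Month <= end)
    .ToList();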
|
[
"stackoverflow",
"0037258508.txt"
] | Q:
Unexpected T vs &T as type parameter in Rust
I'm genericising a Graph that I wrote. Current signature is
#[derive(Debug)]
pub struct Graph<T: Clone + Hash + Eq> {
nodes: HashMap<T, Node<T>>,
edges: HashMap<T, Vec<Edge<T>>>
}
#[derive(PartialEq, Debug)]
pub struct Node<T: Clone + Hash + Eq> {
pub id: T,
pub x: f64,
pub y: f64
}
#[derive(PartialEq, Debug)]
pub struct Edge<T: Clone + Hash + Eq> {
pub id: T,
pub from_id: T,
pub to_id: T,
pub weight: i64
}
I'm using it in a specific function and the calls to other functions are failing to compile.
First, the use:
fn reducer<T>(graph: Graph<T>, untested_nodes: HashSet<T>, mut results: Vec<HashSet<T>>) -> Graph<T>
where T: Clone + Hash + Eq {
match untested_nodes.iter().next() {
None => {
collapsed_graph(&graph, &results)
}
Some(root) => {
let connected_nodes = explore_from(&root, &graph);
let difference = untested_nodes.difference(&connected_nodes)
.cloned()
.collect();
results.push(connected_nodes);
reducer(graph,
difference,
results
)
}
}
}
The signature of explore_from
fn explore_from<T: Clone + Hash + Eq>(root: &T, graph: &Graph<T>) -> HashSet<T> {
The compiler error:
Compiling efficient_route_planning v0.1.0 (file:///Users/stuart/coding/efficient_route_planning)
src/connected_component.rs:19:55: 19:61 error: mismatched types:
expected `&weighted_graph::Graph<&T>`,
found `&weighted_graph::Graph<T>`
(expected &-ptr,
found type parameter) [E0308]
src/connected_component.rs:19 let connected_nodes = explore_from(&root, &graph);
^~~~~~
src/connected_component.rs:19:55: 19:61 help: run `rustc --explain E0308` to see a detailed explanation
src/connected_component.rs:20:56: 20:72 error: mismatched types:
expected `&std::collections::hash::set::HashSet<T>`,
found `&std::collections::hash::set::HashSet<&T>`
(expected type parameter,
found &-ptr) [E0308]
src/connected_component.rs:20 let difference = untested_nodes.difference(&connected_nodes)
^~~~~~~~~~~~~~~~
src/connected_component.rs:20:56: 20:72 help: run `rustc --explain E0308` to see a detailed explanation
src/connected_component.rs:23:26: 23:41 error: mismatched types:
expected `std::collections::hash::set::HashSet<T>`,
found `std::collections::hash::set::HashSet<&T>`
(expected type parameter,
found &-ptr) [E0308]
src/connected_component.rs:23 results.push(connected_nodes);
^~~~~~~~~~~~~~~
src/connected_component.rs:23:26: 23:41 help: run `rustc --explain E0308` to see a detailed explanation
error: aborting due to 3 previous errors
Could not compile `efficient_route_planning`.
The full code: https://github.com/shterrett/efficient_route_planning/blob/generic-graph/src/connected_component.rs#L19
As far as I can tell, everything should be &Graph<T>, not &Graph<&T>. I'm not sure where the error is.
Versions:
% multirust show override
multirust: default toolchain: stable
multirust: default location: /Users/xxxxxx/.multirust/toolchains/stable
rustc 1.8.0 (db2939409 2016-04-11)
cargo 0.9.0-nightly (8fc3fd8 2016-02-29)
A:
The problem seem to be here
let connected_nodes = explore_from(&root, &graph);
untested_nodes.iter().next() returns Option<&T>, so the match Some(root) makes root: &T. This implies that &root is &&T, and the inferred T in explore_from is &T (the T of reducer). I expect that removing the reference from root fixes this:
let connected_nodes = explore_from(root, &graph);
|
[
"stackoverflow",
"0050995292.txt"
] | Q:
React Duplicated Name but different fields
In a React application I fetch JSON data from an API to use with react-select:
import Select from "react-select";
import fetch from "isomorphic-fetch";
return fetch(`some API localhost`)
.then(response => response.json())
.then(json => {
return { options: json };
})
Now the options look like below:
{"Grade": "Math K", "Domain": "Counting & Cardinality"},
{"Grade": "Math K", "Domain": "Geometry"},
{"Grade": "Math 1", "Domain": "Counting & Cardinality"},
{"Grade": "Math 1", "Domain": "Orders of Operation"},
{"Grade": "Math 1", "Domain": "Geometry"},
and I want to combine the duplicate Grade and make it something like:
{"Grade": "Math K", "Domain": ["Counting & Cardinality", "Geometry"]},
{"Grade": "Math 1", "Domain": ["Counting & Cardinality" , "Geometry" , "Orders of Operation" ]}
how would I do it using react?
A:
This is not a super complex problem. You need to think about how to make the given input array values iterable.
Once you have a way to iterate, it becomes easier to apply the transformation logic you asked for, like merging domains for a given Grade.
const response = [{
"Grade": "Math K",
"Domain": "Counting & Cardinality"
},
{
"Grade": "Math K",
"Domain": "Geometry"
},
{
"Grade": "Math 1",
"Domain": "Counting & Cardinality"
},
{
"Grade": "Math 1",
"Domain": "Orders of Operation"
},
{
"Grade": "Math 1",
"Domain": "Geometry"
}
];
const output = {};
response.forEach((item) => {
const grade = item.Grade;
// Create a Map / Object to access a particular Grade easily
output[grade] = output[grade] || {};
output[grade].Grade = grade;
output[grade].Domain = output[grade].Domain || [];
output[grade].Domain.push(item.Domain);
})
const outputObj = Object.keys(output).map((item) => output[item]);
console.log(outputObj);
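The same grouping can also be written as a single pass with reduce; this is just an alternative sketch of the logic above:
const grouped = Object.values(response.reduce((acc, item) => {
  // Create the bucket for this Grade on first sight, then collect its Domains.
  acc[item.Grade] = acc[item.Grade] || { Grade: item.Grade, Domain: [] };
  acc[item.Grade].Domain.push(item.Domain);
  return acc;
}, {}));
console.log(grouped);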
|
[
"stackoverflow",
"0008536883.txt"
] | Q:
Lambda expression compare boolean as false results in NotSupportedException
I can't seem to figure out how to compare boolean values in a C# lambda expression for EF4. I've tried:
cl.Where(c => c.Received == false);
and this:
cl.Where(c => !c.Received);
and this:
cl.Where(c => c.Received.Equals(false));
but I keep getting this error:
Exception Details: System.NotSupportedException: Unable to create a constant value
of type 'System.Object'. Only primitive types ('such as Int32, String, and Guid')
are supported in this context.
After spending a good amount of time researching this I'm still missing something. I'm fairly new to Lambdas so pointers would be appreciated.
Edit2: more code re:comment
int bar = 42;
var cl = db.foo.Where(c => c.baz.Equals(bar));
//codez (just an if statement)
cl.Where(c => c.Received == false).OrderByDescending(c => c.dateAdded);
That's it. Even if I remove the orderby it still doesn't work
Edit3:
Solution:
int bar = 42;
var cl = db.foo.Where(c => c.baz == bar);
cl.Where(c => c.Received == false).OrderByDescending(c => c.dateAdded);
A:
The issue is most likely in the c.baz.Equals(bar) line. If you change it to
var cl = db.foo.Where(c => c.baz.Equals(bar)).ToList();
you should see the exception thrown on that line, because you force evaluation of the IQueryable<T>.
Instead of comparing objects, you should compare their IDs, like this:
(edited to reflect the conversation in the comments and changes to the OP)
var cl = db.foo.Where(c => c.baz == bar.id);
|
[
"stackoverflow",
"0044622885.txt"
] | Q:
How to read xml file in stored procedure and insert it in table in sql server
Hi, I have the following XML file. How do I read it and insert the data into a table using a stored procedure?
<NewDataSet>
<Root RowNumber=1; answer = 1; TAnswer=null/>
<Root RowNumber=2; answer = 6; TAnswer=yes for Q 2/>
<Root RowNumber=3; answer = 9; TAnswer=null/>
<Root RowNumber=4; answer = -1; TAnswer=q 4 no suggestions/>
</NewDataSet>
A:
Considering you have a valid xml just like the one below.
DECLARE @xml XML
SET @xml = '
<NewDataSet>
<Root RowNumber = "1" answer = "1" TAnswer = "null" />
<Root RowNumber = "2" answer = "6" TAnswer = "yes for Q 2" />
<Root RowNumber = "3" answer = "9" TAnswer = "null" />
<Root RowNumber = "4" answer = "-1" TAnswer = "q 4 no suggestions" />
</NewDataSet>'
SELECT RowNumber = T.A.value('@RowNumber', 'int'),
answer = T.A.value('@answer', 'int'),
TAnswer = T.A.value('@TAnswer', 'varchar(1000)')
FROM @xml.nodes('//NewDataSet/Root') T (A)
Note: there are two mistakes in your XML. Attribute values are not enclosed in double quotes, and the attributes should be separated by spaces, not semi-colons.
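Since the question asks for a stored procedure, here is a sketch that wraps the SELECT in one; the target table and procedure names (Answers, usp_ImportAnswers) are made up for illustration:
CREATE PROCEDURE usp_ImportAnswers
    @xml XML
AS
BEGIN
    -- Shred the XML and insert one row per Root element.
    INSERT INTO Answers (RowNumber, answer, TAnswer)
    SELECT T.A.value('@RowNumber', 'int'),
           T.A.value('@answer', 'int'),
           T.A.value('@TAnswer', 'varchar(1000)')
    FROM @xml.nodes('//NewDataSet/Root') T (A);
END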
|
[
"stackoverflow",
"0054112965.txt"
] | Q:
Enforcing that all columns have different values
A particular val# should be in no more than one column. Multiple columns can be empty though. (I don't want to use NULL instead of empty.)
CREATE TYPE vals AS ENUM ('val1', 'val2', 'val3', 'val4', 'val5', ... 'empty');
CREATE TABLE some_table
( ...
column1 vals NOT NULL,
column2 vals NOT NULL,
column3 vals NOT NULL,
CONSTRAINT some_table_column_vals_check CHECK (???)
... );
Valid combinations e.g.:
column1: val1
column2: val2
column3: val4
column1: val1
column2: empty
column3: empty
Invalid combinations e.g.:
column1: val1
column2: val3
column3: val3
column1: val2
column2: empty
column3: val2
Is there a neat way to do this with a (preferably not too long) constraint, or should I write a trigger function for that?
A:
One method is a rather painful check constraint (note it must reference the actual column names):
alter table some_table add constraint chk_some_tablefields
    check ( (column1 not in (column2, column3) or column1 = 'empty') and
            (column2 not in (column3) or column2 = 'empty')
          );
However, I would caution you about your data structure. You should probably have a junction/association table with one row per val and some_table id. Or you might want to just store the values in an array, if you want a variable number of them.
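A sketch of that junction-table design (table and column names are illustrative), with one row per (some_table id, val) pair and 'empty' slots simply having no row:
CREATE TABLE some_table_vals (
    some_table_id int  NOT NULL REFERENCES some_table (id),
    val           vals NOT NULL,
    -- The key forbids repeating the same val within one some_table row.
    PRIMARY KEY (some_table_id, val)
);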
|
[
"stackoverflow",
"0008993754.txt"
] | Q:
How to update a M2M field in django via AJAX call
I am building a user profile in django, where I want the user to enter his skill set. The skill field is a ManyToMany field to a model name Skills. Below is shown the models.py file
class UserProfile(models.Model):
name = models.CharField(max_length = 300, null = True, blank=True)
location = models.CharField(max_length=500, null=True, blank=True)
birthday = models.DateField(null = True, blank = True)
user = models.ForeignKey(User, unique=True)
skills = models.ManyToManyField(Skill, blank=True, null=True)
class Skill(models.Model):
name = models.CharField(max_length=50)
def __unicode__(self):
return u'%s' %(self.name)
As you can see all the fields are set null=True. This is because I am keeping the fields empty and want the user to input them as and when he/she wants to. So I am updating all these fields using AJAX call. I have managed to edit all the other fields, but I do not know how can I edit a M2M field
I can get a list of all the skills linked to a profile using profile.skills.all() but I do not know how to update this list. I basically want to add or remove skill objects from this list. I think there is something in the django.db.models.fields.related.ManyRelatedManager using which I can edit the field
Any help is really appreciated. I have not found anything at all on this subject. There is some information about editing this field using a ModelForm but nothing about editing the individual field.
A:
To edit the m2m intermediary table, use the add and remove methods on the ManyRelatedManager.
https://docs.djangoproject.com/en/1.3/ref/models/relations/#django.db.models.fields.related.RelatedManager.add
It's true, the hardest things to google on the django docs are manytomanyfield and other continuous strings. I've blogged about formfield_for_manytomany solely to appear in search results for myself.
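A minimal sketch of an AJAX view built on those methods (the view name, POST fields and response format are my own assumptions, reusing UserProfile and Skill from the question):
import json
from django.http import HttpResponse

def toggle_skill(request):
    profile = UserProfile.objects.get(user=request.user)
    skill = Skill.objects.get(pk=request.POST['skill_id'])
    if request.POST.get('action') == 'add':
        profile.skills.add(skill)    # inserts a row into the M2M table
    else:
        profile.skills.remove(skill) # deletes the row, keeps the Skill itself
    names = [s.name for s in profile.skills.all()]
    return HttpResponse(json.dumps({'skills': names}), content_type='application/json')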
|
[
"stackoverflow",
"0006896205.txt"
] | Q:
Why should I not check bin and obj folders in, in SVN
This seems to be a very basic question, but I'm eager to know the answer. I'm using Subversion (SVN) for source control and I've been checking in all the files, but the client asked me to create a rule in SVN to avoid checking in the bin and obj folders.
Why should I not check the bin and obj folders in?
The client also asked me to keep the solution file outside the repository folder. Why is that?
A:
You should not add any temporary files to SVN, they're temporary. The entire obj directory consists of files that are created during the build process and are then discarded. (sure, they stay on disk because some are re-used, like a cache, when the source files don't change but that's the only reason they're not deleted after each build).
the bin directory is a slightly different matter. It is ok to add binary files to SVN, you probably already do it for icon and image files already. Some people add the built binaries as well, that's a decision that depends on your configuration management processes, there's no 'wrong' answer. However, sometimes your bin directory can become filled with other files that you do not want to add. If you're building .net apps, you'll get a load of dependant dlls copied to the bin directory that are not strictly part of your project. Adding those will just bloat your repository for no benefit. Similarly, there are supporting binaries in bin such as .pdb debug symbol files. These aren't really needed either.
For the solution file, I'm not sure of the question, but if it's not to be checked in, it'll be because a .sln file is just a "wrapper" for one or more project files. It's not strictly needed to build a Visual Studio project, as a new one will be created as needed. I guess your users might create their own .sln files with different groups of projects in them, making each one different for each user. That would be a reason to prevent check-in, so users would not overwrite each other's custom files (though there are ways for a user to prevent modification of a file that is stored in SVN).
So it sounds like your configuration strategy doesn't involve adding any binaries to svn. In which case its a very good idea to prevent this from accidentally happening with a pre-commit hook. I would also recommend adding these exclusions to the client-side global-ignores to assist your users from ever trying to add these files in the first place.
A:
"should not" doesn't apply to everyone. But generally:
1) Don't checkin binaries that can be generated from code.
2) SVN is a source code versioning system, and not designed with binaries in mind. Yes, SVN and other VCSs can handle binaries, but it is not their intended purpose, especially after point 1)
3) Since these are generated from your source code, they will change a lot, unlike libraries that rarely change. Frequently changing binaries tax the VCS, because binary diffing (delta storage) is not as efficient as with source code, so you tend to store more with every change to the binaries.
Coming to the solution (.sln) files, it is ideal to check them in to the repository, though not absolutely necessary. But most, if not all, .NET projects are Visual Studio based, and even for build purposes having a .sln file makes the job much easier, as you can call msbuild on the sln file rather than the csproj (or other project) files, as shown below. You get other advantages like proper dependency compilation, parallel compilation etc.
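For example, a build straight from the solution file could look like this (the solution name is illustrative):
msbuild MySolution.sln /p:Configuration=Release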
A:
You shouldn't really check in any user-specific files or generated output files, as every time you do a check-in you'll be re-merging recompiled output changes. I would recommend ignoring bin, obj and .suo (not .sln) as a starting point, as these will be recreated on compile; then ignore any others that are user-specific or regenerated with every build.
|
[
"stackoverflow",
"0015759630.txt"
] | Q:
qbxml quickbooks item customfield filter
Is there a way to produce an XML request to get only one item and its information by filtering one of custom fields?
For example, I have a "barcode" custom field and I want to get an item by its barcode number.
A:
Answer: No. You cannot filter by custom fields
|
[
"stackoverflow",
"0035813893.txt"
] | Q:
Issue with up and downvote system with AngularJS
I am trying to set up a simple up and down vote system for campaigns:
JSON:
This is my simplified JSON string. It contains the campaign and all its up- and downvotes.
{
"Campaign": {
"id": "106",
"code": "ENDUS15-2RX2Y",
"start": "2016-02-29 23:00:00",
"end": "2016-03-31 22:00:00",
"votes": 4
},
"CampaignVote": [
{
"id": "259",
"vote_score": "1",
"user_id": "26"
},
{
"id": "261",
"vote_score": "1",
"user_id": "10"
},
{
"id": "268",
"vote_score": "1",
"user_id": "34"
},
{
"id": "270",
"vote_score": "-1",
"user_id": "41"
}
]
}
controller.js:
In the controller I am retrieving the campaigns and I also set the ID of the logged in user.
$scope.my_user_id = 10;
$http.post($scope.connection + "/campaigns/all.json")
.success(function(data, status, headers, config) {
$scope.deals = data.deals;
})
HTML:
On my HTML page I show a green up-arrow when the campaign was upvoted by the user, and a red down-arrow when it was downvoted by the user.
<span ng-repeat="vote in deal.CampaignVote">
<button class="icon ion-chevron-up icon-up" ng-class="{'icon-up-selected': vote.user_id == my_user_id && vote.vote_score == 1 }" ng-click="upvote(deal.Campaign.id);" ng-disabled="vote.user_id == my_user_id"></button>
</span>
<span ng-show="deal.CampaignVote.length == 0">
<button class="icon ion-chevron-up icon-up" ng-click="upvote(deal.Campaign.id);"></button>
</span>
<span class="deals-points">{{deal.Campaign.votes}}</span>
<span ng-show="deal.Campaign.votes == null" class="deals-points">0</span>
<span ng-repeat="vote in deal.CampaignVote">
<button class="icon ion-chevron-down icon-down" ng-class="{'icon-down-selected': vote.user_id == my_user_id && vote.vote_score == -1 }" ng-click="downvote(deal.Campaign.id);" ng-disabled="vote.user_id == my_user_id"></button>
</span>
<span ng-show="deal.CampaignVote.length == 0">
<button class="icon ion-chevron-down icon-down" ng-click="downvote(deal.Campaign.id);"></button>
</span>
My problem is that this approach doesn't always work. I am currently looping over all votes. Is there a way to say "if the logged in user upvoted the campaign, make it green"?
A:
Does this help? Plunker
JS
$scope.Campaign = {
"id": "106",
"code": "ENDUS15-2RX2Y",
"start": "2016-02-29 23:00:00",
"end": "2016-03-31 22:00:00",
"votes": 0,
"up_votes": 0,
"down_votes": 0,
"CampaignVote": [
{
"id": "259",
"vote_score": "1",
"user_id": "26"
},
{
"id": "261",
"vote_score": "1",
"user_id": "10"
},
{
"id": "268",
"vote_score": "1",
"user_id": "34"
},
{
"id": "270",
"vote_score": "-1",
"user_id": "41"
}
]
}
for (var i = 0; i < $scope.Campaign.CampaignVote.length; i += 1) {
if (parseInt($scope.Campaign.CampaignVote[i].vote_score) === 1) {
$scope.Campaign.up_votes += 1;
}
else if (parseInt($scope.Campaign.CampaignVote[i].vote_score) === -1) {
$scope.Campaign.down_votes += 1;
}
}
$scope.vote = function (value) {
$scope.Campaign.CampaignVote.push({
"id": $scope.Campaign.CampaignVote.length + 1,
"vote_score": value,
"user_id": $scope.Campaign.CampaignVote.length + 1
});
if (value === 1) {
$scope.Campaign.up_votes += 1;
}
else if (value === -1) {
$scope.Campaign.down_votes += 1;
}
}
Markup
<body ng-controller="MainCtrl">
<div style="text-align:center; width:20px; position:absolute; top:10px; left:10px">
<div style="width:100%; height:10px; background:green;"></div>
<p>{{Campaign.up_votes}}</p>
<div style="width:100%; height:10px; background:black;"></div>
</div>
<br>
<div style="text-align:center; width:20px; position:absolute; top:10px; left:60px">
<div style="width:100%; height:10px; background:black;"></div>
<p>{{Campaign.down_votes}}</p>
<div style="width:100%; height:10px; background:red;"></div>
</div>
<br>
<div style="position:absolute; top:100px;">
<button ng-click="vote(1)">Up</button>
<button ng-click="vote(-1)">Down</button>
</div>
</body>
|
[
"stackoverflow",
"0001225256.txt"
] | Q:
Getting Allen Bauer's TMulticastEvent working
I've been mucking around with Allen Bauer's code for a generic multicast event dispatcher (see his blog posts about it here).
He gives just enough code to make me want to use it, and unfortunately he hasn't posted the full source. I had a bash at getting it to work, but my assembler skills are non-existent.
My problem is the InternalSetDispatcher method. The naive approach is to use the same assembler as for the other InternalXXX methods:
procedure InternalSetDispatcher;
begin
XCHG EAX,[ESP]
POP EAX
POP EBP
JMP SetEventDispatcher
end;
But this is used for procedures with one const parameter, like this:
procedure Add(const AMethod: T); overload;
And SetDispatcher has two parameters, one a var:
procedure SetEventDispatcher(var ADispatcher: T; ATypeData: PTypeData);
So, I assume that the stack would get corrupted. I know what the code is doing (cleaning up the stack frame from the call to InternalSetDispatcher by popping the hidden reference to self and I assume the return address), but I just can't figure out that little bit of assembler to get the whole thing going.
EDIT: Just to clarify, what I am looking for is the assembler that I could use to get the InternalSetDispatcher method to work, ie, the assembler to cleanup the stack of a procedure with two parameters, one a var.
EDIT2: I've amended the question a little, thank you to Mason for his answers so far. I should mention that the code above does not work, and when SetEventDispatcher returns, an AV is raised.
A:
The answer, after I have done a lot of running around on the web, is that the assembler assumes that a stack frame is present when calling in to InternalSetDispatcher.
It seems that a stack frame was not being generated for the call to InternalSetDispatcher.
So, the fix is as easy as turning on stack frames with the {$stackframes on} compiler directive and rebuilding.
Thanks Mason for your help in getting me to this answer. :)
Edit 2012-08-08: If you're keen on using this, you might want to check out the implementation in the Delphi Spring Framework. I haven't tested it, but it looks like it handles different calling conventions better than this code.
Edit: As requested, my interpretation of Alan's code is below. On top of needing stack frames turned on, I also needed to have optimization turned on at the project level for this to work:
unit MulticastEvent;
interface
uses
Classes, SysUtils, Generics.Collections, ObjAuto, TypInfo;
type
// you MUST also have optimization turned on in your project options for this
// to work! Not sure why.
{$stackframes on}
{$ifopt O-}
{$message Fatal 'optimisation _must_ be turned on for this unit to work!'}
{$endif}
TMulticastEvent = class
strict protected
type TEvent = procedure of object;
strict private
FHandlers: TList<TMethod>;
FInternalDispatcher: TMethod;
procedure InternalInvoke(Params: PParameters; StackSize: Integer);
procedure SetDispatcher(var AMethod: TMethod; ATypeData: PTypeData);
procedure Add(const AMethod: TEvent); overload;
procedure Remove(const AMethod: TEvent); overload;
function IndexOf(const AMethod: TEvent): Integer; overload;
protected
procedure InternalAdd;
procedure InternalRemove;
procedure InternalIndexOf;
procedure InternalSetDispatcher;
public
constructor Create;
destructor Destroy; override;
end;
TMulticastEvent<T> = class(TMulticastEvent)
strict private
FInvoke: T;
procedure SetEventDispatcher(var ADispatcher: T; ATypeData: PTypeData);
public
constructor Create;
procedure Add(const AMethod: T); overload;
procedure Remove(const AMethod: T); overload;
function IndexOf(const AMethod: T): Integer; overload;
property Invoke: T read FInvoke;
end;
implementation
{ TMulticastEvent }
procedure TMulticastEvent.Add(const AMethod: TEvent);
begin
FHandlers.Add(TMethod(AMethod))
end;
constructor TMulticastEvent.Create;
begin
inherited;
FHandlers := TList<TMethod>.Create;
end;
destructor TMulticastEvent.Destroy;
begin
ReleaseMethodPointer(FInternalDispatcher);
FreeAndNil(FHandlers);
inherited;
end;
function TMulticastEvent.IndexOf(const AMethod: TEvent): Integer;
begin
result := FHandlers.IndexOf(TMethod(AMethod));
end;
procedure TMulticastEvent.InternalAdd;
asm
XCHG EAX,[ESP]
POP EAX
POP EBP
JMP Add
end;
procedure TMulticastEvent.InternalIndexOf;
asm
XCHG EAX,[ESP]
POP EAX
POP EBP
JMP IndexOf
end;
procedure TMulticastEvent.InternalInvoke(Params: PParameters; StackSize: Integer);
var
LMethod: TMethod;
begin
for LMethod in FHandlers do
begin
// Check to see if there is anything on the stack.
if StackSize > 0 then
asm
// if there are items on the stack, allocate the space there and
// move that data over.
MOV ECX,StackSize
SUB ESP,ECX
MOV EDX,ESP
MOV EAX,Params
LEA EAX,[EAX].TParameters.Stack[8]
CALL System.Move
end;
asm
// Now we need to load up the registers. EDX and ECX may have some data
// so load them on up.
MOV EAX,Params
MOV EDX,[EAX].TParameters.Registers.DWORD[0]
MOV ECX,[EAX].TParameters.Registers.DWORD[4]
// EAX is always "Self" and it changes on a per method pointer instance, so
// grab it out of the method data.
MOV EAX,LMethod.Data
// Now we call the method. This depends on the fact that the called method
// will clean up the stack if we did any manipulations above.
CALL LMethod.Code
end;
end;
end;
procedure TMulticastEvent.InternalRemove;
asm
XCHG EAX,[ESP]
POP EAX
POP EBP
JMP Remove
end;
procedure TMulticastEvent.InternalSetDispatcher;
asm
XCHG EAX,[ESP]
POP EAX
POP EBP
JMP SetDispatcher;
end;
procedure TMulticastEvent.Remove(const AMethod: TEvent);
begin
FHandlers.Remove(TMethod(AMethod));
end;
procedure TMulticastEvent.SetDispatcher(var AMethod: TMethod;
ATypeData: PTypeData);
begin
if Assigned(FInternalDispatcher.Code) and Assigned(FInternalDispatcher.Data) then
ReleaseMethodPointer(FInternalDispatcher);
FInternalDispatcher := CreateMethodPointer(InternalInvoke, ATypeData);
AMethod := FInternalDispatcher;
end;
{ TMulticastEvent<T> }
procedure TMulticastEvent<T>.Add(const AMethod: T);
begin
InternalAdd;
end;
constructor TMulticastEvent<T>.Create;
var
MethInfo: PTypeInfo;
TypeData: PTypeData;
begin
MethInfo := TypeInfo(T);
TypeData := GetTypeData(MethInfo);
inherited Create;
Assert(MethInfo.Kind = tkMethod, 'T must be a method pointer type');
SetEventDispatcher(FInvoke, TypeData);
end;
function TMulticastEvent<T>.IndexOf(const AMethod: T): Integer;
begin
InternalIndexOf;
end;
procedure TMulticastEvent<T>.Remove(const AMethod: T);
begin
InternalRemove;
end;
procedure TMulticastEvent<T>.SetEventDispatcher(var ADispatcher: T;
ATypeData: PTypeData);
begin
InternalSetDispatcher;
end;
end.
A:
From the blog post:
What this function does is removes
itself and the immediate caller from
the call chain and directly transfers
control to the corresponding "unsafe"
method while retaining the passed in
parameter(s).
The code is eliminating the stack frame for InternalAdd, which only has one parameter, Self. It has no effect on the event you passed in, and so it's safe to copy for any other function with only one parameter and the register calling convention.
EDIT: In response to the comment, there's a point you're missing. When you wrote, "I know what the code is doing (cleaning up the stack frame from the parent call)," you were mistaken. It does not touch the parent call. It's not cleaning up the stack frame from Add, it's cleaning up the stack frame from the current call, InternalAdd.
Here's a bit of basic OO theory, since you seem to be a little confused on this point, which I'll admit is a little confusing. Add doesn't really have one parameter, and SetEventDispatcher doesn't have two. They actually have two and three, respectively. The first parameter of any method call that's not declared static is Self, and it's added invisibly by the compiler. So the three Internal functions each have one parameter. That's what I meant when I wrote that.
What Allen's code is doing is working around a compiler limitation. Every event is a method pointer, but there's no "method constraint" for generics, so the compiler doesn't know that T is always going to be an 8-byte record that can be cast to a TMethod. (In fact, it doesn't have to be. You could create a TMulticastEvent<byte> if you really wanted to break your program in new and interesting ways.) The internal methods use assembly to manually emulate a typecast by stripping themselves out of the call stack completely and JMPing (basically a GOTO) to the appropriate method, leaving it with the same parameter list as the function that called it had.
So when you see
procedure TMulticastEvent.Add(const AMethod: T);
begin
InternalAdd;
end;
what it's doing is equivalent to the following, if it would compile:
procedure TMulticastEvent.Add(const AMethod: T);
begin
Add(TEvent(AMethod));
end;
Your InternalSetDispatcher will want to do exactly the same thing: strip its own one-parameter call out, and then jump to SetDispatcher with exactly the same parameter list as the calling method, SetEventDispatcher, had. It doesn't matter what parameters the calling function has, or the function that it's jumping to. What does matter (and this is critical!) is that SetEventDispatcher and SetDispatcher have the same call signature as each other.
So yes, the hypothetical code you posted will work just fine and it won't corrupt the call stack.
|
[
"stackoverflow",
"0045361564.txt"
] | Q:
Random Aframe environment
I want to have random scenarios every time I enter the website.
So I'm trying to return the pickscenario variable from the js script to aframe in this line <a-entity environment="preset: pickscenario"></a-entity>
Here's the code:
<a-entity environment="preset: pickscenario"></a-entity>
<script>
var scenarios = ['ocean', 'universe', 'forest'];
var pickscenario = scenarios[Math.floor(Math.random()*scenarios.length)];
return pickscenario;
</script>
I bet this is quite simple, but I haven't figured it out yet.
A:
For scripts to take effect it is advised to write components, like this:
<script type="text/javascript">
AFRAME.registerComponent('randomscenario', {
init: function(){
var scenarios = ['ocean', 'universe', 'forest'];
var pickscenario = scenarios[Math.floor(Math.random()*scenarios.length)];
this.el.setAttribute('environment', { preset: pickscenario});
}
});
</script>
And then in the html:
<a-entity randomscenario></a-entity>
The init function is called when the scene is loaded.
|
[
"stackoverflow",
"0037564261.txt"
] | Q:
Single NSFetchRequest for parent and child objects
I have a list of data with parent-child hierarchy up to three levels. For example
ItemA (Grandparent)
ItemB1 (Parent)
ItemC1 (Child)
ItemC2 (Child)
ItemC3 (Child)
ItemB2 (Parent)
ItemC4 (Child)
...
All of the items are located in a single NSArrayController and I want to filter the array by using an NSFetchRequest which will return the matching child items and their parents.
For example, if my query matches ItemC1 and ItemC3, the filtered result should be
ItemA (Grandparent)
ItemB1 (Parent)
ItemC1 (Child)
ItemC3 (Child)
All items have parent and children(1-N) properties in order to track the relationships.
Any suggestions will be appreciated.
A:
I have started to use NSOutlineView and NSTreeController in order to create a parent-child hierarchy. I couldn't find another way of grouping and filtering items by using a single dimensioned NSArrayController.
|
[
"stackoverflow",
"0041552421.txt"
] | Q:
EXCEPTION: Cannot read property 'addInstrumentToStage' of undefined
I have an angular2 app with typescript, and i am using ng2-dragula.
Here is the constructor:
constructor( private dragulaService: DragulaService,
private audioFinderService:AudioFinderService) {}
and then ngOnInit():
public ngOnInit(): any {
this.dragulaService.setOptions('second-bag', {
moves: function (el, container, handle) {
var file=this.audioFinderService.audioGetter();
this.audioFinderService.removeFromStage();
and in the line this.audioFinderService.audioGetter(); it complains that:
error_handler.js:50 EXCEPTION: Cannot read property 'audioGetter' of undefined
I tried to inject the dependency in the constructor, but it seems that it does not recognize the audioFinderService.
Here is the AudioFinderService
@Injectable()
export class AudioFinderService {
constructor(private playService:SelectedInstrumentsService,private http:Http) { }
}
The only weird thing about AudioFinderService is that it is injecting another service. So, do you think the nested dependency injection failed?
A:
Change
moves: function (el, container, handle) {//<-- 'this' will refer to the function object
var file=this.audioFinderService.audioGetter();
this.audioFinderService.removeFromStage();
to
moves: (el, container, handle)=> {//<-- 'this' will refer to the page
var file=this.audioFinderService.audioGetter();
this.audioFinderService.removeFromStage();
The problem is that when you are using function instead of the fat arrow syntax, you are losing the this that refers to the page.
I suggest you take a look at this great answer on how to refer to the correct this: How to access the correct `this` inside a callback?
|
[
"stackoverflow",
"0057824123.txt"
] | Q:
How do gColab have an access to a gDrive directory (not a file)
I would like to have access to a gDrive directory from a gColab notebook.
I already found a solution to read a file, but not a directory. Indeed, for my application (transfer learning with the PascalVOC07 image database), I have code which needs to browse a directory stored in my gDrive.
This directory has a lot of files, sub-directories, ... and my code is made to work with only the address (directory) of the complete database.
# Install PyDrive
!pip install -U -q PyDrive
from pydrive.auth import GoogleAuth
from pydrive.drive import GoogleDrive
from google.colab import auth
from oauth2client.client import GoogleCredentials
auth.authenticate_user()
# Authorize the connection
gauth = GoogleAuth()
gauth.credentials = GoogleCredentials.get_application_default()
# drive: access to gDrive
drive = GoogleDrive(gauth)
# Create the link to the PascalVOCDataGenerator module stored on gDrive
personal_module = drive.CreateFile({'id': '18OiN-HEzI8lfEWTYIViWXwybu18S2ibx'})
# Load the module on gColab
personal_module.GetContentFile('data_gen.py')
from data_gen import PascalVOCDataGenerator
# gDrive id of the PascalVOC07 data directory
data_dir = drive.CreateFile({'id': '10Lfr0nFtZB15H3PboABgTngb_kJ0dBjY'})
# Load the directory on gColab
data_dir.GetContentFile('/')
data_generator_train = PascalVOCDataGenerator('trainval', data_dir)
So, with this code, it's possible to upload my specific module "PascalVOCDataGenerator" from gDrive to gColab and import it.
One of its parameters is the directory of the databasa : data_dir.
But error "No downloadLink/exportLinks for mimetype found in metadata" !!!
A:
No need to use PyDrive. A simpler approach is to mount your entire Drive as a FUSE filesystem using this snippet:
from google.colab import drive
drive.mount('/content/drive')
This is also available via the file browser in the left-hand pane.
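Once mounted, the Drive contents can be browsed like any local directory, e.g.:
import os
# 'My Drive' is the root folder Google Drive exposes under the mount point.
print(os.listdir('/content/drive/My Drive'))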
|
[
"electronics.stackexchange",
"0000187835.txt"
] | Q:
Induction motor electrical and mechanical angle
How do we reach the relation between the mechanical angle \$\theta_m\$ and the electrical angle \$\theta_e\$, namely \$\theta_e = (P/2)\theta_m\$, where P is the number of poles?
A:
In a multipolar electrical machine (motor or generator), relationship between the mechanical angle and electrical angle is given by:
Electrical angle = (P/2) x Mechanical angle
where: P = Number of poles
So:
In a two-pole motor (P = 2): Electrical Angle = Mechanical Angle
In a four-pole motor (P = 4): Electrical Angle = 2 times Mechanical Angle
In a six-pole motor (P = 6): Electrical Angle = 3 times Mechanical Angle
etc.
Source : http://www.researchgate.net/post/Is_there_a_relation_between_the_electrical_and_mechanical_angle
Hope this helps
A:
How do we reach the relation between the mechanical angle \$\theta_m\$ and the electrical angle \$\theta_e\$ as \$\theta_e = (P/2)\theta_m\$, where P is the number of poles?
An electrical machine is simply an electrical<>mechanical energy conversion device which utilises magnetic fields as the exchange medium.
When an electrical machine is operating as a motor, the idea is to create a traveling, rotating magnetic field, via the stator and "hope" 1 that this moving flux attracts the rotor (be it via an equivalent magnetic field or via an affinity to reduce the reluctance)
So a stator:
A 3-phase, single pole-pair topology. If the stator were now unrolled,
you can see that for 360 degrees electrical we have equally achieved 360 degrees mechanical.
As the number of pole-pairs is increased, the point at which an electrical cycle is completed becomes a fraction of the main mechanical cycle, and this factor is the pole-pair count.
1 hope is used because there are some prerequisites that must be met that are machine specific. freq, voltage, load, supply ...
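To summarize the counting argument behind the relation (my own compact restatement of the answers above): a machine with P poles has P/2 pole pairs around the stator, and each pole pair spans one full electrical cycle, so one mechanical revolution sweeps P/2 electrical cycles:
$$\theta_e = \frac{P}{2}\,\theta_m \qquad\Rightarrow\qquad f_e = \frac{P}{2}\,f_m$$
where the frequency relation follows by differentiating the angle relation with respect to time.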
|
[
"stackoverflow",
"0057912461.txt"
] | Q:
Kotlin code - how does FlatMap work here?
The following is code from a LeetCode solution.
This is the description:
Given an array A of non-negative integers, half of the integers in A are odd, and half of the integers are even.
Sort the array so that whenever A[i] is odd, i is odd; and whenever A[i] is even, i is even.
I managed to write code that worked but mine was almost like Java but in Kotlin (a common problem - I know).
I found this code in the comments:
fun sortArrayByParityII(A: IntArray): IntArray {
val even = A.filter { it % 2 == 0 }
val odd = A.filter { it % 2 == 1 }
return even.zip(odd).flatMap { listOf(it.first, it.second) }.toIntArray()
}
I know that the first couple of line do. They simple filter the array into even and odd arrays.
I even understand (after looking up) what the "zip" does.
What I can't figure out is what this does:
flatMap { listOf(it.first, it.second) }
A:
Let's look step by step:
fun main() {
val list = (1..10).toList()
val even = list.filter { it % 2 == 0 } // [2, 4, 6, 8, 10]
val odd = list.filter { it % 2 == 1 } // [1, 3, 5, 7, 9]
val zipped = even.zip(odd) // [(2, 1), (4, 3), (6, 5), (8, 7), (10, 9)]
val flatten = zipped.flatMap { listOf(it.first, it.second) } // [2, 1, 4, 3, 6, 5, 8, 7, 10, 9]
}
flatMap takes a function which returns a list and inserts elements of this list in to initial list. So [(2, 1), (4, 3)] becomes [2, 1, 4, 3]
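As a small variation (my own sketch), the two filter passes can be fused with partition, which walks the array once and returns a Pair of lists:
fun sortArrayByParityII(A: IntArray): IntArray {
    // partition() returns Pair(matching, nonMatching) in a single pass.
    val (even, odd) = A.partition { it % 2 == 0 }
    return even.zip(odd).flatMap { (e, o) -> listOf(e, o) }.toIntArray()
}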
|
[
"stackoverflow",
"0039274333.txt"
] | Q:
Extract one column as rows with R, preserving other columns
What I have:
I have a data frame that looks like this:
sequence foo model output real
1 3 a 12 12
1 3 b 29 12
1 3 c 10 12
1 3 d 38 12
1 3 e 10 12
2 3 a 38 15
2 3 b 10 15
2 3 c 29 15
2 3 d 56 15
2 3 e 10 15
Created by:
d.test = data.frame(
sequence = c(1, 1, 1, 1, 1, 2, 2, 2, 2, 2),
foo = c(3, 3, 3, 3, 3, 3, 3, 3, 3, 3),
model = c("a", "b", "c", "d", "e", "a", "b", "c", "d", "e"),
output = c(12, 29, 10, 38, 10, 38, 10, 29, 56, 10),
real = c(12, 12, 12, 12, 12, 15, 15, 15, 15, 15)
)
The model predicts an output for every given sequence, but the real output is also recorded along every sequence.
What I need:
I would like to transform the data such that real becomes a "model" itself, that is:
sequence foo model output
1 3 a 12
1 3 b 29
1 3 c 10
1 3 d 38
1 3 e 10
1 3 real 12
2 3 a 38
2 3 b 10
2 3 c 29
2 3 d 56
2 3 e 10
2 3 real 15
How can I achieve that using dplyr, tidyr and their cousins?
Note that for a “nice” solution, one should not have to:
Manually enter column indices
Manually specify all the columns like foo which are not of interest
What I've tried:
I tried the following, but it feels clumsy:
unique(
melt(d.test,
id.vars = c("sequence", "foo"),
measure.vars = c("real"),
variable.name = "model",
value.name = "output"
)
)
Now I have to remove the real column from the original data frame and append the rows of what I just did. It's not a nice solution because apart from the foo column there may be many more columns that I'd like to preserve, and then I'd have to specify them as id.vars.
A:
I'd use data.table:
library(data.table)
setDT(d.test)
d.test[,
rbind(.SD, .SD[1L][, `:=`(model = "real", output = real[1L])])
, by=sequence][, real := NULL][]
If I had to use the 'verse:
d.real = d.test %>% distinct(sequence) %>%
mutate(model = "real", output = real) %>% select(-real)
d = d.test %>% select(-real)
And then stack them:
bind_rows(d, d.real)
If the ordering is important, add %>% arrange(sequence).
Comment. The problem in the OP originates with untidy data. Reading Hadley's paper on the subject would probably be helpful if you don't know what I mean.
A:
The trick is to widen the already-long data and then convert it back into long form, making sure to include the real column in the reshaping.
library(dplyr)
library(tidyr)
d.test %>%
spread(model, output) %>%
gather(model, output, -sequence, -foo) %>%
arrange(sequence, model)
#> sequence foo model output
#> 1 1 3 a 12
#> 2 1 3 b 29
#> 3 1 3 c 10
#> 4 1 3 d 38
#> 5 1 3 e 10
#> 6 1 3 real 12
#> 7 2 3 a 38
#> 8 2 3 b 10
#> 9 2 3 c 29
#> 10 2 3 d 56
#> 11 2 3 e 10
#> 12 2 3 real 15
spread is the tidyr function for widening long data. It takes a data-frame, the name of a column of keys (variable names), and the name of a column of values, and spreads the keys out over several columns. This is how the data looked after spreading the model-output pairs into several columns.
# Convert to wide-format so there is one real per row
d.test.wide <- d.test %>%
spread(model, output)
d.test.wide
#> sequence foo real a b c d e
#> 1 1 3 12 12 29 10 38 10
#> 2 2 3 15 38 10 29 56 10
gather is the tidyr function for melting data. We use dplyr's column-selection syntax, and we tell it gather all of the columns except the identifiers sequence and foo, storing the keys in a model column and the values in output column.
We could also explicitly select the columns to gather: d.test.wide %>% gather(model, output, real, a:e). The leftover unselected columns will be used as identifiers.
|
[
"stackoverflow",
"0003169193.txt"
] | Q:
Oracle query on String
I want to perform my query on a string in this format, xxxx/xxxx
where x is a number. When I perform it using the Oracle interface or my C# application I get this error:
ORA-00904: "xxxx/xxxx": invalid identifier
I performed this query:
SELECT * FROM "My Table" WHERE Field="xxxx/xxxx"
A:
You should use single quotes around SQL character literals like xxxx/xxxx:
SELECT * FROM "My Table" WHERE Field='xxxx/xxxx'
|
[
"bicycles.stackexchange",
"0000008497.txt"
] | Q:
Are fixed or floating SPD-SL cleats most suitable for a long commute?
After much procrastination, I've finally jumped in and bought myself a road bike, under the cycle2work scheme we have here in the UK.
I opted for a Boardman Road Race, as it seems to be the most affordable entry level road racer. Since my commute is 20 miles one way (aiming for twice a week to start with) I've been advised to invest in some SPD-SL pedals.
I've never used a road bike, or SPD-SLs for that matter, however from what I can tell there are two types of cleats, floating (which give a little movement sideways) and fixed (no sideways movement).
For such a commute, which type would be most suited? I'm thinking that floating probably for the extra range of movement which could be more comfortable for a longer commute.
Is there anything in it? Or is it just down to user preference?
p.s. Any reason these pedals/cleats would not be compatible with the above bike? Or are pedals universally OK?
A:
Fixed position cleats, or 0 degree float cleats, require far greater precision about cleat setup on the shoe. Failure to get the setup right will mean pain, and can mean injury.
That is also true of floating cleats. Most pedals come with cleats that have between 4.5 and 9 degrees of float built in. I don't know of any pedal which has a 0 degree cleat, stock. You would have to buy it separately.
For your purpose, there is no benefit to a fixed position cleat. They are primarily used for track racing and for sprinting specialists in racing.
Given that, floating cleats are most suitable for a commuter's bike.
A:
Although you have opted for a 'proper' road bike, you may want to consider the mountain bike style SPDs. Some of the MTB shoes come close to racing shoes in terms of stiffness, and there are certainly a range of good quality pedals on the market. There are disadvantages (the power from your foot is not spread over as large a pedal area) but the advantage is that you can buy shoes (or an additional pair of shoes) with recessed cleats (or a double sided pedal with SPDs one side and a platform the other) enabling you to use the commuting bike to nip into town shopping at lunchtime. Wandering around shops in cleated racing shoes is no fun.
A:
I prefer floating, mostly because they're easier to get in and out of. The extra half second at every traffic light, plus the problem that I'm not really moving or in good control of the bike for that time, means I prefer the easy-in option. If you are used to the low-float cleats then that may not be an issue, but I see very few cyclists in that category on my commute (most are the "faff about while wobbling across the intersection" type).
Almost all pedals have the same thread (some BMX and kids bikes use a visibly smaller thread, I've seen a Shimano one with a huge thread). Note that there's a left hand and a right hand thread. Don't get that wrong or you'll probably need new cranks. So yes, the pedals shown will work. The basic test is: if you can't turn the pedal into the crank with your fingers you've done it wrong. Clean the threads, check the direction, then try again. Only use a tool for the last 1/2 turn of tightening.
|
[
"stackoverflow",
"0022124870.txt"
] | Q:
How does git log work across branches?
I'm using a github repository and it has a master branch and a demo branch. All was good and both the master version and the demo version of the code are in use (master on a staging site, and the demo on a demo site). A live site runs on a tagged commit.
While making changes, I messed something up which did not reveal itself for a while so I needed to start looking through old commits to see how I introduced the problem.
On Github I saw commits for the demo branch as follows:
Changed logos to xxx ones
8c4a3eab22 Browse code
pwhipp authored 3 days ago
Feb 04, 2014
Paul Whipp
Changed archetype age_default to default to zero (and set all null va… …
6e4c9e8864 Browse code
pwhipp authored a month ago
Feb 03, 2014
Paul Whipp
Added demo.xxx allowed domain for RED
2f72e3b05a Browse code
pwhipp authored a month ago
So on a local repo, pulled up to date, I do "git checkout 8c4a3eab22". Then when I invoke git log locally, I see:
(red)~/wk/red $ git status
# HEAD detached at 8c4a3ea
(red)~/wk/red $ git log
commit 8c4a3eab22dc2ce9708c9aae00751e558ae81dd3
Author: pwhipp <[email protected]>
Date: Thu Feb 27 10:55:21 2014 +1000
Changed logos to xxx ones
commit 2f72e3b05a005738d77ed12be475634aadf76b49
Author: pwhipp <[email protected]>
Date: Mon Feb 3 10:58:08 2014 +1000
Added demo.xxx allowed domain for RED
Why is 6e4c9e8864 not shown by git log? It exists (I can check it out) but the differences between it and 8c4a3eab22 seem far greater than those indicated when I browse 6e4c9e8864 on github so I'm thinking there may be other commits I'm not seeing.
Do I need to RTFM somewhere to understand how the commits are being reported in the log call across the different branches?
A:
Thanks to the comments and Conner's answer, I think I have this sorted:
In short, everything is correct but potentially confusing because of the difference between the chronological order of the commits and the order that merges have brought the commits into use in the demo branch. If you need to just list commits that represent valid states in your current branch you must use "git log --first-parent ....".
The long:
The current demo branch has been merged with master after 8c4 so 6e4 shows in its correct position in the log when looking at the full log history for the demo branch on github.
When I check out 8c4 locally, I'm looking at a demo branch state where 6e4 is NOT yet involved: the merge happened after 8c4, so 6e4 does not appear in the log; at that point 6e4 existed only on the master branch (even though it was committed after 2f7).
I can check 6e4 out because it is a state of the master branch. However, if I do, I am looking at a state of the master branch, NOT the demo branch. There is no state of the demo branch where 6e4 was at the head of that branch.
The clearest and best picture for resolving this problem came from gitg (which is in the ubuntu repository "sudo apt-get install gitg" got it for me). Once I'd used that, I found the text output from "git log --oneline --graph" easier to understand).
This is only really confusing when you try to step back through a branch history. If you try to use "git log" you are seeing a list of the commits that affect your current code base but they are not necessarily states that existed in the branch you are on.
If, like me, you need to step back through each state in the life of your branch then (thanks to this question) you need to use the --first-parent parameter to git-log e.g.
(red)~/wk/red $ git log --first-parent --oneline
...
aa83af6 Merge branch 'master' into demo
...
8c4a3ea Changed logos to xxx ones
2f72e3b Added demo.xxx.com allowed domain for RED
...
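If you only need the raw first-parent commit ids (e.g. to script over each state of the branch), git rev-list accepts the same flag; assuming the branch is named demo as above:
(red)~/wk/red $ git rev-list --first-parent demo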
|
[
"stackoverflow",
"0019803577.txt"
] | Q:
Is this "echo" necessary?
I've recently acquired a copy of Programming PHP, 3rd edition, and on page 10 I saw an "echo" that raised a doubt.
<html>
<head>
<title>Personalized Greeting Form</title>
</head>
<body>
<?php if (!empty($_POST['name'])) {
echo "Greetings, {$_POST['name']}, and welcome.";
} ?>
<form action="<? echo php $_SERVER['PHP_SELF']; ?>" method="post">
Enter your name: <input type="text" name="name" />
<input type="submit" />
</form>
</body>
is the "echo" on the form tag necesssary? i've tried without it and it seems to work perfectly but i'm not sure if it's really important to keep that "echo" there...
if someone knows this, i'd appreciate!
thanks guys!
A:
It depends on the HTML version that you use.
The form action defaults to the current URL.
According to the standards:
in HTML 4 it's required and must be non-empty, so you need to use echo or action="#" in order for the form to be valid
in HTML 5, it's not required, but must be non-empty if present, so in that case you shouldn't define action at all and let the default work
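To make the two cases concrete, a sketch (htmlspecialchars() is standard PHP, used here because echoing PHP_SELF raw is a known XSS risk):
<!-- HTML 4: action is required, so echo the current script's URL -->
<form action="<?php echo htmlspecialchars($_SERVER['PHP_SELF']); ?>" method="post">
<!-- HTML 5: omit action entirely and the form posts back to the current URL -->
<form method="post">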
|
[
"math.stackexchange",
"0000417249.txt"
] | Q:
Interpretation of the variational principle for the Ritz approximation, solid Mechanics
Below $U$ and $V$ are respectively the internal and external energy components of a given structural element:
$$U+V=W$$
Expressing $U$ in terms of the strains $\varepsilon$ and the material constitutive matrix $F$:
$$\frac12\int_V{\varepsilon^TF\varepsilon}dV + V=W$$
Representing the strains by $\varepsilon=Bu$, where $B$ is a matrix containing the kinematic derivatives and $u$ is the displacement vector:
$$\frac12\int_V{u^TB^TFBu}dV + V=W$$
Approximating $u$ using a set of Ritz trial functions, so that $u=g\{c\}$, with $c$ defined as Ritz constants:
$$\frac12\{c\}^T\int_V{g^TB^TFBg}dV\{c\} + K_2 \{c\}=W$$
$$\frac12\{c\}^TK_1\{c\} + K_2 \{c\}=W$$
$$\frac12\left(\{c\}^TK_1 + K_2 \right)\{c\}=W$$
The principle of total virtual work states that when a virtual change in the displacement field takes place, there will be a corresponding change in the strain state (internal energy), so that the virtual change in the total work is zero:
(1) $$\frac12\delta\left(\left(\{c\}^TK_1 + K_2 \right)\{c\}\right)=\delta W=0$$
Here comes the question: how to apply the variational derivative. In my understanding it is:
$$\frac12\left(\{\delta c\}^TK_1 + K_2 \right)\{c\} + \frac12\left(\{c\}^TK_1 + K_2 \right)\{\delta c\}=0$$
But I don't know how to achieve the known answer from this equation. The known answer is, for any set of Ritz constants $\{c\}$:
(2)$$K_1\{c\}+K_2=0$$
Which we solve to find $\{c\}$.
I would really appreciate it if someone could explain the passage from (1) to (2).
A:
You have a small mistake
$$\frac12\{c\}^TK_1\{c\} + K_2 \{c\}=W$$
$$\bigg(\frac12\{c\}^TK_1\ + K_2\bigg) \{c\}=W$$
and
$$\delta\Bigg( \bigg(\frac12\{c\}^TK_1\ + K_2\bigg) \{c\}\Bigg)=\delta W=0$$
$$\delta \bigg(\frac12\{c\}^TK_1\ + K_2\bigg) \{c\}+ \bigg(\frac12\{c\}^TK_1\ + K_2\bigg)\delta \{c\}=\delta W=0$$
$$ \bigg(\frac12\{\delta c\}^TK_1\ \bigg) \{c\}+\bigg(\frac12\{ c\}^TK_1\ + K_2\bigg) \{\delta c\}=\delta W=0$$
$$ \frac12\{\delta c\}^TK_1\ \{c\}+\frac12\{ c\}^TK_1\{\delta c\}\ + K_2 \{\delta c\}=\delta W=0$$
by assuming that $K_1$ is symmetric
$$ \frac12\{c\}^TK_1\ \{\delta c\}+\frac12\{ c\}^TK_1\{\delta c\}\ + K_2 \{\delta c\}=\delta W=0$$
$$ \{c\}^TK_1\ \{\delta c\} + K_2 \{\delta c\}=\delta W=0$$
$$ \bigg(\{c\}^TK_1\ + K_2\bigg) \{\delta c\}=\delta W=0$$
$$\Rightarrow \{c\}^TK_1\ + K_2=0$$
since we assumed that $K_1$ is symmetric it can also be written as
$$\Rightarrow K_1\{c\} + K_2=0$$
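As a cross-check, one can also reach (2) by differentiating $W$ directly as a quadratic form, assuming as above that $K_1$ is symmetric and treating $K_2$ as a row vector (so that $K_2\{c\}$ is a scalar):
$$\frac{\partial W}{\partial \{c\}} = \frac12\left(K_1+K_1^T\right)\{c\} + K_2^T = K_1\{c\} + K_2^T = 0,$$
which is (2) with $K_2$ read as a column vector.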
|
[
"puzzling.stackexchange",
"0000074890.txt"
] | Q:
Sudoku false positive (wrong move)
I'm new to Sudoku puzzles.
I tried solving the one above, but my last move was flagged as a wrong move (highlighted red).
I believe it is a false positive, since the $6$ (highlighted red) I inserted is unique in its row and column as well as within the small square.
Can someone justify this for me?
A:
Others have already said this, but I'll try to put it in as clear words as possible:
When solving Sudoku puzzles, you don't put the numbers where they might be, you only put them where they must certainly be. Deduce, eliminate possibilities, find restrictions on options, but only when you are certain, put the number in.
Or even more clearly: You never have to guess in sudoku.
(Unless you are playing some ultra-hard otherwise unsolvable difficulty levels.)
Happy sudokuing!
A:
Based on the current values above, within the center set, 6 can only be placed in either the right-center or the right-bottom location. However, you are not far enough along to determine with 100% certainty which of those two locations is correct yet. Rather, through elimination and evaluation, both are still feasible. Given that the application flagged the right-center location, it is likely that once you are far enough along, the right-bottom location will be determinable as the correct placement.
Hope this helps!
If you'd like me to explain why the right bottom location is still valid for 6, let me know.
@ABcDexter made me realize something I forgot to mention, so I'll elaborate on what they stated. Maybe you already are aware but I'll mention it just in case: Just because a number looks like it can go there at the time, does not mean it will in the end. Remember, there is only 1 solution to any properly given Sudoku puzzle, meaning you must be certain (or have a lucky guess) as to where a number will be placed.
A:
If you solve the middle blocks, then something like this comes up which eliminates the $6$ from the cell you just entered.
Also, as Dorrulf mentioned, there are two possibilities for 6 in that 3x3 block and you need to be absolutely certain before putting a number in the cell.
|
[
"stackoverflow",
"0061133376.txt"
] | Q:
Batch scripting for loop with setting variables of particular string of the file
I am struggling with a "for" loop and with setting variables for later use.
So, briefly explaining the situation I am facing:
I have files named "HI_001_1813414.nii.gz" and "HI_001_1813414_T1.nii.gz", and I want to store each file's name in a variable so that I can use it for a later action.
Here is my batch script.
path=C:\Users\user\Desktop\dsi_studio_64
for /f "delims=" %%x in ('dir *_T1.nii.gz /b /d') do (
set fname=%%x
set name=%fname: ~-7%
set dwi_fname = %fname:_T1 =%
call dsi_studio.exe --action=reg --from=%%x --to=%dwi_fname% --output=%%x_norm.nii.gz --reg_type=0 > "%%x.log.txt"
)
The main problem here is that every variable I set shows only the last value (I checked it with echo).
A:
You cannot read values that you set inside a for loop unless you run this command first:
SETLOCAL enabledelayedexpansion
Also you will need to refer to %var% as !var! while inside the loop.
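Applied to the script in the question, a corrected sketch (the dsi_studio.exe arguments are copied from the question unverified; note the original set lines also contained stray spaces around = and inside the substring patterns, which cmd treats as literal characters):
@echo off
setlocal enabledelayedexpansion
for /f "delims=" %%x in ('dir *_T1.nii.gz /b') do (
    set "fname=%%x"
    rem Strip "_T1" to get the matching DWI file name.
    set "dwi_fname=!fname:_T1=!"
    dsi_studio.exe --action=reg --from=%%x --to=!dwi_fname! --output=%%x_norm.nii.gz --reg_type=0 > "%%x.log.txt"
)
Also avoid set path=... as in the original; that overwrites the system PATH for the session.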
|
[
"stackoverflow",
"0010096219.txt"
] | Q:
Android in Eclipse, Merge / Include nested XML non-layout resources
I have an XML index in the res/xml/ folder, and I would like it to include other xml files, so that when I parse R.xml.index, all the files are merged into a single resource.
I tried to adapt the include layout trick to xml parsing, so my index.xml looks like that :
<?xml version="1.0" encoding="utf-8"?>
<Index xmlns:android="http://schemas.android.com/apk/res/android">
<Sheet>
<include xml="o_2sq_m.xml"/>
<include xml="o_2sq_r.xml"/>
</Sheet>
<Sheet>
<include xml="o_sq_tr_m.xml"/>
<include xml="o_sq_tr_r.xml"/>
</Sheet>
</Index>
and the file o_2sq_m.xml, which is in the same folder as index.xml, looks like that:
<?xml version="1.0" encoding="UTF-8"?>
<Challenge xmlns:android="http://schemas.android.com/apk/res/android">
<Dimensions maxX="512" maxY="342" />
<Point id="1" x="94" y="101" color="0x00000000" />
...
</Challenge>
But when I parse index.xml with an XmlPullParser, I see in the debugger that it parses the include tags without having them unrolled, i.e. it does not access the tree of the file o_2sq_m.xml.
What should I do to have Android include the files within one another?
A:
If you are not doing too much XML importing (e.g. only at create time), you could use getResources().getIdentifier(). For an index like this (note that the .xml extension is removed from the attribute):
<Index xmlns:android="http://schemas.android.com/apk/res/android">
<Sheet>
<include customAttr="o_2sq_m"/> ...
</Sheet>
and given that your file to be included is named o_2sq_m.xml, you could use the following code:
switch (tag) {
    case "include" :
        String xmlid = xpp.getAttributeValue(null, "customAttr");
        int xmlIncludedId = res.getIdentifier(xmlid, "xml", getPackageName());
        if (xmlIncludedId != 0) {
            // Here xmlIncludedId can be used to import the other XML file.
            // E.g. getResources().getXml(xmlIncludedId) returns an XmlResourceParser.
        }
        break;
}
|
[
"stackoverflow",
"0009610237.txt"
] | Q:
Looking for speedups for A* search
I've got the following working A* code in C#:
static bool AStar(
IGraphNode start,
Func<IGraphNode, bool> check,
out List<IGraphNode> path)
{
// Closed list. Hashset because O(1).
var closed =
new HashSet<IGraphNode>();
// Binary heap which accepts multiple equivalent items.
var frontier =
new MultiHeap<IGraphNode>(
(a, b) =>
{ return Math.Sign(a.TotalDistance - b.TotalDistance); }
);
// Some way to know how many multiple equivalent items there are.
var references =
new Dictionary<IGraphNode, int>();
// Some way to know which parent a graph node has.
var parents =
new Dictionary<IGraphNode, IGraphNode>();
// One new graph node in the frontier,
frontier.Insert(start);
// Count the reference.
references[start] = 1;
IGraphNode current = start;
do
{
do
{
frontier.Get(out current);
// If it's in the closed list or
// there's other instances of it in the frontier,
// and there's still nodes left in the frontier,
// then that's not the best node.
} while (
(closed.Contains(current) ||
(--references[current]) > 0) &&
frontier.Count > 0
);
// If we have run out of options,
if (closed.Contains(current) && frontier.Count == 0)
{
// then there's no path.
path = null;
return false;
}
closed.Add(current);
foreach (var edge in current.Edges)
{
// If there's a chance of a better path
// to this node,
if (!closed.Contains(edge.End))
{
int count;
// If the frontier doesn't contain this node,
if (!references.TryGetValue(edge.End, out count) ||
count == 0)
{
// Initialize it and insert it.
edge.End.PathDistance =
current.PathDistance + edge.Distance;
edge.End.EstimatedDistance = CalcDistance(edge.End);
parents[edge.End] = current;
frontier.Insert(edge.End);
references[edge.End] = 1;
}
else
{
// If this path is better than the existing path,
if (current.PathDistance + edge.Distance <
edge.End.PathDistance)
{
// Use this path.
edge.End.PathDistance = current.PathDistance +
edge.Distance;
parents[edge.End] = current;
frontier.Insert(edge.End);
// Keeping track of multiples equivalent items.
++references[edge.End];
}
}
}
}
} while (!check(current) && frontier.Count > 0);
if (check(current))
{
path = new List<IGraphNode>();
path.Add(current);
while (current.PathDistance != 0)
{
current = parents[current];
path.Add(current);
}
path.Reverse();
return true;
}
// Yep, no path.
path = null;
return false;
}
How do I make it faster? No code samples, please; that's a challenge I've set myself.
Edit: To clarify, I'm looking for any advice, suggestions, links, etc. that apply to A* in general. The code is just an example. I asked for no code samples because they make it too easy to implement the technique(s) being described.
Thanks.
A:
Have you looked at this page or this page yet? They have plenty of helpful optimization tips as well as some great information on A* in general.
|
[
"electronics.stackexchange",
"0000517777.txt"
] | Q:
Can I use this op amp to scale 0-3.3V to 0-5v?
I recently came into a 0-5V voltage meter. I want to use it with an esp8266 as a humidity display for my weather information station.
The esp8266 model I'm using (Wemos/NodeMCU) is 3.3V. I can use PWM to set the needle, but it won't use the full range of the meter. I need to scale the voltage up to 0-5V.
So, after reading a bit, I decided what I need is an op amp. I ordered one that seemed like it would work from the data sheet... but I don't really know how to set it up.
https://cdn-shop.adafruit.com/datasheets/tlv2462.pdf
Here is how I've hooked it up.
A:
Meters of this size are around 1 mA full-scale, meaning 1 kOhm/volt.
This is class 2.5 or 2.5% accuracy shown in the fine print.
A primitive yet simple and elegant solution is to change the internal resistance to 66% of the total value of resistance that is full-scale current with a 1% resistor.
simulate this circuit – Schematic created using CircuitLab
Disassemble, measure the required current for full scale @ 3.3V, add R2 as shown, then reassemble in reverse order.
A:
You can do something like this:
simulate this circuit – Schematic created using CircuitLab
R3 & C1 form a low-pass filter with cutoff f = 1/(2πRC) ≈ 1.6 Hz. A ceramic X7R is okay for this part.
R1 and R2 set the gain, and are chosen so that R2/(R1+R2) ~= 3.3V/5.0V, are
high enough not to significantly load OA1 output, and are standard E96 1% values.
If you replace R1 with R4+RV1 you can trim the output sensitivity.
I suggest using somewhat less than the 5V angle (maybe 90-95%) and trimming the full scale
in with the pot if you go this way.
OA1 should also have 100nF bypass capacitor across the power supply pins near the chip (not shown).
|
[
"stackoverflow",
"0048834990.txt"
] | Q:
Tomcat restart option in Intellij
Is there a button or shortcut to restart Tomcat in IntelliJ? There is a restart-Tomcat option in Eclipse which is useful. I tried searching for it online but couldn't find anything.
A:
Ctrl + F10 will show you the following window. If you choose "Restart server" and "Don't ask again", you don't need to choose the next time.
|
[
"stackoverflow",
"0038553930.txt"
] | Q:
iOS10 NotificationService Extension
The method:
didReceive(_ request: UNNotificationRequest, withContentHandler contentHandler:(UNNotificationContent) -> Void)
of iOS 10's NotificationService TARGET is not triggered automatically by iOS, even if the payload of my remote notification contains the attribute: "mutable-content" : 1.
Here you are the payload example:
{
"aps": {
"alert": {
"body": "body",
"subtitle": "subtitle",
"title": "title"
},
"mutable-content": 1
}
}
Is there any missing configuration or code I have to implement in order to make it work?
A:
You should run the application with the extension as a target, and choose the app you would like the extension to work with. Typically you will run the extension on the application in the containing project.
After running the Extension target, your app will load, and when you will send notifications with "mutable-content : 1", you will be able to step in debug mode.
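For reference, a minimal service-extension body to step into (a sketch using the standard UNNotificationServiceExtension pattern; the "[modified]" suffix is just an illustration):
override func didReceive(_ request: UNNotificationRequest,
                         withContentHandler contentHandler: @escaping (UNNotificationContent) -> Void) {
    // Copy the incoming content so it can be mutated.
    guard let content = request.content.mutableCopy() as? UNMutableNotificationContent else {
        contentHandler(request.content)
        return
    }
    content.title = "\(content.title) [modified]"
    contentHandler(content)
}
Setting a breakpoint inside this method (while running the extension target as described above) confirms whether it is being invoked.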
|
[
"stackoverflow",
"0051215200.txt"
] | Q:
How to load parquet file into Snowflake database?
Is it possible to load a Parquet file directly into Snowflake?
If yes - how?
Thanks.
A:
Yes it is possible, and best done via S3. Note, the following assumes you have a MY_PARQUET_LOADER table, a STAGE_SCHEMA schema and an S3STAGE defined, and that your parquet files are on the bucket under the /path/ key/folder.
copy into STAGE_SCHEMA.MY_PARQUET_LOADER
from (
select
$1
,metadata$filename as metadata_filename
,metadata$file_row_number as metadata_file_row_number
,current_timestamp() as load_timestamp
from
@S3STAGE/path/)
pattern = '.*.parquet'
file_format = (
TYPE = 'PARQUET'
SNAPPY_COMPRESSION = TRUE )
ON_ERROR = 'SKIP_FILE_1%'
purge= TRUE;
where this exists:
create or replace TABLE MY_PARQUET_LOADER (
RAW VARIANT,
METADATA_FILENAME VARCHAR(16777216),
METADATA_FILE_ROW_NUMBER NUMBER(38,0),
LOAD_TIMESTAMP TIMESTAMP_LTZ(9)
) cluster by (METADATA_FILENAME);
Worthwhile to read the fine manual:
https://docs.snowflake.net/manuals/sql-reference/sql/copy-into-table.html
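Once loaded, fields can be pulled out of the VARIANT column with Snowflake's path syntax (the field name some_field below is hypothetical; substitute one from your Parquet schema):
select
    raw:some_field::string as some_field,
    metadata_filename
from STAGE_SCHEMA.MY_PARQUET_LOADER
limit 10;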
|
[
"rus.stackexchange",
"0000443606.txt"
] | Q:
What is a goblin like?
Who might be called a goblin, and how widespread is this word in everyday speech and in literature?
And what is a goblin like? Simply ugly, or with a particular appearance? Or does he have a bad character that makes him unpleasant to those around him?
Or perhaps it is a synonym for the word "freak"? Although here even a decent person may be called a freak if he doesn't resemble those around him.
Examples from literature:
Now I'll hit this goblin with Delete and send him to the trash!
Jean will make an excellent goblin! His height, build, and facial expression are all suitable.
Now there's a real goblin. Not a drop of makeup needed.
He frowned and started muttering like a true goblin.
A:
Много видел "гоблинов" в компьютерных играх, поверьте, они все разные.
Собственно, Вики все исчерпывающе разъясняет.
Внешность описывается по-разному, но достоверно одно, гоблины — одни
из самых уродливых созданий в европейской мифологии. Они
антропоморфны, но рост варьируется от фута до двух метров. Впрочем,
гоблины умеют превращаться в людей, но три элемента их внешности
остаются неизменными: длинные уши, страшные, похожие на кошачьи,
глаза, и длинные когти на руках. Уши гоблины прячут под шапку, когти —
в перчатки, а вот глаза им никак не скрыть, поэтому, по преданию,
узнать их можно по глазам.
https://ru.wikipedia.org/wiki/%D0%93%D0%BE%D0%B1%D0%BB%D0%B8%D0%BD%D1%8B
But here, IMHO, it is something else. "Goblin," when applied to a person (and it is hard to imagine anything else nowadays outside a special context), turns from a mere word into an epithet. And here it is not so much the external characteristics that come to the fore as the essence... Their essence is that they resemble humans but differ from them radically on the inside. And they are more likely to be at enmity with humans. That is why, however paradoxical it may sound, as the closest partial synonym I would take not "freak" but "homunculus," "non-human," or the Strugatskys' invented "luden."
As a companion to them from the West European pantheon of bogeymen, one may recall the "troll." Setting aside the modern meaning of this word, trolls are the goblins' brothers in fate. No one has seen them either, and everyone imagines them differently. Humans' relations with trolls are somewhat different from those with goblins, but on the whole it is the same case. Only instead of ugliness there is stupidity and clumsiness. In all other respects trolls are just as much "non-humans" as goblins.
Now if we also recall elves, gnomes, dwarves, and the other creatures of the West European forests and bogs, the situation is somewhat different. All these characters are much closer to humans, and often turn out to be their allies. You could hardly call them "non-humans." And remarkably, their appearance is described much better and does not differ so strongly from source to source.
(++)
In connection with the questions that arose, I checked the English wiki regarding goblins, gnomes, dwarves, elves, and trolls.
Everything I said above stands.
https://en.wikipedia.org/wiki/Gnome
https://en.wikipedia.org/wiki/Dwarf_(mythology)
(here there is nothing even to say; these are completely different creatures)
https://en.wikipedia.org/wiki/Troll
The troll is described completely differently in the traditions of different ethnic groups.
https://en.wikipedia.org/wiki/Elf
Well, here too there are mentions of differences in the notions of various ethnic groups, but I confess I did not notice any fundamental ones among them.
And now goblins themselves.
https://en.wikipedia.org/wiki/Goblin
Of course, English speakers do not know the word "nelyud" ("non-human"), much less "luden," but otherwise the picture is quite unambiguous.
|
[
"stackoverflow",
"0042724580.txt"
] | Q:
Navigation drawer is not closing even after closeDrawers()
I am creating an app with a navigation drawer which is used to navigate through activities.
Here's my code for the drawer:
private void initInstances() {
getSupportActionBar().setHomeButtonEnabled(true);
getSupportActionBar().setDisplayHomeAsUpEnabled(true);
drawerLayout = (DrawerLayout) findViewById(R.id.drawerLayout);
drawerToggle = new ActionBarDrawerToggle(busr.this, drawerLayout, R.string.hello_world, R.string.hello_world);
drawerLayout.setDrawerListener(drawerToggle);
drawerLayout.closeDrawers();
navigation = (NavigationView) findViewById(R.id.navigation_view);
navigation.setNavigationItemSelectedListener(new NavigationView.OnNavigationItemSelectedListener() {
@Override
public boolean onNavigationItemSelected(MenuItem menuItem) {
int id = menuItem.getItemId();
switch (id) {
case R.id.navigation_item_1:
startActivity(new Intent(busr.this, MainActivity.class));
break;
case R.id.navigation_item_2:
startActivity(new Intent(busr.this, aff.class));
break;
case R.id.navigation_item_3:
startActivity(new Intent(busr.this, webs.class));
break;
case R.id.navigation_item_4:
startActivity(new Intent(busr.this, admnActivity.class));
break;
case R.id.navigation_item_5:
startActivity(new Intent(busr.this, busr.class));
break;
case R.id.navigation_item_6:
startActivity(new Intent(busr.this, trng.class));
break;
case R.id.navigation_item_7:
startActivity(new Intent(busr.this, prospct.class));
break;
case R.id.navigation_item_8:
startActivity(new Intent(busr.this, erp.class));
break;
case R.id.navigation_item_9:
startActivity(new Intent(busr.this, result.class));
break;
}
return false;
}
});
}
A:
Try this for closing the DrawerLayout:
case R.id.navigation_item_1:
if (drawerLayout.isDrawerOpen(Gravity.START))
drawerLayout.closeDrawer(Gravity.START);
startActivity(new Intent(busr.this, MainActivity.class));
break;
or
case R.id.navigation_item_1:
drawerLayout.closeDrawers();
startActivity(new Intent(busr.this, MainActivity.class));
break;
|
[
"stackoverflow",
"0032792286.txt"
] | Q:
How to draw without replacement to fill in data set
I am generating a data set where I first want to randomly draw a number for each observation from a discrete distribution, fill in var1 with these numbers. Next, I want to draw another number from the distribution for each row, but the catch is that the number in var1 for this observation is not eligible to be drawn anymore. I want to repeat this a relatively large number of times.
To hopefully make this clearer, suppose that I start with:
id
1
2
3
...
999
1000
Suppose that the distribution I have is ["A", "B", "C", "D", "E"] that happen with probability [.2, .3, .1, .15, .25].
I would first like to randomly draw from this distribution to fill in var1. Suppose that the result of this is:
id var1
1 E
2 E
3 C
...
999 B
1000 A
Now E is not eligible to be drawn for observations 1 and 2. C, B, and A are ineligible for observations 3, 999, and 1000, respectively.
After all the columns are filled in, we may end up with this:
id var1 var2 var3 var4 var5
1 E C B A D
2 E A B D C
3 C B A E D
...
999 B D C A E
1000 A E B C D
I am not sure of how to approach this in Stata. But one way to fill in var1 is to do something like:
gen random1 = runiform()
replace var1 = "A" if random1<.2
replace var1 = "B" if random1>=.2 & random1<.5
etc....
Note that sticking with the (scaled) probabilities after creating var1 is desirable, but is not required for me.
A:
Here's a solution that works in long form to select from the distribution. As values are selected, they are flagged as done and the next selection is made from the groups that contain the remaining values. Probabilities are scaled at each pass.
version 14
set seed 3241234
* Example generated by -dataex-. To install: ssc install dataex
clear
input byte ip str1 y double p
1 "A" .2
2 "B" .3
3 "C" .1
4 "D" .15
5 "E" .25
end
local nval = _N
* the following should be true
isid y
expand 1000
bysort y: gen id = _n
sort id ip
gen done = 0
forvalues i = 1/`nval' {
// scale probabilities
bysort id done (ip): gen double ptot = sum(p) // this is a running sum
by id done: gen double phigh = sum(p / ptot[_N])
by id done: gen double plow = cond(_n == 1, 0, phigh[_n-1])
// random number in the range of (0,1) for the group
bysort id done (ip): gen double x = runiform()
// pick from the not done group; choose first x to represent group
by id done: gen pick = !done & inrange(x[1], plow, phigh)
// put the picked obs at the end and create the new var
bysort id (pick ip): gen v`i' = y[_N]
// we are done for the obs that was picked
bysort id: replace done = 1 if _n == _N
drop x pick ptot phigh plow
}
bysort id: keep if _n == 1
|
[
"stackoverflow",
"0001086159.txt"
] | Q:
Trace listener - creating memory overflow
I am using a TraceListener in a multithreaded application to log messages remotely, but the application runs out of memory.
For testing I created 10,000 threads, and tried to log messages using the TraceData function.
Does the .NET Framework create an object for every call to TraceData, resulting in the memory overflow?
A:
10,000 threads: each will have a (default) 1MB stack space allocated. Therefore they will need 10GB of RAM, which is impossible in a 32-bit process (and likely to exhaust the total available RAM/page file even on 64-bit).
Nothing to do with tracing.
Additional: Great new article on thread (and process) limits on Windows, by Mark Russinovich. Please note the final paragraph. "Pushing the Limits of Windows: Processes and Threads"
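If many threads are genuinely needed, the per-thread stack can be shrunk (a sketch; Thread(ThreadStart, Int32) takes a maxStackSize in bytes, though a thread pool or a small number of workers is usually the better fix):
using System.Threading;

class Demo
{
    static void Main()
    {
        // 128 KB stack instead of the default 1 MB per thread.
        var t = new Thread(() => { /* do the traced work here */ }, 128 * 1024);
        t.Start();
        t.Join();
    }
}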
|
[
"mathoverflow",
"0000272698.txt"
] | Q:
What are the points of simple algebraic groups over extensions of $\mathbb{F}_1$?
The "field with one element" $\mathbb{F}_1$ is, of course, a very speculative object. Nevertheless, some things about it seem to be generally agreed, even if the theory underpinning them is not; in particular:
$\mathbb{F}_1$ seems to have a unique extension of degree $m$, generally written $\mathbb{F}_{1^m}$ (even if this is slightly silly), which is thought to be in some sense generated by the $m$-th roots of unity. See, e.g., here and here and the references therein.
If $G$ is a (split, =Chevalley) semisimple linear algebraic group, then $G$ is in fact "defined over $\mathbb{F}_1$", and $G(\mathbb{F}_1)$ should be the Weyl group $\mathcal{W}(G)$ of $G$. For example, $\mathit{SL}_n(\mathbb{F}_1)$ should be the symmetric group $\mathfrak{S}_n$. (See also this recent question.) I think this is, in fact, the sort of analogy which led Tits to suggest the idea of a field with one element in the first place.
These two ideas taken together suggest the following question:
What would be the points of a semisimple (or even reductive) linear algebraic group $G$ over the degree $m$ extension $\mathbb{F}_{1^m}$ of $\mathbb{F}_1$?
My intuition is that $\mathit{GL}_n(\mathbb{F}_{1^m})$ should be the generalized symmetric group $\mu_m\wr\mathfrak{S}_n$ (consisting of generalized permutation matrices whose nonzero entries are in the cyclic group $\mu_m$ of $m$-th roots of unity); and of course, the adjoint $\mathit{PGL}_n(\mathbb{F}_{1^m})$ should be the quotient by the central (diagonal) $\mu_m$; what $\mathit{SL}_n(\mathbb{F}_{1^m})$ should be is already less clear to me (maybe generalized permutation matrices of determinant $\pm1$ when $m$ is odd, and $+1$ when $m$ is even? or do we ignore the signature of the permutation altogether?). But certainly, the answer for general $m$ (contrary to $m=1$) will depend on whether $G$ is adjoint or simply connected (or somewhere in between).
I also expect the order of $G(\mathbb{F}_{1^m})$ to be $m^r$ times the order of $G(\mathbb{F}_1) = \mathcal{W}(G)$, where $r$ is the rank. And there should certainly be natural arrows $G(\mathbb{F}_{1^m}) \to G(\mathbb{F}_{1^{m'}})$ when $m|m'$. (Perhaps the conjugacy classes of the inductive limit can be described using some sort of Kac coordinates?)
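For example, with $G=\mathit{GL}_n$ (rank $n$, Weyl group $\mathfrak{S}_n$) this predicts
$$|\mathit{GL}_n(\mathbb{F}_{1^m})| = m^n \cdot n! = |\mu_m\wr\mathfrak{S}_n|,$$
which is consistent with the generalized symmetric group guess above.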
Anyway, since the question is rather speculative, I think I should provide guidelines on what I consider an answer should satisfy:
The answer need not follow from a general theory of $\mathbb{F}_1$. On the other hand, it should be generally compatible with the various bits of intuition outlined above (or else argue why they're wrong).
More importantly, the answer should be "uniform" in $G$: that is, $G(\mathbb{F}_{1^m})$ should be constructed from some combinatorial data representing $G$ (root system, Chevalley basis…), not on a case-by-case basis.
(An even wilder question would be if we can give meaning to ${^2}A_n(\mathbb{F}_{1^m})$ and ${^2}D_n(\mathbb{F}_{1^m})$ and ${^2}E_6(\mathbb{F}_{1^m})$ when $m$ is even, and ${^3}D_4(\mathbb{F}_{1^m})$ when $3|m$.)
A:
Not an answer, rather the opposite: some additional requirements which a good answer might satisfy, plus further comments.
0) Parabolic subgroups should also be defined.
I mean a good definition should also come with a definition of parabolic subgroups.
For example, for $S_n$ it seems natural to consider $S_{d_1}\times ... \times S_{d_k}$ as the parabolics.
1)
Element count should be compatible with the "known" (= widely agreed)
zeta-functions of P^n (and also Grassmannians and flags).
I mean, one can define P^n, Grassmannians, and flags in the standard way, simply as quotients G/Parabolic. So one gets the number of points of such a manifold over $F_{1^l}$, and hence can write down the standard Weil zeta function.
For example, for $S_n$ the Grassmannian gives $|S_n/(S_k\times S_{n-k})| = n!/(k!(n-k)!)$.
Requirement: the zeta should agree with the "known" one (for example $s(s-1)...(s-n+1)$ for P^n).
2)
The structure of the representation theory of such groups should be similar to that of G(F_q).
For example, for GL(F_1^l) one may expect a structure similar to the one in Zelevinsky's theorem (see MO272686): all representations (over all "n" for GL(n)), organized into a Hopf algebra via induction and restriction, should be isomorphic to several copies of the same Hopf algebra as for S_n, one copy for each cuspidal representation.
An interesting question: how many cuspidals will one have for a fixed GL(n,F_1^l)?
3) Alvis-Carter duality should work.
It is defined, roughly speaking, as follows: take a character, restrict it to a parabolic, and then induce back, summing over all parabolics with appropriate signs.
The statement is that it has order two and is an isometry on generalized characters.
The Steinberg representation is dual to the trivial one. For S_n it is transposition
of the Young diagram (if I remember correctly).
4) Frobenius action and the Lang-Steinberg theorem.
One knows that over F_q there is a Frobenius, and the Lang-Steinberg theorem (e.g. G. Hiss, page 15)
holds: the map $g \mapsto g^{-1}F(g)$ is a surjection.
How should one define the Frobenius?
5) "Good" bijection between irreducible representation and conjugacy classes (toy Langldands correspondence).
Similar to MO270916 one may expect "good" bijection between conjugacy classes - which might be considered as a toy model for local Langlands correspondence for field with one element.
For example for S_n there is well-known "good" bijection between the two sets via Young diagrams.
The OP made an interesting proposal for $GL(F_{1^m})$: monomial matrices
with roots-of-unity entries.
It would be very interesting to check whether the properties above hold true
for that proposal.
What is most unclear to me: what should the Frobenius map be?
|
[
"tex.stackexchange",
"0000020123.txt"
] | Q:
Why is the ledmac line number misaligned with the final item?
Here's a minimal document showing the problem:
\documentclass{article}
\usepackage{ledmac}
\begin{document}
\firstlinenum{1}
\linenumincrement{1}
\beginnumbering
\pstart
\begin{itemize}
\item This item aligns with the line number...
\item ... but the line number is far below this item.
\end{itemize}
\pend
\endnumbering
\end{document}
The document turns out like this:
1 · This item aligns with the line number...
· ... but the line number is far below this item.
2
Note that:
a) it's always the last item in a list,
b) the line is aligned if you add a newline (\\) at the end of the last item (but you get another line number, which looks odd),
c) the line is aligned if you add text between \end{itemize} and \pend,
d) and the line is not aligned if you simply add text after \pend.
So why does this happen and how can I fix the problem?
A:
You can use the
\beginnumbering
\pstart
...
\pend
\endnumbering
block inside the itemize environment:
\documentclass{article}
\usepackage{ledmac}
\begin{document}
\firstlinenum{1}
\linenumincrement{1}
\begin{itemize}
\beginnumbering
\pstart
\item This item aligns with the line number...
\item ... but the line number is far below this item.
\pend
\endnumbering
\end{itemize}
\end{document}
EDIT: I am astonished; after reading the comments to my answer, I was doing some tests and now the following example code (which is basically the same as the example code in the question) compiles OK and behaves as expected!
\documentclass{article}
\usepackage{ledmac}
\begin{document}
text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text
\firstlinenum{1}
\linenumincrement{1}
\beginnumbering
\pstart
\begin{itemize}
\item This item aligns with the line number...
\item This item also aligns with the line number...
\end{itemize}
\pend
\endnumbering
\end{document}
Here's the result:
I'll investigate what is going on!
EDIT2: Finally I discovered what's going on, but I don't know the reason behind the problem... it seems like a bug in ledmac. The problem is related to the length of the text in each item. You can see this with the following example:
\documentclass{article}
\usepackage{ledmac}
\begin{document}
\firstlinenum{1}
\linenumincrement{1}
\beginnumbering
\pstart
\begin{itemize}
\item This item aligns with the line number...
\item This item also aligns ...
\item This doesn't ... % but it will align if you add this text
\end{itemize}
\pend
\endnumbering
\end{document}
Compile the example as it is and you will see the strange behaviour described in the question; here's an image of the result:
Now, delete the comment character and compile again and you will get the expected result:
The same strange results will be observed if enough text is added to the second item of the example code in the question.
I think you should write a note to the package creators.
|
[
"stackoverflow",
"0049640631.txt"
] | Q:
react native margin between keyboard and input
How can I put a margin between the input and the keyboard, so that I can see the bottom border? And how can I change the color of the blinking cursor in the input field?
This is wrong
Keyboard hidden
A:
It's all in the react-native docs:
Keyboard issue: https://facebook.github.io/react-native/docs/keyboardavoidingview.html
Cursor Color changing: https://facebook.github.io/react-native/docs/textinput.html#selectioncolor
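A minimal sketch combining the two (KeyboardAvoidingView and the TextInput selectionColor prop are the documented APIs from the links above; the style values are placeholders):
import React from 'react';
import { KeyboardAvoidingView, TextInput } from 'react-native';

export default function Screen() {
  return (
    <KeyboardAvoidingView behavior="padding" keyboardVerticalOffset={16}>
      {/* selectionColor tints the cursor and the selection handles */}
      <TextInput
        style={{ borderBottomWidth: 1, marginBottom: 12 }}
        selectionColor="tomato"
      />
    </KeyboardAvoidingView>
  );
}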
|
[
"stackoverflow",
"0046148167.txt"
] | Q:
Spark Scala CSV Input to Nested Json
This is what my input data looks like:
20170101,2024270,1000,1000,1000,1000,1000,1000,1000,2000,2000
20170101,2024333,1000,1000,1000,1000,1000,1000,1000,2000,2000
20170101,2023709,1000,1000,1000,1000,1000,1000,1000,2000,2000
20170201,1234709,1000,1000,1000,1000,1000,1000,1000,2000,2000
And I want to convert it to a key-value RDD, where the key is an integer and the value is a JSON object; the purpose is to write this to Elasticsearch.
(
2024270, {
"metrics": {
"date" : 20170201,
"style_id" : 1234709,
"revenue" : 1000,
"list_count" : 1000,
"pdp_count" : 1000,
"add_to_cart_count" : 1000
}
}
)
In Python, I am able to do the same using the below piece of code,
metrics_rdd = sc.textFile('s3://myntra/scm-inbound/fifa/poc/size_curve_date_range_old/*').map(format_metrics)
def format_metrics(line):
tokens = line.split('^')
try:
return (tokens[1], {
'metrics': {
'date': tokens[0],
'mrp': float(tokens[2]),
'revenue': float(tokens[3]),
'quantity': int(tokens[4]),
'product_discount': float(tokens[5]),
'coupon_discount': float(tokens[6]),
'total_discount': float(tokens[7]),
'list_count': int(tokens[8]),
'add_to_cart_count': int(tokens[9]),
'pdp_count': int(tokens[10])
}
}) if len(tokens) > 1 else ('', dict())
But I am not able to figure out how to achieve the same in Scala, and I am a newbie to Scala. I managed to get the output below, but I'm not able to wrap the JSON into a "metrics" block; any pointers would be really helpful.
ordersDF.withColumn("key", $"style_id")
.withColumn("json", to_json(struct($"date", $"style_id", $"mrp")))
.select("key", "json")
.show(false)
// Exiting paste mode, now interpreting.
+-------+-------------------------------------------------+
|key |json |
+-------+-------------------------------------------------+
|2024270|{"date":20170101,"style_id":2024270,"mrp":1000.0}|
|2024333|{"date":20170101,"style_id":2024333,"mrp":1000.0}|
|2023709|{"date":20170101,"style_id":2023709,"mrp":1000.0}|
|1234709|{"date":20170201,"style_id":1234709,"mrp":1000.0}|
+-------+-------------------------------------------------+
A:
I tried what @philantrovert has suggested and it worked.
scala> val ordersDF = spark.read.schema(revenue_schema).format("csv").load("s3://myntra/scm-inbound/fifa/pocs/smallMetrics.csv")
ordersDF: org.apache.spark.sql.DataFrame = [date: int, style_id: int ... 9 more fields]
scala> :paste
// Entering paste mode (ctrl-D to finish)
ordersDF.withColumn("key", $"style_id")
.withColumn("metrics", to_json(struct($"date", $"style_id", $"mrp")))
.select("key", "metrics")
.toJSON
.show(false)
// Exiting paste mode, now interpreting.
+-----------------------------------------------------------------------------------+
|value |
+-----------------------------------------------------------------------------------+
|{"key":2024270,"metrics":"{\"date\":20170101,\"style_id\":2024270,\"mrp\":1000.0}"}|
|{"key":2024333,"metrics":"{\"date\":20170101,\"style_id\":2024333,\"mrp\":1000.0}"}|
|{"key":2023709,"metrics":"{\"date\":20170101,\"style_id\":2023709,\"mrp\":1000.0}"}|
|{"key":1234709,"metrics":"{\"date\":20170201,\"style_id\":1234709,\"mrp\":1000.0}"}|
+-----------------------------------------------------------------------------------+
I have also tried an other way using Json4s library and that also worked,
def convertRowToJSON(row: Row) = {
val json =
("metrics" ->
("date" -> row(1).toString) ~
("style_id" -> row.getInt(1)) ~
("mrp" -> row.getFloat(2)) ~
("revenue" -> row.getFloat(3)) ~
("quantity" -> row.getInt(1)) ~
("product_discount" -> row.getFloat(3)) ~
("coupon_discount" -> row.getFloat(3)) ~
("total_discount" -> row.getFloat(3)) ~
("list_count" -> row.getInt(1)) ~
("add_to_cart_count" -> row.getInt(1)) ~
("pdp_count" -> row.getInt(1))
)
(row.getInt(1),compact(render(json)).toString)
}
scala> val ordersDF = spark.read.schema(revenue_schema).format("csv").load("s3://myntra/scm-inbound/fifa/pocs/smallMetrics.csv").map(convertRowToJSON)
ordersDF: org.apache.spark.sql.Dataset[(Int, String)] = [_1: int, _2: string]
scala> ordersDF.show(false)
+-------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
|_1 |_2 |
+-------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
|2024270|{"metrics":{"date":"2024270","style_id":2024270,"mrp":1000.0,"revenue":1000.0,"quantity":2024270,"product_discount":1000.0,"coupon_discount":1000.0,"total_discount":1000.0,"list_count":2024270,"add_to_cart_count":2024270,"pdp_count":2024270}}|
|2024333|{"metrics":{"date":"2024333","style_id":2024333,"mrp":1000.0,"revenue":1000.0,"quantity":2024333,"product_discount":1000.0,"coupon_discount":1000.0,"total_discount":1000.0,"list_count":2024333,"add_to_cart_count":2024333,"pdp_count":2024333}}|
|2023709|{"metrics":{"date":"2023709","style_id":2023709,"mrp":1000.0,"revenue":1000.0,"quantity":2023709,"product_discount":1000.0,"coupon_discount":1000.0,"total_discount":1000.0,"list_count":2023709,"add_to_cart_count":2023709,"pdp_count":2023709}}|
|1234709|{"metrics":{"date":"1234709","style_id":1234709,"mrp":1000.0,"revenue":1000.0,"quantity":1234709,"product_discount":1000.0,"coupon_discount":1000.0,"total_discount":1000.0,"list_count":1234709,"add_to_cart_count":1234709,"pdp_count":1234709}}|
+-------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
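Note that in the first variant the inner JSON ends up as an escaped string. If a genuinely nested object is wanted, nesting structs before a single to_json call should avoid that; a sketch, untested against the exact schema above (run in spark-shell, or with org.apache.spark.sql.functions._ imported):
ordersDF
  .select($"style_id".as("key"),
          to_json(struct(struct($"date", $"style_id", $"mrp").as("metrics"))).as("json"))
  .show(false)
// expected shape: {"metrics":{"date":20170101,"style_id":2024270,"mrp":1000.0}}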
|
[
"stackoverflow",
"0048242528.txt"
] | Q:
Delay in the Finally block after try block completion
I am trying to run the below concurrency code:
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
public class TestThread {
public static void main(final String[] arguments) throws
InterruptedException {
ExecutorService executor = Executors.newSingleThreadExecutor();
try {
executor.submit(new Task());
System.out.println("Shutdown executor");
executor.shutdown();
executor.awaitTermination(5, TimeUnit.SECONDS);
} catch (InterruptedException e) {
System.err.println("tasks interrupted");
} finally {
if (!executor.isTerminated()) {
System.err.println("cancel non-finished tasks");
}
executor.shutdownNow();
System.out.println("shutdown finished");
}
}
static class Task implements Runnable {
public void run() {
try {
int duration = 6;
System.out.println("Running Task!");
TimeUnit.SECONDS.sleep(duration);
} catch (InterruptedException e) {
e.printStackTrace();
}
}
}
}
and the output is :
Shutdown executor
Running Task!
shutdown finished
cancel non-finished tasks
java.lang.InterruptedException: sleep interrupted
According to the output, it seems the finally block is skipped and the code after the finally block is executed first, with the finally block executed later. Doesn't this contradict the normal flow of the code, in which finally executes after the try/catch block completes?
EDIT : Tried
System.out.flush();
System.err.flush();
after every print as per one of the suggestions but still the same output.
EDIT:
I am using an online compiler.
A:
I suppose that you expected these two traces to be reversed, as they are produced in the reverse order:
shutdown finished
cancel non-finished tasks
I think that the issue comes from the mixing of System.err and System.out.
These are not the same streams, so both their flushing and their displaying may be performed at distinct times.
Depending on the application/system that displays the output (IDE, OS command line, online compiler/executor), at least two things may create an ordering issue:
the autoflush of these streams may be or not enabled
the displaying order of traces may not be synchronized between these two streams in the "output/console" of the application/system.
As workarounds to display outputs according to the timeline:
flush the streams (System.out.flush() and System.err.flush()) after each print() invocation. It may work, but with no guarantee, as the application/system that writes the output may not synchronize the displaying of these two streams along the timeline.
endeavor to use only System.out and use System.err only for error situations where the program will exit.
It will reduce interleave possibilities.
if the last idea is not suitable because it matters to distinguish clearly between the two kinds of output, use a logging library (Logback or Log4j2, preferably with SLF4J as a facade) that allows you to trace information with precision (date, time, severity level, ...) and to read it in the actual order of the program flow.
Here is the same code that uses only System.out :
I added //1, //2,... comment for the expected order of the output.
public class TestThread {
public static void main(final String[] arguments) throws InterruptedException {
ExecutorService executor = Executors.newSingleThreadExecutor();
try {
executor.submit(new Task());
System.out.println("Shutdown executor"); // 1
executor.shutdown();
executor.awaitTermination(5, TimeUnit.SECONDS);
} catch (InterruptedException e) {
System.out.println("tasks interrupted"); // not invoked because caught in Task.run()
} finally {
if (!executor.isTerminated()) {
System.out.println("cancel non-finished tasks"); // 3
}
executor.shutdownNow();
System.out.println("shutdown finished"); // 4
}
}
static class Task implements Runnable {
public void run() {
try {
int duration = 6;
System.out.println("Running Task!"); // 2
TimeUnit.SECONDS.sleep(duration);
} catch (InterruptedException e) {
e.printStackTrace();
}
}
}
}
The output is now as expected, and it should be the same wherever the program is run: command line, IDE, ...
Shutdown executor
Running Task!
cancel non-finished tasks
shutdown finished
java.lang.InterruptedException: sleep interrupted
Note that:
java.lang.InterruptedException: sleep interrupted
may still appear in a different order, as it is written by Throwable.printStackTrace(), which relies on System.err.
|
[
"stackoverflow",
"0012918243.txt"
] | Q:
.NET SDK will it work with VB
I am somewhat new to .NET and have a quick question. Amazon Web Services has a .NET SDK that mentions C# code examples. I just want to confirm: will the SDK work if the language used is VB? Thank you.
Edit: The unanimous answer seems to be yes, but I am not sure how to do it. When I try to create a new project, I am only given the option of creating an 'AWS' project under Visual C#.
A:
Yes, it will work. C# compiles into MSIL, which runs on the CLR, and VB.NET does the same.
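For instance, once the AWS SDK assembly is referenced (it does not need a VB-specific project template), its classes can be called from VB just as from C#. A sketch; AmazonS3Client is the SDK's S3 client class, but check the namespace against your SDK version:
Imports Amazon.S3

Module Demo
    Sub Main()
        ' Same class the C# examples use; credentials come from the usual SDK configuration.
        Dim client As New AmazonS3Client()
        ' ... call the same methods shown in the C# samples ...
    End Sub
End Module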
|
[
"stackoverflow",
"0037963542.txt"
] | Q:
Excel HYPERLINK function doesn't work
I've got a spreadsheet that includes Google Map links, but the links don't work. They're generated with this function:
=HYPERLINK("http://www.google.com/maps/place/" & F19 & "," & F20)
...where F19 and F20 contain lat and long coordinates in +/- format. It appears to work fine, produces a link and everything, but clicking on them just pops up a "An unexpected error has occurred" error.
I've googled the issue a bit, but all the solutions I've found seem to refer to earlier versions of Excel and revolve around registering old DLLs that can't be found on this system. Links automatically generated by just entering an URL in a cell also pop up the error, but simultaneously open the link properly.
Any advice?
A:
I follow these steps to link Google Maps URLs from an Excel worksheet.
A. Analysis of URL
For example Google Map URL for India is :
https://www.google.co.in/maps/place/India/@20.1505368,64.4808042,4z/data=!3m1!4b1!4m5!3m4!1s0x30635ff06b92b791:0xd78c4fa1854213a6!8m2!3d20.593684!4d78.96288?hl=en
It has following distinct parts.
a) https://www.google.co.in/maps/place/
b) Place whose map is required like "India"
c) Location Identifier after @ For India it is
20.1505368,64.4808042,4z
d) Location Data Identifier after "/data=" For India it is
!3m1!4b1!4m5!3m4!1s0x30635ff06b92b791:0xd78c4fa1854213a6!8m2!3d20.593684!4d78.96288?hl=en
Now we put the respective parts in A4, B4, C4, and D4.
We use the following formula in E4 to concatenate the strings together.
=A4 & B4 & "/@" & C4 & "/data=" & D4
Finally, in another cell, for example E8, we use the HYPERLINK formula.
=HYPERLINK(E4,"India ")
Now it is a clickable link and we can open India's Map by clicking on the link.
Finally snapshot pictorially depicts it.
EDIT 28-06-2016
Regarding the problems raised in the question, my viewpoint is given against each point.
Links automatically generated by just entering an URL in a cell also
pop up the error, but simultaneously open the link properly.
Regarding the problem occurring when the full hyperlink is entered: further checks are needed. Is it an occasional phenomenon even after clearing the cache and restarting the system, or a regular one? If it persists in a new instance of Excel after clearing the cache (I use the CCleaner free version), then the problem should be taken up with the Microsoft Community.
...where F19 and F20 contain lat and long coordinates in +/- format.
It appears to work fine, produces a link and everything, but clicking
on them just pops up a "An unexpected error has occurred" error.
This problem could be linked to the one mentioned above. It can also be caused by a minor syntax problem relating to the concatenation of strings. I faced similar problems relating to the proper concatenation of the URL strings and could solve them only after a number of attempts at correcting the syntax. Finally I found that constructing the URL is easier, and has fewer syntax problems, if I put the various elements of the URL in cells of the worksheet and construct the final URL from their values.
|
[
"stackoverflow",
"0012001053.txt"
] | Q:
How to concatenate 2 strings when the first is defined in a preprocessor directive and the second is constant in C++?
Here's what I have:
#define STRING "string 1"
string string2 = STRING + "string3";
This is wrong. What is the solution? Is the problem that string2 must be constant, or something else, and why?
A:
#define STRING "string 1"
std::string string2 = STRING "string3";
Concatenation of adjacent string literals isn't a feature of the preprocessor; it is a feature of the C/C++ language itself.
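The original line fails because STRING expands to a string literal, so STRING + "string3" tries to apply + to two arrays/pointers, for which no operator exists. When the pieces aren't both literals, converting one side to std::string first also works:
#include <string>

#define STRING "string 1"

std::string string2 = std::string(STRING) + "string3"; // operator+(std::string, const char*)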
|